Why would you start an instance every time you build a container?
I am missing something here...
(You wouldn't)
Patting themselves on the back for 'fixing' a self-created problem... EC2 is the wrong abstraction for this use case imo
Their product seems to be offering "buildkit as a service", and I'd guess that from their perspective the safest isolation boundary is at the VM level. I don't know why they don't boot up a bigger .metal instance and do their own virtualization, but I'm sure there are fine reasons.
Probably: it saves money vs. a fleet of constantly running instances.
> From a billing perspective, AWS does not charge for the EC2 instance itself when stopped, as there's no physical hardware being reserved; a stopped instance is just the configuration that will be used when the instance is started next. Note that you do pay for the root EBS volume though, as it's still consuming storage.
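To make that stop/start trick concrete, here's a minimal sketch of parking and waking a runner instance with boto3. The region, function names, and the idea of addressing runners by instance ID are illustrative assumptions on my part, not how Depot actually does it.

```python
# Minimal sketch (assumed boto3 usage, not Depot's actual implementation):
# a stopped instance accrues no EC2 compute charges, only the cost of its
# root EBS volume, and starting it resumes from that warm volume.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

def park_runner(instance_id: str) -> None:
    # Stop the instance between builds; the root volume (and its data) persists.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

def wake_runner(instance_id: str) -> None:
    # Start it again when a build arrives; compute billing resumes from here.
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```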
https://depot.dev/blog/faster-ec2-boot-time
That is precisely why.
Though I would say that for a lot of organizations, you aren't operating your builds at a scale where you need to idle that many runners, or bring them up and down often enough, to need this level of dynamic autoscaling. As the article indicates, there's a fair amount of configuration and tweaking involved in setting something like this up. Of course, for the author it makes total sense, because their entire product is based on being able to run other people's builds cost-effectively.
If cost savings are a concern, write a ten-line cron script to scale your runners down to a single one outside business hours or something; you'll spend way less time configuring that than trying to get dynamic autoscaling right. Heck, if your workloads are spiky and short enough, this kind of dynamic scaling isn't even that much better than just keeping the runners on all the time: this organization got their EC2 boot time down to 4 seconds, but only by optimizing the heck out of it. In a vanilla configuration with the stock AMIs that AWS offers, the cold boot time is closer to 40 seconds.
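For the record, the cron-script approach really can be about ten lines. Here's a rough sketch using boto3 against an Auto Scaling group; the group name, hours, and capacities are made-up placeholders you'd swap for your own.

```python
# Rough sketch of the "ten-line cron script" idea (names and numbers are
# placeholders): pin the runner Auto Scaling group to a single instance
# outside business hours, and restore the daytime size during them.
from datetime import datetime

import boto3

ASG_NAME = "ci-runners"          # hypothetical Auto Scaling group name
BUSINESS_HOURS = range(8, 18)    # 08:00-17:59 local time; adjust to taste
DAYTIME_CAPACITY = 8
OFF_HOURS_CAPACITY = 1

def main() -> None:
    in_hours = datetime.now().hour in BUSINESS_HOURS
    desired = DAYTIME_CAPACITY if in_hours else OFF_HOURS_CAPACITY
    boto3.client("autoscaling").set_desired_capacity(
        AutoScalingGroupName=ASG_NAME,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

if __name__ == "__main__":
    main()
```

Run it from cron on the hour, e.g. `0 * * * * python3 /opt/ci/scale_runners.py` (path hypothetical), and you get most of the savings with none of the autoscaling machinery.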