
Here's a reason if your production platform is Kubernetes:

---

The default pod limit per node is 110. This is by design and considered a reasonable upper limit for the kubelet to reliably monitor and manage everything without the node falling over into NotReady with PLEG (Pod Lifecycle Event Generator) health errors.
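For reference, this cap is exposed as the `maxPods` field in the kubelet configuration; a minimal sketch of a KubeletConfiguration showing the default:

```yaml
# Sketch of a kubelet config file; 110 is the upstream default value.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110
```

Raising this number is exactly the tuning path described below, with the caveats that come with it.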

If your node has a ton of CPU and memory, then 110 pods will not come close to utilizing all the metal. You can go down the path of tuning and increasing the pod limit, but this is risky and often triggers other issues in components that were designed around the saner defaults.
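A quick back-of-envelope calculation makes the underutilization concrete. The node size here is an illustrative assumption, not a figure from the comment:

```python
# Hypothetical large bare-metal node (illustrative numbers).
cores = 192
memory_gib = 1536
max_pods = 110  # default kubelet cap

# Average resources each pod would have to consume to saturate the node.
cores_per_pod = cores / max_pods
gib_per_pod = memory_gib / max_pods

print(f"{cores_per_pod:.2f} cores and {gib_per_pod:.1f} GiB per pod")
```

Most workloads request far less than ~1.75 cores and ~14 GiB per pod, so the box sits mostly idle at the pod cap.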

It also means that if a node goes NotReady (even without a hardware failure), it's a much bigger deal: you have fewer total nodes and many more pods to reschedule at once.
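The blast radius of a single NotReady node can be sketched with the same illustrative numbers (the node counts and split factor are assumptions):

```python
# Every pod on a NotReady node must be rescheduled at once.
pods_per_node = 110   # kubelet default cap
vms_per_machine = 4   # hypothetical: each bare-metal box split into 4 VMs

# One big bare-metal node fails: all 110 pods reschedule at once.
reschedule_bare_metal = pods_per_node

# One VM fails: roughly a quarter of that machine's pods reschedule.
reschedule_vm = pods_per_node // vms_per_machine

print(reschedule_bare_metal, reschedule_vm)
```

Smaller nodes trade a little per-node overhead for a much smaller rescheduling spike when one fails.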

This is solved by splitting up these massive nodes into smaller nodes via virtualization.

It's also nice having an API-driven layer to manage and upgrade the VMs versus shoehorning a bare-metal solution. I would argue it also encourages immutable infrastructure by making it much more accessible.

There are bare-metal solutions, but they're often more complicated and slower than launching/destroying VMs.



