If you want to KNOW the chain of custody for all of your OS and software, from the bootloader to the switch chip, and you want to run this virtualization platform airgapped, buying at rack scale, you want Oxide. They make basically everything in-house. That's government, energy, finance, etc.: customers that need discretion, security, and something that works very reliably in a high-trust environment, with a pretty high level of performance.
If you need a basic "VM platform", VMware, Proxmox, Nutanix, etc. all fit the bill with varying levels of features and cost. Nutanix has also been making some fairly solid Kubernetes plays, which is nice on hyperconverged infrastructure.
Then if you need a container platform, you go the opposite direction - Kubernetes/OpenShift and run your VMs from your container platform instead of running your containers from your VM platform.
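For the "VMs from your container platform" direction, the usual mechanism is something like KubeVirt, where a VM is just another Kubernetes resource. A rough sketch of what that looks like (the VM name and disk image here are placeholders, not anything from a specific deployment):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm            # placeholder name
spec:
  running: true            # start the VM as soon as it's created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:   # VM disk image shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

The point being: the VM is scheduled, networked, and lifecycle-managed by the same control plane as your containers, instead of the other way around.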
As far as "hyperconverged"...
"Traditionally" with something like VMware, you ran a 3-tier infrastructure: compute, a storage array, and network switching. If you needed to expand compute, you just threw another 1U-4U box on the shelf, wired it up to the switch, provisioned the network to it, provisioned the storage, added it to the cluster, etc. This model has some limitations, but it scales fairly well with mid-level performance. Those storage arrays can be expensive though!
With "hyperconverged", you get bigger boxes with better integration. One-click firmware upgrades for the hardware fleet, if desired. Add a node, it gets discovered and automatically provisioned to match the configuration you've set. The network switching fabric is built into the box, as is the storage. This model brings everything local (with a certain amount of local redundancy in the hardware itself), which makes many workloads blazing fast. You may still on occasion need to connect to massive storage arrays somewhere if you have very large datasets, but it really depends on the application workloads your organization runs. Hyperconverged doesn't scale compute as cheaply, but in return you get much faster performance.
Also check this out: https://www.linkedin.com/posts/bryan-cantrill-b6a1_unbeknown...