A Free Virtualization Odyssey, part 1: Proxmox VE

I’ve long been a skeptic of VMware’s enterprise-level pricing, and with good reason. I’ve heard it said that it’s often more cost-effective to take all your existing servers and plate them in gold, even after one accounts for the ‘cost and power savings in the long run’ - in a VMware deployment there are always incidental needs that drive the infrastructure’s cost upward, not downward, over time. Most SMBs simply want to run a few virtual machines on fewer than 10 host servers, and to have a good overall picture of those servers and the VMs running within them (a central management console). VMware does recognize this, but when I spoke with their sales reps they quoted me approximately $25,000 simply for the ability to view all 10 virtual machine servers from a single administration console. It was time to look elsewhere. There is no shortage of options. A short and incomplete list includes:

  • Proxmox VE
  • Ubuntu Enterprise Cloud
  • Citrix XenServer 5
  • Microsoft Hyper-V Server 2008

Note that the above list meets a set of requirements:

  1. The free hypervisor must support central management through a software console or web interface
  2. It should support live migration, as this is now a basic feature of all hypervisors (it’s only one step beyond suspending on host A and resuming on host B, come on now.)
  3. It should not impose a strict hardware compatibility list (HCL), which forces many companies to throw out old servers and buy new ones in order to virtualize their assets.

Proxmox VE

The first solution I tried was Proxmox VE, and I found it very impressive. It supports all hardware supported by the Linux kernel, and the developers have designed their own method of clustering multiple Proxmox VE servers together to enable live migration and central management from the web interface. Performance is extremely good, certainly far better than that of VMware Server 2.0. Proxmox VE also supports virtual machine storage on a variety of network storage types, including NFS, CIFS, and iSCSI. Because Proxmox VE is built on Debian, a Linux expert can add other solutions with little difficulty, such as GlusterFS for clustered virtual machine storage.

Proxmox VE also ships with OpenVZ capabilities, allowing one to create Linux containers rather than full virtual machines. These have far less I/O and memory overhead and make it possible to run multiple Linux environments at near-native speed. Proxmox lacks an official tool for converting the various virtual machine formats into its preferred QCOW2 format (the one used by the qemu-kvm backend), but it does provide a wiki page with some insights on how to get this done. I was, however, able to convert a Windows 2003 server from a VMware VMDK image to QCOW2 format and run it under Proxmox without issue.
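A conversion like the one described above is typically done with qemu-img, which ships alongside the qemu-kvm tooling on a Proxmox VE host; the filenames below are illustrative, not the ones I actually used.

```shell
# Convert a VMware VMDK disk image to QCOW2 using qemu-img.
# The input and output filenames here are examples only.
qemu-img convert -f vmdk -O qcow2 win2003.vmdk win2003.qcow2

# Inspect the result before attaching it to a virtual machine:
qemu-img info win2003.qcow2
```

The `-f` flag names the source format and `-O` the output format; qemu-img can also go the other direction, which is handy during a staged migration.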
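As a sketch of the OpenVZ container side mentioned above, containers can also be managed from the host shell with vzctl; the container ID, template name, hostname, and IP address below are hypothetical, and available templates vary by installation.

```shell
# Hypothetical OpenVZ container lifecycle on a Proxmox VE host.
# Container ID 101, the Debian template name, and the IP address
# are examples only.
vzctl create 101 --ostemplate debian-5.0-standard_5.0-2_i386
vzctl set 101 --hostname web01 --ipadd 192.168.1.50 --save
vzctl start 101
vzctl exec 101 uptime   # run a command inside the running container
vzctl stop 101
```

The same containers show up in the Proxmox VE web interface, so the command line is optional rather than required for day-to-day use.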

Proxmox also shows a lack of polish in some critical areas - the newest 1.5 version refuses to start a virtual machine that uses the newly supported VMware VMDK disk format. Hopefully this will be ironed out, as the ability to move VMDK images back and forth between VMware Server 2.0 and Proxmox VE during a migration would be excellent. The web interface is also fairly unintuitive, and believe it or not, design plays a part in end-user confidence. Proxmox is, however, an excellent compilation of the state of the art in Linux virtualization - it ships with a 2.6.18 kernel containing the best-tested virtualization technologies, as well as intermediate and bleeding-edge kernels that support deploying many more virtual machines. This made Proxmox VE one of the contenders I’d roll into production at my company. Tune in soon to find out about the rest!