Qemu + KVM is the future of open source virtualization
Open source virtualization has been evolving dramatically over the last few years. Incumbent proprietary platforms such as VMware still hold the throne in many areas, but open source competitors are gaining ground fast on their way to ubiquity. Things are a-changing in the land of virtualization.
Right now we have three contenders to the (open source) throne battling it out for supremacy:
- Xen: backed by Citrix
- VirtualBox: backed by Oracle/Sun
- KVM: backed by Red Hat
Neither Citrix nor Oracle has yet established a good track record with regard to open source, so I recently took the time to dig a little deeper into KVM and qemu, two projects which I suspect are going to be a huge part of the future of open source virtualization.
- KVM: at its essence, KVM is just kernel infrastructure. It's not a product. The developers forked qemu to hook into the KVM kernel infrastructure and take advantage of it. Eventually the fork was merged back into mainline qemu.
- qemu: powers nearly all Linux virtualization technologies behind the scenes. Qemu is a spectacular piece of software written largely by Fabrice Bellard, a programming genius who also wrote TCC (the Tiny C Compiler), FFmpeg and many other less famous programs.
Highlights from my findings:
- I'm pretty sure KVM is the future of open source virtualization. Not Xen or VirtualBox.
KVM is a latecomer to the virtualization game, but its technical approach is superior and many believe it will eventually win out over other virtualization technologies and become the transparent virtualization standard of the future. Personally, I think that's a pretty likely outcome at this point.
The primary advantages going for KVM are simplicity and leverage. By taking advantage of the hardware-level virtualization support in new processors, KVM can be many times simpler than competing technologies such as Xen or VMware while achieving equivalent or superior performance.
KVM leverages the Linux kernel in a way that allows its developers to focus their resources more efficiently on actual virtualization-related development. The proof is in the pudding: with just a fraction of the resources their competitors have, the KVM team implemented advanced features such as live migration and SMP support (e.g., up to 255 virtual CPUs regardless of how many CPUs the host has). Even recursive virtualization should be possible (I know patches were written to get it to work).
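To give a flavor of what that looks like in practice, here's a minimal sketch of booting a KVM-accelerated guest with multiple virtual CPUs and then live-migrating it (the disk path and hostname are made up, and depending on your distribution the binary may be called kvm rather than qemu-system-x86_64 -enable-kvm):

# boot a guest with KVM acceleration and 4 virtual CPUs
qemu-system-x86_64 -enable-kvm -smp 4 -m 512 -hda /virtdisks/test.img

# live migration: start a receiving instance on the destination host...
qemu-system-x86_64 -enable-kvm -m 512 -hda /virtdisks/test.img -incoming tcp:0:4444
# ...then tell the source VM's monitor to move the running guest across:
# (qemu) migrate tcp:otherhost:4444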
- libvirt: Red Hat is spearheading development of a backend-agnostic management layer for virtualization (libvirt). The fundamental design is pretty good, but for our purposes (i.e., TurnKey development) libvirt and its higher-level friends just get in the way. The really useful parts are the qemu-based primitives, which are easier to use directly than through libvirt.
- qemu supports multiple mechanisms for plugging guest NICs together: via a TCP/UDP port, attaching to a tap/tun device, or even plugging into VDE (virtual distributed ethernet), which allows arbitrarily complex virtual network topologies to be constructed that span multiple physical machines. I've played around with a few test configurations and it's pretty neat.
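For a taste, here's one way to wire two guests' NICs together over a TCP socket (a rough sketch using the old -net syntax; the image names are made up):

# guest A listens for a virtual ethernet connection
qemu -hda a.img -net nic -net socket,listen=:1234
# guest B plugs its NIC into the same virtual segment
qemu -hda b.img -net nic -net socket,connect=127.0.0.1:1234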
Qemu vs VirtualBox
Though it's hard to tell, VirtualBox was forked from an earlier version of qemu. The main thing VirtualBox added was sophisticated binary recompilation that allowed fast virtualization without requiring support from the underlying hardware. Today most modern PC processors from Intel and AMD have hardware support for virtualization which makes binary recompilation techniques unnecessary. This allows hypervisors such as KVM to achieve the same results using a simpler hardware-enabled approach with superior performance.
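Incidentally, it's easy to check whether your processor has this hardware support on Linux; the tell-tale CPU flags are vmx (Intel VT-x) and svm (AMD-V):

# any output at all means the CPU supports hardware virtualization
egrep '(vmx|svm)' /proc/cpuinfo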
VirtualBox also added a user-friendly GUI shell that simplifies setting up virtual machines in a desktop environment.
In general, I was pleasantly surprised by qemu. If VirtualBox is the Notepad of open source virtualization, qemu is Vim. In other words, if you're looking for a Swiss Army virtualization knife you can get hacking with, qemu beats VirtualBox many times over.
Random insights:
- With KVM enabled, qemu's performance is equivalent to or better than VirtualBox's.
- If your hardware doesn't support KVM, performance won't be as good as VirtualBox, but with the KQEMU kernel acceleration module it's acceptable (e.g., on my system the LAMP appliance boots in 30 seconds with KQEMU vs 20 seconds in KVM mode).
- Qemu is a better primitive than VirtualBox:
- Good defaults are chosen if you don't specify otherwise
- It's simpler to launch VMs from the cli:
qemu -cdrom appliance.iso

# install appliance.iso to test.img
qemu-img create /virtdisks/test.img 4G
qemu -cdrom appliance.iso -hda /virtdisks/test.img
- Supports launching VMs as detached daemon processes
- Supports VNC graphics mode with optional encryption and authentication
- Easier to script
- Supports a scriptable VM monitor (the daemon, VNC and monitor bits are sketched below)
- Can work without root privileges, unless you want to use tap/tun interfaces
Also, it wouldn't be too complicated to write a wrapper that allows users to safely run qemu + tap/tun without root privileges.
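To make the daemon/VNC/monitor points concrete, here's a rough sketch (using qemu options of this era; the paths are made up) of running a VM detached, with its display on VNC and its monitor on a socket a script can drive:

# run detached, display over VNC, monitor hanging off a unix socket
qemu -hda /virtdisks/test.img -daemonize -vnc :1 -monitor unix:/tmp/test.monitor,server,nowait

# a script can then feed the monitor commands, e.g. with socat:
echo "info status" | socat - unix-connect:/tmp/test.monitor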
- Easy-to-use default usermode networking:
- built-in NAT, DHCP and TFTP
- the host can be reached by the guest at 10.0.2.2
- supports forwarding host ports to the guest (-redir option; example below)
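For example, to reach the guest's SSH and web servers from the host (a sketch; the host-side port numbers are arbitrary):

qemu -hda /virtdisks/test.img -redir tcp:2222::22 -redir tcp:8080::80
# ssh -p 2222 root@localhost now lands on the guest's port 22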
- Qemu can boot 64-bit guests on a 32-bit host using processor emulation. I tried this and it works, though it's pretty slow:
qemu-system-x86_64 -cdrom ./lenny-amd64-standard.iso
- Qemu can be configured to allow transparent chrooting into non-native chroots without launching a VM (e.g., chroot into a 64-bit debootstrap from a 32-bit system).
It's a bit tricky to get working though, as the default Ubuntu package doesn't yet include the Debian patches that enable this very neat trick.
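The mechanism behind the trick is the kernel's binfmt_misc facility: a statically linked qemu user-mode emulator is registered as the interpreter for foreign-architecture ELF binaries and copied into the chroot, after which the kernel transparently runs the chroot's binaries under emulation. Roughly (a sketch only; the chroot path is made up and the package/binary names vary by distro and era):

# statically linked user-mode emulator, registered with binfmt_misc
# (the patched Debian packages handle the registration for you)
cp /usr/bin/qemu-x86_64-static /virtdisks/chroot-amd64/usr/bin/
# 64-bit binaries inside the chroot now run transparently under emulation
chroot /virtdisks/chroot-amd64 /bin/bash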
- For scenarios where the cli isn't optimal, you can use one of several GUI front-ends written for qemu. I liked qemuctl and qemu-launcher (for different purposes).
Comments
Perhaps a question better suited to the Proxmox forums or mailing list
I'll give you some sort of answer, although TBH I'm not completely clear exactly what you're asking. Do you mean: can PVE clusters share resources from one hardware node to another? If so, AFAIK the answer is no. I'm fairly sure that each hardware node is basically independent, but VMs can be migrated from node to node very easily and with no downtime.
Having said that, I'm no technical PVE expert. As I said, I suggest you ask on the PVE forums or mailing list. The devs are pretty active on both, and there's a heap of pretty intelligent and knowledgeable users too.
You can use the default VM build
I have also successfully used the OVF build in VirtualBox. Obviously installing from ISO is another option...
Probably unlikely to get an answer here
These are the TurnKey Linux forums, and other than this blog post it's unlikely that you'll be able to get help with your issue here. I suggest posting somewhere else more directly relevant...