Qemu + KVM is the future of open source virtualization

Open source virtualization has been evolving dramatically over the last few years. Incumbent proprietary platforms such as VMware still hold the throne in many areas, but open source competitors are gaining ground fast on their way to ubiquity. Things are a-changing in the land of virtualization.

Right now we have three contenders to the (open source) throne battling it out for supremacy:

  • Xen: backed by Citrix
  • VirtualBox: backed by Oracle/Sun
  • KVM: backed by Red Hat

Neither Citrix nor Oracle has yet established a good track record with regard to open source, so I recently took the time to dig a little deeper into KVM and qemu, two projects which I suspect are going to be a huge part of the future of open source virtualization.

  1. KVM: at its essence KVM is just a kernel infrastructure. It's not a product. The developers forked qemu to hook into the KVM kernel infrastructure and take advantage of it. Eventually the fork was merged back into mainline qemu.

  2. qemu: powers nearly all Linux virtualization technologies behind the scenes.

    Qemu is a spectacular piece of software written largely by Fabrice Bellard, a programming genius who also wrote TCC (the Tiny C Compiler), FFmpeg and many other less famous programs.

Highlights from my findings:

  • I'm pretty sure KVM is the future of open source virtualization. Not Xen or VirtualBox.

    KVM is a latecomer to the virtualization game but its technical approach is superior and many believe it will win out eventually over other virtualization technologies and become the transparent virtualization standard of the future. Personally, I think that's a pretty likely outcome at this point.

    The primary advantages KVM has going for it are simplicity and leverage. By taking advantage of the hardware-level virtualization support in new processors, KVM can be many times simpler than competing technologies such as Xen or VMware while achieving equivalent or superior performance.

    KVM leverages the Linux kernel in a way that allows its developers to focus their resources more efficiently on actual virtualization-related development. The proof is in the pudding. With just a fraction of the resources their competitors have, the KVM team implemented advanced features such as live migration and SMP support (e.g., up to 255 virtual CPUs regardless of how many CPUs the host has). Even recursive virtualization should be possible (I know patches were written to get it to work).
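
As a quick sanity check before reaching for KVM, a shell sketch can tell you whether the host has what KVM needs. This assumes a Linux host with /proc mounted; the vmx and svm CPU flags correspond to Intel VT-x and AMD-V respectively:

```shell
# Sketch: check whether this host can run KVM (assumes Linux).
# Intel VT-x shows up in /proc/cpuinfo as the "vmx" flag, AMD-V as "svm".
if grep -qE '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
    hw_virt=yes
else
    hw_virt=no
fi

# /dev/kvm only appears once the kvm-intel or kvm-amd module is loaded
if [ -e /dev/kvm ]; then
    kvm_ready=yes
else
    kvm_ready=no
fi

echo "hardware virtualization: $hw_virt, kvm module loaded: $kvm_ready"
```

If the CPU flag is present but /dev/kvm is missing, loading the appropriate kvm module is usually all that's needed.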

  • libvirt: Red Hat is spearheading development of a backend-agnostic management layer for virtualization (libvirt).

    The fundamental design is pretty good, but for our purposes (i.e., TurnKey development) libvirt and its higher-level friends just get in the way. The really useful parts are the qemu-based primitives, which are easier to use directly than through the libvirt stack.

  • qemu supports multiple mechanisms for plugging guest NICs together - via a TCP/UDP port, by attaching to a tap/tun device, or even by plugging into VDE (virtual distributed ethernet) - which allows arbitrarily complex virtual network topologies to be constructed that span multiple physical machines. I've played around with a few test configurations and it's pretty neat.
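
As a sketch of what that plumbing looks like in practice (the image names and port number here are invented, and the commands are echoed rather than executed, since they need prepared disk images to actually run):

```shell
# Hypothetical sketch: wire two guests together over a TCP socket so
# they share a private ethernet segment. Image names and the port
# number are invented; the commands are printed, not executed.
vm_a="qemu -hda guest-a.img -net nic -net socket,listen=:8010"
vm_b="qemu -hda guest-b.img -net nic -net socket,connect=127.0.0.1:8010"

# A tap variant instead plugs a guest into a host-side tap device,
# which can then be bridged to the physical network:
vm_tap="qemu -hda guest-a.img -net nic -net tap,ifname=tap0"

echo "$vm_a"
echo "$vm_b"
echo "$vm_tap"
```

The socket form needs no privileges at all, which makes it handy for quick multi-VM experiments; the tap form is what you'd use to put guests on a real network segment.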

Qemu vs VirtualBox

Though it's hard to tell, VirtualBox was forked from an earlier version of qemu. The main thing VirtualBox added was sophisticated binary recompilation that allowed fast virtualization without requiring support from the underlying hardware. Today most modern PC processors from Intel and AMD have hardware support for virtualization, which makes binary recompilation techniques unnecessary. This allows hypervisors such as KVM to achieve the same results using a simpler hardware-enabled approach with superior performance.

VirtualBox also added a user-friendly GUI shell that simplifies setting up virtual machines in a desktop environment.

In general, I was favorably surprised by qemu. If VirtualBox is the Notepad of open source virtualization, qemu is Vim. In other words, if you're looking for a Swiss Army virtualization knife you can get hacking at, qemu beats VirtualBox many times over.

Random insights:

  • With KVM, qemu's performance is equivalent to or better than VirtualBox's

  • If your hardware doesn't support KVM, performance won't be as good as VirtualBox's, but with the KQEMU kernel acceleration module it's acceptable (e.g., on my system the LAMP appliance boots in 30 seconds with KQEMU vs 20 seconds in KVM mode)

  • Qemu is a better primitive than VirtualBox:

    • Good defaults are chosen if you don't specify otherwise
    • It's simpler to launch VMs from the cli:

      # boot appliance.iso live
      qemu -cdrom appliance.iso

      # install appliance.iso to test.img
      qemu-img create /virtdisks/test.img 4G
      qemu -cdrom appliance.iso -hda /virtdisks/test.img
    • Supports launching VMs as detached daemon processes
    • Supports VNC graphics mode with optional encryption and authentication
    • Easier to script
    • Supports a scriptable VM monitor
    • Can work without root privileges, unless you want to use tap/tun interfaces

      Also, it wouldn't be too complicated to write a wrapper that allows users to safely run qemu + tap/tun without root privileges.
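
A minimal sketch of what such a wrapper could do (the user name and addresses are hypothetical, and it's shown as a dry run that just prints the commands): root pre-creates a tap device owned by the user via tunctl, after which qemu itself never needs root:

```shell
# Hypothetical wrapper sketch (dry run: commands are printed, not
# executed). Root creates a tap device owned by an unprivileged user
# once; qemu can then attach to it without root.
user=alice          # hypothetical unprivileged user
tapdev=tap0

# one-time setup, run as root (tunctl comes from uml-utilities):
setup_cmd="tunctl -u $user -t $tapdev && ifconfig $tapdev 192.168.100.1 up"

# what the user runs afterwards, with no root required:
run_cmd="qemu -cdrom appliance.iso -net nic -net tap,ifname=$tapdev,script=no"

echo "$setup_cmd"
echo "$run_cmd"
```

The script=no option matters here: it stops qemu from trying to run its privileged ifup script, which is what would otherwise require root.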

  • Easy-to-use default usermode networking

    • built-in NAT, DHCP and TFTP servers
    • the host can be reached from the guest at 10.0.2.2 (qemu's default usermode networking gateway)
    • supports forwarding host ports to guest (-redir option)
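
For example (appliance.iso is a hypothetical image, and the command is printed rather than executed), forwarding host port 8080 to the guest's web server with usermode networking looks like this:

```shell
# Sketch: usermode networking with -redir forwarding host port 8080
# to port 80 in the guest. appliance.iso is a hypothetical image;
# the command is printed rather than executed.
cmd="qemu -cdrom appliance.iso -net nic -net user -redir tcp:8080::80"
echo "$cmd"
# once the guest is up, http://localhost:8080/ reaches its web server
```
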
  • Qemu can boot 64-bit guests on a 32-bit host using processor emulation. I tried this and it works, though it's pretty slow:

    qemu-system-x86_64 -cdrom ./lenny-amd64-standard.iso
  • Qemu can be configured to allow transparent chrooting into non-native chroots without launching a VM (e.g., chroot into a 64-bit debootstrap from a 32-bit system)

    It's a bit tricky to get working though, as the default Ubuntu package doesn't yet include the Debian patches that enable this very neat trick.
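
The rough shape of the trick (the chroot path here is hypothetical, and it's shown as a dry run that just prints the commands): once binfmt_misc maps 64-bit ELF binaries to a statically linked qemu-x86_64 user-mode emulator, entering the chroot is just a copy plus a chroot:

```shell
# Hypothetical sketch (dry run; the chroot path is invented). With
# binfmt_misc routing 64-bit ELF binaries through a static
# qemu-x86_64, a 64-bit chroot works transparently on a 32-bit host.
chroot_dir=/chroots/lenny-amd64

# the interpreter must be visible inside the chroot:
copy_cmd="cp /usr/bin/qemu-x86_64 $chroot_dir/usr/bin/"
# after that, chroot behaves as if the host were 64-bit:
enter_cmd="chroot $chroot_dir /bin/bash"

echo "$copy_cmd"
echo "$enter_cmd"
```

The binfmt_misc registration itself is what the Debian patches automate; without them you'd have to write the magic/interpreter mapping into /proc/sys/fs/binfmt_misc/register by hand.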

  • For scenarios where the cli isn't optimal, you can use one of several GUI front-ends written for qemu. I liked qemuctl and qemu-launcher (for different purposes).



peerx's picture

I use Turnkey Deki, lamp, joomla, gallery and Core virtualized on a Proxmox host. The Proxmox host is clustered with a second host.

The Turnkey distributions are very easy to install as KVM virtual machines. The GUI of the Proxmox host makes installation and monitoring very easy.

The Proxmox VE is free open source. Web site: http://pve.proxmox.com/wiki/Main_Page

I think Proxmox and Turnkey make an excellent combination.


Liraz Siri's picture

Thanks for the feedback! I'm hearing more and more good things about Proxmox. One of the neat things we're planning to look into in the future is making a meta TurnKey appliance for running the other appliances. The main challenge here is surveying the different open source solutions, integrating the best of them and then testing that everything works. Oh, and it wouldn't hurt to put together some documentation/tutorials on how to use it.

If you're interested in this sort of thing, check out this thread on the forum.

GarryPB's picture

I'm interested in your comment 'The Turnkey distributions are very easy to install as KVM virtual machines'. I'm just hoping you have some instructions on how to do this, or could point me in the right direction of where to look. I have a few VMware VMs I'd like to try on Proxmox/KVM, one being a Turnkey VM. Also, have you tried to convert any to vz containers?



Jeremy Davis's picture

I can say from experience that to install to Proxmox is as easy as using any other GUI driven virtualisation solution (think VirtualBox, VMware, etc). Only difference is that you use a WebUI (from a web browser on a remote machine), rather than using a desktop GUI.

I assume you have ProxmoxVE already installed, but just in case (and for the benefit of others reading this post) there is all the info you should need (including videos) here. This page also includes clear instructions on loading ISOs (KVM) and templates (the term used to describe OpenVZ VM images).

IMO one of the best things about using ProxmoxVE is that not only do you have access to KVM, but you can also use OpenVZ (container virtualisation). I only use KVM for Windows OS virtualisation. Whilst OpenVZ only supports Linux OSs and some of the more advanced features of KVM are missing, I find it great for maximising hardware use (such as overcommitting RAM - although this is also available if you use the latest ProxmoxVE kernel - but you lose OpenVZ). As I have mentioned elsewhere on this forum, I have a TKL fileserver appliance running under OpenVZ on my Proxmox server and it uses about 40MB of RAM under load! Try doing that anywhere else! You can make your own OpenVZ templates using the instructions I posted on the TKL wiki here, or another TKL user, xtrac568, has posted some pre-created templates; have a look here.

Alternatively, if you're not scared of the CLI and/or you don't yet have ProxmoxVE installed and want to have a play with TKL appliances in KVM have a look at Claes' "recipe" over here. Also applicable if you would like to do it on your ProxmoxVE server via the CLI locally (or via SSH) although I think that somewhat defeats the purpose of Proxmox. I'm also not sure if a KVM VM will show in the Proxmox UI if installed in this fashion? If you try this route I'd be interested to know if the CLI created VMs are automatically populated in the WebUI.

GarryPB's picture

Hi Jed,  thanks for the reply.

I have been using VMware Server for around 3 years to host a radius & accounting server, a web hosting server and a secondary DNS server. I recently upgraded to VMware Server 2 and am getting tired of reinstalling every time the Linux kernel changes. On top of this, the fact that the VMware Server 2 console is very flaky in a modern browser (non-Windows IE) has led me to research other virtualization software.

After many hours of reading, I came to the conclusion that OpenVZ would be the first I tested, and as I was also looking at KVM/qemu for any windose OS, Proxmox sounded ideal. When I found the reference here at TKL, I was hoping that TKL appliances could be easily converted to OpenVZ containers, and it appears they can be, especially for those perhaps a little more techie than me.

I have been playing with Proxmox for 2 weeks now (when I can find the time) and I can certainly vouch for ProxmoxVE. It covers both OpenVZ & KVM, then wraps them into a tightly integrated OS with a purpose-modified kernel, all packaged with a convenient and functional web browser management interface. You also need the CLI, but that's good too, as I learn more when I use the CLI. It would also have to be one of the easiest OSs I've ever installed, and is open source.

I've also just recently started looking at the TKL appliances, over my normal server install - base OS (Ubuntu or Debian server) built up to what we or the customer require. I really like the idea of the TKL appliance, for me, being something that I could quickly deploy and know will work, as it has been tried and tested. Plus it could provide uniformity for when I or my staff are supporting customers.

So far, for me, everything is looking promising: TKL - Ubuntu/Debian, Proxmox - Debian base - OpenVZ & KVM. But as I'm a grass roots ISP, since 1997, I mainly provide Internet access and basic web hosting & email. This is a very tight market today, and has been for quite a while now, so I'm looking at other services I can offer my clients that may help keep me in business for another 10 years.

My biggest problem is: I'm a sports fanatic, I compete in IM triathlons, cycling & adventure racing, and this has meant I have never devoted the time that I would need to become more IT proficient. So I'm always looking for things that will make my job easier.

Cheers Garry

Srivathsan M's picture

I really don't care if software is open source or not. All I care about is that it's legally free to use. I played with KVM on Ubuntu 8.04; while I was able to install and run my guest OSes, I really felt the user experience was a bit cumbersome and nerdy.

Contrast that with VirtualBox 3.1.x and the latest VMware Player 3.0.x; I could get my guest OS up in a jiffy. With the QEMU+KVM combo, I had to struggle a bit to get my Windows XP guest up and running, and there was also an issue with specifying the RAM available to the guest as exactly 512MB. I didn't have any such hassles with VirtualBox - almost everything worked out of the box.

I should also admit that my KVM + QEMU experience is at least 1.5 years old and I haven't tried it since, so there could have been improvements.

While I agree with you that technical superiority is important, user experience, especially that of the novice user is also equally important.  Otherwise, we have another OS/2 in the making.

Liraz Siri's picture

I think you may be comparing apples to oranges. Because open source is collaborative in nature it encourages modularity in design, and that can make it tricky to compare "products" directly. You can have one group of experts develop virtualization back-ends (e.g., KVM, Xen, OpenVZ, etc.), and another group of people with user interface skills develop front-ends that tie it all together to provide a good user experience. With the right software architecture it becomes possible to mix and match different components to align them best for a specific type of user and usage scenario. That last part is what we do at TurnKey, BTW: research which components are available and mix and match a selection that we think best fits our users' needs.

This way you have an entire ecosystem of people developing or contributing (e.g., with patches) to the development of software solutions without having to start from scratch and reinvent the wheel every time. Then innovation happens faster, to everyone's advantage - except maybe the proprietary vendors this process disrupts.

I agree that many users only care about the results and not the process BUT keep in mind they are tightly related.

Craig's picture

Qemu's new slogan: "Qemu -- it's not for retards".

Thomas Goirand's picture

Just one thing. When you wrote:

  • Xen: backed by Citrix
  • VirtualBox: backed by Oracle/Sun
  • KVM: backed by Red Hat

it's not really fair. Xen is backed by Intel, AMD, Oracle, Samsung and Fujitsu at least (the last Xen Summit was at Intel Shanghai, the next one will be at AMD in the USA). For example, Intel engineers have developed special drivers for the new type of hardware-supported network I/O.


Also, Xen does use Qemu when using hardware emulation (for anything that is not Linux, OpenSolaris or NetBSD). To be exact, it uses Qemu's hardware device emulation. Many of the Xen patches to Qemu went back to the upstream Qemu authors as well.

There's support for nested virtualization in Xen (what you call "recursive" virtualization here). Some optimizations have even been made. You can see this in the last Xen Summit conference here: http://www.xen.org/xensummit/xensummit_fall_2009.html


Xen is now in sync with the kernel from kernel.org (everyone is moving to use the pv_ops tree that Jeremy maintains). There's good hope that Linux from kernel.org will be dom0 capable soon. Finally, KVM and Xen are really similar internally, using the same functions in the Linux kernel (as Linus and others insist that having two APIs doing the same thing is a bad idea). I believe both KVM and Xen are going to be around for a very long time and compete in the virtualization market, with Xen continuing to lead on the server side.


All this being said, I respect your point of view that KVM will be the future (many people have said the same), but it's not my view.

Liraz Siri's picture

Thanks for the feedback Thomas. It's great to hear from someone who really knows their Xen! (and happens to be running a big installation)

As you point out, I think what's already happening behind the scenes is that these two open source projects are converging somewhat. That's probably a good thing.

I remember reading that Xen would have a hard time getting into the mainline Linux kernel due to its dependency on the Mach micro-kernel. If Xen does make it into the Linux kernel, it will be on equal footing with KVM; otherwise I think it's at a big disadvantage. If you are running a machine dedicated to virtualization it may not be a big deal to have to run it with a dom0 kernel, but otherwise it can be a drag. That's one of the reasons I personally don't have as much hands-on experience with Xen as with KVM: it doesn't run on my workstation while KVM does. Because it's easier to get started with KVM, I think you'll see more people prefer to continue with it when they get serious about virtualization.

But that may change if Xen makes it into the mainline kernel...

Evan's picture

Can someone please tell me if there is a way on Proxmox to share resources between machines? I've been digging all over and can't seem to find a straight answer.

Jeremy Davis's picture

I will give you some sort of answer, although TBH I'm not completely clear on exactly what you are asking. Do you mean can PVE clusters share resources from one hardware node to another? If so, AFAIK the answer is no. I'm fairly sure that each hardware node is basically independent, but VMs can be migrated from node to node very easily and with no downtime.

Having said that, I'm no technical PVE expert. As I said, I suggest you ask on the PVE forums or mailing list. The devs are pretty active on both, and there is also a heap of pretty intelligent and knowledgeable users.

xen's picture


Do you provide a template for VirtualBox?

Kind regards

Jeremy Davis's picture

I have also successfully used the OVF build in VirtualBox. Obviously installing from ISO is another option...

G Raju's picture


I have installed KVM and set up some Windows and Linux VMs locally, but when I create an NFS storage pool I get this error:


Error creating pool: Could not build storage pool: cannot create path '/mnt/nfs': Unknown error 18446744073709551615

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createpool.py", line 480, in _async_pool_create
    poolobj = self._pool.install(create=True, meter=meter, build=build)
  File "/usr/lib/python2.6/site-packages/virtinst/Storage.py", line 489, in install
    raise RuntimeError(errmsg)
RuntimeError: Could not build storage pool: cannot create path '/mnt/nfs': Unknown error 18446744073709551615


Can you help me solve this problem? I will be waiting for your response.



G Raju's picture


I am getting this error while importing a VM from a local drive to a Gluster volume:


Unable to complete install: 'internal error process exited while connecting to monitor: char device redirected to /dev/pts/1
qemu-kvm: -drive file=/mnt/rvol/RHEL_VM_Test.img,if=none,id=drive-virtio-disk0,format=raw,cache=none: could not open disk image /mnt/rvol/RHEL_VM_Test.img: Permission denied

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 1910, in do_install
    guest.start_install(False, meter=meter)
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1223, in start_install
  File "/usr/lib/python2.6/site-packages/virtinst/Guest.py", line 1291, in _create_guest
    dom = self.conn.createLinux(start_xml or final_xml, 0)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2622, in createLinux
    if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: char device redirected to /dev/pts/1
qemu-kvm: -drive file=/mnt/rvol/RHEL_VM_Test.img,if=none,id=drive-virtio-disk0,format=raw,cache=none: could not open disk image /mnt/rvol/RHEL_VM_Test.img: Permission denied


Can you help me solve this problem? I will be waiting for your response.


Jeremy Davis's picture

These are the TurnKey Linux forums, and other than this blog post, it's unlikely that you'll be able to get any help with your issues here. I suggest posting somewhere else more directly relevant...

Wyatt Ward's picture

For years I'd been trying to get a virtualised Darwin kernel running (the underpinnings of OS X). VirtualBox would always fail at some point in the process, but today I was finally able to get qemu to do it amazingly smoothly. QEMU's my new favorite!

