FAQ - Is the disk speed of OpenStack/KVM-QEMU sufficient for Aerospike?

OpenStack is an API frontend to a range of back-end virtualization solutions. The most common deployment uses KVM-QEMU for virtualization, but OpenStack can also manage other back ends, such as Docker and Xen. This article describes some of the common pitfalls and potential optimizations when running the standard KVM-QEMU virtualization with OpenStack.

For background: before CPUs supported hardware-assisted virtualization via the VT-x extension, virtualizing meant simulating an entire processor and its RAM in software, which was extremely slow. The Linux implementation of this approach is QEMU. As CPUs began to support VT-x, another solution, KVM, was created on top of QEMU. KVM makes use of VT-x wherever possible and, where not, falls back on QEMU to emulate the rest of the stack.
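Whether the host CPU exposes these extensions can be checked from Linux. A minimal sketch (the CPU flag is `vmx` for Intel VT-x; `svm` is the AMD equivalent):

```shell
# Count the CPU virtualization-extension flags on the host.
# vmx = Intel VT-x, svm = AMD-V. A count of 0 means only the slow,
# fully software-emulated QEMU path is available.
grep -Ec '(vmx|svm)' /proc/cpuinfo || true
```

The command prints one number: the count of logical CPUs advertising a virtualization flag.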

There is another extension, called VT-d, which allows the system to perform PCI passthrough. In other words, with VT-d, PCI(e) cards can be passed from the host operating system directly through to the virtual machine. The virtual machine can therefore have an almost direct access path to the CPU and RAM (via VT-x) as well as to certain PCI(e) cards (via VT-d). By combining these technologies, virtual machines can obtain near-native speed.
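As an illustration, in a libvirt-managed VM a VT-d passthrough is declared with a `<hostdev>` element in the domain XML. The PCI address below is a placeholder, not a real device:

```xml
<!-- Illustrative libvirt domain XML fragment: pass the PCI(e) device at
     host address 0000:03:00.0 through to the guest via VT-d.
     The domain/bus/slot/function values are placeholders. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```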

While the networking stack is usually fast even when virtualized, the common bottleneck of virtualization is emulated disk access speed. Native network speeds can be attained if the system has multiple network cards: the host OS retains one card and dedicates another to the virtual machine via PCI passthrough.

When network bottlenecks have been optimized, the question remains: is the disk speed of OpenStack/KVM-QEMU sufficient for Aerospike?


KVM-QEMU supports a large number of disk formats. This section focuses on the most common formats, and how to best utilize them.

It should be noted that copy-on-write filesystems, such as btrfs, are inherently slow. When using btrfs, it is necessary to disable the CoW functionality on the directory where VM disk images are held. Further optimizations in access speed to the VM disk image can be obtained by disabling filesystem features such as the last-access and last-modification timestamps.
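On btrfs, both of these can be addressed when mounting the filesystem that holds the images. A sketch, assuming the images live under `/var/lib/libvirt/images` (device and path are illustrative):

```shell
# Illustrative /etc/fstab entry: "nodatacow" disables copy-on-write for
# the volume, "noatime" stops last-access-timestamp updates.
/dev/sdb1  /var/lib/libvirt/images  btrfs  noatime,nodatacow  0  0

# Alternatively, disable CoW on just the image directory; this affects
# newly created files only:
#   chattr +C /var/lib/libvirt/images
```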

  • qcow2 - The default format, short for "QEMU copy-on-write". Like any copy-on-write system, it is drastically slow and should not be used for high speed applications.
  • raw-file - A raw disk image file. If created with very little fragmentation of the image file on the host, this option is significantly faster than qcow2.
  • raw-disk - Raw access to a mapped physical disk, partition or LVM logical volume (LV). With no image file or host filesystem in the path, disk access inside the VM is as close to native as possible. KVM-QEMU presents the disk, partition or LV as a disk within the VM.
  • VT-d - Uses PCI passthrough to pass a disk controller through to the VM. This is the fastest option possible, as the VM gets direct access to the controller and its attached disks.
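To benefit from the raw-file option, the image should be preallocated so the host filesystem can lay it out with minimal fragmentation. A minimal sketch using `fallocate` (path and size are illustrative; `qemu-img create -f raw -o preallocation=full` achieves the same from the QEMU toolchain):

```shell
# Reserve the image's full size up front so the host filesystem can
# allocate it contiguously rather than growing it piecemeal.
# The path and size below are illustrative only.
IMG=/tmp/aerospike-vm.raw
fallocate -l 64M "$IMG"
stat -c '%s' "$IMG"   # prints 67108864 (64 MiB)
```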

Any format that relies on a translation layer is inherently slow, and formats not designed for QEMU are forced to use one. Examples of such formats are listed below. These formats are not considered suitable for high speed applications such as Aerospike:

  • vdi - VirtualBox Virtual Disk Image
  • vhd - VirtualPC Virtual Hard Disk
  • Virtual VFAT
  • vmdk - VMware Virtual Machine Disk


  • While this list is not exhaustive, as seen above, any CoW filesystem or file format should be avoided, as they do not provide adequate storage performance. Raw formats are the most efficient, with VT-d being the most direct and therefore the fastest.

  • Using virtual disks over the network is going to be slow, regardless of the format, as it is limited by the network speed to the host hardware. It is therefore not recommended for Aerospike.

  • All storage should be locally connected where possible to allow for optimal speeds. Using disks over the network means network failures can affect disk access, resulting in storage timeouts, latency and, in extremis, crashes.

  • For more information on supported libvirt formats, refer to this page.

  • Libvirt is a virtualization library which manages KVM-QEMU. It sits between KVM-QEMU and Openstack.
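The raw-disk option above maps into the libvirt layer as a `<disk>` element of type `block`. A sketch, assuming a hypothetical LVM logical volume `/dev/vg0/aerospike`:

```xml
<!-- Illustrative libvirt domain XML: attach a host block device (here a
     hypothetical LVM LV) to the guest as a raw virtio disk. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/aerospike'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Setting `cache='none'` bypasses the host page cache, which is generally preferred for database workloads that manage their own I/O.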




24 June 2019

© 2015 Copyright Aerospike, Inc. | All rights reserved. Creators of the Aerospike Database.