FAQ - Is the disk speed of
KVM-QEMU sufficient for Aerospike?
Openstack is an API frontend to a range of back-end virtualization solutions. The most common deployment uses
KVM-QEMU for virtualization, but
Openstack is capable of managing other back ends, such as
Xen virtualization. This article describes some of the common pitfalls and potential optimizations when running the standard
KVM-QEMU virtualization with Openstack.
For background, prior to CPUs supporting hardware resource separation via the
VT-x extension, virtualization meant simulating an entire processor and its RAM in software. This was extremely slow. The Linux implementation of this approach was called
QEMU. As CPUs began to support the
VT-x virtualization extension, another solution, which sits on top of
QEMU, was created. This solution, called
KVM, makes use of
VT-x wherever possible and, where not, allows
QEMU to continue emulating the rest of the stack.
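As a quick way to confirm the background above on a given host, the CPU flags in /proc/cpuinfo show whether hardware virtualization is available (vmx for Intel VT-x, svm for AMD-V). A minimal sketch:

```shell
# Count logical CPUs advertising hardware virtualization support.
# vmx = Intel VT-x, svm = AMD-V. grep -c exits non-zero on no match,
# so "|| true" keeps the script going and count falls back to 0.
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
count=${count:-0}

if [ "$count" -gt 0 ]; then
    echo "Hardware virtualization available on $count logical CPUs"
else
    echo "No VT-x/AMD-V detected; QEMU would run in full software emulation"
fi
```

Note that inside an existing VM these flags are only visible if the hypervisor exposes nested virtualization.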
There is another extension, called
VT-d, which allows the system to perform PCI passthrough. In other words,
PCI(e) cards can be passed from the host operating system directly to the virtual machine. This means the virtual machine can have an almost direct access path to the CPU and RAM (via
VT-x) as well as to certain
PCI(e) cards (via
VT-d). By combining these technologies, virtual machines can achieve near-native speed.
While the networking stack, even when virtualized, is usually fast, the common bottleneck of virtualization is emulated disk access speed. Native network access speeds can be attained if the system has multiple network cards: it can retain one card for the host OS and dedicate another, via
PCI passthrough, to the virtual machine.
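As an illustration of dedicating a device to a guest, the host can unbind a PCI device from its native driver and hand it to QEMU via the vfio-pci driver. The PCI address 0000:03:00.0 and the vendor/device ID pair below are placeholders; substitute the values reported by `lspci -nn` on your system, and note that VT-d/IOMMU must be enabled in the BIOS and on the kernel command line:

```shell
# Placeholder PCI address and vendor/device IDs -- find yours with: lspci -nn
modprobe vfio-pci
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 10de 1c82    > /sys/bus/pci/drivers/vfio-pci/new_id

# Start the guest with the device passed through directly.
qemu-system-x86_64 -enable-kvm -m 8G \
    -device vfio-pci,host=03:00.0 \
    -drive file=guest.img,format=raw
```

In an Openstack deployment the same result is normally achieved through libvirt and the PCI passthrough whitelist rather than by invoking QEMU by hand.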
Once network bottlenecks have been addressed, the question remains: is the disk speed of
KVM-QEMU sufficient for Aerospike?
KVM-QEMU supports a large number of disk formats. This section focuses on the most common formats, and how to best utilize them.
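Before committing to a disk configuration, it is worth measuring it from inside the guest. The fio invocation below is an illustrative sketch; the device path /dev/vdb is a placeholder for the VM's data disk, and fio is assumed to be installed. Aerospike also provides the ACT tool for certifying drives under database-like load.

```shell
# Illustrative random-write benchmark against the guest's data disk.
# WARNING: writing to a raw device destroys any data on it.
fio --name=randwrite --filename=/dev/vdb \
    --rw=randwrite --bs=128k --iodepth=32 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```

Comparing the same run on the bare host and inside the VM shows directly how much the virtualization layer costs for a given disk format.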
It should be noted that copy-on-write filesystems, such as
btrfs, are inherently slow. When using
btrfs, it is necessary to disable the CoW functionality on the directory where VM disk images are held. Further optimisations in access speed to the VM disk image can be obtained by disabling filesystem features such as the last-access and last-modification timestamps.
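A sketch of those two optimisations, assuming the images live under /var/lib/libvirt/images (a common default, used here as a placeholder):

```shell
# Disable copy-on-write for the image directory (btrfs only).
# The +C attribute only affects files created after it is set,
# so apply it while the directory is still empty.
chattr +C /var/lib/libvirt/images

# Remount without access-time updates to avoid metadata writes
# on every read of the image file.
mount -o remount,noatime,nodiratime /var/lib/libvirt/images
```

Existing images can be converted by copying them into the directory after the attribute is set.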
| Format | Description |
|--------|-------------|
| qcow2 | The default format; stands for "QEMU Copy On Write, version 2". |
| raw | The raw disk image. If created with very small fragmentation of the disk image file, this option is significantly faster than qcow2. |
| physical disk/partition | Raw access to a mapped physical disk or partition. As there is no file or filesystem to go through, this makes disk access inside the VM as close to native as possible. Such raw devices can also be passed through directly using VT-d. |
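To illustrate the difference between the file-backed formats, a raw image can be created fully preallocated so the backing file is contiguous rather than sparse. The file names and size below are placeholders:

```shell
# Fully preallocated raw image -- avoids fragmentation and the
# allocate-on-first-write penalty of a sparse file.
qemu-img create -f raw -o preallocation=full aerospike-data.img 100G

# For comparison, the default qcow2 format adds a translation layer
# and copy-on-write semantics on top of the backing file.
qemu-img create -f qcow2 aerospike-data.qcow2 100G
```

The preallocated raw file trades disk space up front for predictable write latency, which matters for a latency-sensitive workload like Aerospike.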
Any formats using a translation layer are inherently slow. Formats not designed for
QEMU are forced to use such a layer. Examples of these formats are listed below; they are not considered suitable for high-speed applications such as Aerospike:
vdi - VirtualBox Virtual Disk Image
vhd - VirtualPC Virtual Hard Disk
vmdk - VMware Virtual Machine Disk
While this list is not exhaustive, as seen above, any
CoW filesystem and file format should be avoided, as they do not provide adequate levels of storage performance. Raw formats are the most efficient (with
VT-d being the most direct and therefore the most efficient).
Using virtual disks over the network is going to be slow, regardless of the format, as it is limited by the network speed to the host hardware. It is therefore not recommended for Aerospike.
All storage should be locally connected where possible to allow for optimal speeds. Using disks over the network also means that network failures can affect disk access, resulting in storage timeouts, latency and, in extremis, crashes.
For more information on supported libvirt formats, refer to this page.
Libvirt is a virtualization library which manages
KVM-QEMU. It sits between management applications, such as Openstack, and the hypervisor.
24 June 2019