Memory backings in Linux Virtualization


Background

When diving into high-performance virtualization, memory is often an area of focus. In this article, I will attempt to explain simply why memory is a concern, and the various ways that are available to you to fix it. This way, you may pick which solution fits your needs.

Memory management is a key responsibility of your operating system (OS). Your OS takes the memory in your system and divides it up into pages. As programs request memory, your OS assigns pages to said programs.

This method of dividing up memory pages works for many applications. However, it can drastically slow down programs which require a large chunk of memory, such as virtual machines (VMs). The reason for this is the default size of your memory pages: most systems use 4 KB pages (some architectures support larger base pages, but 4 KB is by far the most common). Using that 4 KB page size, let's find out how many pages an 8 GB virtual machine would need.

  1. 8 GB = 8192 MB = 8,388,608 KB # Convert 8 GB to KB for matching units
  2. 8,388,608 KB (VM memory) / 4 KB (page size) = 2,097,152 pages # How many 4 KB pages it takes to cover an 8 GB VM

So, after some basic arithmetic, we can see that your system kernel would need to allocate over 2 million pages for the VM! This is a huge number of memory pages, and the overhead of tracking and mapping so many pages results in stuttering and slowdown in our VMs as the Linux kernel attempts to map all of them. So what's the solution to this problem? Enter:
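You can check this arithmetic from a shell: `getconf PAGESIZE` reports your system's base page size in bytes, and a little shell math reproduces the page count above.

```shell
# Base page size in bytes (4096 on most x86_64 systems)
getconf PAGESIZE

# Pages needed for an 8 GiB VM at a 4 KiB page size
echo $(( 8 * 1024 * 1024 * 1024 / 4096 ))   # 2097152
```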

What are huge pages?

Huge Pages refers to the practice of instructing your kernel to allocate larger pages than normal. By increasing the page size in the equation above, you reduce the number of pages needed. On x86_64, huge pages are typically 2 MB, and can be as large as 1 GB, but each size has both pros and cons.

Considerations

Without diving too much into the intricacies of memory management, please note that applications are always assigned whole pages. Basically, this means that if you divide your system's memory entirely into 1 GB pages and your web browser asks for 100 MB of memory, the OS will be forced to assign it an entire 1 GB page. Your system will quickly run out of memory, and you'll be banging your head against the wall trying to figure out why your system reports 10 GB of memory used right after boot.1

Types of huge pages

There are three main ways to implement huge pages on your system: transparent huge pages, dynamic huge pages, and static huge pages. Each one of these affects your system differently and has its own unique set of pros and cons, so you should read about all three of them before deciding which one to use. However, I do recommend and use dynamic huge pages myself.

Transparent huge pages

What are transparent huge pages? 2 Transparent huge pages are huge pages that the kernel assembles automatically, backing application memory with larger pages whenever it can (applications can also request them explicitly via madvise). A huge benefit to transparent huge pages is that, should there not be enough contiguous memory for the larger pages, your application will automatically be assigned a mixture of huge and normal pages. This is also enabled by default on most systems. tl;dr:

Pros

  • Your VM will only fail to start if the system genuinely lacks enough free memory.
  • Very little/no setup.
  • Memory is still available to the host OS while the guest is shutdown.

Cons

  • Not as high performance as the other two huge page options.
  • Can cause stuttering in the guest OS as the Linux kernel attempts to defragment memory.
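To see whether transparent huge pages are active on your machine, you can read the kernel's standard sysfs and procfs interfaces (the bracketed value in the first file is the current mode):

```shell
# Current THP mode: typically [always] or [madvise] by default
cat /sys/kernel/mm/transparent_hugepage/enabled

# How much anonymous memory is currently backed by THP, in kB
grep AnonHugePages /proc/meminfo
```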

Static huge pages

Static huge pages are huge pages which are allocated on startup of the host OS. These are not recommended for the single reason that most of the allocated memory will become unavailable to applications running on the host OS. In my case, allocating 8 GB of static huge pages caused my 16 GB PC to report 10 GB of memory used on boot. tl;dr:

Pros

  • The fastest option for high-performance VMs due to the memory basically being reserved at boot.
  • No worries about enough memory being available on VM startup.

Cons

  • The memory you've reserved is unavailable to the host OS at all times, even while the VM is shut down.
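As a sketch, static huge pages are reserved through kernel command-line parameters. The sizing below (4096 × 2 MB pages = 8 GB, matching the example VM above) is illustrative; adjust it to your own VM's memory.

```shell
# Add to the kernel command line (e.g. GRUB_CMDLINE_LINUX_DEFAULT in
# /etc/default/grub, then regenerate your grub config and reboot):
#   hugepagesz=2M hugepages=4096

# After a reboot, verify the reservation:
grep HugePages /proc/meminfo
```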

Dynamic huge pages

Dynamic huge pages are like static huge pages, but are not allocated at boot. This is what I recommend using, as you're afforded much of the performance of static huge pages but are able to use the memory while your VM is not running. Note that with this method you may have trouble starting the VM once your host has been running for a while. The Arch Wiki has a paragraph on compacting memory, which will help with this problem.3

Pros

  • High performance.
  • Doesn’t suffer from stuttering when the Linux kernel cleans up memory.
  • Memory is still available to the host OS while the guest is shutdown.

Cons

  • The guest may fail to start if the host memory is fragmented, and may require manual work to correct.
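A minimal sketch of the dynamic approach, using the kernel's standard `vm.nr_hugepages` interface (the Arch Wiki page in footnote 3 describes a variant using `nr_overcommit_hugepages`). The 4096-page count is again an illustrative sizing for an 8 GB VM; all of these writes require root.

```shell
# Ask the kernel to compact memory first, improving the odds that
# enough contiguous 2 MB blocks exist on a long-running host
echo 1 | sudo tee /proc/sys/vm/compact_memory

# Allocate 4096 x 2 MiB huge pages (8 GiB) right before starting the VM
echo 4096 | sudo tee /proc/sys/vm/nr_hugepages

# Release them again after shutting the VM down
echo 0 | sudo tee /proc/sys/vm/nr_hugepages
```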

  1. Source: me. 

  2. https://www.reddit.com/r/linux/comments/58poww/eli5_transparent_huge_pages/d92n6lw?utm_source=share&utm_medium=web2x 

  3. https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Dynamic_huge_pages 
