Citrix Hypervisor

Storage read caching

Read caching improves a VM’s disk performance because, after the initial read from external disk, data is cached within the host’s free memory. It improves performance in situations where many VMs are cloned off a single base VM, as it drastically reduces the number of blocks read from disk. For example, in Citrix Virtual Desktops Machine Creation Services (MCS) environments.

The performance improvement can be seen whenever data is read from disk more than once, because the data is cached in memory. The improvement is most noticeable in situations where service would otherwise degrade under heavy I/O. For example, in the following situations:

  • When a significant number of end users boot up within a very narrow time frame (boot storm)
  • When a significant number of VMs are scheduled to run malware scans at the same time (antivirus storm)

Read caching is enabled by default when you have the appropriate license type.

Note:

Storage Read Caching is available for Citrix Hypervisor Premium Edition customers.

Storage Read Caching is also available for customers who access Citrix Hypervisor through their Citrix Virtual Apps and Desktops entitlement or Citrix DaaS entitlement.

Enable and disable read caching

For file-based SRs, such as NFS and EXT3/EXT4 SR types, read-caching is enabled by default. Read-caching is disabled for all other SRs.

To disable read caching for a specific SR by using the xe CLI, run the following command:

xe sr-param-set uuid=sr-uuid other-config:o_direct=true
<!--NeedCopy-->

To disable read caching for a specific SR by using XenCenter, go to the Properties dialog for the SR. On the Read Caching tab, you can choose to enable or disable read caching.

For more information, see Changing SR Properties.
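The `o_direct` setting shown above is a key in the SR’s `other-config` map, so read caching can be re-enabled from the xe CLI by removing that key again. The following is a sketch using the standard xe map-parameter pattern; `<sr-uuid>` is a placeholder, and you can verify the exact `sr-param-remove` syntax against the xe CLI reference for your release:

```shell
# Re-enable read caching by removing the o_direct override
# from the SR's other-config map.
# Replace <sr-uuid> with the UUID of your SR (see `xe sr-list`).
xe sr-param-remove uuid=<sr-uuid> param-name=other-config param-key=o_direct
```

The change takes effect the next time VDIs on that SR are attached.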

Limitations

  • Read caching is available only for NFS and EXT3/EXT4 SRs. It is not available for other SR types.

  • Read caching applies only to read-only VDIs and VDI parents. These VDIs exist where VMs are created by using ‘Fast Clone’ or disk snapshots. The greatest performance improvements are seen when many VMs are cloned from a single ‘golden’ image.

  • Performance improvements depend on the amount of free memory available in the host’s Control Domain (dom0). Increasing the amount of dom0 memory allows more memory to be allocated to the read-cache. For information on how to configure dom0 memory, see CTX134951.

  • When memory read caching is turned on, a cache miss causes I/O to become serialized. This can sometimes be more expensive than having read caching turned off, because with read caching turned off I/O can be parallelized. To reduce the impact of cache misses, increase the amount of available dom0 memory or disable read caching for the SR.

Comparison with IntelliCache

IntelliCache and memory-based read caching are in some regards complementary. IntelliCache not only caches on a different tier, it also caches writes in addition to reads. IntelliCache caches reads from the network onto a local disk. In-memory read caching caches reads from the network or disk into host memory. The advantage of in-memory read caching is that memory is still an order of magnitude faster than a solid-state disk (SSD), so performance in boot storms and other heavy I/O situations improves.

Both read-caching and IntelliCache can be enabled simultaneously. In this case, IntelliCache caches the reads from the network to a local disk. Reads from that local disk are cached in memory with read caching.

Set the read cache size

You can optimize read cache performance by allocating more memory to Citrix Hypervisor’s control domain (dom0).

Important:

For optimization, set the read cache size individually on every host in the pool. Any subsequent change to the size of the read cache must also be made on every host in the pool.

On the Citrix Hypervisor server, open a local shell and log on as root.

To set the size of the read cache, run the following command:

/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=nnM,max:nnM
<!--NeedCopy-->

Set both the initial and maximum values to the same value. For example, to set dom0 memory to 20,480 MiB:

/opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=20480M,max:20480M
<!--NeedCopy-->
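Before rebooting, you can read the setting back with the same `xen-cmdline` helper. This is a sketch that assumes the `--get-xen` option is available in your release (the helper script also supports `--set-xen`, used above); check the script’s usage output on your host:

```shell
# Read back the dom0_mem entry from the Xen boot line.
# The new value only takes effect after the host is rebooted.
/opt/xensource/libexec/xen-cmdline --get-xen dom0_mem
```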

Important:

Reboot all hosts after changing the read cache size.

View the current dom0 memory allocation

To view the current dom0 memory settings, enter:

free -m
<!--NeedCopy-->

The output of free -m shows the current dom0 memory settings. The value can be less than expected because of various overheads. The example table below shows the output from a host with dom0 set to 2.6 GiB.

|                     | Total  | Used | Free  | Shared | Buffer/cache | Available |
|---------------------|--------|------|-------|--------|--------------|-----------|
| Mem:                | 2450   | 339  | 1556  | 9      | 554          | 2019      |
| Swap:               | 1023   | 0    | 1023  |        |              |           |
<!--NeedCopy-->
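If you need the total dom0 memory as a single number, for scripting or monitoring, the `free` output can be parsed. A minimal sketch:

```shell
# Print the total memory visible to dom0, in MiB
# (the second field of the "Mem:" line in `free -m` output).
free -m | awk '/^Mem:/ {print $2}'
```

Comparing this number against the `dom0_mem` value you set makes the overheads mentioned above visible.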

What range of values can be used?

As the Citrix Hypervisor Control Domain (dom0) is 64-bit, large values can be used, for example 32768 MiB. However, we recommend that you do not reduce the dom0 memory below 1 GiB.

XenCenter display notes

The entire host’s memory can be considered to comprise the Xen hypervisor, dom0, VMs, and free memory. Even though dom0 and VM memory is usually of a fixed size, the Xen hypervisor uses a variable amount of memory, depending on factors such as the number of VMs running on the host at any time and how those VMs are configured. It is not possible to limit the amount of memory that Xen uses; attempting to limit it can cause Xen to run out of memory and prevent new VMs from starting, even when the host has free memory.

To view the memory allocated to a host, in XenCenter select the host, and then click the Memory tab.

The Citrix Hypervisor field displays the sum of the memory allocated to dom0 and Xen memory. Therefore, the amount of memory displayed might be higher than specified by the administrator. The memory size can vary when starting and stopping VMs, even when the administrator has set a fixed size for dom0.
