Citrix Hypervisor

Prepare host for graphics

This section provides step-by-step instructions on how to prepare Citrix Hypervisor for supported graphical virtualization technologies. The offerings include NVIDIA vGPU, AMD MxGPU, and Intel GVT-d and GVT-g.

NVIDIA vGPU

NVIDIA vGPU enables multiple Virtual Machines (VMs) to have simultaneous, direct access to a single physical GPU. It uses the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems. NVIDIA physical GPUs can support multiple virtual GPU devices (vGPUs). To provide this support, the physical GPU must be under the control of the NVIDIA Virtual GPU Manager running in the Citrix Hypervisor Control Domain (dom0). The vGPUs can be assigned directly to VMs.

VMs use virtual GPUs in the same manner as a physical GPU that the hypervisor has passed through. An NVIDIA driver loaded in the VM provides direct access to the GPU for performance-critical fast paths. It also provides a paravirtualized interface to the NVIDIA Virtual GPU Manager.
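
For example, once the NVIDIA Virtual GPU Manager is installed (see the installation steps later in this section), you can assign a vGPU to a halted VM from the xe CLI. The following is a minimal sketch; the UUIDs are placeholders that you look up on your own system:

    # List the GPU groups and the vGPU types the host advertises
    xe gpu-group-list
    xe vgpu-type-list

    # Attach a vGPU of the chosen type to a halted VM (UUIDs are placeholders)
    xe vgpu-create vm-uuid=<vm-uuid> gpu-group-uuid=<gpu-group-uuid> vgpu-type-uuid=<vgpu-type-uuid>
    <!--NeedCopy-->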

To ensure that you always have the latest security and functional fixes, install any updates provided by NVIDIA for the drivers on your VMs and for the NVIDIA Virtual GPU Manager running on your host server.

Important:

If you are using NVIDIA A16/A2 cards, ensure that you have the following files installed on your Citrix Hypervisor 8.2 hosts:

  • The latest version of the NVIDIA host driver (NVIDIA-vGPU-CitrixHypervisor-8.2-535.42.x86_64 or later)
  • Hotfix XS82ECU1027 (or a later hotfix that includes XS82ECU1027)

NVIDIA vGPU is compatible with the HDX 3D Pro feature of Citrix Virtual Apps and Desktops or Citrix DaaS. For more information, see HDX 3D Pro.

Licensing note

NVIDIA vGPU is available for Citrix Hypervisor Premium Edition customers, or customers who have access to Citrix Hypervisor through their Citrix Virtual Apps and Desktops entitlement or Citrix DaaS entitlement. To learn more about Citrix Hypervisor editions, and to find out how to upgrade, visit the Citrix website. For more information, see Licensing.

Depending on the NVIDIA graphics card used, you might need an NVIDIA subscription or a license.

For information on licensing NVIDIA cards, see the NVIDIA website.

Available NVIDIA vGPU types

NVIDIA GRID cards contain multiple Graphics Processing Units (GPUs). For example, TESLA M10 cards contain four GM107GL GPUs, and TESLA M60 cards contain two GM204GL GPUs. Each physical GPU can host several different types of virtual GPU (vGPU). Each vGPU type has a fixed amount of frame buffer, a fixed number of supported display heads, and maximum resolutions, and is targeted at a different class of workload.

For a list of the most recently supported NVIDIA cards, see the Hardware Compatibility List and the NVIDIA product information.
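
You can inspect the vGPU types that your own hosts advertise from the xe CLI. A minimal sketch, using parameter names from current Citrix Hypervisor releases:

    # Show each vGPU type with its frame buffer size, display heads, and maximum resolution
    xe vgpu-type-list params=model-name,framebuffer-size,max-heads,max-resolution-x,max-resolution-y
    <!--NeedCopy-->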

Note:

The vGPUs hosted on a physical GPU at the same time must all be of the same type. However, there is no corresponding restriction for physical GPUs on the same card. This restriction is applied automatically, but can cause unexpected capacity planning issues.

For example, a TESLA M60 card has two physical GPUs, and can support 11 types of vGPU:

  • GRID M60-1A
  • GRID M60-2A
  • GRID M60-4A
  • GRID M60-8A
  • GRID M60-0B
  • GRID M60-1B
  • GRID M60-0Q
  • GRID M60-1Q
  • GRID M60-2Q
  • GRID M60-4Q
  • GRID M60-8Q

In the case where you start both a VM that has vGPU type M60-1A and a VM that has vGPU type M60-2A:

  • One physical GPU only supports M60-1A instances
  • The other only supports M60-2A instances

You cannot start any M60-4A instances on that single card.
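
For capacity planning, you can query each physical GPU from the xe CLI to see which vGPU types it supports and which are currently enabled on it. A sketch (the UUID is a placeholder):

    # List the physical GPUs in the pool
    xe pgpu-list

    # Show the vGPU types a given physical GPU supports and has enabled
    xe pgpu-param-get uuid=<pgpu-uuid> param-name=supported-VGPU-types
    xe pgpu-param-get uuid=<pgpu-uuid> param-name=enabled-VGPU-types
    <!--NeedCopy-->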

NVIDIA vGPU system requirements

  • NVIDIA GRID card:

      • Depending on the NVIDIA graphics card used, you might need an NVIDIA subscription or a license. For more information, see the NVIDIA product information.

      • Depending on the NVIDIA graphics card, you might need to ensure that the card is set to the correct mode. For more information, see the NVIDIA documentation.

  • Citrix Hypervisor Premium Edition (or access to Citrix Hypervisor through a Citrix Virtual Apps and Desktops entitlement or Citrix DaaS entitlement).

  • A server capable of hosting Citrix Hypervisor and the supported NVIDIA cards.

    Note:

    Some NVIDIA GPUs do not support hosts with more than 1 TB of memory. If you are using the following GPUs based on the Maxwell architecture: Tesla M6, Tesla M10, and Tesla M60, ensure that your server has less than 1 TB of memory. For more information, see the NVIDIA documentation.

    In general, we recommend that for NVIDIA vGPUs, you use a server with less than 768 GB of memory. A quick way to check the host's total memory from the CLI is shown in the sketch after this list.

  • NVIDIA vGPU software package for Citrix Hypervisor, consisting of the NVIDIA Virtual GPU Manager for Citrix Hypervisor, and NVIDIA drivers.

  • To run Citrix Virtual Desktops with VMs running NVIDIA vGPU, you also need Citrix Virtual Desktops 7.6 or later (full installation).

    Note:

    Review the NVIDIA Virtual GPU User Guide (Ref: DU-06920-001) available from the NVIDIA website. Register with NVIDIA to access these components.

  • For NVIDIA Ampere vGPUs and all future generations, you must enable SR-IOV in your system BIOS.
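
The following sketch shows quick dom0 checks against two of the requirements above: total host memory and the SR-IOV capability of the GPU. The UUID and PCI address are placeholders:

    # Total host memory in bytes (find the host UUID with `xe host-list`)
    xe host-param-get uuid=<host-uuid> param-name=memory-total

    # Confirm that the GPU exposes the SR-IOV PCI capability
    lspci -s <gpu-pci-address> -vv | grep -i "Single Root I/O Virtualization"
    <!--NeedCopy-->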

vGPU live migration

Citrix Hypervisor enables the use of live migration, storage live migration, and the ability to suspend and resume for NVIDIA vGPU-enabled VMs.

To use the vGPU live migration, storage live migration, or Suspend features, satisfy the following requirements:

  • An NVIDIA GRID card, Maxwell family or later.

  • An NVIDIA Virtual GPU Manager for Citrix Hypervisor with live migration enabled. For more information, see the NVIDIA Documentation.

  • A Windows VM that has NVIDIA live migration-enabled vGPU drivers installed.

vGPU live migration enables the use of live migration within a pool, live migration between pools, storage live migration, and Suspend/Resume of vGPU-enabled VMs.
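
From the xe CLI, migrating or suspending a vGPU-enabled VM uses the same commands as for any other VM. A minimal sketch (the names are placeholders):

    # Live migrate the VM to another host in the pool
    xe vm-migrate vm=<vm-name> host=<destination-host> live=true

    # Suspend the VM, and resume it later
    xe vm-suspend vm=<vm-name>
    xe vm-resume vm=<vm-name>
    <!--NeedCopy-->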

Preparation overview

  1. Install Citrix Hypervisor

  2. Install the NVIDIA Virtual GPU Manager for Citrix Hypervisor

  3. Restart the Citrix Hypervisor server

Installation on Citrix Hypervisor

Citrix Hypervisor is available for download from the Citrix Hypervisor Downloads page.

Install the following:

  • Citrix Hypervisor Base Installation ISO

  • XenCenter Windows Management Console

For more information, see Install.

Licensing note

vGPU is available for Citrix Hypervisor Premium Edition customers, or customers who have access to Citrix Hypervisor through their Citrix Virtual Apps and Desktops entitlement or Citrix DaaS entitlement. To learn more about Citrix Hypervisor editions, and to find out how to upgrade, visit the Citrix website. For more information, see Licensing.

Depending on the NVIDIA graphics card used, you might need an NVIDIA subscription or a license. For more information, see NVIDIA product information.

For information about licensing NVIDIA cards, see the NVIDIA website.

Install the NVIDIA vGPU Manager for Citrix Hypervisor

Install the NVIDIA Virtual GPU software that is available from NVIDIA. The NVIDIA Virtual GPU software consists of:

  • NVIDIA Virtual GPU Manager

  • Windows Display Driver (the driver version depends on the Windows version)

The NVIDIA Virtual GPU Manager runs in the Citrix Hypervisor Control Domain (dom0). It is provided as either a supplemental pack or an RPM file. For more information about installation, see the User Guide included in the NVIDIA vGPU Software.

The update can be installed by using one of the following methods:

  • Use XenCenter (Tools > Install Update > Select update or supplemental pack from disk)
  • Use the xe CLI command xe-install-supplemental-pack.
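
For example, with the xe CLI method, copy the pack to the host and run the installer from dom0. The file name below is illustrative; use the ISO supplied in your NVIDIA vGPU software package:

    xe-install-supplemental-pack NVIDIA-vGPU-CitrixHypervisor-8.2-<version>.x86_64.iso
    <!--NeedCopy-->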

Note:

If you are installing the NVIDIA Virtual GPU Manager using an RPM file, ensure that you first copy the RPM file to dom0 and then install it.
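
For example, from a machine that already has the RPM file (the host address and file name are placeholders):

    scp NVIDIA-vGPU-<version>.rpm root@<host-address>:/root/
    <!--NeedCopy-->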

  1. Use the rpm command to install the package:

    rpm -iv <vgpu_manager_rpm_filename>
    <!--NeedCopy-->
    
  2. Restart the Citrix Hypervisor server:

    shutdown -r now
    <!--NeedCopy-->
    
  3. After you restart the Citrix Hypervisor server, verify that the software has been installed and loaded correctly by checking the NVIDIA kernel driver:

    [root@xenserver ~]# lsmod | grep nvidia
    nvidia            8152994  0
    <!--NeedCopy-->
    
  4. Verify that the NVIDIA kernel driver can successfully communicate with the NVIDIA physical GPUs in your host. Run the nvidia-smi command to produce a listing of the GPUs in your platform similar to the following:

    [root@xenserver ~]# nvidia-smi
    
    Thu Jan 26 13:48:50 2017
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 367.64                 Driver Version: 367.64                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  Tesla M60           On   | 0000:05:00.0     Off |                  Off |
    | N/A   33C    P8    24W / 150W |   7249MiB /  8191MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla M60           On   | 0000:09:00.0     Off |                  Off |
    | N/A   36C    P8    24W / 150W |   7249MiB /  8191MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   2  Tesla M60           On   | 0000:85:00.0     Off |                  Off |
    | N/A   36C    P8    23W / 150W |     19MiB /  8191MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   3  Tesla M60           On   | 0000:89:00.0     Off |                  Off |
    | N/A   37C    P8    23W / 150W |     14MiB /  8191MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |  No running compute processes found                                         |
    +-----------------------------------------------------------------------------+
    <!--NeedCopy-->
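
As a further check, you can confirm that the NVIDIA vGPU types are now advertised to the host. The output varies by card and driver version:

    xe vgpu-type-list
    <!--NeedCopy-->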
    

AMD MxGPU

AMD's MxGPU enables multiple Virtual Machines (VMs) to have direct access to a portion of a single physical GPU, using Single Root I/O Virtualization. The same AMD graphics driver deployed on non-virtualized operating systems can be used inside the guest.

VMs use MxGPU GPUs in the same manner as a physical GPU that the hypervisor has passed through. An AMD graphics driver loaded in the VM provides direct access to the GPU for performance critical fast paths.

To ensure that you always have the latest security and functional fixes, install any updates provided by AMD for the drivers on your VMs.

For more information about using AMD MxGPU with Citrix Hypervisor, see the AMD Documentation.

Licensing note

MxGPU is available for Citrix Hypervisor Premium Edition customers, or customers who have access to Citrix Hypervisor through their Citrix Virtual Apps and Desktops entitlement or Citrix DaaS entitlement. To learn more about Citrix Hypervisor editions, and to find out how to upgrade, visit the Citrix website. For detailed information on Licensing, see the Citrix Hypervisor Licensing FAQ.

Available AMD MxGPU vGPU types

AMD MxGPU cards can contain multiple GPUs. For example, S7150 cards contain one physical GPU and S7150x2 cards contain two GPUs. Each physical GPU can host several different types of virtual GPU (vGPU). vGPU types split a physical GPU into a pre-defined number of vGPUs. Each of these vGPUs has an equal share of the frame buffer and graphics processing abilities. The different vGPU types are targeted at different classes of workload. vGPU types that split a physical GPU into fewer pieces are more suitable for intensive workloads.

Note:

The vGPUs hosted on a physical GPU at the same time must all be of the same type. However, there is no corresponding restriction on physical GPUs on the same card. This restriction is applied automatically, but can cause unexpected capacity planning issues.

AMD MxGPU system requirements

  • AMD FirePro S7100-series GPUs

  • Citrix Hypervisor Premium Edition (or access to Citrix Hypervisor through a Citrix Virtual Desktops or Citrix Virtual Apps entitlement or Citrix DaaS entitlement)

  • A server capable of hosting Citrix Hypervisor and AMD MxGPU cards. The list of servers validated by AMD can be found on the AMD website.

  • AMD MxGPU host drivers for Citrix Hypervisor. These drivers are available from the AMD download site.

  • AMD FirePro in-guest drivers, suitable for MxGPU on Citrix Hypervisor. These drivers are available from the AMD download site.

  • To run Citrix Virtual Desktops with VMs running AMD MxGPU, you also need Citrix Virtual Desktops 7.13 or later (full installation).

  • System BIOS configured to support SR-IOV and the MxGPU configured as the secondary adapter

Preparation overview

  1. Install Citrix Hypervisor

  2. Install the AMD MxGPU host drivers for Citrix Hypervisor

  3. Restart the Citrix Hypervisor server

Installation on Citrix Hypervisor

Citrix Hypervisor is available for download from the Citrix Hypervisor Downloads page.

Install the following:

  • Citrix Hypervisor 8.2 Cumulative Update

  • XenCenter 8.2 Windows Management Console

For more information about installation, see the Citrix Hypervisor Installation Guide.

Install the AMD MxGPU host driver for Citrix Hypervisor

Complete the following steps to install the host driver.

  1. The update that contains the driver can be installed by using XenCenter or by using the xe CLI.

    • To install by using XenCenter, go to Tools > Install Update > Select update or supplemental pack from disk

    • To install by using the xe CLI, copy the update to the host and run the following command in the directory where the update is located:

       xe-install-supplemental-pack mxgpu-1.0.5.amd.iso
       <!--NeedCopy-->
      
  2. Restart the Citrix Hypervisor server.

  3. After restarting the Citrix Hypervisor server, verify that the MxGPU package has been installed and loaded correctly. Check whether the gim kernel driver is loaded by running the following commands in the Citrix Hypervisor server console:

    modinfo gim
    modprobe gim
    <!--NeedCopy-->
    
  4. Verify that the gim kernel driver has successfully created MxGPU Virtual Functions, which are provided to the guests. Run the following command:

    lspci | grep "FirePro S7150"
    <!--NeedCopy-->
    

    The output from the command shows Virtual Functions that have the “S7150V” identifier.

  5. Use the GPU tab in XenCenter to confirm that MxGPU Virtual GPU types are listed as available on the system.

After the AMD MxGPU drivers are installed, the Passthrough option is no longer available for the GPUs. Instead, use the MxGPU.1 option for pass-through.

The following options are also supported: MxGPU.2 and MxGPU.4.
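
These options appear as vGPU types. If you use the xe CLI instead of XenCenter, you can select one when creating a vGPU for a halted VM; a minimal sketch (UUIDs are placeholders, and the exact type names depend on the driver version):

    # Find the UUID of the MxGPU type you want (for example, the MxGPU.1 entry)
    xe vgpu-type-list

    # Attach it to a halted VM
    xe vgpu-create vm-uuid=<vm-uuid> gpu-group-uuid=<gpu-group-uuid> vgpu-type-uuid=<vgpu-type-uuid>
    <!--NeedCopy-->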

Create an MxGPU-enabled VM

Before configuring a VM to use MxGPU, install the VM. Ensure that AMD MxGPU supports the VM operating system. For more information, see Guest support and constraints.

After the VM is installed, complete the configuration by following the instructions in Create vGPU enabled VMs.

Intel GVT-d and GVT-g

Citrix Hypervisor supports Intel’s virtual GPU (GVT-g), a graphics acceleration solution that requires no additional hardware. It uses the Intel Iris Pro feature embedded in certain Intel processors, and a standard Intel GPU driver installed within the VM. Citrix Hypervisor also supports Intel GPU pass-through (GVT-d), which assigns the host’s integrated GPU to a single VM; see Enable Intel GPU pass-through later in this section.

To ensure that you always have the latest security and functional fixes, install any updates provided by Intel for the drivers on your VMs and the firmware on your host server.

Intel GVT-d and GVT-g are compatible with the HDX 3D Pro features of Citrix Virtual Apps and Desktops or Citrix DaaS. For more information, see HDX 3D Pro.

Note:

Because the Intel Iris Pro graphics feature is embedded within the processors, CPU-intensive applications can cause power to be diverted from the GPU. As a result, you might not experience full graphics acceleration as you do for purely GPU-intensive workloads.

Intel GVT-g system requirements and configuration

To use Intel GVT-g, your Citrix Hypervisor server must have the following hardware:

  • A CPU that has Iris Pro graphics. This CPU must be listed as supported for Graphics on the Hardware Compatibility List
  • A motherboard that has a graphics-enabled chipset. For example, C226 for Xeon E3 v4 CPUs or C236 for Xeon E3 v5 CPUs.

Note:

Ensure that you restart the hosts when switching between Intel GPU pass-through (GVT-d) and Intel Virtual GPU (GVT-g).

When configuring Intel GVT-g, the number of Intel virtual GPUs supported on a specific Citrix Hypervisor server depends on its GPU bar size. The GPU bar size is called the ‘Aperture size’ in the BIOS. We recommend that you set the Aperture size to 1,024 MB to support a maximum of seven virtual GPUs per host.

If you configure the Aperture size to 256 MB, only one VM can start on the host. Setting it to 512 MB can result in only three VMs being started on the Citrix Hypervisor server. An Aperture size higher than 1,024 MB is not supported and does not increase the number of VMs that start on a host.
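
A reading consistent with these figures, though not a documented formula, is that each Intel virtual GPU consumes roughly 128 MB of aperture, with one 128 MB slot reserved for the host itself: 256/128 − 1 = 1 VM, 512/128 − 1 = 3 VMs, and 1,024/128 − 1 = 7 VMs.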

Enable Intel GPU pass-through

Citrix Hypervisor supports the GPU pass-through feature for Windows VMs using an Intel integrated GPU device.

  • For more information on Windows versions supported with Intel GPU pass-through, see Graphics.
  • For more information on supported hardware, see the Hardware Compatibility List.

When using an Intel GPU on Intel servers, the Citrix Hypervisor server’s Control Domain (dom0) has access to the integrated GPU device, so the GPU is not available for pass-through by default. To use the Intel GPU pass-through feature on these servers, disable the connection between dom0 and the GPU before passing the GPU through to the VM.

To disable this connection, complete the following steps:

  1. On the Resources pane, choose the Citrix Hypervisor server.

  2. On the General tab, click Properties, and in the left pane, click GPU.

  3. In the Integrated GPU passthrough section, select This server will not use the integrated GPU.

    Integrated GPU pass-through interface

    This step disables the connection between dom0 and the Intel integrated GPU device.

  4. Click OK.

  5. Restart the Citrix Hypervisor server for the changes to take effect.

    The Intel GPU is now visible on the GPU type list during new VM creation, and on the VM’s Properties tab.

    Note:

    The Citrix Hypervisor server’s external console output (for example, VGA, HDMI, DP) will not be available after disabling the connection between dom0 and the GPU.
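
If you manage hosts from the CLI, the same change can be sketched with xe. Verify these parameter and command names against your Citrix Hypervisor version; the UUIDs are placeholders:

    # Disable dom0 access to the integrated GPU
    xe pgpu-param-set uuid=<pgpu-uuid> dom0-access=disable
    xe host-disable-display uuid=<host-uuid>
    # Then restart the host (as in step 5 above) for the change to take effect
    <!--NeedCopy-->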
