Linux VMs

When you want to create a Linux VM, create the VM using a template for the operating system you want to run on the VM. You can use a template that Citrix Hypervisor provides for your operating system, or one that you created previously. You can create the VM from either XenCenter or the CLI. This section focuses on using the CLI.


To create a VM of a newer minor update of a RHEL release than is supported for installation by Citrix Hypervisor, complete the following steps:

  • Install from the latest supported media
  • Use yum update to bring the VM up-to-date

This process also applies to RHEL derivatives such as CentOS and Oracle Linux.

We recommend that you install the Citrix VM Tools immediately after installing the operating system. For more information, see Install the Linux Guest Agent. For some operating systems, the Citrix VM Tools include a kernel specific to Citrix Hypervisor, which replaces the kernel provided by the vendor. Other operating systems require you to install a specific version of a vendor-provided kernel.

The overview for creating a Linux VM is as follows:

  1. Create the VM for your target operating system using XenCenter or the CLI.

  2. Install the operating system using vendor installation media.

  3. Install the Citrix VM Tools (recommended).

  4. Configure the correct time and time zone on the VM, and configure VNC, as you would in a normal non-virtual environment.
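As a sketch, the CLI side of this overview looks like the following; the template name and repository URL are illustrative placeholders, not values from this guide:

```shell
# 1. Create the VM from a supplied template; the command prints the new VM's UUID.
xe vm-install template="Debian Stretch 9" new-name-label=my-linux-vm

# 2. Point the installer at the vendor media (an internet repository here),
#    then start the VM so it boots into the OS installer.
xe vm-param-set uuid=<vm_uuid> other-config:install-repository=<repository_url>
xe vm-start uuid=<vm_uuid>
```

Steps 3 and 4 (installing the Citrix VM Tools and setting the time and time zone) are then completed inside the running VM.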

Citrix Hypervisor supports the installation of many Linux distributions as VMs. There are three installation mechanisms:

  • Installing from an internet repository
  • Installing from a physical CD or DVD
  • Installing from an ISO image

The Other install media template is for advanced users who want to attempt to install VMs running unsupported operating systems. Citrix Hypervisor has been tested running only the supported distributions and specific versions covered by the standard supplied templates. Any VMs installed using the Other install media template are not supported.

VMs created using the Other install media template are created as HVM guests. This behavior might mean that some Linux VMs use slower emulated devices rather than the higher performance I/O drivers.

For information regarding specific Linux distributions, see Installation notes for Linux distributions.

Supported Linux distributions

The supported Linux distributions are:

  • Debian Jessie 8 (32-/64-bit)
  • Debian Stretch 9 (32-/64-bit)
  • Red Hat Enterprise Linux 7.x (64-bit)
  • Red Hat Enterprise Linux 8.x (64-bit)
  • CentOS 7.x (64-bit)
  • Oracle Enterprise Linux 7.x (64-bit)
  • Scientific Linux 7.x (64-bit)
  • SUSE Linux Enterprise Server 12 SP3, 12 SP4 (64-bit)
  • SUSE Linux Enterprise Desktop 12 SP3, 12 SP4 (64-bit)
  • SUSE Linux Enterprise Server 15, 15 SP1 (64-bit)
  • SUSE Linux Enterprise Desktop 15, 15 SP1 (64-bit)
  • Ubuntu 16.04 (32-/64-bit)
  • Ubuntu 18.04 (64-bit)
  • CoreOS Stable (64-bit)

Other Linux distributions are not supported. However, distributions that use the same installation mechanism as Red Hat Enterprise Linux (for example, Fedora Core) might be successfully installed using the same template.

Create a Linux VM by installing from an internet repository

This section shows the xe CLI procedure for creating a Linux VM, using a Debian Squeeze example, by installing the OS from an internet repository.

  1. Create a VM from the Debian Squeeze template. The UUID of the VM is returned:

    xe vm-install template=template-name new-name-label=squeeze-vm
  2. Specify the installation repository. This repository is a Debian mirror with the packages required to install the base system and the extras that you select during the Debian installer:

    xe vm-param-set uuid=UUID other-config:install-repository=path_to_repository

    An example of a valid repository path is a Debian mirror URL containing your country code xx (see the Debian mirror list for a list of these codes). For multiple installations, we recommend using a local mirror or apt proxy to avoid generating excessive network traffic or load on the central repositories.


    The Debian installer supports only HTTP and FTP apt repositories. NFS is not supported.

  3. Find the UUID of the network that you want to connect to. For example, if it is the one attached to xenbr0:

    xe network-list bridge=xenbr0 --minimal
  4. Create a VIF to connect the new VM to this network:

    xe vif-create vm-uuid=vm_uuid network-uuid=network_uuid mac=random device=0
  5. Start the VM. It boots straight into the Debian installer:

    xe vm-start uuid=UUID
  6. Follow the Debian Installer procedure to install the VM in the configuration you require.

  7. Install the guest agent and configure graphical display. For more information, see Install the Linux Guest Agent.

Create a Linux VM by installing from a physical CD or DVD

This section shows the CLI procedure for creating a Linux VM, using a Debian Squeeze example, by installing the OS from a physical CD/DVD.

  1. Create a VM from the Debian Squeeze template. The UUID of the VM is returned:

    xe vm-install template=template-name new-name-label=vm-name
  2. Get the UUID of the root disk of the new VM:

    xe vbd-list vm-uuid=vm_uuid userdevice=0 params=uuid --minimal
  3. Using the UUID returned, set the root disk not to be bootable:

    xe vbd-param-set uuid=root_disk_uuid bootable=false
  4. Get the name of the physical CD drive on the Citrix Hypervisor server:

    xe cd-list

    The result of this command gives you something like SCSI 0:0:0:0 for the name-label field.

  5. Add a virtual CD-ROM to the new VM using the Citrix Hypervisor server CD drive name-label parameter as the cd-name parameter:

    xe vm-cd-add vm=vm_name cd-name="host_cd_drive_name_label" device=3
  6. Get the UUID of the VBD corresponding to the new virtual CD drive:

    xe vbd-list vm-uuid=vm_uuid type=CD params=uuid --minimal
  7. Make the VBD of the virtual CD bootable:

    xe vbd-param-set uuid=cd_drive_uuid bootable=true
  8. Set the install repository of the VM to be the CD drive:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=cdrom
  9. Insert the Debian Squeeze installation CD into the CD drive on the Citrix Hypervisor server.

  10. Start the VM. It boots straight into the Debian installer:

    xe vm-start uuid=UUID
  11. Open a console to the VM with XenCenter or an SSH terminal and follow the steps to perform the OS installation.
  12. Install the guest utilities and configure graphical display. For more information, see Install the Linux Guest Agent.

Create a Linux VM by installing from an ISO image

This section shows the CLI procedure for creating a Linux VM by installing the OS from a network-accessible ISO image.

  1. Run the command

    xe vm-install template=template new-name-label=name_for_vm sr-uuid=storage_repository_uuid

    This command returns the UUID of the new VM.

  2. Find the UUID of the network that you want to connect to. For example, if it is the one attached to xenbr0:

    xe network-list bridge=xenbr0 --minimal
  3. Create a VIF to connect the new VM to this network:

    xe vif-create vm-uuid=vm_uuid network-uuid=network_uuid mac=random device=0
  4. Set the install-repository key of the other-config parameter to the path of your network repository:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=path_to_repository
  5. Start the VM:

    xe vm-start uuid=vm_uuid
  6. Connect to the VM console using XenCenter or VNC and perform the OS installation.

Network installation notes

The Citrix Hypervisor guest installer allows you to install an operating system from a network-accessible ISO image onto a VM. To prepare for installing from an ISO, make an exploded network repository of your vendor media (not ISO images). Export it over NFS, HTTP, or FTP so that it is accessible to the Citrix Hypervisor server administration interface.

The network repository must be accessible from the control domain of the Citrix Hypervisor server, normally using the management interface. The URL must point to the base of the CD/DVD image on the network server, and be of the form:

  • HTTP: http://<server>/<path>
  • FTP: ftp://<server>/<path>
  • NFS: nfs://<server>/<path>
  • NFS: nfs:<server>/<path>

See your vendor installation instructions for information about how to prepare for a network-based installation, such as where to unpack the ISO.


When using the NFS installation method from XenCenter, always use the nfs:// style of path.

When creating VMs from templates, the XenCenter New VM wizard prompts you for the repository URL. When using the CLI, install the template as normal using vm-install and then set the other-config:install-repository parameter to the value of the URL. When the VM is then started, it begins the network installation process.
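For example, the CLI flow described in this paragraph can be sketched as follows, capturing the UUID that vm-install prints; the template name and URL are placeholders:

```shell
# Install from a template and keep the returned UUID for the later commands.
VM_UUID=$(xe vm-install template=<template_name> new-name-label=netinstall-vm)

# Point the installer at the exploded vendor media, then start the VM;
# it begins the network installation process on boot.
xe vm-param-set uuid="$VM_UUID" other-config:install-repository=http://<server>/<path>
xe vm-start uuid="$VM_UUID"
```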


When installing a new Linux-based VM, it is important to complete the installation and reboot it before performing any other operations on it. This process is analogous to not interrupting a Windows installation, which would leave you with a non-functional VM.

Advanced operating system boot parameters

When creating a VM, you can specify advanced operating system boot parameters using XenCenter or the xe CLI. Specifying advanced parameters can be helpful when you are, for example, configuring automated installations of paravirtualized guests. For example, you might use a Debian preseed or RHEL kickstart file as follows.

To install Debian by using a preseed file:

  1. Create a preseed file. For information about creating preseed files, see the Debian documentation.

  2. Set the kernel command-line correctly for the VM before starting it. Use the New VM wizard in XenCenter or execute an xe CLI command like the following:

    xe vm-param-set uuid=uuid PV-args=preseed_arguments

To install RHEL by using a Kickstart File:


A Red Hat Kickstart file is an automated installation method, similar to an answer file, that you can use to provide responses to the RHEL installation prompts. To create this file, install RHEL manually. The kickstart file is located at /root/anaconda-ks.cfg.

  1. In XenCenter, choose the appropriate RHEL template.

  2. Specify the kickstart file to use as a kernel command-line argument in the XenCenter New VM Wizard. Specify this value exactly as it would be specified in the PXE config file. For example:

    ks=http://server/path ksdevice=eth0
  3. On the command line, use vm-param-set to set the PV-args parameter to use a Kickstart file:

    xe vm-param-set uuid=vm_uuid PV-args="ks=http://server/path ksdevice=eth0"
  4. Set the repository location so Citrix Hypervisor knows where to get the kernel and initrd from for the installer boot:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=http://server/path


To install using a kickstart file without the New VM wizard, you can add the appropriate argument to the Advanced OS boot parameters text box.

Install the Linux guest agent

Although all supported Linux distributions are natively paravirtualized (and don’t need special drivers for full performance), Citrix Hypervisor includes a guest agent. This guest agent provides extra information about the VM to the host. Install the guest agent on each Linux VM to enable Dynamic Memory Control (DMC).
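Once the guest agent is installed, a dynamic memory range can be set from the xe CLI; the memory values below are illustrative:

```shell
# Allow the VM's memory allocation to vary between 512 MiB and 2 GiB.
# DMC relies on the guest agent/tools so the balloon driver can respond.
xe vm-memory-dynamic-range-set uuid=<vm_uuid> min=512MiB max=2GiB
```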

It is important to keep the Linux guest agent up-to-date as you upgrade your Citrix Hypervisor server. For more information, see Update Linux kernels and guest utilities.


Before installing the guest agent on a SUSE Linux Enterprise Desktop or Server 15 guest, ensure that insserv-compat-0.1-2.15.noarch.rpm is installed on the guest.

To install the guest agent:

  1. The required files are present on the built-in guest-tools.iso CD image. Alternatively, they can be installed by selecting the VM and then the Install Citrix VM Tools option in XenCenter.

  2. Mount the image onto the guest by running the command:

    mount -o ro,exec /dev/disk/by-label/Citrix\\x20VM\\x20Tools /mnt


    If mounting the image fails, you can locate the image by running the following:

    blkid -t LABEL="Citrix VM Tools"
  3. Run the installation script from the mounted image as the root user.

  4. Unmount the image from the guest by running the command:

    umount /mnt
  5. If the kernel has been upgraded, or the VM was upgraded from a previous version, reboot the VM now.


    CD-ROM drives and ISOs attached to Linux VMs appear as devices such as /dev/xvdd or /dev/sdd, rather than as /dev/cdrom as you might expect. This behavior is because they are not true CD-ROM devices, but normal devices. When you use either XenCenter or the CLI to eject the CD, the device is hot-unplugged from the VM and disappears. In Windows VMs the behavior is different: the CD remains in the VM in an empty state.

Installation notes for Linux distributions

The following sections list vendor-specific configuration information to consider before creating the specified Linux VMs.

For more detailed release notes on all distributions, see Linux VM Release Notes.

Red Hat Enterprise Linux 7.x (64-bit)

The new template for these guests specifies 2 GB RAM. This amount of RAM is a requirement for a successful install of v7.4 and later. For v7.0 - v7.3, the template specifies 2 GB RAM, but as with previous versions of Citrix Hypervisor, 1 GB RAM is sufficient.


This information applies to both Red Hat and Red Hat derivatives.

Apt repositories (Debian)

For infrequent or one-off installations, it is reasonable to use a Debian mirror directly. However, if you intend to do several VM installations, we recommend that you use a caching proxy or local mirror. Either of the following tools can be installed into a VM.

  • Apt-cacher: An implementation of a proxy server that keeps a local cache of packages
  • debmirror: A tool that creates a partial or full mirror of a Debian repository
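As an illustrative sketch, with apt-cacher-ng (one implementation of such a caching proxy) running on a helper VM, each Debian VM can route apt through the cache; the host name here is a placeholder:

```shell
# On the helper VM: install the caching proxy (it listens on port 3142 by default).
apt-get install apt-cacher-ng

# On each Debian VM: direct apt through the cache instead of the central mirrors.
echo 'Acquire::http::Proxy "http://apt-cache.example.com:3142";' \
    > /etc/apt/apt.conf.d/01proxy
```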

Prepare to clone a Linux VM

Typically, when cloning a VM or a computer, unless you generalize the cloned image, attributes unique to that machine, such as the IP address, SID, or MAC address, are duplicated in your environment.

As a result, Citrix Hypervisor automatically changes some virtual hardware parameters when you clone a Linux VM. When you copy the VM using XenCenter, XenCenter automatically changes the MAC address and IP address for you. If these interfaces are configured dynamically in your environment, you might not need to modify the cloned VM. However, if the interfaces are statically configured, you might need to modify their network configurations.

The VM may need to be customized to be made aware of these changes. For instructions for specific supported Linux distributions, see Linux VM Release Notes.

Machine name

A cloned VM is another computer, and like any new computer in a network, it must have a unique name within the network domain.

IP address

A cloned VM must have a unique IP address within the network domain it is part of. Generally, this requirement is not a problem when DHCP is used to assign addresses. When the VM boots, the DHCP server assigns it an IP address. If the cloned VM had a static IP address, the clone must be given an unused IP address before being booted.

MAC address

There are two situations when we recommend disabling MAC address rules before cloning:

  1. In some Linux distributions, the MAC address for the virtual network interface of a cloned VM is recorded in the network configuration files. However, when you clone a VM, XenCenter assigns the new cloned VM a different MAC address. As a result, when the new VM starts for the first time, the network does not recognize the new VM and does not come up automatically.

  2. Some Linux distributions use udev rules to remember the MAC address of each network interface, and persist a name for that interface. This behavior is intended so that the same physical NIC always maps to the same ethn interface, which is useful with removable NICs (like laptops). However, this behavior is problematic in the context of VMs.

    For example, consider the behavior in the following case:

    1.  Configure two virtual NICs when installing a VM
    2.  Shut down the VM
    3.  Remove the first NIC

    When the VM reboots, XenCenter shows just one NIC, but calls it eth0. Meanwhile the VM is deliberately forcing this NIC to be eth1. The result is that networking does not work.

For VMs that use persistent names, disable these rules before cloning. If you prefer not to turn off persistent names, you must reconfigure networking inside the VM in the usual way. However, the information shown in XenCenter then does not match the addresses actually in your network.
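As a hedged example, on distributions that keep persistent-net rules under /etc/udev/rules.d, the rules file can be emptied before cloning; the file name varies by distribution (70-persistent-net.rules is common on older Debian and RHEL releases):

```shell
# Empty the persistent-net rules so the clone maps its NICs afresh on first boot.
cat /dev/null > /etc/udev/rules.d/70-persistent-net.rules
```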

Update Linux kernels and guest utilities

The Linux guest utilities can be updated by rerunning the Linux/ script from the built-in guest-tools.iso CD image (see Install the Linux Guest Agent).

For yum-enabled distributions (CentOS and RHEL), xe-guest-utilities installs a yum configuration file that enables subsequent updates to be done using yum in the standard manner.

For Debian, /etc/apt/sources.list is populated to enable updates using apt by default.

When upgrading, we recommend that you always rerun the Linux/ script. This script automatically determines whether your VM needs any updates and installs them if necessary.

Upgrade from PV to HVM guests

To upgrade existing unsupported PV Linux guests to supported versions that operate in HVM mode, perform an in-guest upgrade. At that point, the upgraded guest still runs only in PV mode, which is not supported and has known issues. Run the following script to convert the newly upgraded guest to the supported HVM mode.

On the Citrix Hypervisor server, open a local shell, log on as root, and enter the following command:

/opt/xensource/bin/pv2hvm vm_name

Alternatively, specify the VM by its UUID:

/opt/xensource/bin/pv2hvm vm_uuid
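The script takes the VM name or UUID; if you need the UUID, it can be looked up first, for example:

```shell
# Look up the UUID of a VM from its name-label.
xe vm-list name-label=<vm_name> params=uuid --minimal
```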

Restart the VM to complete the process.

Linux VM release notes

Most modern Linux distributions support Xen paravirtualization directly, but have different installation mechanisms and some kernel limitations.

RHEL graphical install support

To use the graphical installer, in XenCenter step through the New VM wizard. On the Installation Media page, in the Advanced OS boot parameters section, add vnc to the list of parameters:

graphical utf8 vnc

A screenshot of the New VM wizard. On the Installation Media page, the value `graphical utf8 vnc` is entered in the Advanced OS Boot Parameters field.

You are prompted to provide networking configuration for the new VM to enable VNC communication. Work through the remainder of the New VM wizard. When the wizard completes, in the Infrastructure view, select the VM, and click Console to view a console session of the VM. At this point, it uses the standard installer. The VM installation initially starts in text mode, and may request network configuration. Once provided, the Switch to Graphical Console button is displayed in the top right corner of the XenCenter window.
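Equivalently, when working from the CLI, the same boot parameters can be set before starting the VM:

```shell
# Request the graphical installer with VNC support via the kernel command line.
xe vm-param-set uuid=<vm_uuid> PV-args="graphical utf8 vnc"
```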

Red Hat Enterprise Linux 7

After migrating or suspending the VM, RHEL 7.x guests might freeze during resume. For more information, see Red Hat issue 1141249.

CentOS 7

For the list of CentOS 7.x release notes, see Red Hat Enterprise Linux 7.

Oracle Linux 7

For the list of Oracle Linux 7.x release notes, see Red Hat Enterprise Linux 7.

Scientific Linux 7

For the list of Scientific Linux 7.x release notes, see Red Hat Enterprise Linux 7.

Debian 10

If you install Debian 10 (Buster) by using PXE network boot, do not add console=tty0 to the boot parameters. This parameter can cause issues with the installation process. Use only console=hvc0 in the boot parameters. For more information, see Debian issues 944106 and 944125.
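For illustration, a pxelinux.cfg entry for such an install would carry only the hvc0 console; the label and the kernel and initrd paths here are placeholders:

```text
label buster-install
  kernel <path>/linux
  append initrd=<path>/initrd.gz console=hvc0
```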

SUSE Linux Enterprise 12

Prepare a SLES guest for cloning


Before you prepare a SLES guest for cloning, ensure that you clear the udev configuration for network devices as follows:

cat < /dev/null > /etc/udev/rules.d/30-net_persistent_names.rules

To prepare a SLES guest for cloning:

  1. Open the file /etc/sysconfig/network/config

  2. Edit the line that reads:



  3. Save the changes and reboot the VM.

For more information, see Prepare to Clone a Linux VM.

Ubuntu 18.04

Ubuntu 18.04 offers the following types of kernel:

  • The General Availability (GA) kernel, which is not updated at point releases
  • The Hardware Enablement (HWE) kernel, which is updated at point releases

Some minor versions of Ubuntu 18.04 (for example, 18.04.2 and 18.04.3) use an HWE kernel by default that can experience issues when running the graphical console. To work around these issues, you can choose to run these minor versions of Ubuntu 18.04 with the GA kernel or to change some of the graphics settings. For more information, see CTX265663 - Ubuntu 18.04.2 VMs can fail to boot on Citrix Hypervisor.