Creating Linux VMs

This article discusses how to create Linux VMs, either by installing them or by cloning them. It also contains vendor-specific installation instructions.

When you want to create a VM, you must create the VM using a template for the operating system you want to run on the VM. You can use a template Citrix provides for your operating system, or one that you created previously. You can create the VM from either XenCenter or the CLI. This article focuses on using the CLI.

Note

Customers who want to create a VM of a newer minor update of a Red Hat Enterprise Linux (RHEL) release than is supported for installation by XenServer should install from the latest supported media and then use yum update to bring the VM up to date. This also applies to RHEL derivatives such as CentOS and Oracle Linux.

For example, RHEL 5.10 is supported for installation with XenServer 7.1. Customers who want to use RHEL 5.11 should first install RHEL 5.10, and then use yum update to update to RHEL 5.11.

We recommend that you install the XenServer PV Tools immediately after installing the operating system. For some operating systems, the XenServer PV Tools include a XenServer-specific kernel, which replaces the kernel provided by the vendor. Other operating systems, such as RHEL 5.x, require you to install a specific version of a vendor-provided kernel.

The overview for creating a Linux VM is as follows:

  1. Create the VM for your target operating system using XenCenter or the CLI.

  2. Install the operating system using vendor installation media.

  3. Install the XenServer PV Tools (recommended).

  4. Configure the correct time and time zone on the VM and VNC as you would in a normal non-virtual environment.

XenServer supports the installation of many Linux distributions as VMs. There are three installation mechanisms:

  1. Installing from an internet repository

  2. Installing from a physical CD

  3. Installing from an ISO library

Warning

The Other install media template is for advanced users who want to attempt to install VMs running unsupported operating systems. XenServer has been tested running only the supported distributions and specific versions covered by the standard supplied templates, and any VMs installed using the Other install media template are not supported.

VMs created using the Other install media template are created as HVM guests, which may mean that some Linux VMs use slower emulated devices rather than the higher-performance I/O drivers.

For information regarding specific Linux distributions, see Release Notes.

PV Linux distributions

The supported PV Linux distributions are:

Distribution | Vendor Install from CD | Vendor Install from network repository | Notes
Debian Squeeze 6.0 (32-/64-bit) | X | X |
Debian Wheezy 7 (32-/64-bit) | X | X |
Red Hat Enterprise Linux 4.5–4.8 (32-bit) | X | X | Requires installing XenServer Tools after installing RHEL to apply the Citrix RHEL 4.8 kernel.
Red Hat Enterprise Linux 5.0–5.11 (32-/64-bit) | X | X | Supported provided you use the 5.4 or later kernel.
Red Hat Enterprise Linux 6.0–6.8 (32-/64-bit) | X | X |
CentOS 4.5–4.8 (32-bit) | X | X |
CentOS 5.0–5.11 (32-/64-bit) | X | X |
CentOS 6.0–6.8 (32-/64-bit) | X | X |
Oracle Linux 5.0–5.11 (32-/64-bit) | X | X |
Oracle Linux 6.0–6.8 (32-/64-bit) | X | X |
Scientific Linux 5.11 (32-/64-bit) | X | X | Supported provided you use the 5.4 or later kernel.
Scientific Linux 6.6–6.8 (32-/64-bit) | X | X |
SUSE Linux Enterprise Server 10 SP1, SP2, SP4 (32-/64-bit) | X | X |
SUSE Linux Enterprise Server 10 SP3 (32-bit) | | | Supported only if upgrading from SLES 10 SP2
SUSE Linux Enterprise Server 10 SP3 (64-bit) | X | X |
SUSE Linux Enterprise Server 11, 11 SP1, 11 SP2, 11 SP3, 11 SP4 (32-/64-bit) | X | X |
SUSE Linux Enterprise Server 12, 12 SP1, 12 SP2 (64-bit) | X | X |
SUSE Linux Enterprise Desktop 11 SP3 (64-bit) | X | X |
SUSE Linux Enterprise Desktop 12, 12 SP1, 12 SP2 (64-bit) | X | X |
Ubuntu 10.04 (32-/64-bit) | | X |
Ubuntu 12.04 (32-/64-bit) | X | X |
NeoKylin Linux Advanced Server 6.5 (64-bit) | X | X |

Other PV Linux distributions not present in the above list are not supported. However, distributions that use the same installation mechanism as Red Hat Enterprise Linux (for example, Fedora Core) might be successfully installed using the same template.

Note

  • Running 32-bit PV Linux VMs on a host that has more than 128 GB of memory is not supported.

  • XenServer hardware security features can reduce the overall performance of 32-bit PV VMs. If this issue impacts you, you can do one of the following things:

    • Run a 64-bit version of the PV Linux VM

    • Boot Xen with the no-smep no-smap options.

      We do not recommend this option as it can reduce the depth of security of the host.

HVM Linux distributions

HVM Linux VMs can take advantage of the x86 virtual container technologies in newer processors for improved performance. Network and storage access from these guests still operates in PV mode, using drivers built into the kernels.

The supported HVM Linux distributions are:

Distribution | Vendor Install from CD | Vendor Install from network repository
Debian Jessie 8.0 (32-/64-bit) | X | X
Debian Stretch 9.0 (32-/64-bit) | X | X
Red Hat Enterprise Linux 7.x (64-bit) | X | X
CentOS 7.x (64-bit) | X | X
Oracle Linux 7.x (64-bit) | X | X
Scientific Linux 7.x (64-bit) | X | X
SUSE Linux Enterprise Server 12 SP3 (64-bit) | X | X
SUSE Linux Enterprise Desktop 12 SP3 (64-bit) | X | X
Ubuntu 14.04 (64-bit) | X | X
Ubuntu 16.04 (64-bit) | X | X
CoreOS Stable (64-bit) | X | X
Linx Linux V6.0 (64-bit) | X | X
Linx Linux V8.0 (64-bit) | X | X
Yinhe Kylin (64-bit) | X | X

Creating a Linux VM by Installing from an Internet Repository

This section shows the xe CLI procedure for creating a Linux VM, using a Debian Squeeze example, by installing the OS from an internet repository.

Example: Installing a Debian Squeeze VM from a network repository

  1. Create a VM from the Debian Squeeze template. The UUID of the VM is returned:

    xe vm-install template=template-name new-name-label=squeeze-vm
    
  2. Specify the installation repository. This should be a Debian mirror with at least the packages required to install the base system and the additional packages you plan to select during the Debian installer:

    xe vm-param-set uuid=UUID other-config:install-repository=path_to_repository
    

    An example of a valid repository path is http://ftp.xx.debian.org/debian, where xx is your country code (see the Debian mirror list for a list of these). For multiple installations, Citrix recommends using a local mirror or apt proxy to avoid generating excessive network traffic or load on the central repositories.

    Note

    The Debian installer supports only HTTP and FTP apt repositories; NFS is not supported.

  3. Find the UUID of the network that you want to connect to. For example, if it is the one attached to xenbr0:

    xe network-list bridge=xenbr0 --minimal
    
  4. Create a VIF to connect the new VM to this network:

    xe vif-create vm-uuid=vm_uuid network-uuid=network_uuid mac=random device=0
    
  5. Start the VM. It boots straight into the Debian installer:

    xe vm-start uuid=UUID
    
  6. Follow the Debian Installer procedure to install the VM in the configuration you require.

  7. See below for instructions on how to install the guest utilities and how to configure graphical display.
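
The steps above can be combined into a single shell session. The following is a minimal sketch, assuming the template is named Debian Squeeze 6.0 (64-bit) and using a hypothetical mirror URL; adjust both for your environment:

    # Create the VM from the template; vm-install prints the new VM's UUID
    VM=$(xe vm-install template="Debian Squeeze 6.0 (64-bit)" new-name-label=squeeze-vm)
    # Point the installer at a Debian mirror (hypothetical URL)
    xe vm-param-set uuid=$VM other-config:install-repository=http://ftp.us.debian.org/debian
    # Attach the VM to the network on xenbr0, then start it
    NET=$(xe network-list bridge=xenbr0 --minimal)
    xe vif-create vm-uuid=$VM network-uuid=$NET mac=random device=0
    xe vm-start uuid=$VM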

Creating a Linux VM by Installing from a Physical CD/DVD

This section shows the CLI procedure for creating a Linux VM, using a Debian Squeeze example, by installing the OS from a physical CD/DVD.

Example: Installing a Debian Squeeze VM from CD/DVD (using the CLI)

  1. Create a VM from the Debian Squeeze template. The UUID of the VM is returned:

    xe vm-install template=template-name new-name-label=vm-name
    
  2. Get the UUID of the root disk of the new VM:

    xe vbd-list vm-uuid=vm_uuid userdevice=0 params=uuid --minimal
    
  3. Using the UUID returned, set the root disk to not be bootable:

    xe vbd-param-set uuid=root_disk_uuid bootable=false
    
  4. Get the name of the physical CD drive on the XenServer host:

    xe cd-list
    

    The result of this command should give you something like SCSI 0:0:0:0 for the name-label field.

  5. Add a virtual CD-ROM to the new VM using the XenServer host CD drive name-label parameter as the cd-name parameter:

    xe vm-cd-add vm=vm_name cd-name="host_cd_drive_name_label" device=3
    
  6. Get the UUID of the VBD corresponding to the new virtual CD drive:

    xe vbd-list vm-uuid=vm_uuid type=CD params=uuid --minimal
    
  7. Make the VBD of the virtual CD bootable:

    xe vbd-param-set uuid=cd_drive_uuid bootable=true
    
  8. Set the install repository of the VM to be the CD drive:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=cdrom
    
  9. Insert the Debian Squeeze installation CD into the CD drive on the XenServer host.

  10. Start the VM. It boots straight into the Debian installer:

    xe vm-start uuid=UUID
    

  11. Open a console to the VM with XenCenter or an SSH terminal and follow the steps to perform the OS installation.

See the sections that follow for instructions on how to install the guest utilities and how to configure graphical display.

Creating a Linux VM by Installing From an ISO Image

This section shows the CLI procedure for creating a Linux VM by installing the OS from a network-accessible ISO image.

Example: Installing a Linux VM from a Network-Accessible ISO Image

  1. Run the following command:

    xe vm-install template=template-name new-name-label=name_for_vm sr-uuid=storage_repository_uuid
    

    This command returns the UUID of the new VM.

  2. Find the UUID of the network that you want to connect to. For example, if it is the one attached to xenbr0:

    xe network-list bridge=xenbr0 --minimal
    
  3. Create a VIF to connect the new VM to this network:

    xe vif-create vm-uuid=vm_uuid network-uuid=network_uuid mac=random device=0
    
  4. Set the install-repository key of the other-config parameter to the path of your network repository. For example, to use http://mirror.centos.org/centos/6/os/x86_64 as the URL of the vendor media:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=http://mirror.centos.org/centos/6/os/x86_64
    
  5. Start the VM:

    xe vm-start uuid=vm_uuid
    
  6. Connect to the VM console using XenCenter or VNC and perform the OS installation.

Network Installation Notes

The XenServer guest installer allows you to install an operating system from a network-accessible ISO image onto a VM. To prepare for installing from an ISO, make an exploded network repository of your vendor media (not ISO images) and export it over NFS, HTTP, or FTP so that it is accessible to the XenServer host administration interface.

The network repository must be accessible from the control domain of the XenServer host, normally using the management interface. The URL must point to the base of the CD/DVD image on the network server, and be of the form:

  • HTTP.

     http://<server>/<path>
    
  • FTP.

     ftp://<server>/<path>
    
  • NFS.

     nfs://<server>/<path>
    
     or

     nfs:<server>:/<path>
    

See your vendor installation instructions for information about how to prepare for a network-based installation, such as where to unpack the ISO.

Note

When using the NFS installation method from XenCenter, the nfs:// style of path should always be used.

When creating VMs from templates, the XenCenter New VM wizard prompts you for the repository URL. When using the CLI, install the template as normal using vm-install and then set the other-config:install-repository parameter to the value of the URL. When the VM is subsequently started, it begins the network installation process.

Warning

When installing a new Linux-based VM, it is important to fully finish the installation and reboot it before performing any other operations on it. This is analogous to not interrupting a Windows installation, which would leave you with a non-functional VM.

Advanced Operating System Boot Parameters

When creating a VM, you can specify advanced operating system boot parameters using XenCenter or the xe CLI. Specifying advanced parameters may be helpful if you are, for example, configuring automated installations of paravirtualized guests. For example, you might use a Debian preseed or RHEL kickstart file as follows.

To install Debian using a preseed file

  1. Create a preseed file. For information on creating preseed files, see the Debian documentation.

  2. Set the kernel command-line correctly for the VM before starting it. This can be done using the New VM wizard in XenCenter or by executing an xe CLI command like the following:

    xe vm-param-set uuid=uuid PV-args=preseed_arguments
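
    For example, a sketch of this command assuming a preseed file hosted at the hypothetical URL http://server/preseed.cfg, using the standard Debian installer preseeding parameters:

    # auto=true priority=critical defers the early installer questions;
    # url= points at the preseed file (hypothetical location)
    xe vm-param-set uuid=uuid PV-args="auto=true priority=critical url=http://server/preseed.cfg"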
    

To install RHEL Using a Kickstart File

Note

A Red Hat Kickstart file is an automated installation method, similar to an answer file, that you can use to provide responses to the RHEL installation prompts. To create this file, install RHEL manually; the kickstart file is located in /root/anaconda-ks.cfg.

  1. In XenCenter, choose the appropriate RHEL template.

  2. Specify the kickstart file to use as a kernel command-line argument in the XenCenter New VM Wizard, exactly as it would be specified in the PXE config file, for example:

    ks=http://server/path ksdevice=eth0
    
  3. On the command line, use vm-param-set to set the PV-args parameter to make use of a Kickstart file:

    xe vm-param-set uuid=vm_uuid PV-args="ks=http://server/path ksdevice=eth0"
    
  4. Set the repository location so XenServer knows where to get the kernel and initrd from for the installer boot:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=http://server/path
    

    Note

    To install using a kickstart file without the New VM wizard, you can add the appropriate argument to the Advanced OS boot parameters text box.

Installing the Linux Guest Agent

Although all the supported Linux distributions are natively paravirtualized (and therefore do not need special drivers for full performance), XenServer includes a guest agent which provides additional information about the VM to the host. You must install the guest agent on each Linux VM to enable Dynamic Memory Control (DMC).

It is important to keep the Linux guest agent up-to-date (see Updating VMs) as you upgrade your XenServer host.

To install the guest agent

  1. The required files are present on the built-in guest-tools.iso CD image. Alternatively, you can install them by selecting the VM and then the Install XenServer PV Tools option in XenCenter.

  2. Mount the image onto the guest by running the command:

    mount -o ro,exec /dev/disk/by-label/XenServer\\x20Tools /mnt
    

    Note

    If mounting the image fails, you can locate the image by running the following:

    blkid -t LABEL="XenServer PV Tools"
    
  3. Execute the installation script as the root user:

    /mnt/Linux/install.sh
    
  4. Unmount the image from the guest by running the command:

    umount /mnt
    
  5. If the kernel has been upgraded, or the VM was upgraded from a previous version, reboot the VM now.
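
To confirm that the agent is running after installation (or after the reboot), you can check for its daemon inside the guest. The daemon name below is an assumption based on the usual packaging of the Linux guest agent:

    # The guest agent runs as xe-daemon, started by the xe-linux-distribution init script
    ps aux | grep "[x]e-daemon"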

Note

CD-ROM drives and ISOs attached to Linux Virtual Machines appear as devices, such as /dev/xvdd (or /dev/sdd in Ubuntu 10.10 and later) instead of as /dev/cdrom as you might expect. This is because they are not true CD-ROM devices, but normal devices. When the CD is ejected by either XenCenter or the CLI, it hot-unplugs the device from the VM and the device disappears. This is different from Windows Virtual Machines, where the CD remains in the VM in an empty state.
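
For example, to mount an attached CD image inside the guest, assuming the device appears as /dev/xvdd as described above:

    # The virtual CD is a plain block device, not /dev/cdrom
    mount -o ro /dev/xvdd /mnt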

Additional Installation Notes for Linux Distributions

The following sections list additional vendor-specific configuration information that you should be aware of before creating the specified Linux VMs.

Important

For detailed release notes on all distributions, see Release Notes.

CentOS 5.x (32-/64-bit)

For a CentOS 5.x VM, you must ensure that the operating system is using the CentOS 5.4 kernel or later, which is available from the distribution vendor. Enterprise Linux kernel versions prior to 5.4 contain issues that prevent XenServer VMs from running properly. Upgrade the kernel using the vendor’s normal kernel upgrade procedure.

Red Hat Enterprise Linux 5.x (32-/64-bit)

For a RHEL 5.x VM, you must ensure that the operating system is using the RHEL 5.4 kernel (2.6.18-164.el5) or later, which is available from the distribution vendor. Enterprise Linux kernel versions prior to 5.4 contain issues that prevent XenServer VMs from running properly. Upgrade the kernel using the vendor’s normal kernel upgrade procedure.

Red Hat Enterprise Linux 7.x (64-bit)

This information applies to both Red Hat and Red Hat derivatives.

The new template for these guests specifies 2 GB RAM. This amount of RAM is a requirement for a successful install of v7.4 and later. For v7.0–v7.3, the template specifies 2 GB RAM, but as with previous versions of XenServer, 1 GB RAM is sufficient.

Oracle Linux 5.x (32-/64-bit)

For an OEL 5.x VM, you must ensure that the operating system is using the OEL 5.4 kernel or later, which is available from the distribution vendor. Enterprise Linux kernel versions prior to 5.4 contain issues that prevent XenServer VMs from running properly.

Upgrade the kernel using the vendor’s normal kernel upgrade procedure.

For OEL 5.6 64-bit, the Unbreakable Enterprise Kernel (UEK) does not support the Xen platform. If you attempt to use UEK with this operating system, the kernel fails to boot properly.

Oracle Linux 6.9 (64-bit)

For OEL 6.9 VMs with more than 2 GB memory, set the boot parameter crashkernel=no to disable the crash kernel. The VM reboots successfully only when this parameter is set. If you use an earlier version of OEL 6.x, set this boot parameter before updating to OEL 6.9. To set the parameter by using XenCenter, add it to the Advanced OS boot parameters field in the Installation Media page of the New VM wizard. To modify an existing VM by using XenCenter, right-click the VM and select Properties > Boot Options > OS boot parameters.
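
From the CLI, a sketch of the same change is to set the VM's PV-args parameter, which backs the Advanced OS boot parameters field (as in the kickstart example earlier). Note that this overwrites any existing boot parameters, so include them in the quoted string:

    xe vm-param-set uuid=vm_uuid PV-args="crashkernel=no"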

Debian 6.0 (Squeeze) (32-/64-bit)

When a private mirror is specified in XenCenter, it is used only to retrieve the installer kernel. Once the installer is running, you must enter the address of the mirror again for package retrieval.

Debian 7 (Wheezy) (32-/64-bit)

When a private mirror is specified in XenCenter, it is used only to retrieve the installer kernel. Once the installer is running, you must enter the address of the mirror again for package retrieval.

Ubuntu 10.04 (32-/64-bit)

For Ubuntu 10.04 VMs with multiple vCPUs, Citrix strongly recommends that you update the guest kernel to “2.6.32-32 #64”. For details on this issue, see the Knowledge Base article CTX129472 Ubuntu 10.04 Kernel Bug Affects SMP Operation.

Asianux Server 4.5

Installation must be performed with a graphical installer. In the Installation Media tab, add “VNC” in the Advanced OS boot parameters field.

Linx Linux v6.0

Supports up to 6 vCPUs. To add disks to Linx Linux V6.0 VMs, set the device ID to a value greater than 3 by using the following steps:

  1. Get the usable device ID:

    xe vm-param-get param-name=allowed-VBD-devices uuid=VM-uuid
    
  2. Use an ID from the list that is greater than 3:

    xe vbd-param-set userdevice=device_id uuid=VBD-uuid
    

Yinhe Kylin 4.0

For guest tools installation, enable the root user in the GRUB menu and install the guest tools as the root user.

NeoKylin Linux Security OS V5.0 (64-bit)

By default, NeoKylin Linux Security OS 5 (64-bit) disables settings in /etc/init/control-alt-delete.conf. Thus, it cannot be rebooted by the xe command or XenCenter. To resolve this issue, do one of the following:

  • Specify the force=true option when running xe to reboot the VM:

     xe vm-reboot force=true uuid=<vm uuid>
    

    Or, click the Force Reboot button after clicking Reboot in XenCenter.

  • Ensure that the following two lines are enabled in the /etc/init/control-alt-delete.conf file of the guest OS:

     start on control-alt-delete
     exec /sbin/shutdown -r now "Control-Alt-Delete pressed"
    

By default, SELinux is enabled in the OS, so the user cannot log in to the VM through XenCenter. To resolve this issue, do the following:

  1. Disable SELinux by adding selinux=0 to the boot options through XenCenter.
  2. After accessing the VM, note the IP address of the VM.
  3. Using the IP address obtained in the previous step, connect to the VM with any third-party software (for example, Xshell) and remove selinux=0.

    Note:

    You can access the VM using XenCenter only if you disable SELinux.

  4. If you don’t need to access the VM using XenCenter, enable SELinux again by removing the option you previously added.

Debian Apt Repositories

For infrequent or one-off installations, it is reasonable to use a Debian mirror directly. However, if you intend to do several VM installations, we recommend that you use a caching proxy or local mirror. Apt-cacher is a proxy server implementation that keeps a local cache of packages. debmirror is a tool that creates a partial or full mirror of a Debian repository. Either of these tools can be installed into a VM.
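
As a sketch of the caching-proxy approach, assuming apt-cacher-ng (a common apt-cacher implementation that listens on port 3142 by default) running on a hypothetical helper machine named mirror-host:

    # On the helper machine (Debian/Ubuntu):
    apt-get install apt-cacher-ng

    # Then point each VM installation at the cache rather than the upstream mirror:
    xe vm-param-set uuid=vm_uuid \
      other-config:install-repository=http://mirror-host:3142/ftp.debian.org/debian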

Preparing to Clone a Linux VM

Typically, when cloning a VM or a computer, unless you “generalize” the cloned image, attributes unique to that machine, such as the IP address, SID, or MAC address, are duplicated in your environment.

As a result, XenServer automatically changes some virtual hardware parameters when you clone a Linux VM. If you copy the VM using XenCenter, XenCenter automatically changes the MAC address and IP address for you. If these interfaces are configured dynamically in your environment, you might not need to make any modifications to the cloned VM. However, if the interfaces are statically configured, you might need to modify their network configurations.

The VM may need to be customized to be made aware of these changes.

Machine Name

A cloned VM is another computer, and like any new computer in a network, it must have a unique name within the network domain it is part of.

IP address

A cloned VM must have a unique IP address within the network domain it is part of. Generally, this is not a problem if DHCP is used to assign addresses; when the VM boots, the DHCP server assigns it an IP address. If the cloned VM had a static IP address, the clone must be given an unused IP address before being booted.

MAC address

There are two situations when Citrix recommends disabling MAC address rules before cloning:

  1. In some Linux distributions, the MAC address for the virtual network interface of a cloned VM is recorded in the network configuration files. However, when you clone a VM, XenCenter assigns the new cloned VM a different MAC address. As a result, when the new VM is started for the first time, the network does not recognize the new VM and does not come up automatically.

  2. Some Linux distributions use udev rules to remember the MAC address of each network interface, and persist a name for that interface. This is intended so that the same physical NIC always maps to the same ethn interface, which is useful with removable NICs (like laptops). However, this behavior is problematic in the context of VMs. For example, if you configure two virtual NICs when you install a VM, and then shut it down and remove the first NIC, on reboot XenCenter shows just one NIC, but calls it eth0. Meanwhile, the udev rules in the VM deliberately force this interface to be eth1. The result is that networking does not work.

If the VM uses persistent names, Citrix recommends disabling these rules before cloning. If for some reason you do not want to turn persistent names off, you must reconfigure networking inside the VM (in the usual way). However, the information shown in XenCenter will not match the addresses actually in your network.
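
A minimal sketch of clearing these rules before cloning; the file name is distribution-dependent (for example, 70-persistent-net.rules on RHEL, CentOS, and Debian, while SLES uses 30-net_persistent_names.rules, as shown in the SLES cloning section below):

    # Empty the persistent-net rules so the clone regenerates them on first boot
    cat /dev/null > /etc/udev/rules.d/70-persistent-net.rules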

Linux VM Release Notes

Most modern Linux distributions support Xen paravirtualization directly, but have different installation mechanisms and some kernel limitations.

Red Hat Enterprise Linux 4.5–4.8

The following issues have been reported to Red Hat and are already fixed in the Xen kernel (which can be installed by using the /mnt/Linux/install.sh script in the built-in guest-tools.iso CD image):

  • The Xen kernel in RHEL 4.8 can occasionally enter tickless mode when an RCU is pending. When this triggers, it is usually in synchronize_kernel() which means the guest essentially hangs until some external event (such as a SysRQ) releases it (Red Hat Bugzilla 427998)

  • Live migration can occasionally crash the kernel under low memory conditions (Red Hat Bugzilla 249867)

  • Guest kernel can occasionally hang due to other XenStore activity (Red Hat Bugzilla 250381)

  • RHEL 4.7 contains a bug which normally prevents it from booting on a host with more than 64 GiB of RAM (Red Hat Bugzilla 311431). For this reason XenServer RHEL 4.7 guests are only allocated RAM addresses in the range below 64 GiB by default. This may cause RHEL 4.7 guests to fail to start even if RAM appears to be available, in which case rebooting or shutting down other guests can cause suitable RAM to become available. If all else fails, temporarily shut down other guests until your RHEL 4.7 VM can boot.

    Once you have succeeded in booting your RHEL 4.7 VM, install the XenServer PV Tools and run the command:

     xe vm-param-remove uuid=vm_uuid param-name=other-config \
     param-key=machine-address-size
    

    to remove the memory restriction.

  • On some hardware (usually newer systems), the CPU generates occasional spurious page faults which the OS should ignore. Unfortunately versions of RHEL 4.5–4.7 fail to ignore the spurious fault and it causes them to crash (Red Hat Bugzilla 465914).

    This has been fixed in our kernel. The RHEL 4 VM templates have been set with the suppress-spurious-page-faults parameter. This ensures that the installation continues safely to the point that the standard kernel is replaced with the Citrix-provided kernel.

    There is a performance impact with this parameter set, so, after the VM installation is complete, at the VM command prompt, run the command:

     xe vm-param-remove uuid=vm_uuid param-name=other-config \
     param-key=suppress-spurious-page-faults
    
  • In RHEL 4.5–4.7, if a xenbus transaction end command fails it is possible for the suspend_mutex to remain locked preventing any further xenbus traffic. Applying the Citrix RHEL 4.8 kernel resolves this issue. [EXT-5]

  • In RHEL 4.5–4.8, use of the XFS filesystem can lead to kernel panic under exceptional circumstances. Applying the Citrix RHEL 4.8 kernel resolves this issue. [EXT-16]

  • In RHEL 4.5–4.8, the kernel can enter no tick idle mode with RCU pending; this leads to a guest operating system lock up. Applying the Citrix RHEL 4.8 kernel resolves this issue. [EXT-21]

  • In RHEL 4.7, 4.8, VMs may crash when a host has 64 GiB RAM or higher configured. Applying the Citrix RHEL 4.8 kernel resolves this issue. [EXT-30]

  • In RHEL 4.5–4.8, the network driver contains an issue that can, in rare circumstances, lead to a kernel deadlock. Applying the Citrix RHEL 4.8 kernel resolves this issue. [EXT-45]

Additional Notes:

  • In RHEL 4.7 and 4.8, sometimes when there are many devices attached to a VM, there is not enough time for all of these devices to connect, and startup fails. [EXT-17]

  • If you try to install RHEL 4.x on a VM that has more than two virtual CPUs (which RHEL 4.x does not support), an error message incorrectly reports the number of CPUs detected.

Preparing a RHEL 4.5–4.8 guest for cloning

To prepare a RHEL 4.5–4.8 guest for cloning (see Preparing to Clone a Linux VM), edit /etc/sysconfig/network-scripts/ifcfg-eth0 before converting the VM into a template, and remove the HWADDR line.
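
For example, one way to remove the HWADDR line, assuming a single ifcfg-eth0 configuration file:

    # Delete the recorded MAC address so the clone does not pin eth0 to it
    sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0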

Note

Red Hat recommends the use of Kickstart to perform automated installations, instead of directly cloning disk images (see Red Hat KB Article 1308).

RHEL Graphical Install Support

To perform a graphical installation, in XenCenter step through the New VM wizard. In the Installation Media page, in the Advanced OS boot parameters section, add vnc to the list of parameters:

graphical utf8 vnc

You will then be prompted to provide networking configuration for the new VM to enable VNC communication. Work through the remainder of the New VM wizard. When the wizard completes, in the Infrastructure view, select the VM, and click Console to view a console session of the VM; at this point it uses the standard installer. The VM installation will initially start in text mode, and may request network configuration. Once provided, the Switch to Graphical Console button is displayed in the top right corner of the XenCenter window.
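
To do the same from the CLI, set the parameters in the VM's PV-args parameter before starting it, as with the kickstart example earlier:

    xe vm-param-set uuid=vm_uuid PV-args="graphical utf8 vnc"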

Red Hat Enterprise Linux 5

XenServer requires that you run the RHEL 5.4 kernel or higher. Older kernels have the following known issues:

  • RHEL 5.0 64-bit guest operating systems with their original kernels fail to boot on XenServer 7.1. Before attempting to upgrade a XenServer host to version 7.1, customers should update the kernel to version 5.4 (2.6.18-164.el5xen) or later. Customers running these guests who have already upgraded their host to XenServer 7.1, should refer to CTX134845 for information on upgrading the kernel.

  • During the resume operation on a suspended VM, allocations can be made that can cause swap activity which cannot be performed because the swap disk is still being reattached. This is a rare occurrence. (Red Hat Bugzilla 429102).

  • Customers running RHEL 5.3 or 5.4 (32/64-bit) should not use Dynamic Memory Control (DMC) as this may cause the guest to crash. If you want to use DMC, Citrix recommends that customers upgrade to more recent versions of RHEL or CentOS. [EXT-54]

  • In RHEL 5.3, sometimes when there are many devices attached to a VM, there is not enough time for all of these devices to connect and startup fails. [EXT-17]

  • In RHEL 5.0–5.3, use of the XFS file system can lead to kernel panic under exceptional circumstances. Applying the Red Hat RHEL 5.4 kernel onwards resolves this issue. [EXT-16]

  • In RHEL 5.2, 5.3, VMs may crash when a host has 64 GiB RAM or higher configured. Applying the Red Hat RHEL 5.4 kernel onwards resolves this issue. [EXT-30]

  • In RHEL 5.0–5.3, the network driver contains an issue that can, in rare circumstances, lead to a kernel deadlock. Applying the Red Hat RHEL 5.4 kernel onwards resolves this issue. [EXT-45]

Note

In previous releases, XenServer included a replacement RHEL 5 kernel that fixed critical issues that prevented RHEL 5 from running effectively as a virtual machine. Red Hat has resolved these issues in RHEL 5.4 and higher. Therefore, XenServer no longer includes a RHEL 5 specific kernel.

Preparing a RHEL 5.x guest for cloning

To prepare a RHEL 5.x guest for cloning (see Preparing to Clone a Linux VM), edit /etc/sysconfig/network-scripts/ifcfg-eth0 before converting the VM into a template and remove the HWADDR line.

Note

Red Hat recommends the use of Kickstart to perform automated installations, instead of directly cloning disk images (see Red Hat KB Article 1308).

Red Hat Enterprise Linux 6

Note

Red Hat Enterprise Linux 6.x also includes Red Hat Enterprise Linux Workstation 6.6 (64-bit) and Red Hat Enterprise Linux Client 6.6 (64-bit).

  • The RHEL 6.0 kernel has a bug which affects disk I/O on multiple virtualization platforms. This issue causes VMs running RHEL 6.0 to lose interrupts. For more information, see Red Hat Bugzilla 681439, 603938, and 652262.

  • Attempts to detach a Virtual Disk Image (VDI) from a running RHEL 6.1 or 6.2 (32-/64-bit) VM may be unsuccessful and can result in a guest kernel crash with a NULL pointer dereference at <xyz> error message. Customers should update the kernel to version 6.3 (2.6.32-238.el6) or later to resolve this issue. For more information, see Red Hat Bugzilla 773219.

Red Hat Enterprise Linux 7

After migrating or suspending, RHEL 7.x guests may freeze during resume. For more information, see Red Hat Bugzilla 1141249.

CentOS 4

Refer to RHEL 4 Limitations for the list of CentOS 4 release notes.

CentOS 5

Refer to RHEL 5 Limitations for the list of CentOS 5.x release notes.

CentOS 6

Refer to RHEL 6 Limitations for the list of CentOS 6.x release notes.

CentOS 7

Refer to RHEL 7 Limitations for the list of CentOS 7.x release notes.

Oracle Linux 5

Refer to RHEL 5 Limitations for the list of Oracle Linux 5.x release notes.

Oracle Linux 6

Oracle Linux 6.x guests that were installed on a XenServer host running a version earlier than v6.5 continue to run the Red Hat kernel following an upgrade to v6.5. To switch to the UEK kernel (the default with a clean installation), delete the /etc/pygrub/rules.d/oracle-5.6 file in dom0. You can choose which kernel to use for an individual VM by editing the bootloader configuration within the VM.
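
For example, run the following in the control domain (dom0):

    # Removing the rule lets pygrub select the UEK kernel on the next boot
    rm /etc/pygrub/rules.d/oracle-5.6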

Refer to RHEL 6 Limitations for a list of OEL 6.x release notes.

Oracle Linux 7

Refer to RHEL 7 Limitations for the list of Oracle Linux 7.x release notes.

Scientific Linux 5

Refer to RHEL 5 Limitations for the list of Scientific Linux 5.x release notes.

Scientific Linux 6

Refer to RHEL 6 Limitations for the list of Scientific Linux 6.x release notes.

Scientific Linux 7

Refer to RHEL 7 Limitations for the list of Scientific Linux 7.x release notes.

SUSE Enterprise Linux 10 SP1

XenServer uses the standard Novell kernel supplied with SLES 10 SP2 as the guest kernel. Any bugs found in this kernel are reported upstream to Novell and listed below:

  • A maximum of 3 virtual network interfaces is supported.

  • Disks sometimes do not attach correctly on boot. (Novell Bugzilla 290346).

SUSE Enterprise Linux 10 SP3

Due to a defect in the packaging of the Novell SUSE Linux Enterprise Server 10 SP3 (32-bit) edition, users cannot create a VM of this edition directly. As a workaround, you must install SLES 10 SP2 and then upgrade it to SLES 10 SP3 using, for example, yast within the VM. For more information, refer to Novell documentation 7005079.

SUSE Enterprise Linux 11

XenServer uses the standard Novell kernel supplied with SLES 11 as the guest kernel. Any bugs found in this kernel are reported upstream to Novell and listed below:

  • Live migration of a SLES 11 VM which is under high load may fail with the message An error occurred during the migration process. This is due to a known issue with the SLES 11 kernel which has been reported to Novell. It is expected that kernel update 2.6.27.23-0.1.1 and later from Novell will resolve this issue.

SUSE Enterprise Linux 11 SP2

Creating a SLES 11 SP2 (32-bit) VM can cause the SLES installer or the VM to crash due to a bug in the SLES 11 SP2 kernel. To work around this issue, customers should allocate at least 1 GB memory to the VM. The amount of assigned memory can be reduced after installing updates to the VM. For more information, see Novell Bugzilla 809166.

Preparing a SLES guest for cloning

Note

Before you prepare a SLES guest for cloning, ensure that you clear the udev configuration for network devices as follows:

cat /dev/null > /etc/udev/rules.d/30-net_persistent_names.rules

To prepare a SLES guest for cloning (see Preparing to Clone a Linux VM):

  1. Open the file /etc/sysconfig/network/config

  2. Edit the line that reads:

    FORCE_PERSISTENT_NAMES=yes
    

    to

    FORCE_PERSISTENT_NAMES=no
    
  3. Save the changes and reboot the VM.

Ubuntu 10.04

On an Ubuntu 10.04 (64-bit) VM, attempts to set the maximum number of vCPUs available to the VM (VCPUs-max) higher than the number of vCPUs available during boot (VCPUs-at-startup) can cause the VM to crash during boot. For more information, see Ubuntu Launchpad 1007002.

Ubuntu 12.04

Ubuntu 12.04 VMs with the original kernel can crash during boot. To work around this issue, create Ubuntu 12.04 VMs using the latest install media supported by the vendor, or update an existing VM to the latest version by using the in-guest update mechanism.

Ubuntu 14.04

Attempts to boot a PV guest may cause the guest to crash with the following error: kernel BUG at /build/buildd/linux-3.13.0/arch/x86/kernel/paravirt.c:239!. This is caused by improperly calling a non-atomic function from interrupt context. Customers should update the linux-image package to version 3.13.0-35.62 to fix this issue. For more information, see Ubuntu Launchpad 1350373.