Linux VMs

To create a Linux VM, install it from a template for the operating system that you want to run. You can use a template that Citrix provides for your operating system, or one that you created previously. You can create the VM from either XenCenter or the CLI. This section focuses on using the CLI.

Note:

To create a VM of a newer minor update of a RHEL release than is supported for installation by XenServer, complete the following steps:

  • Install from the latest supported media
  • Use yum update to bring the VM up-to-date

This process also applies to RHEL derivatives such as CentOS and Oracle Linux.

We recommend that you install the XenServer Tools immediately after installing the operating system. For more information, see Install the Linux Guest Agent. For some operating systems, the XenServer Tools include a kernel specific to XenServer, which replaces the kernel provided by the vendor. Other operating systems, such as RHEL 5.x, require you to install a specific version of a vendor-provided kernel.

The overview for creating a Linux VM is as follows:

  1. Create the VM for your target operating system using XenCenter or the CLI.

  2. Install the operating system using vendor installation media.

  3. Install the XenServer Tools (recommended).

  4. Configure the correct time and time zone on the VM, and VNC if needed, as you would in a normal non-virtual environment.

XenServer supports the installation of many Linux distributions as VMs. There are three installation mechanisms: installing from an internet repository, from a physical CD or DVD, or from a network-accessible ISO image. Each mechanism is described later in this section.

Warning:

The Other install media template is for advanced users who want to attempt to install VMs running unsupported operating systems. XenServer has been tested running only the supported distributions and specific versions covered by the standard supplied templates. Any VMs installed using the Other install media template are not supported.

VMs created using the Other install media template are created as HVM guests. This behavior might mean that some Linux VMs use slower emulated devices rather than the higher performance I/O drivers.

For information regarding specific Linux distributions, see Installation notes for Linux distributions.

PV Linux distributions

The supported PV Linux distributions are:

  • Debian Squeeze 6 (32-/64-bit)
  • Debian Wheezy 7 (32-/64-bit)
  • Red Hat Enterprise Linux 5.x (32-/64-bit)

    Supported, provided that you use the 5.4 kernel or later.

  • Red Hat Enterprise Linux 6.x (32-/64-bit)
  • CentOS 5.x (32-/64-bit)
  • CentOS 6.x (32-/64-bit)
  • Oracle Linux 5.x (32-/64-bit)
  • Oracle Linux 6.x (32-/64-bit)
  • Scientific Linux 6.6–6.9 (32-/64-bit)
  • SUSE Linux Enterprise Server 11 SP3, SP4 (32-/64-bit)
  • SUSE Linux Enterprise Server 12, 12 SP1, 12 SP2 (64-bit)
  • SUSE Linux Enterprise Desktop 11 SP3 (64-bit)
  • SUSE Linux Enterprise Desktop 12, 12 SP1, 12 SP2 (64-bit)
  • Ubuntu 12.04 (32-/64-bit)
  • NeoKylin Linux Advanced Server 6.5 (64-bit)
  • Asianux Server 4.2 (64-bit)
  • Asianux Server 4.4 (64-bit)
  • Asianux Server 4.5 (64-bit)
  • GreatTurbo Enterprise Server 12.2 (64-bit)
  • NeoKylin Linux Security OS V5.0 (64-bit)

Other PV Linux distributions are not supported. However, distributions that use the same installation mechanism as Red Hat Enterprise Linux (for example, Fedora Core) might be successfully installed using the same template.

Notes:

  • Running 32-bit PV Linux VMs on a host that has more than 128 GB of memory is not supported.

  • XenServer hardware security features can reduce the overall performance of 32-bit PV VMs. If this issue impacts you, you can do one of the following things:

    • Run a 64-bit version of the PV Linux VM
    • Boot Xen with the no-smep no-smap options.

    We do not recommend this option, as it can reduce the security of the host.

HVM Linux distributions

These VMs can take advantage of the x86 virtual container technologies in newer processors for improved performance. Network and storage access from these guests still operates in PV mode, using drivers built into the kernel.

The supported HVM Linux distributions are:

  • Debian Jessie 8 (32-/64-bit)
  • Debian Stretch 9 (32-/64-bit)
  • Red Hat Enterprise Linux 7.x (64-bit)
  • CentOS 7.x (64-bit)
  • Oracle Enterprise Linux 7.x (64-bit)
  • Scientific Linux 7.x (64-bit)
  • SUSE Linux Enterprise Server 12 SP3 (64-bit)
  • SUSE Linux Enterprise Desktop 12 SP3 (64-bit)
  • Ubuntu 14.04 (32-/64-bit)
  • Ubuntu 16.04 (32-/64-bit)
  • CoreOS Stable (64-bit)
  • Linx Linux V6.0 (64-bit)
  • Linx Linux V8.0 (64-bit)
  • Yinhe Kylin 4.0 (64-bit)

Other HVM distributions are not supported. However, distributions that use the same installation mechanism as Red Hat Enterprise Linux (for example, Fedora Core) might be successfully installed using the same template.

Create a Linux VM by installing from an internet repository

This section shows the xe CLI procedure for creating a Linux VM, using a Debian Squeeze example, by installing the OS from an internet repository.

  1. Create a VM from the Debian Squeeze template. The UUID of the VM is returned:

    xe vm-install template=template-name new-name-label=squeeze-vm
    
  2. Specify the installation repository. This repository is a Debian mirror with the packages required to install the base system and any extras that you select in the Debian installer:

    xe vm-param-set uuid=UUID other-config:install-repository=path_to_repository
    

    An example of a valid repository path is http://ftp.xx.debian.org/debian where xx is your country code (see the Debian mirror list for a list of these codes). For multiple installations Citrix recommends using a local mirror or apt proxy to avoid generating excessive network traffic or load on the central repositories.

    Note:

    The Debian installer supports only HTTP and FTP apt repositories. NFS is not supported.

  3. Find the UUID of the network that you want to connect to. For example, if it is the one attached to xenbr0:

    xe network-list bridge=xenbr0 --minimal
    
  4. Create a VIF to connect the new VM to this network:

    xe vif-create vm-uuid=vm_uuid network-uuid=network_uuid mac=random device=0
    
  5. Start the VM. It boots straight into the Debian installer:

    xe vm-start uuid=UUID
    
  6. Follow the Debian Installer procedure to install the VM in the configuration you require.

  7. Install the guest agent and configure graphical display. For more information, see Install the Linux Guest Agent.
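The steps above can be collected into a single script. The sketch below is a dry run: the xe shell function is a stub that records each call on stderr and returns a dummy UUID, so you can check the sequence anywhere; remove the stub to run the real commands on a XenServer host. The template name, mirror URL, and bridge are illustrative assumptions.

```shell
#!/bin/sh
# Dry-run sketch of steps 1-5 above. The xe() stub only records each call and
# returns a dummy UUID; remove it to run the real commands on a XenServer host.
xe() { echo "+ xe $*" >&2; echo "00000000-0000-0000-0000-000000000000"; }

TEMPLATE="Debian Squeeze 6.0 (64-bit)"    # assumed name; check `xe template-list`
MIRROR="http://ftp.us.debian.org/debian"  # pick a mirror near you, or a local proxy
BRIDGE="xenbr0"

VM_UUID=$(xe vm-install template="$TEMPLATE" new-name-label=squeeze-vm)
xe vm-param-set uuid="$VM_UUID" other-config:install-repository="$MIRROR"
NET_UUID=$(xe network-list bridge="$BRIDGE" --minimal)
xe vif-create vm-uuid="$VM_UUID" network-uuid="$NET_UUID" mac=random device=0
xe vm-start uuid="$VM_UUID"
```

On a real host, the template name must match the output of xe template-list, and for repeated installations use a local mirror or apt proxy as recommended above.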

Create a Linux VM by installing from a physical CD or DVD

This section shows the CLI procedure for creating a Linux VM, using a Debian Squeeze example, by installing the OS from a physical CD/DVD.

  1. Create a VM from the Debian Squeeze template. The UUID of the VM is returned:

    xe vm-install template=template-name new-name-label=vm-name
    
  2. Get the UUID of the root disk of the new VM:

    xe vbd-list vm-uuid=vm_uuid userdevice=0 params=uuid --minimal
    
  3. Using the UUID returned, set the root disk not to be bootable:

    xe vbd-param-set uuid=root_disk_uuid bootable=false
    
  4. Get the name of the physical CD drive on the XenServer host:

    xe cd-list
    

    The result of this command gives you something like SCSI 0:0:0:0 for the name-label field.

  5. Add a virtual CD-ROM to the new VM using the XenServer host CD drive name-label parameter as the cd-name parameter:

    xe vm-cd-add vm=vm_name cd-name="host_cd_drive_name_label" device=3
    
  6. Get the UUID of the VBD corresponding to the new virtual CD drive:

    xe vbd-list vm-uuid=vm_uuid type=CD params=uuid --minimal
    
  7. Make the VBD of the virtual CD bootable:

    xe vbd-param-set uuid=cd_drive_uuid bootable=true
    
  8. Set the install repository of the VM to be the CD drive:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=cdrom
    
  9. Insert the Debian Squeeze installation CD into the CD drive on the XenServer host.

  10. Start the VM. It boots straight into the Debian installer:

    xe vm-start uuid=UUID
    
  11. Open a console to the VM with XenCenter or an SSH terminal and follow the Debian installer steps to perform the OS installation.

  12. Install the guest utilities and configure graphical display. For more information, see Install the Linux Guest Agent.

Create a Linux VM by installing from an ISO image

This section shows the CLI procedure for creating a Linux VM by installing the OS from a network-accessible ISO image.

  1. Run the command:

    xe vm-install template=template-name new-name-label=name_for_vm sr-uuid=storage_repository_uuid
    

    This command returns the UUID of the new VM.

  2. Find the UUID of the network that you want to connect to. For example, if it is the one attached to xenbr0:

    xe network-list bridge=xenbr0 --minimal
    
  3. Create a VIF to connect the new VM to this network:

    xe vif-create vm-uuid=vm_uuid network-uuid=network_uuid mac=random device=0
    
  4. Set the install-repository key of the other-config parameter to the path of your network repository. For example, to use http://mirror.centos.org/centos/6/os/x86_64 as the URL of the vendor media:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=http://mirror.centos.org/centos/6/os/x86_64
    
  5. Start the VM

    xe vm-start uuid=vm_uuid
    
  6. Connect to the VM console using XenCenter or VNC and perform the OS installation.

Network installation notes

The XenServer guest installer allows you to install an operating system from a network-accessible ISO image onto a VM. To prepare for installing from an ISO, make an exploded network repository of your vendor media (not ISO images). Export it over NFS, HTTP, or FTP so that it is accessible to the XenServer host administration interface.

The network repository must be accessible from the control domain of the XenServer host, normally using the management interface. The URL must point to the base of the CD/DVD image on the network server, and be of the form:

  • HTTP: http://<server>/<path>
  • FTP: ftp://<server>/<path>
  • NFS: nfs://<server>/<path>
  • NFS: nfs:<server>/<path>

See your vendor installation instructions for information about how to prepare for a network-based installation, such as where to unpack the ISO.

Note:

When using the NFS installation method from XenCenter, always use the nfs:// style of path.

When creating VMs from templates, the XenCenter New VM wizard prompts you for the repository URL. When using the CLI, install the template as normal using vm-install and then set the other-config:install-repository parameter to the value of the URL. When the VM is then started, it begins the network installation process.

Warning:

When installing a new Linux-based VM, it is important to complete the installation and reboot the VM before performing any other operations on it. As with a Windows installation, interrupting the process can leave you with a non-functional VM.

Advanced operating system boot parameters

When creating a VM, you can specify advanced operating system boot parameters using XenCenter or the xe CLI. Specifying advanced parameters can be helpful when you are, for example, configuring automated installations of paravirtualized guests. For example, you might use a Debian preseed or RHEL kickstart file as follows.

To install Debian by using a preseed file:

  1. Create a preseed file. For information about creating preseed files, see the Debian documentation.

  2. Set the kernel command-line correctly for the VM before starting it. Use the New VM wizard in XenCenter or execute an xe CLI command like the following:

    xe vm-param-set uuid=uuid PV-args=preseed_arguments
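
The form of preseed_arguments depends on your preseed setup. A typical value (the URL and names here are hypothetical placeholders) enables automatic installation and points the installer at the preseed file:

```
xe vm-param-set uuid=vm_uuid \
    PV-args="auto-install/enable=true preseed/url=http://server/preseed.cfg hostname=squeeze-vm domain=example.com"
```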
    

To install RHEL by using a Kickstart File:

Note:

A Red Hat kickstart file is an automated installation method, similar to an answer file, that you can use to provide responses to the RHEL installation prompts. To create this file, install RHEL manually. The kickstart file is then located at /root/anaconda-ks.cfg.

  1. In XenCenter, choose the appropriate RHEL template.

  2. Specify the kickstart file to use as a kernel command-line argument in the XenCenter New VM Wizard. Specify this value exactly as it would be specified in the PXE config file. For example:

    ks=http://server/path ksdevice=eth0
    
  3. On the command line, use vm-param-set to set the PV-args parameter to use a kickstart file:

    xe vm-param-set uuid=vm_uuid PV-args="ks=http://server/path ksdevice=eth0"
    
  4. Set the repository location so that XenServer knows where to get the kernel and initrd for the installer boot:

    xe vm-param-set uuid=vm_uuid other-config:install-repository=http://server/path
    

Note:

To install using a kickstart file without the New VM wizard, you can add the appropriate argument to the Advanced OS boot parameters text box.

Install the Linux guest agent

Although all supported Linux distributions are natively paravirtualized (and don’t need special drivers for full performance), XenServer includes a guest agent. This guest agent provides extra information about the VM to the host. Install the guest agent on each Linux VM to enable Dynamic Memory Control (DMC).

It is important to keep the Linux guest agent up-to-date as you upgrade your XenServer host. For more information, see Update Linux kernels and guest utilities.

To install the guest agent:

  1. The required files are on the built-in guest-tools.iso CD image. Alternatively, you can install them by selecting the VM and then the Install XenServer Tools option in XenCenter.

  2. Mount the image onto the guest by running the command:

    mount -o ro,exec '/dev/disk/by-label/XenServer\x20Tools' /mnt
    

    Note:

    If mounting the image fails, you can locate the image by running the following:

    blkid -t LABEL="XenServer Tools"
    
  3. Execute the installation script as the root user:

    /mnt/Linux/install.sh
    
  4. Unmount the image from the guest by running the command:

    umount /mnt
    
  5. If the kernel has been upgraded, or the VM was upgraded from a previous version, reboot the VM now.

    Note:

    CD-ROM drives and ISOs attached to Linux Virtual Machines appear as devices, such as /dev/xvdd or /dev/sdd, instead of as /dev/cdrom as you might expect. This behavior is because they are not true CD-ROM devices, but normal devices. When you use either XenCenter or the CLI to eject the CD, it hot-unplugs the device from the VM and the device disappears. In Windows VMs, the behavior is different and the CD remains in the VM in an empty state.

Installation notes for Linux distributions

The following sections list vendor-specific configuration information to consider before creating the specified Linux VMs.

For more detailed release notes on all distributions, see Linux VM Release Notes.

CentOS 5.x (32-/64-bit)

For a CentOS 5.x VM, ensure that the operating system uses the CentOS 5.4 kernel or later, which is available from the distribution vendor. Enterprise Linux kernel versions earlier than 5.4 contain issues that prevent XenServer VMs from running properly. Upgrade the kernel using the vendor’s normal kernel upgrade procedure.

Red Hat Enterprise Linux 5.x (32-/64-bit)

For RHEL 5.x VMs, ensure that the operating system uses the RHEL 5.4 kernel (2.6.18-164.el5) or later, which is available from the distribution vendor. Enterprise Linux kernel versions earlier than 5.4 contain issues that prevent XenServer VMs from running properly. Upgrade the kernel using the vendor’s normal kernel upgrade procedure.

Red Hat Enterprise Linux 7.x (64-bit)

The template for these guests specifies 2 GB RAM. This amount of RAM is a requirement for a successful installation of v7.4 and later. For v7.0 to v7.3, the template also specifies 2 GB RAM but, as with previous versions of XenServer, 1 GB RAM is sufficient.
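
If you adjust a VM's memory from the CLI rather than relying on the template defaults, you can make sure that at least 2 GB is configured before installing v7.4 or later. A sketch (the UUID is a placeholder):

```
xe vm-memory-limits-set uuid=vm_uuid \
    static-min=2GiB dynamic-min=2GiB dynamic-max=2GiB static-max=2GiB
```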

Note:

This information applies to both Red Hat and Red Hat derivatives.

Oracle Linux 5.x (32-/64-bit)

For an OEL 5.x VM, ensure that the operating system uses the OEL 5.4 kernel or later, which is available from the distribution vendor. Enterprise Linux kernel versions before 5.4 contain issues that prevent XenServer VMs from running properly. Upgrade the kernel using the vendor’s normal kernel upgrade procedure.

For OEL 5.6 64-bit, the Unbreakable Enterprise Kernel (UEK) does not support the Xen platform. If you attempt to use UEK with this operating system, the kernel fails to boot properly.

Oracle Linux 6.9 (64-bit)

For OEL 6.9 VMs with more than 2 GB memory, set the boot parameter crashkernel=no to disable the crash kernel. The VM reboots successfully only when this parameter is set. If you use an earlier version of OEL 6.x, set this boot parameter before updating to OEL 6.9.

To set the parameter by using XenCenter, add it to the Advanced OS boot parameters field in the Installation Media page of the New VM wizard.

To modify an existing VM by using XenCenter, right-click on the VM and select Properties > Boot Options > OS boot parameters.

Debian 6.0 (Squeeze) (32-/64-bit)

When a private mirror is specified in XenCenter, this mirror is used only to retrieve the installer kernel. When the installer is running, enter the mirror address again for package retrieval.

Debian 7 (Wheezy) (32-/64-bit)

When a private mirror is specified in XenCenter, this mirror is used only to retrieve the installer kernel. When the installer is running, enter the mirror address again for package retrieval.

Asianux Server 4.5

Installation must be performed with a graphical installer. In the Installation Media tab, add vnc to the Advanced OS boot parameters field.

Linx Linux V6.0

Linx Linux V6.0 supports up to 6 vCPUs. To add disks to Linx Linux V6.0 VMs, set the device ID to a value greater than 3 by using the following steps:

  1. Get the usable device IDs:

    xe vm-param-get param-name=allowed-VBD-devices uuid=<VM uuid>

  2. Set the VBD to use an ID from the list that is greater than 3:

    xe vbd-param-set userdevice=<device ID> uuid=<VBD uuid>
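
The two steps above can be scripted by parsing the parameter output. The sketch below uses a hard-coded sample list in place of the real xe vm-param-get output, and assumes the list is semicolon-separated as printed by xe:

```shell
# Sample output of `xe vm-param-get param-name=allowed-VBD-devices uuid=<VM uuid>`;
# on a real host, capture it with command substitution instead.
allowed="4; 5; 6; 7; 8; 9; 10"

# Pick the first free device ID greater than 3 from the list.
first_free=$(echo "$allowed" | tr ';' '\n' | tr -d ' ' | awk '$1 > 3 {print; exit}')
echo "$first_free"
```

The resulting value can then be passed as userdevice to xe vbd-param-set.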

Yinhe Kylin 4.0

For guest tools installation, enable the root user in the grub menu and install the guest tools as the root user.

NeoKylin Linux Security OS V5.0 (64-bit)

By default, NeoKylin Linux Security OS 5 (64-bit) disables the settings in /etc/init/control-alt-delete.conf. As a result, you cannot use the xe command or XenCenter to reboot the VM. To resolve this issue, do one of the following:

  • Specify the force=1 option when running xe to reboot the VM: xe vm-reboot force=1 uuid=<vm uuid>
  • Click the Force Reboot button after clicking Reboot in XenCenter.
  • Ensure that the following two lines are enabled in the /etc/init/control-alt-delete.conf file of the guest OS: start on control-alt-delete and exec /sbin/shutdown -r now "Control-Alt-Delete pressed"

By default, SELinux is enabled in the OS, so you cannot log in to the VM through XenCenter. To resolve this issue, do the following:

  1. Disable SELinux by adding selinux=0 to the boot options through XenCenter.
  2. After accessing the VM, note the IP address of the VM.
  3. Using the IP address from the previous step, use any third-party software (for example, Xshell) to connect to the VM and remove selinux=0.

    Note:

    You can access the VM using XenCenter only if SELinux is disabled.

  4. If you don't need to access the VM using XenCenter, enable SELinux again by removing the option you previously added.

Apt repositories (Debian)

For infrequent or one-off installations, it is reasonable to use a Debian mirror directly. However, if you intend to do several VM installations, we recommend that you use a caching proxy or local mirror. Either of the following tools can be installed into a VM.

  • Apt-cacher: An implementation of a proxy server that keeps a local cache of packages
  • debmirror: A tool that creates a partial or full mirror of a Debian repository

Prepare to clone a Linux VM

Typically, when cloning a VM or a computer, unless you generalize the cloned image, attributes unique to that machine, such as the IP address, SID, or MAC address, are duplicated in your environment.

As a result, XenServer automatically changes some virtual hardware parameters when you clone a Linux VM. When you copy the VM using XenCenter, XenCenter automatically changes the MAC address and IP address for you. If these interfaces are configured dynamically in your environment, you might not need to modify the cloned VM. However, if the interfaces are statically configured, you might need to modify their network configurations.

The VM may need to be customized to be made aware of these changes. For instructions for specific supported Linux distributions, see Linux VM Release Notes.

Machine name

A cloned VM is another computer, and like any new computer in a network, it must have a unique name within the network domain.

IP address

A cloned VM must have a unique IP address within the network domain it is part of. Generally, this requirement is not a problem when DHCP is used to assign addresses. When the VM boots, the DHCP server assigns it an IP address. If the cloned VM had a static IP address, the clone must be given an unused IP address before being booted.

MAC address

There are two situations when Citrix recommends disabling MAC address rules before cloning:

  1. In some Linux distributions, the MAC address for the virtual network interface of a cloned VM is recorded in the network configuration files. However, when you clone a VM, XenCenter assigns the new cloned VM a different MAC address. As a result, when the new VM is started for the first time, the network does not recognize the new VM and does not come up automatically.

  2. Some Linux distributions use udev rules to remember the MAC address of each network interface, and persist a name for that interface. This behavior is intended so that the same physical NIC always maps to the same ethn interface, which is useful with removable NICs (as on laptops). However, this behavior is problematic in the context of VMs.

    For example, consider the behavior in the following case:

    1.  Configure two virtual NICs when installing a VM
    2.  Shut down the VM
    3.  Remove the first NIC
    

    When the VM reboots, XenCenter shows just one NIC and calls it eth0. Meanwhile, the VM deliberately forces this NIC to be eth1. The result is that networking does not work.

For VMs that use persistent names, disable these rules before cloning. If you prefer not to turn off persistent names, you must reconfigure networking inside the VM in the usual way. However, the information shown in XenCenter does not match the addresses actually in your network.
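
On many distributions the udev persistent-name rules live in a file such as /etc/udev/rules.d/70-persistent-net.rules (the exact filename varies by distribution; SLES, for example, uses 30-net_persistent_names.rules). One way to disable the rules before cloning is to empty that file. The sketch below runs against a temporary copy so it can be tried safely anywhere; the rules content is illustrative:

```shell
# Illustrative rules file in a temporary location; on a real VM the file is
# typically /etc/udev/rules.d/70-persistent-net.rules (path varies by distro).
rules=$(mktemp)
cat > "$rules" <<'EOF'
SUBSYSTEM=="net", ATTR{address}=="00:16:3e:xx:xx:xx", NAME="eth0"
EOF

# Empty the file so no MAC-to-name mapping survives into the clone.
cat /dev/null > "$rules"

wc -c < "$rules"   # 0 bytes remain
```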

Update Linux kernels and guest utilities

The Linux guest utilities can be updated by rerunning the Linux/install.sh script from the built-in guest-tools.iso CD image (see Install the Linux Guest Agent).

For yum-enabled distributions (CentOS 5.x, RHEL 5.x, and higher), xe-guest-utilities installs a yum configuration file to enable subsequent updates to be done using yum in the standard manner.

For Debian, /etc/apt/sources.list is populated to enable updates using apt by default.

When upgrading, Citrix recommends that you always rerun Linux/install.sh. This script automatically determines whether your VM needs any updates and installs them if necessary.

Upgrade to Ubuntu 14.04, RHEL 7, and CentOS 7 guests

To upgrade existing Linux guests to versions that operate in HVM mode (for example, RHEL 7.x, CentOS 7.x, and Ubuntu 14.04), perform an in-guest upgrade. At this point, the upgraded guest runs only in PV mode, which is not supported and has known issues. Run the following script to convert the newly upgraded guest to the supported HVM mode.

On the XenServer host, open a local shell, log on as root, and enter the following command:

/opt/xensource/bin/pv2hvm vm_name

Or

/opt/xensource/bin/pv2hvm vm_uuid

Restart the VM to complete the process.

Linux VM release notes

Most modern Linux distributions support Xen paravirtualization directly, but have different installation mechanisms and some kernel limitations.

RHEL graphical install support

To use the graphical installer, in XenCenter step through the New VM wizard. In the Installation Media page, in the Advanced OS boot parameters section, add vnc to the list of parameters:

graphical utf8 vnc

[Screenshot: the New VM wizard Installation Media page, with graphical utf8 vnc entered in the Advanced OS boot parameters field.]

You are prompted to provide networking configuration for the new VM to enable VNC communication. Work through the remainder of the New VM wizard. When the wizard completes, in the Infrastructure view, select the VM, and click Console to view a console session of the VM. At this point, it uses the standard installer. The VM installation initially starts in text mode, and may request network configuration. Once provided, the Switch to Graphical Console button is displayed in the top right corner of the XenCenter window.

Red Hat Enterprise Linux 5

XenServer requires that you run the RHEL 5.4 kernel or higher. Older kernels have the following known issues:

  • RHEL 5.0 64-bit guest operating systems with their original kernels fail to boot on XenServer 7.6. Before attempting to upgrade the XenServer host to version 7.6, update the kernel to version 5.4 (2.6.18-164.el5xen) or later. If you run these guests and have already upgraded your host to XenServer 7.6, see CTX134845 for information about upgrading the kernel.

  • When resuming a suspended VM, memory allocations can be made that cause swap activity that cannot be performed because the swap disk is still being reattached. This occurrence is rare. (Red Hat issue 429102).

  • If you are running RHEL 5.3 or 5.4 (32/64-bit), do not use Dynamic Memory Control (DMC) as this feature can cause the guest to crash. If you want to use DMC, Citrix recommends that you upgrade to more recent versions of RHEL or CentOS. [EXT-54]

  • In RHEL 5.3, sometimes when there are many devices attached to a VM, there is not enough time for all of these devices to connect. In this case, startup fails. [EXT-17]

  • In RHEL 5.0–5.3, use of the XFS file system can lead to kernel panic under exceptional circumstances. Applying the Red Hat RHEL 5.4 kernel onwards resolves this issue. [EXT-16]

  • In RHEL 5.2, 5.3, VMs may crash when a host has 64 GiB RAM or higher configured. Applying the Red Hat RHEL 5.4 kernel onwards resolves this issue. [EXT-30]

  • In RHEL 5.0–5.3, the network driver contains an issue that can, in rare circumstances, lead to a kernel deadlock. Applying the Red Hat RHEL 5.4 kernel onwards resolves this issue. [EXT-45]

Note:

In previous releases, XenServer included a replacement RHEL 5 kernel that fixed critical issues that prevented RHEL 5 from running effectively as a virtual machine. Red Hat has resolved these issues in RHEL 5.4 and higher. Therefore, XenServer no longer includes a RHEL 5 specific kernel.

Prepare a RHEL 5 guest for cloning

To prepare a RHEL 5.x guest for cloning, edit /etc/sysconfig/network-scripts/ifcfg-eth0 before converting the VM into a template and remove the HWADDR line. For more information, see Prepare to clone a Linux VM.
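
The HWADDR line can also be removed non-interactively with sed. The sketch below runs against a sample file in a temporary location, with illustrative contents; on the guest, the target file is /etc/sysconfig/network-scripts/ifcfg-eth0:

```shell
# Sample ifcfg-eth0 in a temp file; edit the real file on the guest instead.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEVICE=eth0
HWADDR=00:16:3e:12:34:56
ONBOOT=yes
BOOTPROTO=dhcp
EOF

# Delete the HWADDR line before converting the VM into a template.
sed -i '/^HWADDR=/d' "$cfg"

grep -c '^HWADDR=' "$cfg" || true   # prints 0 when the line is gone
```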

Note:

Red Hat recommends the use of Kickstart to perform automated installations, instead of directly cloning disk images (see Red Hat KB Article 1308).

Red Hat Enterprise Linux 6

Note:

Red Hat Enterprise Linux 6.x also includes Red Hat Enterprise Linux Workstation 6.6 (64-bit) and Red Hat Enterprise Linux Client 6.6 (64-bit).

  • The RHEL 6.0 kernel has a bug which affects disk I/O on multiple virtualization platforms. This issue causes VMs running RHEL 6.0 to lose interrupts. For more information, see Red Hat issues 681439, 603938, and 652262.

  • Attempts to detach a Virtual Disk Image (VDI) from a running RHEL 6.1 or 6.2 (32-/64-bit) VM might be unsuccessful. These unsuccessful attempts result in a guest kernel crash with a NULL pointer dereference at <xyz> error message. Update the kernel to version 6.3 (2.6.32-238.el6) or later to resolve this issue. For more information, see Red Hat issue 773219.

Red Hat Enterprise Linux 7

After migrating or suspending the VM, RHEL 7.x guests might freeze during resume. For more information, see Red Hat issue 1141249.

CentOS 5

For the list of CentOS 5.x release notes, see Red Hat Enterprise Linux 5.

CentOS 6

For the list of CentOS 6.x release notes, see Red Hat Enterprise Linux 6.

CentOS 7

For the list of CentOS 7.x release notes, see Red Hat Enterprise Linux 7.

Oracle Linux 5

For the list of Oracle Linux 5.x release notes, see Red Hat Enterprise Linux 5.

Oracle Linux 6

Oracle Linux 6.x guests installed on a host running a version earlier than v6.5 continue to run the Red Hat kernel following an upgrade to v6.5. To switch to the UEK kernel (the default with a clean installation), delete the /etc/pygrub/rules.d/oracle-5.6 file in dom0. You can choose which kernel to use for an individual VM by editing the bootloader configuration within the VM.

For OEL 6.9 VMs with more than 2 GB memory, set the boot parameter crashkernel=no to disable the crash kernel. The VM reboots successfully only when this parameter is set. If you use an earlier version of OEL 6.x, set this boot parameter before updating to OEL 6.9. For more information, see Installation notes for Linux distributions.

For the list of Oracle Linux 6.x release notes, see Red Hat Enterprise Linux 6.

Oracle Linux 7

For the list of Oracle Linux 7.x release notes, see Red Hat Enterprise Linux 7.

Scientific Linux 6

For the list of Scientific Linux 6.x release notes, see Red Hat Enterprise Linux 6.

Scientific Linux 7

For the list of Scientific Linux 7.x release notes, see Red Hat Enterprise Linux 7.

SUSE Linux Enterprise 12

SUSE Linux Enterprise 12 VMs are supported in the following modes by default:

PV mode:

  • SUSE Linux Enterprise Desktop 12, 12 SP1, and 12 SP2

  • SUSE Linux Enterprise Server 12, 12 SP1, and 12 SP2

HVM mode:

  • SUSE Linux Enterprise Desktop 12 SP3

  • SUSE Linux Enterprise Server 12 SP3

Prepare a SLES guest for cloning

Note:

Before you prepare a SLES guest for cloning, ensure that you clear the udev configuration for network devices as follows:

cat < /dev/null > /etc/udev/rules.d/30-net_persistent_names.rules

To prepare a SLES guest for cloning:

  1. Open the file /etc/sysconfig/network/config

  2. Edit the line that reads:

    FORCE_PERSISTENT_NAMES=yes
    

    To

    FORCE_PERSISTENT_NAMES=no
    
  3. Save the changes and reboot the VM.

For more information, see Prepare to Clone a Linux VM.
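
The edit in steps 1 and 2 can also be made non-interactively with sed. The sketch below uses a sample file in a temporary location so it can be run safely; on the SLES guest, the target is /etc/sysconfig/network/config:

```shell
# Sample config in a temp file; on the SLES guest, edit
# /etc/sysconfig/network/config instead.
cfg=$(mktemp)
echo 'FORCE_PERSISTENT_NAMES=yes' > "$cfg"

# Flip the setting from yes to no.
sed -i 's/^FORCE_PERSISTENT_NAMES=yes/FORCE_PERSISTENT_NAMES=no/' "$cfg"

cat "$cfg"
```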

Ubuntu 12.04

Ubuntu 12.04 VMs with the original kernel can crash during boot. To work around this issue, do one of the following:

  • Create Ubuntu 12.04 VMs using the latest install media supported by the vendor
  • Update an existing VM to the latest version using the in-guest update mechanism

Ubuntu 14.04

Attempts to boot a PV guest can cause the guest to crash with the following error: kernel BUG at /build/buildd/linux-3.13.0/arch/x86/kernel/paravirt.c:239!. This error is caused by improperly calling a non-atomic function from interrupt context. Update the linux-image package to version 3.13.0-35.62 to fix this issue. For more information, see Ubuntu Launchpad 1350373.