Citrix Hypervisor

Create a storage repository

You can use the New Storage Repository wizard in XenCenter to create storage repositories (SRs). The wizard guides you through the configuration steps. Alternatively, use the CLI and the sr-create command. The sr-create command creates an SR on the storage substrate (potentially destroying any existing data). It also creates the SR API object and a corresponding PBD record, enabling VMs to use the storage. On successful creation of the SR, the PBD is automatically plugged. If the SR shared=true flag is set, a PBD record is created and plugged for every Citrix Hypervisor server in the resource pool.
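
For example, after creating a shared SR you can confirm that a PBD exists and is plugged for each host in the pool. A minimal check (the SR UUID is a placeholder):

    xe pbd-list sr-uuid=valid_sr_uuid params=uuid,host-uuid,currently-attached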

If you are creating an SR for IP-based storage (iSCSI or NFS), you can configure one of the following as the storage network: the NIC that handles the management traffic or a new NIC for the storage traffic. To assign an IP address to a NIC, see Configure a dedicated storage NIC.
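
As a brief sketch of the second option, the following commands assign a static IP address to a NIC and mark it for storage use; the PIF UUID, IP address, and netmask are placeholders, and Configure a dedicated storage NIC describes the full procedure:

    xe pif-reconfigure-ip uuid=pif_uuid mode=static IP=192.168.10.5 netmask=255.255.255.0
    xe pif-param-set uuid=pif_uuid disallow-unplug=true other-config:management_purpose="Storage"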

All Citrix Hypervisor SR types support VDI resize, fast cloning, and snapshot. SRs based on the LVM SR type (local, iSCSI, or HBA) provide thin provisioning for snapshot and hidden parent nodes. The other SR types (EXT3/EXT4, NFS, GFS2) support full thin provisioning, including for virtual disks that are active.

Warnings:

  • When VHD VDIs are not attached to a VM, for example for a VDI snapshot, they are stored as thinly provisioned by default. If you attempt to reattach the VDI, ensure that there is sufficient disk space available for the VDI to become thickly provisioned. VDI clones are thickly provisioned.

  • Citrix Hypervisor does not support snapshots at the external SAN-level of a LUN for any SR type.

  • Do not attempt to create an SR where the LUN ID of the destination LUN is greater than 255. Ensure that your target exposes the LUN with a LUN ID that is less than or equal to 255 before using this LUN to create an SR.

  • If you use thin provisioning on a file-based SR, ensure that you monitor the free space on your SR. If the SR usage grows to 100%, further writes from VMs fail. These failed writes can cause the VM to freeze or crash.
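
For example, you can check how full an SR is from the CLI; the SR UUID is a placeholder:

    xe sr-param-get uuid=valid_sr_uuid param-name=physical-utilisation
    xe sr-param-get uuid=valid_sr_uuid param-name=physical-size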

The maximum supported VDI sizes are:

| Storage Repository Format | Maximum VDI size |
| --- | --- |
| EXT3/EXT4 | 2 TiB |
| GFS2 (with iSCSI or HBA) | 16 TiB |
| LVM | 2 TiB |
| LVMoFCOE (deprecated) | 2 TiB |
| LVMoHBA | 2 TiB |
| LVMoiSCSI | 2 TiB |
| NFS | 2 TiB |
| SMB | 2 TiB |

Local LVM

The Local LVM type presents disks within a locally attached Volume Group.

By default, Citrix Hypervisor uses the local disk on the physical host on which it is installed. The Linux Logical Volume Manager (LVM) is used to manage VM storage. A VDI is implemented in VHD format in an LVM logical volume of the specified size.

Note:

The block size of an LVM LUN must be 512 bytes. To use storage with 4 KB native blocks, the storage must also support emulation of 512 byte allocation blocks.

LVM performance considerations

The snapshot and fast clone functionality for LVM-based SRs comes with an inherent performance overhead. When optimal performance is required, Citrix Hypervisor supports creation of VDIs in the raw format in addition to the default VHD format. The Citrix Hypervisor snapshot functionality is not supported on raw VDIs.

Warning:

Do not try to snapshot a VM that has type=raw disks attached. This action can result in a partial snapshot being created. In this situation, you can identify the orphan snapshot VDIs by checking the snapshot-of field and then deleting them.
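
For example, the following commands list the snapshot VDIs that were taken of a given VDI and then delete an orphan; both UUIDs are placeholders:

    xe vdi-list snapshot-of=vdi_uuid params=uuid,name-label
    xe vdi-destroy uuid=orphan_vdi_uuid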

Creating a local LVM SR

An LVM SR is created by default on host install.
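
For example, you can list this default SR and the host it belongs to with the following command:

    xe sr-list type=lvm params=uuid,name-label,host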

Device-config parameters for LVM SRs are:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| device | Device name on the local host to use for the SR. You can also provide a comma-separated list of names. | Yes |

To create a local LVM SR on /dev/sdb, use the following command.

    xe sr-create host-uuid=valid_uuid content-type=user \
    name-label="Example Local LVM SR" shared=false \
    device-config:device=/dev/sdb type=lvm
<!--NeedCopy-->

Local EXT3/EXT4

Using EXT3/EXT4 enables thin provisioning on local storage. However, the default storage repository type is LVM, as it gives consistent write performance and prevents storage over-commit. If you use EXT3/EXT4, you might see reduced performance in the following cases:

  • When carrying out VM lifecycle operations such as VM create and suspend/resume
  • When creating large files from within the VM

Local disk EXT3/EXT4 SRs must be configured using the Citrix Hypervisor CLI.

Whether a local EXT SR uses EXT3 or EXT4 depends on what version of Citrix Hypervisor created it:

  • If you created the local EXT SR on an earlier version of XenServer or Citrix Hypervisor and then upgraded to Citrix Hypervisor 8.2, it uses EXT3.
  • If you created the local EXT SR on Citrix Hypervisor 8.2, it uses EXT4.

Note:

The block size of an EXT3/EXT4 disk must be 512 bytes. To use storage with 4 KB native blocks, the storage must also support emulation of 512 byte allocation blocks.

Creating a local EXT4 SR (ext)

Device-config parameters for EXT SRs:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| device | Device name on the local host to use for the SR. You can also provide a comma-separated list of names. | Yes |

To create a local EXT4 SR on /dev/sdb, use the following command:

    xe sr-create host-uuid=valid_uuid content-type=user \
       name-label="Example Local EXT4 SR" shared=false \
       device-config:device=/dev/sdb type=ext
<!--NeedCopy-->

udev

The udev type represents devices plugged in using the udev device manager as VDIs.

Citrix Hypervisor has two SRs of type udev that represent removable storage. One is for the CD or DVD disk in the physical CD or DVD-ROM drive of the Citrix Hypervisor server. The other is for a USB device plugged into a USB port of the Citrix Hypervisor server. VDIs that represent the media come and go as disks or USB sticks are inserted and removed.
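
For example, you can list the udev SRs on a host and the VDIs currently present in one of them; the SR UUID is a placeholder:

    xe sr-list type=udev params=uuid,name-label
    xe vdi-list sr-uuid=valid_sr_uuid params=uuid,name-label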

ISO

The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared ISO libraries.

The following ISO SR types are available:

  • nfs_iso: The NFS ISO SR type handles CD images stored as files in ISO format available as an NFS share.
  • cifs: The Windows File Sharing (SMB/CIFS) SR type handles CD images stored as files in ISO format available as a Windows (SMB/CIFS) share.

If you do not specify the storage type to use for the SR, Citrix Hypervisor uses the location device-config parameter to decide the type.

Device-config parameters for ISO SRs:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| location | Path to the mount. | Yes |
| type | Storage type to use for the SR: cifs or nfs_iso. | No |
| nfsversion | Specifies the version of NFS to use. If you specify nfsversion="4", the SR uses NFS v4.0, v4.1, or v4.2, depending on what is available. If you want to select a more specific version of NFS, you can specify nfsversion="4.0" and so on. Only one value can be specified for nfsversion. | No |
| vers | For the storage type CIFS/SMB, the version of SMB to use: 1.0 or 3.0. The default is 3.0. | No |
| username | For the storage type CIFS/SMB, if a username is required for the Windows file server. | No |
| cifspassword_secret | (Recommended) For the storage type CIFS/SMB, you can pass a secret instead of a password for the Windows file server. | No |
| cifspassword | For the storage type CIFS/SMB, if a password is required for the Windows file server. We recommend you use the cifspassword_secret parameter instead. | No |

Note:

When running the sr-create command, we recommend that you use the device-config:cifspassword_secret argument instead of specifying the password on the command line. For more information, see Secrets.
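
For example, you can create the secret first and pass its UUID to sr-create; the password value shown is a placeholder:

    xe secret-create value=windows_share_password

The command returns the UUID of the new secret, which you then supply as the device-config:cifspassword_secret value.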

For storage repositories that store a library of ISOs, the content-type parameter must be set to iso, for example:

    xe sr-create host-uuid=valid_uuid content-type=iso  type=iso name-label="Example ISO SR" \
      device-config:location=<server:/path> device-config:type=nfs_iso
<!--NeedCopy-->

You can use NFS or SMB to mount the ISO SR. For more information about using these SR types, see NFS and SMB.

We recommend that you use SMB version 3 to mount an ISO SR on a Windows file server. Version 3 is selected by default because it is more secure and robust than SMB version 1.0. However, you can mount an ISO SR using SMB version 1.0 with the following command:

     xe sr-create content-type=iso type=iso shared=true device-config:location=<\\IP\path> \
     device-config:username=<username> device-config:cifspassword_secret=<password_secret> \
     device-config:type=cifs device-config:vers=1.0 name-label="Example ISO SR"
<!--NeedCopy-->

Software iSCSI support

Citrix Hypervisor supports shared SRs on iSCSI LUNs. iSCSI is supported using the Open-iSCSI software iSCSI initiator or by using a supported iSCSI Host Bus Adapter (HBA). The steps for using iSCSI HBAs are identical to the steps for Fibre Channel HBAs. Both sets of steps are described in Create a Shared LVM over Fibre Channel / Fibre Channel over Ethernet / iSCSI HBA or SAS SR.

Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Logical Volume Manager (LVM). This feature provides the same performance benefits provided by LVM VDIs in the local disk case. Shared iSCSI SRs using the software-based host initiator can support VM agility using live migration: VMs can be started on any Citrix Hypervisor server in a resource pool and migrated between them with no noticeable downtime.

iSCSI SRs use the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided for client authentication, during both the data path initialization and the LUN discovery phases.

Note:

The block size of an iSCSI LUN must be 512 bytes. To use storage with 4 KB native blocks, the storage must also support emulation of 512 byte allocation blocks.

Citrix Hypervisor server iSCSI configuration

All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these names are called iSCSI Qualified Names, or IQNs.

Citrix Hypervisor servers support a single iSCSI initiator which is automatically created and configured with a random IQN during host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.
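
For example, you can view the IQN currently assigned to a host with the following command; the host UUID is a placeholder:

    xe host-param-get uuid=valid_host_uuid param-name=other-config param-key=iscsi_iqn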

iSCSI targets commonly provide access control using iSCSI initiator IQN lists. All iSCSI targets/LUNs that your Citrix Hypervisor server accesses must be configured to allow access by the host’s initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.

Note:

iSCSI targets that do not provide access control typically default to restricting LUN access to a single initiator to ensure data integrity. If an iSCSI LUN is used as a shared SR across multiple servers in a pool, ensure that multi-initiator access is enabled for the specified LUN.

The Citrix Hypervisor server IQN value can be adjusted using XenCenter, or using the CLI with the following command when using the iSCSI software initiator:

    xe host-param-set uuid=valid_host_id other-config:iscsi_iqn=new_initiator_iqn
<!--NeedCopy-->

Warning:

  • Each iSCSI target and initiator must have a unique IQN. If a non-unique IQN identifier is used, data corruption or denial of LUN access can occur.
  • Do not change the Citrix Hypervisor server IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets or existing SRs.

Software FCoE storage (deprecated)

Software FCoE provides a standard framework to which hardware vendors can plug in their FCoE-capable NICs and get the same benefits as hardware-based FCoE. This feature eliminates the need to use expensive HBAs.

Note:

Software FCoE is deprecated and will be removed in a future release.

Before you create a software FCoE SR, manually complete the configuration required to expose a LUN to the host. This configuration includes configuring the FCoE fabric and allocating LUNs to your SAN’s public world wide name (PWWN). After you complete this configuration, the available LUN is mounted to the host’s CNA as a SCSI device. The SCSI device can then be used to access the LUN as if it were a locally attached SCSI device. For information about configuring the physical switch and the array to support FCoE, see the documentation provided by the vendor.

Note:

Software FCoE can be used with Open vSwitch and Linux bridge as the network back-end.

Create a Software FCoE SR

Before creating a software FCoE SR, ensure that FCoE-capable NICs are attached to the host.

Device-config parameters for FCoE SRs are:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| SCSIid | The SCSI bus ID of the destination LUN | Yes |

Run the following command to create a shared FCoE SR:

    xe sr-create type=lvmofcoe \
    name-label="FCoE SR" shared=true device-config:SCSIid=SCSI_id
<!--NeedCopy-->

Hardware host bus adapters (HBAs)

This section covers various operations required to manage SAS, Fibre Channel, and iSCSI HBAs.

Sample QLogic iSCSI HBA setup

For details on configuring QLogic Fibre Channel and iSCSI HBAs, see the Cavium website.

Once the HBA is physically installed into the Citrix Hypervisor server, use the following steps to configure the HBA:

  1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify the appropriate values if using static IP addressing or a multi-port HBA.

    /opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0
    <!--NeedCopy-->
    
  2. Add a persistent iSCSI target to port 0 of the HBA.

    /opt/QLogic_Corporation/SANsurferiCLI/iscli -pa 0 iscsi_target_ip_address
    <!--NeedCopy-->
    
  3. Use the xe sr-probe command to force a rescan of the HBA controller and display available LUNs. For more information, see Probe an SR and Create a Shared LVM over Fibre Channel / Fibre Channel over Ethernet / iSCSI HBA or SAS SR.

Remove HBA-based SAS, FC, or iSCSI device entries

Note:

This step is not required. We recommend that only power users perform this process if it is necessary.

Each HBA-based LUN has a corresponding global device path entry under /dev/disk/by-scsibus in the format <SCSIid>-<adapter>:<bus>:<target>:<lun> and a standard device path under /dev. To remove the device entries for LUNs no longer in use as SRs, use the following steps:

  1. Use sr-forget or sr-destroy as appropriate to remove the SR from the Citrix Hypervisor server database. See Remove SRs for details.

  2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.

  3. Use the sr-probe command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding to the LUN to be removed. For more information, see Probe an SR.

  4. Remove the device entries with the following command:

    echo "1" > /sys/class/scsi_device/adapter:bus:target:lun/device/delete
    <!--NeedCopy-->
    

Warning:

Make sure that you are certain which LUN you are removing. Accidentally removing a LUN required for host operation, such as the boot or root device, renders the host unusable.

Shared LVM storage

The Shared LVM type represents disks as Logical Volumes within a Volume Group created on an iSCSI, FC, or SAS LUN.

Note:

The block size of an iSCSI LUN must be 512 bytes. To use storage with 4 KB native blocks, the storage must also support emulation of 512 byte allocation blocks.

Create a shared LVM over iSCSI SR by using the Software iSCSI initiator

Device-config parameters for LVMoiSCSI SRs:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| target | The IP address or host name of the iSCSI filer that hosts the SR. This can also be a comma-separated list of values. | Yes |
| targetIQN | The IQN target address of the iSCSI filer that hosts the SR | Yes |
| SCSIid | The SCSI bus ID of the destination LUN | Yes |
| chapuser | The user name to be used for CHAP authentication | No |
| chappassword_secret | (Recommended) Secret ID for the password to be used for CHAP authentication. Pass a secret instead of a password. | No |
| chappassword | The password to be used for CHAP authentication. We recommend you use the chappassword_secret parameter instead. | No |
| port | The network port number on which to query the target | No |
| usediscoverynumber | The specific iSCSI record index to use | No |
| incoming_chapuser | The user name that the iSCSI filer uses to authenticate against the host | No |
| incoming_chappassword | The password that the iSCSI filer uses to authenticate against the host | No |

Note:

When running the sr-create command, we recommend that you use the device-config:chappassword_secret argument instead of specifying the password on the command line. For more information, see Secrets.

To create a shared LVMoiSCSI SR on a specific LUN of an iSCSI target, use the following command.

    xe sr-create host-uuid=valid_uuid content-type=user \
    name-label="Example shared LVM over iSCSI SR" shared=true \
    device-config:target=target_ip device-config:targetIQN=target_iqn \
    device-config:SCSIid=scsi_id \
    type=lvmoiscsi
<!--NeedCopy-->

Create a Shared LVM over Fibre Channel / Fibre Channel over Ethernet / iSCSI HBA or SAS SR

SRs of type LVMoHBA can be created and managed using the xe CLI or XenCenter.

Device-config parameters for LVMoHBA SRs:

| Parameter name | Description | Required? |
| --- | --- | --- |
| SCSIid | Device SCSI ID | Yes |

To create a shared LVMoHBA SR, perform the following steps on each host in the pool:

  1. Zone in one or more LUNs to each Citrix Hypervisor server in the pool. This process is highly specific to the SAN equipment in use. For more information, see your SAN documentation.

  2. If necessary, use the HBA CLI included in the Citrix Hypervisor server to configure the HBA:

    • Emulex: /bin/sbin/ocmanager

    • QLogic FC: /opt/QLogic_Corporation/SANsurferCLI

    • QLogic iSCSI: /opt/QLogic_Corporation/SANsurferiCLI

    For an example of QLogic iSCSI HBA configuration, see Hardware host bus adapters (HBAs) in the previous section. For more information on Fibre Channel and iSCSI HBAs, see the Broadcom and Cavium websites.

  3. Use the sr-probe command to determine the global device path of the HBA LUN. The sr-probe command forces a rescan of HBAs installed in the system to detect any new LUNs that have been zoned to the host. The command returns a list of properties for each LUN found. Specify the host-uuid parameter to ensure that the probe occurs on the desired host.

    The global device path returned as the <path> property is common across all hosts in the pool. The SCSI ID contained in this path (the portion following scsi-) is the value to use for the device-config:SCSIid parameter when creating the SR.

    If multiple LUNs are present, use the vendor, LUN size, LUN serial number, or the SCSI ID from the <path> property to identify the desired LUN.

        xe sr-probe type=lvmohba \
        host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31
        Error code: SR_BACKEND_FAILURE_90
        Error parameters: , The request is missing the device parameter, \
        <?xml version="1.0" ?>
        <Devlist>
            <BlockDevice>
                <path>
                    /dev/disk/by-id/scsi-360a9800068666949673446387665336f
                </path>
                <vendor>
                    HITACHI
                </vendor>
                <serial>
                    730157980002
                </serial>
                <size>
                    80530636800
                </size>
                <adapter>
                    4
                </adapter>
                <channel>
                    0
                </channel>
                <id>
                    4
                </id>
                <lun>
                    2
                </lun>
                <hba>
                    qla2xxx
                </hba>
            </BlockDevice>
            <Adapter>
                <host>
                    Host4
                </host>
                <name>
                    qla2xxx
                </name>
                <manufacturer>
                    QLogic HBA Driver
                </manufacturer>
                <id>
                    4
                </id>
            </Adapter>
        </Devlist>
    <!--NeedCopy-->
    
  4. On the master host of the pool, create the SR. Specify the SCSI ID identified from the <path> property returned by sr-probe as the device-config:SCSIid value. PBDs are created and plugged for each host in the pool automatically.

        xe sr-create host-uuid=valid_uuid \
        content-type=user \
        name-label="Example shared LVM over HBA SR" shared=true \
        device-config:SCSIid=device_scsi_id type=lvmohba
    <!--NeedCopy-->
    

Note:

You can use the XenCenter Repair Storage Repository function to retry the PBD creation and plugging portions of the sr-create operation. This function can be valuable in cases where the LUN zoning was incorrect for one or more hosts in a pool when the SR was created. Correct the zoning for the affected hosts and use the Repair Storage Repository function instead of removing and re-creating the SR.

Thin provisioned shared GFS2 block storage

Thin provisioning makes better use of the available storage by allocating disk space to VDIs as data is written to the virtual disk, rather than allocating the full virtual size of the VDI in advance. Thin provisioning enables you to significantly reduce the amount of space required on a shared storage array, and with it your total cost of ownership (TCO).

Thin provisioning for shared block storage is of particular interest in the following cases:

  • You want increased space efficiency. Images are allocated sparsely, not thickly.
  • You want to reduce the number of I/O operations per second on your storage array. The GFS2 SR is the first SR type to support storage read caching on shared block storage.
  • You use a common base image for multiple virtual machines. The images of individual VMs will then typically utilize even less space.
  • You use snapshots. Each snapshot is an image and each image is now sparse.
  • Your storage does not support NFS and only supports block storage. If your storage supports NFS, we recommend you use NFS instead of GFS2.
  • You want to create VDIs that are greater than 2 TiB in size. The GFS2 SR supports VDIs up to 16 TiB in size.

Note:

We recommend not to use a GFS2 SR with a VLAN due to a known issue where you cannot add or remove hosts on a clustered pool if the cluster network is on a non-management VLAN.

The shared GFS2 type represents disks as a filesystem created on an iSCSI or HBA LUN. VDIs stored on a GFS2 SR are stored in the QCOW2 image format.

To use shared GFS2 storage, the Citrix Hypervisor resource pool must be a clustered pool. Enable clustering on your pool before creating a GFS2 SR. For more information, see Clustered pools.
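
For example, clustering can be enabled from the CLI by nominating the network to carry cluster traffic; the network UUID is a placeholder, and Clustered pools describes the prerequisites in full:

    xe cluster-pool-create network-uuid=cluster_network_uuid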

Ensure that storage multipathing is set up between your clustered pool and your GFS2 SR. For more information, see Storage multipathing.

SRs of type GFS2 can be created and managed using the xe CLI or XenCenter.

Create a shared GFS2 SR

You can create your shared GFS2 SR on an iSCSI or an HBA LUN.

Create a shared GFS2 over iSCSI SR

You can create GFS2 over iSCSI SRs by using XenCenter. For more information, see Software iSCSI storage in the XenCenter product documentation.

Alternatively, you can use the xe CLI to create a GFS2 over iSCSI SR.

Device-config parameters for GFS2 SRs:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| provider | The block provider implementation. In this case, iscsi. | Yes |
| target | The IP address or hostname of the iSCSI filer that hosts the SR | Yes |
| targetIQN | The IQN target of the iSCSI filer that hosts the SR | Yes |
| SCSIid | Device SCSI ID | Yes |

You can find the values to use for these parameters by using the xe sr-probe-ext command.

    xe sr-probe-ext type=<type> host-uuid=<host_uuid> device-config:=<config> sm-config:=<sm_config>

  1. Start by running the following command:

    xe sr-probe-ext type=gfs2 device-config:provider=iscsi
    

    The output from the command prompts you to supply additional parameters and gives a list of possible values at each step.

  2. Repeat the command, adding new parameters each time.

  3. When the command output starts with Found the following complete configurations that can be used to create SRs:, you can create the SR by using the xe sr-create command and the device-config parameters that you specified.

    Example output:

    Found the following complete configurations that can be used to create SRs:
    Configuration 0:
      SCSIid       : 36001405852f77532a064687aea8a5b3f
          targetIQN: iqn.2009-01.example.com:iscsi192a25d6
             target: 198.51.100.27
           provider: iscsi
    
    
    Configuration 0 extra information:
    

To create a shared GFS2 SR on a specific LUN of an iSCSI target, run the following command on a server in your clustered pool:

    xe sr-create type=gfs2 name-label="Example GFS2 SR" --shared \
       device-config:provider=iscsi device-config:targetIQN=target_iqns \
       device-config:target=portal_address device-config:SCSIid=scsi_id

If the iSCSI target is not reachable while GFS2 filesystems are mounted, some hosts in the clustered pool might fence.

For more information about working with iSCSI SRs, see Software iSCSI support.

Create a shared GFS2 over HBA SR

You can create GFS2 over HBA SRs by using XenCenter. For more information, see Hardware HBA storage in the XenCenter product documentation.

Alternatively, you can use the xe CLI to create a GFS2 over HBA SR.

Device-config parameters for GFS2 SRs:

Parameter name Description Required?
provider The block provider implementation. In this case, hba. Yes
SCSIid Device SCSI ID Yes

You can find the values to use for the SCSIid parameter by using the xe sr-probe-ext command.

    xe sr-probe-ext type=<type> host-uuid=<host_uuid> device-config:=<config> sm-config:=<sm_config>

  1. Start by running the following command:

    xe sr-probe-ext type=gfs2 device-config:provider=hba
    

    The output from the command prompts you to supply additional parameters and gives a list of possible values at each step.

  2. Repeat the command, adding new parameters each time.

  3. When the command output starts with Found the following complete configurations that can be used to create SRs:, you can create the SR by using the xe sr-create command and the device-config parameters that you specified.

    Example output:

    Found the following complete configurations that can be used to create SRs:
    Configuration 0:
      SCSIid       : 36001405852f77532a064687aea8a5b3f
           provider: hba
    
    
    Configuration 0 extra information:
    

To create a shared GFS2 SR on a specific LUN of an HBA target, run the following command on a server in your clustered pool:

    xe sr-create type=gfs2 name-label="Example GFS2 SR" --shared \
      device-config:provider=hba device-config:SCSIid=device_scsi_id

For more information about working with HBA SRs, see Hardware host bus adapters.

Constraints

Shared GFS2 storage currently has the following constraints:

  • As with any thin-provisioned SR, if the GFS2 SR usage grows to 100%, further writes from VMs fail. These failed writes can then lead to failures within the VM or possible data corruption or both.

  • XenCenter shows an alert when your SR usage grows to 80%. Ensure that you monitor your GFS2 SR for this alert and take the appropriate action if seen. On a GFS2 SR, high usage causes a performance degradation. We recommend that you keep your SR usage below 80%.

  • VM migration with storage migration (live or offline) is not supported for VMs whose VDIs are on a GFS2 SR. You also cannot migrate VDIs from another type of SR to a GFS2 SR.

  • The FCoE transport is not supported with GFS2 SRs.

  • Trim/unmap is not supported on GFS2 SRs.

  • CHAP is not supported on GFS2 SRs.

  • MCS full clone VMs are not supported with GFS2 SRs.

  • Using multiple GFS2 SRs in the same MCS catalog is not supported.

  • Performance metrics are not available for GFS2 SRs and disks on these SRs.

  • Changed block tracking is not supported for VDIs stored on GFS2 SRs.

  • You cannot export VDIs that are greater than 2 TiB as VHD or OVA/OVF. However, you can export VMs with VDIs larger than 2 TiB in XVA format.

  • We do not recommend using a thin provisioned LUN with GFS2. However, if you do choose this configuration, you must ensure that the LUN always has enough space to allow Citrix Hypervisor to write to it.

  • You cannot have more than 62 GFS2 SRs in your pool.

  • Clustered pools only support up to 16 hosts per pool.
  • To enable HA in a clustered pool, the heartbeat SR must be a GFS2 SR.
  • For cluster traffic, you must use a bonded network that uses at least two different network switches. Do not use this network for any other purposes.
  • Changing the IP address of the cluster network by using XenCenter requires clustering and GFS2 to be temporarily disabled.
  • Do not change the bonding of your clustering network while the cluster is live and has running VMs. This action can cause the cluster to fence.
  • If you have an IP address conflict (multiple hosts having the same IP address) on your clustering network involving at least one host with clustering enabled, the hosts do not fence. To fix this issue, resolve the IP address conflict.

NFS and SMB

Shares on NFS servers (that support any version of NFSv4 or NFSv3) or on SMB servers (that support SMB 3) can be used immediately as an SR for virtual disks. VDIs are stored in the Microsoft VHD format only. Additionally, as these SRs can be shared, VDIs stored on shared SRs allow:

  • VMs to be started on any Citrix Hypervisor server in a resource pool

  • VMs to be migrated between Citrix Hypervisor servers in a resource pool using live migration (without noticeable downtime)

Important:

  • Support for SMB3 is limited to the ability to connect to a share using version 3 of the protocol. Extra features like Transparent Failover depend on feature availability in the upstream Linux kernel and are not supported in Citrix Hypervisor 8.2.
  • Clustered SMB is not supported with Citrix Hypervisor.
  • For NFSv4, only the authentication type AUTH_SYS is supported.
  • SMB storage is available for Citrix Hypervisor Premium Edition customers, or those customers who have access to Citrix Hypervisor through their Citrix Virtual Apps and Desktops entitlement or Citrix DaaS entitlement.
  • For both NFS and SMB storage, we highly recommend using a dedicated storage network, with at least two bonded links, ideally to independent network switches with redundant power supplies.
  • When using SMB storage, do not remove the share from the storage before detaching the SMB SR.

VDIs stored on file-based SRs are thinly provisioned. The image file is allocated as the VM writes data into the disk. This approach has the considerable benefit that the VM image files take up only as much space on the storage as is required. For example, if a 100 GB VDI is allocated for a VM and an OS is installed, the VDI file only reflects the size of the OS data written to the disk rather than the entire 100 GB.

VHD files may also be chained, allowing two VDIs to share common data. In cases where a file-based VM is cloned, the resulting VMs share the common on-disk data at the time of cloning. Each VM proceeds to make its own changes in an isolated copy-on-write version of the VDI. This feature allows file-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.

Note:

The maximum supported length of VHD chains is 30.

File-based SRs and VHD implementations in Citrix Hypervisor assume that they have full control over the SR directory on the file server. Administrators must not modify the contents of the SR directory, as this action can risk corrupting the contents of VDIs.

Citrix Hypervisor has been tuned for enterprise-class storage that uses non-volatile RAM to provide fast acknowledgments of write requests while maintaining a high degree of data protection from failure. Citrix Hypervisor has been tested extensively against Network Appliance FAS2020 and FAS3210 storage, using Data OnTap 7.3 and 8.1.

Warning:

As VDIs on file-based SRs are created as thin provisioned, administrators must ensure that the file-based SRs have enough disk space for all required VDIs. Citrix Hypervisor servers do not enforce that the space required for VDIs on file-based SRs is present.

Ensure that you monitor the free space on your SR. If the SR usage grows to 100%, further writes from VMs fail. These failed writes can cause the VM to freeze or crash.

Create a shared NFS SR (NFS)

Note:

If you attempt to attach a read-only NFS SR, this action fails with the following error message: “SR_BACKEND_FAILURE_461 - The file system for SR cannot be written to.”

To create an NFS SR, you must provide the hostname or IP address of the NFS server. You can create the SR on any valid destination path; use the sr-probe command to display a list of valid destination paths exported by the server.
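
For example, to list the paths exported by an NFS server (the server address is a placeholder):

    xe sr-probe type=nfs device-config:server=192.168.1.10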

In scenarios where Citrix Hypervisor is used with lower-end storage, it cautiously waits for all writes to be acknowledged before passing acknowledgments on to VMs. This approach incurs a noticeable performance cost, which you might address by setting the storage to present the SR mount point as an asynchronous mode export. Asynchronous exports acknowledge writes that are not actually on disk. Consider the risks of failure carefully in these situations.

Note:

The NFS server must be configured to export the specified path to all servers in the pool. If this configuration is not done, the creation of the SR and the plugging of the PBD record fails.

The Citrix Hypervisor NFS implementation uses TCP by default. If your situation allows, you can configure the implementation to use UDP in scenarios where there may be a performance benefit. To do this configuration, when creating an SR, specify the device-config parameter useUDP=true.
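
For example, the following command creates an NFS SR that mounts over UDP; the server address and export path are illustrative:

    xe sr-create content-type=user \
    name-label="Example UDP NFS SR" shared=true \
    device-config:server=192.168.1.10 device-config:serverpath=/export1 type=nfs \
    device-config:useUDP=true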

The following device-config parameters are used with NFS SRs:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| server | IP address or hostname of the NFS server | Yes |
| serverpath | Path, including the NFS mount point, to the NFS server that hosts the SR | Yes |
| nfsversion | Specifies the version of NFS to use. If you specify nfsversion="4", the SR uses NFS v4.0, v4.1, or v4.2, depending on what is available. If you want to select a more specific version of NFS, you can specify nfsversion="4.0" and so on. Only one value can be specified for nfsversion. | No |
| useUDP | Configure the SR to use UDP rather than the default TCP. | No |

For example, to create a shared NFS SR on 192.168.1.10:/export1, using any version 4 of NFS that is made available by the filer, use the following command:

    xe sr-create content-type=user \
    name-label="shared NFS SR" shared=true \
    device-config:server=192.168.1.10 device-config:serverpath=/export1 type=nfs \
    device-config:nfsversion="4"
<!--NeedCopy-->

To create a non-shared NFS SR on 192.168.1.10:/export1, using specifically NFS version 4.0, run the following command:

    xe sr-create host-uuid=host_uuid content-type=user \
    name-label="Non-shared NFS SR" \
    device-config:server=192.168.1.10 device-config:serverpath=/export1 type=nfs \
    device-config:nfsversion="4.0"
<!--NeedCopy-->

Create a shared SMB SR (SMB)

To create an SMB SR, provide the hostname or IP address of the SMB server, the full path of the exported share, and appropriate credentials.

Device-config parameters for SMB SRs:

| Parameter Name | Description | Required? |
| --- | --- | --- |
| server | Full path to share on server | Yes |
| username | User account with RW access to share | Optional |
| password_secret | (Recommended) Secret ID for the password for the user account, which can be used instead of the password. | Optional |
| password | Password for the user account. We recommend that you use the password_secret parameter instead. | Optional |

Note:

When running the sr-create command, we recommend that you use the device-config:password_secret argument instead of specifying the password on the command line. For more information, see Secrets.

For example, to create a shared SMB SR on 192.168.1.10:/share1, use the following command:

    xe sr-create content-type=user \
    name-label="Example shared SMB SR" shared=true \
    device-config:server=//192.168.1.10/share1 \
    device-config:username=valid_username device-config:password_secret=valid_password_secret type=smb
<!--NeedCopy-->

To create a non-shared SMB SR, run the following command:

    xe sr-create host-uuid=host_uuid content-type=user \
    name-label="Non-shared SMB SR" \
    device-config:server=//192.168.1.10/share1 \
    device-config:username=valid_username device-config:password_secret=valid_password_secret type=smb
<!--NeedCopy-->

LVM over Hardware HBA

The LVM over hardware HBA type represents disks as VHDs on Logical Volumes within a Volume Group created on an HBA LUN that provides, for example, hardware-based iSCSI or FC support.

Citrix Hypervisor servers support Fibre Channel SANs through Emulex or QLogic host bus adapters (HBAs). All Fibre Channel configuration required to expose a Fibre Channel LUN to the host must be completed manually. This configuration includes storage devices, network devices, and the HBA within the Citrix Hypervisor server. After all FC configuration is complete, the HBA exposes a SCSI device backed by the FC LUN to the host. The SCSI device can then be used to access the FC LUN as if it were a locally attached SCSI device.

Use the sr-probe command to list the LUN-backed SCSI devices present on the host. This command forces a scan for new LUN-backed SCSI devices. The path value returned by sr-probe for a LUN-backed SCSI device is consistent across all hosts with access to the LUN. Therefore, this value must be used when creating shared SRs accessible by all hosts in a resource pool.

The same features apply to QLogic iSCSI HBAs.

See Create storage repositories for details on creating shared HBA-based FC and iSCSI SRs.

Note:

Citrix Hypervisor support for Fibre Channel does not support direct mapping of a LUN to a VM. HBA-based LUNs must be mapped to the host and specified for use in an SR. VDIs within the SR are exposed to VMs as standard block devices.

The block size of an LVM over HBA LUN must be 512 bytes. To use storage with 4 KB native blocks, the storage must also support emulation of 512 byte allocation blocks.
