Thin provisioned shared GFS2 block storage

Thin provisioning better utilizes the available storage by allocating disk storage space to VDIs as data is written to the virtual disk, rather than allocating the full virtual size of the VDI in advance. Thin provisioning enables you to significantly reduce the amount of space required on a shared storage array, and with that your Total Cost of Ownership (TCO).

Thin provisioning for shared block storage is of particular interest in the following cases:

  • You want increased space efficiency. Images are sparsely allocated, not thickly allocated.
  • You want to reduce the number of I/O operations per second on your storage array. The GFS2 SR is the first SR type to support storage read caching on shared block storage.
  • You use a common base image for multiple virtual machines. The images of individual VMs will then typically utilize even less space.
  • You use snapshots. Each snapshot is an image and each image is now sparse.
  • Your storage does not support NFS and only supports block storage. If your storage supports NFS, we recommend you use NFS instead of GFS2.
  • You want to create VDIs that are greater than 2 TiB in size. The GFS2 SR supports VDIs up to 16 TiB in size.

The shared GFS2 type represents disks as a filesystem created on an iSCSI or HBA LUN. VDIs stored on a GFS2 SR are stored in the QCOW2 image format.
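
For example, once a GFS2 SR exists (see the steps later in this article), a VDI larger than 2 TiB can be created with the standard xe vdi-create command. This is a minimal sketch; the SR UUID and the disk name are placeholders:

    # Create a thinly provisioned 4 TiB (4096 GiB) VDI on the GFS2 SR
    xe vdi-create sr-uuid=<gfs2_sr_uuid> name-label="Large data disk" \
        type=user virtual-size=4096GiB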

Prerequisites

Before you begin, ensure the following prerequisites are met:

  • All Citrix Hypervisor servers in the clustered pool must have at least 2 GiB of control domain memory.

  • All hosts in the cluster must use static IP addresses for the cluster network.

  • We recommend that you use clustering only in pools containing at least three hosts, as pools of two hosts are sensitive to self-fencing the entire pool.

  • If you have a firewall between the hosts in your pool, ensure that hosts can communicate on the cluster network using the following ports:
    • TCP: 8892, 21064
    • UDP: 5404, 5405

    For more information, see Communication Ports Used by Citrix Technologies. A quick way to check these ports from dom0 is sketched after this list.

  • If you are clustering an existing pool, ensure that high availability is disabled. You can enable high availability again after clustering is enabled.

  • You have a block-based storage device that is visible to all Citrix Hypervisor servers in the resource pool.
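
The following commands, run in the control domain (dom0) of a host, are a quick way to spot-check some of these prerequisites. They are a sketch only; the firewall output depends on how your rules are configured:

    # Check the control domain memory on this host (should be at least 2 GiB)
    xe vm-list is-control-domain=true params=name-label,memory-static-max

    # Look for rules covering the cluster ports in the dom0 firewall
    iptables -S | grep -E '8892|21064|5404|5405'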

Set up a clustered pool to use a shared GFS2 SR

To use shared GFS2 storage, the Citrix Hypervisor resource pool must be a clustered pool. Enable clustering on your pool before creating a GFS2 SR.

Note:

Clustered pools behave differently to non-clustered pools. For more information about cluster behavior, see Clustered pools.

If you prefer, you can set up clustering on your pool by using XenCenter. For more information, see the XenCenter product documentation.

To use the xe CLI to create a clustered pool:

  1. Create a bonded network to use as the clustering network. On the Citrix Hypervisor server that you want to be the pool master, complete the following steps:

    1. Open a console on the Citrix Hypervisor server.

    2. Name your resource pool by using the following command:

      xe pool-param-set name-label="New Pool" uuid=<pool_uuid>
      
    3. Create a network for use with the bonded NIC by using the following command:

      xe network-create name-label=bond0
      

      The UUID of the new network is returned.

    4. Find the UUIDs of the PIFs to use in the bond by using the following command:

      xe pif-list
      
    5. Create your bonded network in active-active mode, active-passive mode, or LACP bond mode. Depending on the bond mode you want to use, complete one of the following actions:

      • To configure the bond in active-active mode (default), use the bond-create command to create the bond. Using commas to separate the parameters, specify the newly created network UUID and the UUIDs of the PIFs to be bonded:

         xe bond-create network-uuid=<network_uuid> \
              pif-uuids=<pif_uuid_1>,<pif_uuid_2>,<pif_uuid_3>,<pif_uuid_4>
        

        Type two UUIDs when you are bonding two NICs and four UUIDs when you are bonding four NICs. The UUID for the bond is returned after running the command.

      • To configure the bond in active-passive or LACP bond mode, use the same syntax, add the optional mode parameter, and specify lacp or active-backup:

         xe bond-create network-uuid=<network_uuid> \
              pif-uuids=<pif_uuid_1>,<pif_uuid_2>,<pif_uuid_3>,<pif_uuid_4> \
              mode=balance-slb | active-backup | lacp
        

    After you have created your bonded network on the pool master, when you join other Citrix Hypervisor servers to the pool, the network and bond information is automatically replicated to the joining server.

    For more information, see Networking.

  2. Create a resource pool of at least three Citrix Hypervisor servers.

    Repeat the following steps on each Citrix Hypervisor server that is a (non-master) pool member:

    1. Open a console on the Citrix Hypervisor server.
    2. Join the Citrix Hypervisor server to the pool on the pool master by using the following command:

      xe pool-join master-address=<master_address> master-username=<administrator_username> master-password=<password>
      

      The value of the master-address parameter must be set to the fully qualified domain name of the Citrix Hypervisor server that is the pool master. The password must be the administrator password set when the pool master was installed.

    For more information, see Hosts and resource pools.

  3. For every PIF that belongs to the clustering network, set disallow-unplug=true.

    1. Find the UUIDs of the PIFs that belong to the network by using the following command:

      xe pif-list
      
    2. Run the following command on a Citrix Hypervisor server in your resource pool:

      xe pif-param-set disallow-unplug=true uuid=<pif_uuid>
      
  4. Enable clustering on your pool. Run the following command on a Citrix Hypervisor server in your resource pool:

    xe cluster-pool-create network-uuid=<network_uuid>
    

    Provide the UUID of the bonded network that you created in an earlier step.
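
After completing these steps, you can confirm that the bond, the pool membership, and the cluster are all in place. The cluster-related commands below assume a Citrix Hypervisor version that exposes cluster and cluster-host records through the xe CLI:

    # List the bond and the PIFs that it aggregates
    xe bond-list params=uuid,master,slaves

    # List the hosts that have joined the pool
    xe host-list params=uuid,name-label,address

    # Show the cluster and the per-host membership state
    xe cluster-list
    xe cluster-host-list params=uuid,host,enabled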

Set up storage multipathing to your shared GFS2 SR

Important:

Before attempting to enable multipathing, verify that the following statements are true:

  • Multiple targets are available on your storage server.

    For example, an iSCSI storage back-end queried for sendtargets on a given portal returns multiple targets, as in the following example:

      iscsiadm -m discovery --type sendtargets --portal 192.168.0.161
      192.168.0.161:3260,1 iqn.strawberry:litchie
      192.168.0.204:3260,2 iqn.strawberry:litchie
    
  • For iSCSI only, dom0 has an IP address on each subnet used by the multipathed storage.

    Ensure that for each path you want to have to the storage, you have a NIC and that there is an IP address configured on each NIC. For example, if you want four paths to your storage, you must have four NICs that each have an IP address configured. A quick way to list the addresses configured in dom0 is sketched after this list.

  • For HBA only, multiple HBAs are connected to the switch fabric.
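
For the iSCSI case, a quick way to confirm the dom0 IP configuration is to list the PIFs on the host and check that one address exists on each storage subnet. The host UUID is a placeholder:

    # Show the device, IP address, and netmask of every PIF on this host
    xe pif-list host-uuid=<host_uuid> params=device,IP,netmask,network-name-label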

You can use XenCenter to set up storage multipathing. For more information, see Storage multipathing in the XenCenter product documentation.

Alternatively, to use the xe CLI to set up storage multipathing, complete the following steps on all of the Citrix Hypervisor servers in your clustered pool:

  1. Open a console on the Citrix Hypervisor server.

  2. Unplug all PBDs on the server by using the following command:

    xe pbd-unplug uuid=<pbd_uuid>
    
  3. Set the value of the other-config:multipathing parameter to true by using the following command:

    xe host-param-set other-config:multipathing=true uuid=<server_uuid>
    
  4. Set the value of the other-config:multipathhandle parameter to dmp by using the following command:

    xe host-param-set other-config:multipathhandle=dmp uuid=<server_uuid>
    
  5. If there are existing SRs on the server that are running in single-path mode but have multiple paths available:

    • Migrate or suspend any running guests with virtual disks in the affected SRs.

    • Unplug and replug the PBD of any affected SRs to reconnect them using multipathing:

       xe pbd-unplug uuid=<pbd_uuid>
       xe pbd-plug uuid=<pbd_uuid>
      

For more information, see Storage multipathing.
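
To confirm that multipathing is active after replugging the PBDs, you can inspect the host configuration and the device-mapper state. This is a sketch; the multipath -ll output format depends on your storage array:

    # Confirm that the multipathing flag is set on the host
    xe host-param-get uuid=<server_uuid> param-name=other-config param-key=multipathing

    # List the multipath devices and the active paths to each LUN
    multipath -ll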

Create a shared GFS2 SR

You can create your shared GFS2 SR on an iSCSI or an HBA LUN.

Create a shared GFS2 over iSCSI SR

You can create GFS2 over iSCSI SRs by using XenCenter. For more information, see Software iSCSI storage in the XenCenter product documentation.

Alternatively, you can use the xe CLI to create a GFS2 over iSCSI SR.

Device-config parameters for GFS2 SRs:

Parameter name   Description                                                        Required?
provider         The block provider implementation. In this case, iscsi.           Yes
target           The IP address or hostname of the iSCSI filer that hosts the SR.  Yes
targetIQN        The IQN of the iSCSI filer that hosts the SR.                      Yes
SCSIid           Device SCSI ID.                                                    Yes

You can find the values to use for these parameters by using the xe sr-probe-ext command.

xe sr-probe-ext type=<type> host-uuid=<host_uuid> device-config:=<config> sm-config:=<sm_config>

  1. Start by running the following command:

    xe sr-probe-ext type=gfs2 device-config:provider=iscsi
    

    The output from the command prompts you to supply additional parameters and gives a list of possible values at each step.

  2. Repeat the command, adding new parameters each time.

  3. When the command output starts with The following SRs were found:, you can use the device-config parameters that you specified to locate the SR when running the xe sr-create command.
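
For example, a probe sequence might look like the following. The portal address and IQN are placeholders; at each step, the command output indicates the next parameter to supply and lists possible values where it can:

    # The output prompts for the next required parameter (the target address)
    xe sr-probe-ext type=gfs2 device-config:provider=iscsi

    # The output lists the IQNs available on that target
    xe sr-probe-ext type=gfs2 device-config:provider=iscsi \
        device-config:target=<portal_address>

    # The output lists the LUNs (SCSI IDs) reachable through that IQN
    xe sr-probe-ext type=gfs2 device-config:provider=iscsi \
        device-config:target=<portal_address> device-config:targetIQN=<target_iqn>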

To create a shared GFS2 SR on a specific LUN of an iSCSI target, run the following command on a server in your clustered pool:

xe sr-create type=gfs2 name-label="Example GFS2 SR" --shared \
   device-config:provider=iscsi device-config:targetIQN=<target_iqns> \
   device-config:target=<portal_address> device-config:SCSIid=<scsi_id>

If the iSCSI target is not reachable while GFS2 filesystems are mounted, some hosts in the clustered pool might fence.

For more information about working with iSCSI SRs, see Software iSCSI support.
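
After the SR is created, you can check that it is plugged on every host in the clustered pool. The SR UUID is a placeholder:

    # Each host in the pool should have a PBD for the SR with currently-attached=true
    xe pbd-list sr-uuid=<sr_uuid> params=host-uuid,currently-attached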

Create a shared GFS2 over HBA SR

You can create GFS2 over HBA SRs by using XenCenter. For more information, see Hardware HBA storage in the XenCenter product documentation.

Alternatively, you can use the xe CLI to create a GFS2 over HBA SR.

Device-config parameters for GFS2 SRs:

Parameter name   Description                                               Required?
provider         The block provider implementation. In this case, hba.    Yes
SCSIid           Device SCSI ID.                                           Yes

You can find the values to use for the SCSIid parameter by using the xe sr-probe-ext command.

xe sr-probe-ext type=<type> host-uuid=<host_uuid> device-config:=<config> sm-config:=<sm_config>

  1. Start by running the following command:

    xe sr-probe-ext type=gfs2 device-config:provider=hba
    

    The output from the command prompts you to supply additional parameters and gives a list of possible values at each step.

  2. Repeat the command, adding new parameters each time.

  3. When the command output starts with The following SRs were found:, you can use the device-config parameters that you specified to locate the SR when running the xe sr-create command.
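
For example, the HBA probe typically needs only the SCSI ID. The SCSI ID below is a placeholder; the first command lists the SCSI IDs of the LUNs that the host can see over its HBAs:

    # The output lists the LUNs (SCSI IDs) visible to the host over its HBAs
    xe sr-probe-ext type=gfs2 device-config:provider=hba

    # Repeating the probe with a SCSI ID narrows the result to a single LUN
    xe sr-probe-ext type=gfs2 device-config:provider=hba \
        device-config:SCSIid=<device_scsi_id>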

To create a shared GFS2 SR on a specific LUN of an HBA target, run the following command on a server in your clustered pool:

xe sr-create type=gfs2 name-label="Example GFS2 SR" --shared \
  device-config:provider=hba device-config:SCSIid=<device_scsi_id>

For more information about working with HBA SRs, see Hardware host bus adapters.

Constraints

Shared GFS2 storage currently has the following constraints:

  • VM migration with storage live migration is not supported for VMs whose VDIs are on a GFS2 SR.
  • The FCoE protocol is not supported with GFS2 SRs.
  • Trim/unmap is not supported on GFS2 SRs.
  • Performance metrics are not available for GFS2 SRs and disks on these SRs.
  • Changed block tracking is not supported for VDIs stored on GFS2 SRs.
  • You cannot export VDIs that are greater than 2 TiB as VHD or OVA/OVF. However, you can export VMs with VDIs larger than 2 TiB in XVA format (a minimal export command is sketched after this list).
  • Clustered pools only support up to 16 hosts per pool.
  • If a network has been used for both management and clustering, you cannot separate the management network without recreating the cluster.
  • Changing the IP address of the cluster network by using XenCenter requires clustering and GFS2 to be temporarily disabled.
  • Do not change the bonding of your clustering network while the cluster is live and has running VMs. This action can cause the cluster to fence.
  • If you have an IP address conflict (multiple hosts having the same IP address) on your clustering network involving at least one host with clustering enabled, the hosts do not fence. To fix this issue, resolve the IP address conflict.
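
As noted in the list above, VMs whose VDIs are larger than 2 TiB can still be exported in XVA format. This is a minimal sketch; the VM name and file path are placeholders:

    # Export the VM, including its VDIs, as an XVA package
    xe vm-export vm="<vm_name_or_uuid>" filename=/mnt/backup/example-vm.xva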