Manage storage repositories

This section covers creating storage repository types and making them available to your XenServer host. It also covers various operations required in the ongoing management of Storage Repositories (SRs), including Live VDI Migration.

Create storage repositories

This section explains how to create Storage Repositories (SRs) of different types and make them available to your XenServer host. The examples provided cover creating SRs using the xe CLI. For details on using the New Storage Repository wizard to add SRs using XenCenter, see the XenCenter help.

Note:

Local SRs of type lvm and ext3 can only be created by using the xe CLI. After creation, you can manage all SR types by using either XenCenter or the xe CLI.

There are two basic steps to create a storage repository for use on a host by using the CLI:

  1. Probe the SR type to determine values for any required parameters.

  2. Create the SR to initialize the SR object and associated PBD objects, plug the PBDs, and activate the SR.

These steps differ in detail depending on the type of SR being created. In all examples, the sr-create command returns the UUID of the created SR if successful.
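
For a local SR, no probe step is needed because the device path is already known. For example, a minimal sketch of creating a local LVM SR on a spare disk, where the host UUID and the device path /dev/sdb are placeholders for your own values:

xe sr-create host-uuid=valid_host_uuid content-type=user \
name-label="Example Local LVM SR" shared=false \
device-config:device=/dev/sdb type=lvm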

SRs can be destroyed when no longer in use to free up the physical device. SRs can also be forgotten to detach the SR from one XenServer host and attach it to another. For more information, see Remove SRs later in this section.

Probe an SR

The sr-probe command can be used in the following ways:

  • To identify unknown parameters for use in creating an SR
  • To return a list of existing SRs

In both cases, sr-probe works by specifying an SR type and one or more device-config parameters for that SR type. If an incomplete set of parameters is supplied, the sr-probe command returns an error message indicating that parameters are missing, along with the possible options for the missing parameters. When a complete set of parameters is supplied, a list of existing SRs is returned. All sr-probe output is returned as XML.

For example, a known iSCSI target can be probed by specifying its name or IP address. The set of IQNs available on the target is returned:

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10

Error code: SR_BACKEND_FAILURE_96
Error parameters: , The request is missing or has an incorrect target IQN parameter, \
<?xml version="1.0" ?>
<iscsi-target-iqns>
    <TGT>
        <Index>
            0
        </Index>
        <IPAddress>
            192.168.1.10
        </IPAddress>
        <TargetIQN>
            iqn.192.168.1.10:filer1
        </TargetIQN>
    </TGT>
</iscsi-target-iqns>

Probing the same target again and specifying both the name/IP address and desired IQN returns the set of SCSIids (LUNs) available on the target/IQN.

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10  \
device-config:targetIQN=iqn.192.168.1.10:filer1

Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
    <LUN>
        <vendor>
            IET
        </vendor>
        <LUNid>
            0
        </LUNid>
        <size>
            42949672960
        </size>
        <SCSIid>
            149455400000000000000000002000000b70200000f000000
        </SCSIid>
    </LUN>
</iscsi-target>

Probing the same target and supplying all three parameters returns a list of SRs that exist on the LUN, if any.

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10  \
device-config:targetIQN=iqn.192.168.1.10:filer1 \
device-config:SCSIid=149455400000000000000000002000000b70200000f000000

<?xml version="1.0" ?>
<SRlist>
    <SR>
        <UUID>
            3f6e1ebd-8687-0315-f9d3-b02ab3adc4a6
        </UUID>
        <Devlist>
            /dev/disk/by-id/scsi-149455400000000000000000002000000b70200000f000000
        </Devlist>
    </SR>
</SRlist>

The following parameters can be probed for each SR type:

SR type       device-config parameter             Can be probed?      Required for sr-create?
              (in order of dependency)

lvmoiscsi     target                              No                  Yes
              chapuser                            No                  No
              chappassword                        No                  No
              targetIQN                           Yes                 Yes
              SCSIid                              Yes                 Yes

lvmohba       SCSIid                              Yes                 Yes

NetApp        target                              No                  Yes
              username                            No                  Yes
              password                            No                  Yes
              chapuser                            No                  No
              chappassword                        No                  No
              aggregate                           Yes (see note 1)    Yes
              FlexVols                            No                  No
              allocation                          No                  No
              asis                                No                  No

nfs           server                              No                  Yes
              serverpath                          Yes                 Yes

lvm           device                              No                  Yes

ext           device                              No                  Yes

EqualLogic    target                              No                  Yes
              username                            No                  Yes
              password                            No                  Yes
              chapuser                            No                  No
              chappassword                        No                  No
              storagepool                         Yes (see note 2)    Yes

Notes:

  1. Aggregate probing is only possible at sr-create time.
  2. Storage pool probing is only possible at sr-create time.
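
For example, for an NFS SR you can supply the server parameter on its own, and sr-probe returns the exported paths available on that filer as XML; these values can then be passed as device-config:serverpath to sr-create. A sketch, assuming a hypothetical NFS server at 192.168.1.20:

xe sr-probe type=nfs device-config:server=192.168.1.20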

Remove SRs

A Storage Repository (SR) can be removed either temporarily or permanently.

Detach: Breaks the association between the storage device and the pool or host (PBD Unplug). The SR (and its VDIs) becomes inaccessible. The contents of the VDIs and the meta-information used by VMs to access the VDIs are preserved. Detach can be used when you temporarily take an SR offline, for example, for maintenance. A detached SR can later be reattached.

Forget: Preserves the contents of the SR on the physical disk, but permanently deletes the information that connects a VM to its VDIs. Forget allows you to reattach the SR, for example to another XenServer host, without removing any of the SR contents.

Destroy: Deletes the contents of the SR from the physical disk.

For Destroy or Forget, the PBD connected to the SR must be unplugged from the host.
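
If you do not know the UUID of the PBD, you can look it up from the SR (sr_uuid is a placeholder):

xe pbd-list sr-uuid=sr_uuid params=uuid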

  1. Unplug the PBD to detach the SR from the corresponding XenServer host:

    xe pbd-unplug uuid=pbd_uuid
    
  2. Use the sr-destroy command to remove an SR. The command deletes the SR and the corresponding PBD from the XenServer host database and deletes the SR contents from the physical disk:

    xe sr-destroy uuid=sr_uuid
    
  3. Use the sr-forget command to forget an SR. The command removes the SR and corresponding PBD from the XenServer host database but leaves the actual SR content intact on the physical media:

    xe sr-forget uuid=sr_uuid
    

Note:

It can take some time for the software object corresponding to the SR to be garbage collected.

Introduce an SR

To reintroduce a previously forgotten SR, introduce the SR, create a PBD, and manually plug the PBD to the appropriate XenServer hosts to activate the SR.

The following example introduces an SR of type lvmoiscsi.

  1. Probe the existing SR to determine its UUID:

    xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
    device-config:targetIQN=iqn.192.168.1.10:filer1 \
    device-config:SCSIid=149455400000000000000000002000000b70200000f000000
    
  2. Introduce the SR by using the SR UUID returned from the sr-probe command. The UUID of the introduced SR is returned:

    xe sr-introduce content-type=user name-label="Example Shared LVM over iSCSI SR" \
    shared=true uuid=valid_sr_uuid type=lvmoiscsi
    
  3. Create a PBD to accompany the SR. The UUID of the new PBD is returned:

    xe pbd-create type=lvmoiscsi host-uuid=valid_uuid sr-uuid=valid_sr_uuid \
    device-config:target=192.168.1.10 \
    device-config:targetIQN=iqn.192.168.1.10:filer1 \
    device-config:SCSIid=149455400000000000000000002000000b70200000f000000
    
  4. Plug the PBD to attach the SR:

    xe pbd-plug uuid=pbd_uuid
    
  5. Verify the status of the PBD plug. If successful, the currently-attached property is true:

    xe pbd-list sr-uuid=sr_uuid
    

Note:

Perform steps 3 through 5 for each server in the resource pool. These steps can also be performed using the Repair Storage Repository function in XenCenter.

Live LUN expansion

To fulfill capacity requirements, you may need to add capacity to the storage array to increase the size of the LUN provisioned to the XenServer host. Live LUN Expansion allows you to increase the size of the LUN without any VM downtime.

After adding more capacity to your storage array, enter the following command:

xe sr-scan sr-uuid=sr_uuid

This command rescans the SR, and any extra capacity is added and made available.
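
To confirm that the extra capacity is visible, you can compare the SR size fields before and after the rescan; a sketch using a placeholder UUID:

xe sr-list uuid=sr_uuid params=physical-size,physical-utilisation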

This operation is also available in XenCenter. Select the SR to resize, and then click Rescan. For more information, press F1 to display the XenCenter help.

Warnings:

  • It is not possible to shrink or truncate LUNs. Reducing the LUN size on the storage array can lead to data loss.
  • LUN resize is not supported for GFS2 SRs.

Live VDI migration

Live VDI migration allows the administrator to relocate a VM’s Virtual Disk Image (VDI) without shutting down the VM. This feature enables administrative operations such as:

  • Moving a VM from cheap local storage to fast, resilient, array-backed storage.
  • Moving a VM from a development environment to a production environment.
  • Moving between tiers of storage when a VM is limited by storage capacity.
  • Performing storage array upgrades.

Limitations and caveats

Live VDI Migration is subject to the following limitations and caveats:

  • There must be sufficient disk space available on the target repository.

  • VDIs with more than one snapshot cannot be migrated.

To move virtual disks by using XenCenter

  1. In the Resources pane, select the SR where the Virtual Disk is stored and then click the Storage tab.

  2. In the Virtual Disks list, select the Virtual Disk that you would like to move, and then click Move.

  3. In the Move Virtual Disk dialog box, select the target SR that you would like to move the VDI to.

    Note:

    Ensure that the SR has sufficient space for another virtual disk: the available space is shown in the list of available SRs.

  4. Click Move to move the virtual disk.

For xe CLI reference, see vdi-pool-migrate.
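
For example, a minimal sketch of migrating a single VDI from the CLI while the VM continues to run (both UUIDs are placeholders):

xe vdi-pool-migrate uuid=valid_vdi_uuid sr-uuid=destination_sr_uuid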

Cold VDI migration between SRs (offline migration)

VDIs associated with a VM can be copied from one SR to another to accommodate maintenance requirements or tiered storage configurations. XenCenter enables you to copy a VM and all of its VDIs to the same or a different SR. A combination of XenCenter and the xe CLI can be used to copy individual VDIs.

For xe CLI reference, see vm-migrate.

Copy all of a VM’s VDIs to a different SR

The XenCenter Copy VM function creates copies of all VDIs for a selected VM on the same or a different SR. The source VM and VDIs are not affected by default. To move the VM to the selected SR rather than creating a copy, select the Remove original VM option in the Copy Virtual Machine dialog box.

  1. Shut down the VM.
  2. Within XenCenter, select the VM and then select the VM > Copy VM option.
  3. Select the desired target SR.
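
A roughly equivalent sketch from the xe CLI uses the vm-copy command, which copies a halted VM and its disks to the specified SR (the name and UUIDs are placeholders):

xe vm-copy vm=valid_vm_uuid new-name-label="Copied VM" sr-uuid=valid_sr_uuid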

Copy individual VDIs to a different SR

A combination of the xe CLI and XenCenter can be used to copy individual VDIs between SRs.

  1. Shut down the VM.

  2. Use the xe CLI to identify the UUIDs of the VDIs to be moved. If the VM has a DVD drive, its vdi-uuid is listed as not in database and can be ignored.

    xe vbd-list vm-uuid=valid_vm_uuid
    

    Note:

    The vbd-list command displays both the VBD and VDI UUIDs. Be sure to record the VDI UUIDs rather than the VBD UUIDs.

  3. In XenCenter, select the VM Storage tab. For each VDI to be moved, select the VDI and click the Detach button. This step can also be done using the vbd-destroy command.

    Note:

    If you use the vbd-destroy command to detach the VDI UUIDs, first check if the VBD has the parameter other-config:owner set to true. Set this parameter to false. Issuing the vbd-destroy command with other-config:owner=true also destroys the associated VDI.

  4. Use the vdi-copy command to copy each of the VM VDIs to be moved to the desired SR.

    xe vdi-copy uuid=valid_vdi_uuid sr-uuid=valid_sr_uuid
    
  5. In XenCenter, select the VM Storage tab. Click the Attach button and select the VDIs from the new SR. This step can also be done by using the vbd-create command.

  6. To delete the original VDIs, select the Storage tab of the original SR in XenCenter. The original VDIs are listed with an empty value for the VM field. Use the Delete button to delete the VDI.
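
The same procedure can also be carried out entirely from the xe CLI. The following is a sketch of steps 2 through 6, assuming placeholder UUIDs and that the copied VDI is reattached as device 0:

# Identify the VBDs on the VM and the VDIs they attach (record the vdi-uuid values)
xe vbd-list vm-uuid=valid_vm_uuid params=uuid,vdi-uuid,other-config

# Clear the owner flag so that destroying the VBD does not also destroy the VDI
xe vbd-param-set uuid=valid_vbd_uuid other-config:owner=false

# Detach the VDI from the VM
xe vbd-destroy uuid=valid_vbd_uuid

# Copy the VDI to the target SR; the UUID of the new VDI is returned
xe vdi-copy uuid=valid_vdi_uuid sr-uuid=valid_sr_uuid

# Reattach the copied VDI to the VM
xe vbd-create vm-uuid=valid_vm_uuid vdi-uuid=new_vdi_uuid device=0

# Delete the original VDI
xe vdi-destroy uuid=valid_vdi_uuid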

Convert local Fibre Channel SRs to shared SRs

Use the xe CLI and the XenCenter Repair Storage Repository feature to convert a local FC SR to a shared FC SR:

  1. Upgrade all hosts in the resource pool to XenServer 7.6.

  2. Ensure that all hosts in the pool have the SR’s LUN zoned appropriately. See Probe an SR for details on using the sr-probe command to verify that the LUN is present on each host.

  3. Convert the SR to shared:

    xe sr-param-set shared=true uuid=local_fc_sr
    
  4. The SR is moved from the host level to the pool level in XenCenter, indicating that it is now shared. The SR is marked with a red exclamation mark to show that it is not currently plugged on all hosts in the pool.

  5. Select the SR and then select the Storage > Repair Storage Repository option.

  6. Click Repair to create and plug a PBD for each host in the pool.
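
If you prefer the CLI, the Repair operation corresponds roughly to creating and plugging a PBD for the SR on each additional host in the pool; a sketch with placeholder values:

xe pbd-create host-uuid=valid_host_uuid sr-uuid=local_fc_sr \
device-config:SCSIid=valid_scsi_id
xe pbd-plug uuid=new_pbd_uuid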

Reclaim space for block-based storage on the backing array using discard

You can use space reclamation to free up unused blocks on a thinly provisioned LUN. After the space is released, the storage array can then reuse this reclaimed space.

Note:

Space reclamation is only available on some types of storage arrays. To determine whether your array supports this feature and whether it needs a specific configuration, see the Hardware Compatibility List and your storage vendor specific documentation.

To reclaim the space using XenCenter:

  1. Select the Infrastructure view, and then choose the server or pool connected to the SR.

  2. Click the Storage tab.

  3. Select the SR from the list, and click Reclaim freed space.

  4. Click Yes to confirm the operation.

  5. Click Notifications and then Events to view the status of the operation.

For more information, press F1 in XenCenter to access the Online Help.

Notes:

  • This operation is available only in XenCenter.
  • The operation is only available for LVM-based SRs that are based on thinly provisioned LUNs on the array. Local SSDs can also benefit from space reclamation.
  • Space reclamation is not required for file-based SRs such as NFS and Ext3. The Reclaim Freed Space button is not available in XenCenter for these SR types.
  • Space Reclamation is an intensive operation and can lead to a degradation in storage array performance. Therefore, only initiate this operation when space reclamation is required on the array. Citrix recommends that you schedule this work outside of peak array demand hours.

Automatically reclaim space when deleting snapshots

When deleting snapshots with XenServer, space allocated on LVM-based SRs is reclaimed automatically and a VM reboot is not required. This operation is known as ‘Online Coalescing’.

Online Coalescing only applies to LVM-based SRs (LVM, LVMoISCSI, and LVMoHBA). It does not apply to EXT or NFS SRs, whose behavior remains unchanged. In certain cases, automated space reclamation might be unable to proceed. We recommend that you use the Off-Line Coalesce tool in these scenarios:

  • Under conditions where a VM I/O throughput is considerable
  • In conditions where space is not being reclaimed after a period

Notes:

  • Running the Off-Line Coalesce tool incurs some downtime for the VM, due to the suspend/resume operations performed.
  • Before running the tool, delete any snapshots and clones you no longer want. The tool reclaims as much space as possible given the remaining snapshots/clones. If you want to reclaim the entire space, delete all snapshots and clones.
  • VM disks must be either on shared or local storage for a single host. VMs with disks in both types of storage cannot be coalesced.

Reclaim space by using the off-line coalesce tool

Note:

Online Coalescing only applies to LVM-based SRs (LVM, LVMoISCSI, and LVMoHBA). It does not apply to EXT or NFS SRs, whose behavior remains unchanged.

Enable hidden objects in XenCenter by clicking View > Hidden objects. In the Resources pane, select the VM for which you want to obtain the UUID. The UUID is displayed in the General tab.

In the Resources pane, select the resource pool master (the first host in the list). The General tab displays the UUID. If you are not using a resource pool, select the VM’s host.
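
Alternatively, both UUIDs can be retrieved from the xe CLI; a sketch, assuming a hypothetical VM name:

xe vm-list name-label="My VM" params=uuid
xe pool-list params=master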

  1. Open a console on the host and run the following command:

    xe host-call-plugin host-uuid=host-UUID \
        plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=VM-UUID
    

    For example, if the VM UUID is 9bad4022-2c2d-dee6-abf5-1b6195b1dad5 and the host UUID is b8722062-de95-4d95-9baa-a5fe343898ea, run the following command:

    xe host-call-plugin host-uuid=b8722062-de95-4d95-9baa-a5fe343898ea \
        plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=9bad4022-2c2d-dee6-abf5-1b6195b1dad5
    
  2. This command suspends the VM (unless it is already powered down), initiates the space reclamation process, and then resumes the VM.

Notes:

Citrix recommends that you shut down or suspend the VM manually before executing the off-line coalesce tool. You can shut down or suspend the VM using either XenCenter or the XenServer CLI. If you execute the coalesce tool on a running VM, the tool automatically suspends the VM, performs the required VDI coalesce operations, and resumes the VM.

If the Virtual Disk Images (VDIs) to be coalesced are on shared storage, you must execute the off-line coalesce tool on the pool master.

If the VDIs to be coalesced are on local storage, execute the off-line coalesce tool on the server to which the local storage is attached.

Adjust the disk I/O scheduler

For general performance, the default disk scheduler noop is applied on all new SR types. The noop scheduler provides the fairest performance for competing VMs accessing the same device. To apply disk QoS, it is necessary to override the default setting and assign the cfq disk scheduler to the SR. The corresponding PBD must be unplugged and replugged for the scheduler parameter to take effect. The disk scheduler can be adjusted using the following command:

xe sr-param-set other-config:scheduler=noop|cfq|anticipatory|deadline \
uuid=valid_sr_uuid
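
For example, to switch an SR to the cfq scheduler and replug its PBD, you can look up the PBD UUID from the SR first (all UUIDs shown are placeholders):

xe pbd-list sr-uuid=valid_sr_uuid params=uuid
xe sr-param-set other-config:scheduler=cfq uuid=valid_sr_uuid
xe pbd-unplug uuid=pbd_uuid
xe pbd-plug uuid=pbd_uuid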

Note:

This command does not affect EqualLogic, NetApp, or NFS storage.

Virtual disk QoS settings

Virtual disks have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual disks using the xe CLI as described in this section.

For a shared SR, where multiple hosts access the same LUN, the QoS setting is applied to VBDs accessing the LUN from the same host. QoS is not applied across hosts in the pool.

Before configuring any QoS parameters for a VBD, ensure that the disk scheduler for the SR has been set appropriately. See Adjusting the disk I/O scheduler in the previous section for details on how to adjust the scheduler. The scheduler parameter must be set to cfq on the SR for which the QoS is desired.

Note:

Remember to set the scheduler to cfq on the SR, and to ensure that the PBD has been replugged for the scheduler change to take effect.

The first parameter is qos_algorithm_type. This parameter must be set to the value ionice, which is the only type of QoS algorithm supported for virtual disks in this release.

The QoS parameters themselves are set with key/value pairs assigned to the qos_algorithm_params parameter. For virtual disks, qos_algorithm_params takes a sched key and, depending on the value, also requires a class key.

Possible values of qos_algorithm_params:sched are:

  • sched=rt or sched=real-time sets the QoS scheduling parameter to real-time priority, which requires a class parameter to set a value
  • sched=idle sets the QoS scheduling parameter to idle priority, which requires no class parameter to set any value
  • sched=anything sets the QoS scheduling parameter to best-effort priority, which requires a class parameter to set a value

The possible values for class are:

  • One of the following keywords: highest, high, normal, low, lowest

  • An integer between 0 and 7, where 7 is the highest priority and 0 is the lowest. For example, I/O requests with a priority of 5 are given priority over I/O requests with a priority of 2.

To enable the disk QoS settings, you must also set the other-config:scheduler to cfq and replug PBDs for the storage in question.

For example, the following CLI commands set the virtual disk’s VBD to use real-time priority 5:

xe vbd-param-set uuid=vbd_uuid qos_algorithm_type=ionice
xe vbd-param-set uuid=vbd_uuid qos_algorithm_params:sched=rt
xe vbd-param-set uuid=vbd_uuid qos_algorithm_params:class=5
xe sr-param-set uuid=sr_uuid other-config:scheduler=cfq
xe pbd-unplug uuid=pbd_uuid
xe pbd-plug uuid=pbd_uuid