Hosts and resource pools

This section describes how to create resource pools, through a series of examples that use the xe command line interface (CLI). A simple NFS-based shared storage configuration is presented, and several simple VM management examples are discussed. The section also contains procedures for dealing with physical node failures.

Citrix Hypervisor servers and resource pools overview

A resource pool comprises multiple Citrix Hypervisor server installations, bound together into a single managed entity that can host virtual machines. When combined with shared storage, a resource pool enables VMs to be started on any Citrix Hypervisor server that has sufficient memory. The VMs can then be moved dynamically among Citrix Hypervisor servers while running, with minimal downtime (live migration). If an individual Citrix Hypervisor server suffers a hardware failure, the administrator can restart the failed VMs on another Citrix Hypervisor server in the same resource pool. When high availability is enabled on the resource pool, VMs automatically move to another host when their host fails. Up to 64 hosts are supported per resource pool, although this restriction is not enforced.

A pool always has at least one physical node, known as the master. Only the master node exposes an administration interface (used by XenCenter and the Citrix Hypervisor Command Line Interface, known as the xe CLI). The master forwards commands to individual members as necessary.

Note:

When the pool master fails, master re-election takes place only if high availability is enabled.
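If high availability is not enabled and the master fails, one recovery path is to promote a member manually. As a minimal, hedged sketch (run on the member that you want to promote; exact behavior depends on your Citrix Hypervisor version):

xe pool-emergency-transition-to-master
xe pool-recover-slaves

The second command instructs the new master to contact the remaining members and point them at it.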

Requirements for creating resource pools

A resource pool is a homogeneous (or heterogeneous with restrictions) aggregate of one or more Citrix Hypervisor servers, up to a maximum of 64. The definition of homogeneous is:

  • CPUs on the server joining the pool are the same (in terms of the vendor, model, and features) as the CPUs on servers already in the pool.

  • The server joining the pool is running the same version of Citrix Hypervisor software, at the same patch level, as servers already in the pool.

The software enforces extra constraints when joining a server to a pool. In particular, Citrix Hypervisor checks that the following conditions are true for the server joining the pool:

  • The server is not a member of an existing resource pool.

  • The server has no shared storage configured.

  • The server is not hosting any running or suspended VMs.

  • No active operations are in progress on the VMs on the server, such as a VM shutting down.

  • The clock on the server is synchronized to the same time as the pool master (for example, by using NTP).

  • The management interface of the server is not bonded. You can configure the management interface after the server has successfully joined the pool.

  • The management IP address is static, either configured on the server itself or by using an appropriate configuration on your DHCP server.

Citrix Hypervisor servers in resource pools can contain different numbers of physical network interfaces and have local storage repositories of varying size. In practice, it is often difficult to obtain multiple servers with exactly the same CPUs, and so minor variations are permitted. If it is acceptable to have hosts with varying CPUs as part of the same pool, you can force the pool join operation by passing the --force parameter, as shown below.
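A hedged sketch of a forced join, run from the joining host's console (the host name and credentials are placeholders; in the xe CLI the flag is expressed as force=true):

xe pool-join master-address=host1 master-username=root \
    master-password=password force=true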

All hosts in the pool must be in the same site and connected by a low latency network.

Note:

Servers providing shared NFS or iSCSI storage for the pool must have a static IP address.

A pool must contain shared storage repositories to be able to choose the Citrix Hypervisor server on which to run a VM and to move VMs between Citrix Hypervisor servers dynamically. If possible, create a pool only after shared storage is available. We recommend that you move existing VMs with disks located in local storage to shared storage after adding shared storage. You can use the xe vm-copy command or XenCenter to move VMs, as in the sketch that follows.
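For example, a hedged sketch of copying a VM onto a shared SR by using the CLI (the VM name, new label, and SR UUID are placeholders):

xe vm-copy vm=vm_name new-name-label=vm_name_shared sr-uuid=shared_sr_uuid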

Create a resource pool

Resource pools can be created using XenCenter or the CLI. When a new host joins a resource pool, the joining host synchronizes its local database with the pool-wide one, and inherits some settings from the pool:

  • The VM, local, and remote storage configuration of the joining host is added to the pool-wide database. This configuration remains applied to the joining host in the pool unless you explicitly make the resources shared after the host joins the pool.

  • The joining host inherits existing shared storage repositories in the pool. Appropriate PBD records are created so that the new host can access existing shared storage automatically.

  • Networking information is partially inherited by the joining host: the structural details of NICs, VLANs, and bonded interfaces are all inherited, but policy information is not. This policy information, which must be reconfigured, includes:

    • The IP addresses of management NICs, which are preserved from the original configuration.

    • The location of the management interface, which remains the same as the original configuration. For example, if the other pool hosts have management interfaces on a bonded interface, the joining host must be migrated to the bond after joining.

    • Dedicated storage NICs, which must be reassigned to the joining host from XenCenter or the CLI, and the PBDs replugged to route the traffic accordingly. This is because IP addresses are not assigned as part of the pool join operation, and the storage NIC works only when this is correctly configured. See Manage networking for details on how to dedicate a storage NIC from the CLI.

Note:

You can only join a new host to a resource pool when the host’s management interface is on the same tagged VLAN as that of the resource pool.

To join Citrix Hypervisor servers host1 and host2 into a resource pool by using the CLI

  1. Open a console on Citrix Hypervisor server host2.

  2. Command Citrix Hypervisor server host2 to join the pool on Citrix Hypervisor server host1 by issuing the command:

    xe pool-join master-address=host1 master-username=administrators_username master-password=password
    

    The master-address must be set to the fully qualified domain name of Citrix Hypervisor server host1. The password must be the administrator password set when Citrix Hypervisor server host1 was installed.

Citrix Hypervisor servers belong to an unnamed pool by default. To create your first resource pool, rename the existing nameless pool. Use tab-complete to find the pool_uuid:

xe pool-param-set name-label="New Pool" uuid=pool_uuid
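
Alternatively, a hedged one-liner that fetches the UUID directly (xe pool-list --minimal prints the bare UUID of the single pool):

xe pool-param-set name-label="New Pool" uuid=$(xe pool-list --minimal)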

Create heterogeneous resource pools

Citrix Hypervisor simplifies expanding deployments over time by allowing disparate host hardware to be joined into a resource pool, known as a heterogeneous resource pool. Heterogeneous resource pools are made possible by technologies in Intel (FlexMigration) and AMD (Extended Migration) CPUs that provide CPU “masking” or “leveling”. The CPU masking and leveling features allow a CPU to be configured to appear as providing a different make, model, or functionality than it actually does. This capability enables you to create pools of hosts with disparate CPUs and still safely support live migration.

Note:

The CPUs of Citrix Hypervisor servers joining heterogeneous pools must be of the same vendor (that is, AMD, Intel) as the CPUs of the hosts already in the pool. However, the specific type (family, model, and stepping numbers) need not be the same.

Citrix Hypervisor simplifies the support of heterogeneous pools. Hosts can now be added to existing resource pools, irrespective of the underlying CPU type (as long as the CPU is from the same vendor family). The pool feature set is dynamically calculated every time:

  • A new host joins the pool

  • A pool member leaves the pool

  • A pool member reconnects following a reboot

Any change in the pool feature set does not affect VMs that are currently running in the pool. A running VM continues to use the feature set that was applied when it was started. This feature set is fixed at boot and persists across migrate, suspend, and resume operations. If the pool level drops when a less-capable host joins the pool, a running VM can be migrated to any host in the pool, except the newly added host. When you move or migrate a VM to a different host within or across pools, Citrix Hypervisor compares the VM’s feature set against the feature set of the destination host. If the feature sets are found to be compatible, the VM is allowed to migrate. This enables the VM to move freely within and across pools, regardless of the CPU features the VM is using. If you use Workload Balancing to select an optimal destination host to migrate your VM, a host with an incompatible feature set is not recommended as the destination host.
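To inspect the features involved, a hedged sketch that reads a host's CPU information and the pool-level field where the dynamically calculated feature set is recorded (the UUIDs are placeholders; field availability may vary by version):

xe host-param-get uuid=host_uuid param-name=cpu_info
xe pool-param-get uuid=pool_uuid param-name=cpu_info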

Add shared storage

For a complete list of supported shared storage types, see Storage repository formats. This section shows how shared storage (represented as a storage repository) can be created on an existing NFS server.

To add NFS shared storage to a resource pool by using the CLI

  1. Open a console on any Citrix Hypervisor server in the pool.

  2. Create the storage repository on server:/path by issuing the command

    xe sr-create content-type=user type=nfs name-label="Example SR" shared=true \
        device-config:server=server \
        device-config:serverpath=path
    

    device-config:server is the hostname of the NFS server and device-config:serverpath is the path on the NFS server. As shared is set to true, the shared storage is automatically connected to every Citrix Hypervisor server in the pool. Any Citrix Hypervisor servers that join later are also connected to the storage. The Universally Unique Identifier (UUID) of the storage repository is printed on the screen.

  3. Find the UUID of the pool by running the following command:

    xe pool-list
    
  4. Set the shared storage as the pool-wide default with the command

    xe pool-param-set uuid=pool_uuid default-SR=sr_uuid
    

    As the shared storage has been set as the pool-wide default, all future VMs have their disks created on shared storage by default. See Storage repository formats for information about creating other types of shared storage.
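
    To confirm the setting, a hedged check of the pool's default-SR parameter (the UUID is a placeholder):

    xe pool-param-get uuid=pool_uuid param-name=default-SR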

Remove Citrix Hypervisor servers from a resource pool

Note:

Before removing any Citrix Hypervisor server from a pool, ensure that you shut down all the VMs running on that host. Otherwise, you may see a warning stating that the host cannot be removed.

When you remove (eject) a host from a pool, the machine is rebooted, reinitialized, and left in a state similar to a fresh installation. Do not eject Citrix Hypervisor servers from a pool if there is important data on the local disks.

To remove a host from a resource pool by using the CLI

  1. Open a console on any host in the pool.

  2. Find the UUID of the host by running the command

    xe host-list
    
  3. Eject the required host from the pool:

    xe pool-eject host-uuid=host_uuid
    

    The Citrix Hypervisor server is ejected and left in a freshly installed state.

    Warning:

    Do not eject a host from a resource pool if it contains important data stored on its local disks. All of the data is erased when a host is ejected from the pool. If you want to preserve this data, copy the VM to shared storage on the pool using XenCenter, or the xe vm-copy CLI command.

When Citrix Hypervisor servers containing locally stored VMs are ejected from a pool, those VMs remain in the pool database and are still visible to the other Citrix Hypervisor servers. The VMs do not start until the virtual disks associated with them have been changed to point at shared storage seen by other Citrix Hypervisor servers in the pool, or have been removed. Therefore, we recommend that you move any local storage to shared storage when joining a pool. Moving to shared storage allows individual Citrix Hypervisor servers to be ejected (or fail physically) without loss of data.

Note:

When a host is removed from a pool that has its management interface on a tagged VLAN network, the machine is rebooted and its management interface will be available on the same network.

Prepare a pool of Citrix Hypervisor servers for maintenance

Before performing maintenance operations on a host that is part of a resource pool, you must disable it. Disabling the host prevents any VMs from being started on it. You must then migrate its VMs to another Citrix Hypervisor server in the pool. You can do this by placing the Citrix Hypervisor server into Maintenance mode by using XenCenter. See the XenCenter Help for details.

Backup synchronization occurs every 24 hours. Placing the master host into maintenance mode results in the loss of the last 24 hours of RRD updates for offline VMs.

Warning:

We highly recommend rebooting all Citrix Hypervisor servers before installing an update and then verifying their configuration. Some configuration changes only take effect when Citrix Hypervisor is rebooted, so the reboot may uncover configuration problems that can cause the update to fail.

To prepare a host in a pool for maintenance operations by using the CLI

  1. Run the following command:

    xe host-disable uuid=host_uuid
    xe host-evacuate uuid=host_uuid
    

    These commands disable the Citrix Hypervisor server and then migrate any running VMs to other Citrix Hypervisor servers in the pool.

  2. Perform the desired maintenance operation.

  3. Enable the Citrix Hypervisor server when the maintenance operation is complete:

    xe host-enable uuid=host_uuid
    
  4. Restart any halted VMs and resume any suspended VMs.
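
    For example, a hedged sketch of restarting a halted VM and resuming a suspended VM (the VM and host names are placeholders; on= optionally places the restarted VM on a specific host):

    xe vm-start vm=vm_name on=host_name
    xe vm-resume vm=vm_name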

Export resource pool data

The Export Resource Data option allows you to generate a resource data report for your pool and export the report into an .xls or .csv file. This report provides detailed information about various resources in the pool, such as servers, networks, storage, virtual machines, VDIs, and GPUs. This feature enables administrators to track, plan, and assign resources based on various workloads such as CPU, storage, and network.

Note:

Export Resource Pool Data is available for Citrix Hypervisor Premium Edition customers, or those who have access to Citrix Hypervisor through their Citrix Virtual Apps and Desktops entitlement.

The following lists the resources and the types of resource data included in the report:

Server:

  • Name
  • Pool Master
  • UUID
  • Address
  • CPU Usage
  • Network (avg/max KBs)
  • Used Memory
  • Storage
  • Uptime
  • Description

Networks:

  • Name
  • Link Status
  • MAC
  • MTU
  • VLAN
  • Type
  • Location

VDI:

  • Name
  • Type
  • UUID
  • Size
  • Storage
  • Description

Storage:

  • Name
  • Type
  • UUID
  • Size
  • Location
  • Description

VMs:

  • Name
  • Power State
  • Running on
  • Address
  • MAC
  • NIC
  • Operating System
  • Storage
  • Used Memory
  • CPU Usage
  • UUID
  • Uptime
  • Template
  • Description

GPU:

  • Name
  • Servers
  • PCI Bus Path
  • UUID
  • Power Usage
  • Temperature
  • Used Memory
  • Compute Utilization

Note:

Information about GPUs is available only if there are GPUs attached to your Citrix Hypervisor server.

To export resource data

  1. In the XenCenter Navigation pane, select Infrastructure and then select the pool.

  2. Select the Pool menu and then Export Resource Data.

  3. Browse to a location where you would like to save report and then click Save.

Host power on

Powering on hosts remotely

You can use the Citrix Hypervisor server Power On feature to turn a server on and off remotely, either from XenCenter or by using the CLI.

To enable host power, the host must have one of the following power-control solutions:

  • Wake on LAN enabled network card.

  • Dell Remote Access Cards (DRAC). To use Citrix Hypervisor with DRAC, you must install the Dell supplemental pack to get DRAC support. DRAC support requires installing the RACADM command-line utility on the server with the remote access controller and enabling DRAC and its interface. RACADM is often included in the DRAC management software. For more information, see Dell’s DRAC documentation.

  • Hewlett-Packard Integrated Lights-Out (iLO). To use Citrix Hypervisor with iLO, you must enable iLO on the host and connect its interface to the network. For more information, see HP’s iLO documentation.

  • A custom script based on the management API that enables you to turn the power on and off through Citrix Hypervisor. For more information, see Configuring a custom script for the Host Power On feature in the following section.

Using the Host Power On feature requires two tasks:

  1. Ensure that the hosts in the pool support controlling the power remotely. For example, they have Wake on LAN functionality, a DRAC or iLO card, or you have created a custom script.

  2. Enable the Host Power On functionality using the CLI or XenCenter.

Use the CLI to manage host power on

You can manage the Host Power On feature using either the CLI or XenCenter. This section provides information about managing it with the CLI.

Host Power On is enabled at the host level (that is, on each Citrix Hypervisor server).

After you enable Host Power On, you can turn on hosts using either the CLI or XenCenter.

To enable host power on by using the CLI

Run the command:

xe host-set-power-on-mode host=<host uuid> \
    power-on-mode=("" , "wake-on-lan", "iLO", "DRAC", "custom") \
    power-on-config=key:value

For iLO and DRAC, the keys are power_on_ip, power_on_user, and power_on_password_secret. Use power_on_password_secret to specify the password if you are using the secret feature. For more information, see Secrets.
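For example, a hedged sketch of enabling DRAC power-on with a password stored as a secret (the UUIDs, IP address, and user name are placeholders):

xe host-set-power-on-mode host=host_uuid power-on-mode=DRAC \
    power-on-config:power_on_ip=192.0.2.10 \
    power-on-config:power_on_user=root \
    power-on-config:power_on_password_secret=secret_uuid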

To turn on hosts remotely by using the CLI

Run the command:

xe host-power-on host=<host uuid>

Configure a custom script for the Host Power On feature

If your server’s remote-power solution uses a protocol that is not supported by default (such as Wake-On-Ring or Intel Active Management Technology), you can create a custom Linux Python script to turn on your Citrix Hypervisor computers remotely. However, you can also create custom scripts for iLO, DRAC, and Wake on LAN remote-power solutions.

This section provides information about configuring a custom script for Host Power On using the key/value pairs associated with the Citrix Hypervisor API call host.power_on.

When you create a custom script, run it from the command line each time you want to control power remotely on Citrix Hypervisor. Alternatively, you can specify it in XenCenter and use the XenCenter UI features to interact with it.

The Citrix Hypervisor API is documented in the Citrix Hypervisor Management API guide, which is available from the developer documentation website.

Warning:

Do not change the scripts provided by default in the /etc/xapi.d/plugins/ directory. You can include new scripts in this directory, but you must never change the scripts contained in that directory after installation.

Key/Value Pairs

To use Host Power On, configure the host.power_on_mode and host.power_on_config keys. See the following section for information about the values.

There is also an API call that lets you set these fields simultaneously:

void host.set_host_power_on_mode(string mode, Dictionary<string,string> config)

host.power_on_mode
  • Definition: Contains key/value pairs to specify the type of remote-power solution (for example, Dell DRAC).

  • Possible values:

    • An empty string, representing power-control disabled

    • “iLO”: Lets you specify HP iLO.

    • “DRAC”: Lets you specify Dell DRAC. To use DRAC, you must have already installed the Dell supplemental pack.

    • “wake-on-lan”: Lets you specify Wake on LAN.

    • Any other name, which specifies a custom power-on script used for power management.

  • Type: string

host.power_on_config
  • Definition: Contains key/value pairs for mode configuration. Provides additional information for iLO and DRAC.

  • Possible values:

    • If you configured iLO or DRAC as the type of remote-power solution, you must also specify the following keys:

      • “power_on_ip”: The IP address that you configured to communicate with the power-control card. Alternatively, you can enter the domain name of the network interface where iLO or DRAC is configured.

      • “power_on_user”: The iLO or DRAC user name associated with the management processor, which you may have changed from its factory default settings.

      • “power_on_password_secret”: Specifies using the secrets feature to secure your password.

    • To use the secrets feature to store your password, specify the key “power_on_password_secret”. For more information, see Secrets.

  • Type: Map (string,string)

Sample script

The sample script imports the Citrix Hypervisor API, defines itself as a custom script, and then passes parameters specific to the host you want to control remotely. All custom scripts must define the session parameter; here the function also receives remote_host and the power_on_config key/value pairs.

The result appears on the screen when the script is unsuccessful.

import XenAPI

def custom(session, remote_host, power_on_config):
    # Called by xapi when this script's name is set as the host's
    # power-on mode. power_on_config holds the key/value pairs that
    # were supplied through power-on-config.
    result = "Power On Not Successful"
    for key in power_on_config.keys():
        result = result + " key=" + key + " value=" + power_on_config[key]
    return result

Note:

After creating the script, save it in /etc/xapi.d/plugins with a .py extension.
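
To point a host at the custom script, a hedged sketch (the mode name matches the script's file name without the .py extension; the UUID is a placeholder):

xe host-set-power-on-mode host=host_uuid power-on-mode=custom_script_name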

Communicate with Citrix Hypervisor servers and resource pools

Citrix Hypervisor uses TLS protocols to encrypt management API traffic. Any communication between Citrix Hypervisor and management API clients (or appliances) uses the TLS 1.2 protocol by default. However, if the management API client or the appliance does not communicate by using TLS 1.2, earlier protocols may be used for communication.

Citrix Hypervisor uses the following cipher suites:

  • TLS_RSA_WITH_AES_128_CBC_SHA256
  • TLS_RSA_WITH_AES_256_CBC_SHA
  • TLS_RSA_WITH_AES_128_CBC_SHA
  • TLS_RSA_WITH_RC4_128_SHA
  • TLS_RSA_WITH_RC4_128_MD5
  • TLS_RSA_WITH_3DES_EDE_CBC_SHA

Citrix Hypervisor also enables you to configure your host or resource pool to allow communication through TLS 1.2 only. This option restricts communication between Citrix Hypervisor and management API clients (or appliances) to the TLS 1.2 protocol. The TLS 1.2 only option uses the cipher suite TLS_RSA_WITH_AES_128_CBC_SHA256.

Warning:

Select the TLS 1.2 only option after you ensure that all management API clients and appliances that communicate with the Citrix Hypervisor pool are compatible with TLS 1.2.
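
A hedged sketch of switching a pool to TLS 1.2 only from the CLI, assuming your Citrix Hypervisor version provides the legacy-SSL toggle commands (the UUID is a placeholder):

xe pool-disable-ssl-legacy uuid=pool_uuid

If you later need to allow earlier protocols again, the corresponding xe pool-enable-ssl-legacy command reverses the change.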

Enable IGMP snooping on your Citrix Hypervisor pool

Citrix Hypervisor sends multicast traffic to all guest VMs, leading to unnecessary load on host devices by requiring them to process packets that they have not solicited. Enabling IGMP snooping prevents hosts on a local network from receiving traffic for a multicast group that they have not explicitly joined, and improves the performance of multicast. IGMP snooping is especially useful for bandwidth-intensive IP multicast applications such as IPTV.

You can enable IGMP snooping on a pool using either XenCenter or the command-line interface. To enable IGMP snooping using XenCenter, navigate to Pool Properties and select Network Options. For more information, see the XenCenter Help. For xe commands, see pool-igmp-snooping.
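
For the CLI path, a hedged sketch of enabling IGMP snooping on a pool (the parameter name follows the pool-igmp-snooping reference mentioned above; the UUID is a placeholder):

xe pool-param-set uuid=pool_uuid igmp-snooping-enabled=true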

Notes:

  • IGMP snooping is available only when the network back-end uses Open vSwitch.

  • When enabling this feature on a pool, it may also be necessary to enable IGMP querier on one of the physical switches. Otherwise, multicast in the subnetwork falls back to broadcast and may decrease Citrix Hypervisor performance.

  • When enabling this feature on a pool running IGMP v3, VM migration or network bond failover results in the IGMP version switching to v2.

  • To enable this feature with a GRE network, you must set up an IGMP querier in the GRE network. Alternatively, you can forward the IGMP query message from the physical network into the GRE network. Otherwise, multicast traffic in the GRE network can be blocked.