Configuration limits

Citrix recommends using the following configuration limits as a guideline when selecting and configuring your virtual and physical environment for XenServer. Citrix fully supports the following tested and recommended configuration limits for XenServer.

  • Virtual machine limits

  • XenServer host limits

  • Resource pool limits

Factors such as hardware and environment can affect the limitations listed below. More information about supported hardware can be found on the Hardware Compatibility List. Consult your hardware manufacturers’ documented limits to ensure that you do not exceed the supported configuration limits for your environment.

Virtual machine (VM) limits

| Item | Limit |
| --- | --- |
| Virtual CPUs per VM (Linux) | 32 (see note 1) |
| Virtual CPUs per VM (Windows) | 32 |
| RAM per VM | 1.5 TB (see note 2) |
| Virtual Disk Images (VDI) (including CD-ROM) per VM | 255 (see note 3) |
| Virtual CD-ROM drives per VM | 1 |
| Virtual Disk Size (NFS) | 2 TB minus 4 GB |
| Virtual Disk Size (LVM) | 2 TB minus 4 GB |
| Virtual NICs per VM | 7 (see note 4) |


  1. Consult your guest OS documentation to ensure that you do not exceed the supported limits.

  2. The maximum amount of physical memory addressable by your operating system varies. Setting the memory to a level greater than the operating system supported limit may lead to performance issues within your guest. Some 32-bit Windows operating systems can support more than 4 GB of RAM through use of the physical address extension (PAE) mode. The limit for 32-bit PV Virtual Machines is 64 GB. For more information, see your guest operating system documentation and Guest operating system support.

  3. The maximum number of VDIs supported depends on the guest operating system. Consult your guest operating system documentation to ensure that you do not exceed the supported limits.

  4. Several guest operating systems have a lower limit; other guests require installation of the XenServer Tools to achieve this limit.
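The per-VM limits above can be expressed as a simple validation check. The following is a hypothetical sketch (the dictionary keys and the `check_vm` helper are not part of any XenServer API) that flags a proposed VM definition exceeding the published limits:

```python
# Per-VM limits from the table above; keys and helper are illustrative only.
VM_LIMITS = {
    "vcpus": 32,                   # Linux and Windows guests alike
    "ram_gib": 1536,               # 1.5 TB (see note 2 for guest OS caveats)
    "vdis": 255,                   # including the virtual CD-ROM drive
    "cdrom_drives": 1,
    "vnics": 7,                    # some guests support fewer (see note 4)
    "vdi_size_gib": 2 * 1024 - 4,  # 2 TB minus 4 GB (NFS and LVM)
}

def check_vm(vcpus, ram_gib, vdis, cdrom_drives, vnics, largest_vdi_gib):
    """Return the list of limits a proposed VM exceeds (empty list = fits)."""
    config = {
        "vcpus": vcpus, "ram_gib": ram_gib, "vdis": vdis,
        "cdrom_drives": cdrom_drives, "vnics": vnics,
        "vdi_size_gib": largest_vdi_gib,
    }
    return [item for item, value in config.items() if value > VM_LIMITS[item]]

# A 40-vCPU request exceeds the 32-vCPU ceiling:
print(check_vm(vcpus=40, ram_gib=64, vdis=4, cdrom_drives=1,
               vnics=2, largest_vdi_gib=500))  # → ['vcpus']
```

Note that a clean result here only means the table limits are respected; the guest operating system may impose stricter limits of its own.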

XenServer host limits

| Item | Limit |
| --- | --- |
| Logical processors per host | 288 (see note 1) |
| Concurrent VMs per host | 1000 (see note 2) |
| Concurrent protected VMs per host with HA enabled | 500 |
| Virtual GPU VMs per host | 128 (see note 3) |
| RAM per host | 5 TB (see note 4) |
| Concurrent active virtual disks per host | 4096 |
| Physical NICs per host | 16 |
| Physical NICs per network bond | 4 |
| Virtual NICs per host | 512 |
| VLANs per host | 800 |
| Network bonds per host | 4 |
| **Graphics capability** |  |
| GPUs per host | 12 (see note 5) |


  1. The maximum number of logical processors supported differs by CPU. For more information, see the Hardware Compatibility List.

  2. The maximum number of VMs per host supported depends on VM workload, system load, network configuration, and certain environmental factors. Citrix reserves the right to determine what specific environmental factors affect the maximum limit at which a system can function. For systems running more than 500 VMs, Citrix recommends allocating 8 GB RAM to the Control Domain (Dom0). For information about configuring Dom0 memory, see CTX134951 - How to Configure dom0 Memory in XenServer 6.2 and Later.

  3. For NVIDIA vGPU, 128 vGPU accelerated VMs per host with 4xM60 cards (4x32=128 VMs), or 2xM10 cards (2x64=128 VMs). For Intel GVT-g, 7 VMs per host with a 1,024 MB aperture size. Smaller aperture sizes can further restrict the number of GVT-g VMs supported per host. This figure might change. For the current supported limits, see the Hardware Compatibility List.

  4. If a host runs one or more 32-bit paravirtualized guests (Linux VMs), a maximum of 128 GB RAM is supported on the host.

  5. This figure might change. For the current supported limits, see the Hardware Compatibility List.
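The vGPU arithmetic in note 3 and the Dom0 sizing advice in note 2 can be sketched as follows. This is an illustrative calculation only (the function names, the per-card VM counts for cards other than those listed, and the 4 GB Dom0 baseline are assumptions, not XenServer API values):

```python
# NVIDIA vGPU-accelerated VMs per card, from note 3 above.
VGPU_VMS_PER_CARD = {"M60": 32, "M10": 64}
HOST_VGPU_VM_LIMIT = 128  # per-host ceiling from the table above

def max_vgpu_vms(card, count):
    """VMs supported by `count` cards of a model, capped at the host limit."""
    return min(VGPU_VMS_PER_CARD[card] * count, HOST_VGPU_VM_LIMIT)

def recommended_dom0_gib(vm_count):
    """Citrix recommends 8 GB for Dom0 when a host runs more than 500 VMs.

    The 4 GB figure used below 500 VMs is an assumed baseline for
    illustration; size Dom0 per CTX134951 for your actual deployment.
    """
    return 8 if vm_count > 500 else 4

print(max_vgpu_vms("M60", 4))     # 4 x 32 = 128
print(max_vgpu_vms("M10", 2))     # 2 x 64 = 128
print(recommended_dom0_gib(600))  # 8
```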

Resource pool limits

| Item | Limit |
| --- | --- |
| VMs per resource pool | 4096 |
| Hosts per resource pool | 64 (see note 1) |
| VLANs per resource pool | 800 |
| Active hosts per cross-server private network | 64 |
| Cross-server private networks per resource pool | 16 |
| Virtual NICs per cross-server private network | 16 |
| Cross-server private network virtual NICs per resource pool | 256 |
| Hosts per vSwitch controller | 64 |
| Virtual NICs per vSwitch controller | 1024 |
| VMs per vSwitch controller | 1024 |
| **Disaster recovery** |  |
| Integrated site recovery storage repositories per resource pool | 8 |
| Paths to a LUN | 8 |
| Multipathed LUNs per host | 256 (see note 2) |
| Multipathed LUNs per host (used by storage repositories) | 256 (see note 2) |
| VDIs per SR (NFS, SMB, EXT, GFS2) | 20000 |
| VDIs per SR (LVM) | 1000 |
| **Storage XenMotion** |  |
| (non-CDROM) VDIs per VM | 6 |
| Snapshots per VM | 1 |
| Concurrent transfers | 3 |
| Concurrent operations per pool | 25 |


  1. Clustered pools that use GFS2 storage support a maximum of 16 hosts in the resource pool.
  2. When HA is enabled, Citrix recommends increasing the default timeout to at least 120 seconds when more than 30 multipathed LUNs are present on a host. For information about increasing the HA timeout, see CTX139166 - How to Change High Availability Timeout Settings.
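The HA timeout recommendation in note 2 can be sketched as a small rule. This is a hypothetical helper (not a XenServer API; the 60-second default used here is an assumption, and the actual timeout is changed as described in CTX139166):

```python
def recommended_ha_timeout(ha_enabled, multipathed_luns, current_timeout=60):
    """Return the HA timeout (seconds) to use for a host.

    Per note 2: with HA enabled and more than 30 multipathed LUNs on a
    host, raise the timeout to at least 120 seconds. The 60 s default
    is an assumed baseline for illustration.
    """
    if ha_enabled and multipathed_luns > 30:
        return max(current_timeout, 120)
    return current_timeout

print(recommended_ha_timeout(True, 40))   # 120
print(recommended_ha_timeout(True, 10))   # 60
```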
