Prerequisites for cluster nodes
Citrix ADC appliances that are to be added to a cluster must satisfy the following criteria:
- All appliances must have the same software version and build.
- All appliances must be of the same platform type. That is, a cluster must consist entirely of hardware appliances (MPX), entirely of virtual appliances (VPX), or entirely of SDX Citrix ADC instances.
  Note
  - For a cluster of hardware appliances (MPX), the appliances must be of the same model type.
  - To form a heterogeneous cluster, all appliances must be of the MPX platform type.
  - For a cluster of virtual appliances (VPX), the appliances must be deployed on one of the following hypervisors: XenServer, Hyper-V, VMware ESX, or KVM.
  - Clusters of SDX Citrix ADC instances are supported in NetScaler 10.1 and later releases. To create a cluster of SDX Citrix ADC instances, see “Set up a cluster of Citrix NetScaler instances”.
  - Jumbo frames are supported on a Citrix ADC cluster that is made up of Citrix ADC SDX instances.
  - You can create L3 clusters of SDX instances.
- (For releases prior to NetScaler 11.0) All appliances must be on the same network. In NetScaler 11.0 and later releases, appliances can belong to different networks.
- All appliances must have the same licenses. Also, depending on the Citrix ADC version, there are some additional aspects to address:
  - For releases prior to NetScaler 10.5 Build 52.x:
    - A separate cluster license file is required. This file must be copied to the /nsconfig/license/ directory of the configuration coordinator.
    - Because the cluster license file is separate, the cluster feature is available irrespective of the Citrix ADC license.
  - For releases after NetScaler 10.5 Build 52.x:
    - No separate cluster license is required.
    - Cluster is licensed with the Advanced and Premium licenses. Cluster is not available with the Standard license.
- All appliances must be initially configured and connected to a common client-side and server-side network.
Note
For a cluster of virtual appliances that has large configurations, it is recommended to use 6 GB of RAM for each node of the cluster.
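The version/build and platform checks in the list above can be sketched as a small validation script. This is an illustrative sketch, not Citrix tooling: the node records below are hypothetical, and on a real appliance you would gather the version and build with the NetScaler CLI (for example, `show ns version`) before adding nodes to a cluster.

```python
def check_cluster_prerequisites(nodes):
    """Check that candidate cluster nodes satisfy two prerequisites:
    identical software version/build and a single platform type.

    nodes: list of dicts with 'version', 'build', and 'platform' keys
    (hypothetical structure, for illustration only).
    Returns a list of human-readable problems; an empty list means the
    checked prerequisites are satisfied.
    """
    problems = []

    # All appliances must have the same software version and build.
    versions = {(n["version"], n["build"]) for n in nodes}
    if len(versions) > 1:
        problems.append("Mixed software versions/builds: %s" % sorted(versions))

    # All appliances must be of the same platform type (MPX, VPX, or SDX).
    platforms = {n["platform"] for n in nodes}
    if len(platforms) > 1:
        problems.append("Mixed platform types: %s" % sorted(platforms))

    return problems


# Example: one VPX node mixed into an MPX cluster fails the platform check.
nodes = [
    {"version": "NS13.0", "build": "82.45", "platform": "MPX"},
    {"version": "NS13.0", "build": "82.45", "platform": "VPX"},
]
for problem in check_cluster_prerequisites(nodes):
    print(problem)
```

A check like this is only a pre-flight aid; the appliance itself enforces these constraints when a node joins the cluster.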