Setting up inter-node communication
The nodes in a cluster setup communicate with one another using the following inter-node communication mechanisms:
- Nodes that are within the network (same subnet) communicate with each other through the cluster backplane. The backplane must be explicitly set up. See the detailed steps listed below.
- Across networks (an L3 cluster), packets are steered to the owner node through a GRE tunnel, and other node-to-node communication is routed across nodes as required.
- From Release 11.0 (all builds), a cluster can include nodes from different networks.
- From Release 11.1 build 64.11, GRE steering is supported on Fortville NICs in an L3 cluster.
- In an L3 cluster deployment, packets between NetScaler appliance nodes are exchanged over an unencrypted GRE tunnel that uses the NSIP addresses of the source and destination nodes for routing. When this exchange occurs over the internet, in the absence of an IPsec tunnel, the NSIP addresses are exposed on the internet, which can result in security issues. Citrix advises customers to establish their own IPsec solution when using an L3 cluster.
To set up the cluster backplane, do the following for every node:
- Identify the network interface that you want to use for the backplane.
- Connect an Ethernet or optical cable from the selected network interface to the cluster backplane switch.
For example, to use interface 1/2 as the backplane interface for node 4, connect a cable from the 1/2 interface of node 4 to the backplane switch.
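In cluster commands, the backplane interface is specified in node-ID/interface notation. As a sketch of the example above, assuming node 4 has the hypothetical NSIP address 10.102.29.60 and uses interface 1/2 for the backplane:

```
> add cluster node 4 10.102.29.60 -state ACTIVE -backplane 4/1/2
```

For a node that is already part of the cluster, the backplane interface can instead be changed with `set cluster node 4 -backplane 4/1/2`.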
Important points to note when setting up the cluster backplane
Do not use the appliance’s management interface (0/x) as the backplane interface. In a cluster, the interface 0/1/x is read as:
- 0 -> node ID 0
- 1/x -> NetScaler appliance interface 1/x
Do not use the backplane interfaces for the client or server data planes.
Configure a link aggregate (LA) channel to optimize the throughput of the cluster backplane.
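As a sketch, assuming interfaces 1/1 and 1/2 of node 0 are connected to the backplane switch, an LA channel can be created over the node-prefixed interfaces and then used as that node's backplane:

```
> add channel 0/LA/1 -ifnum 0/1/1 0/1/2
> set cluster node 0 -backplane 0/LA/1
```

The interface names (0/1/1, 0/1/2) and the channel name are illustrative; use the interfaces that are actually cabled to your backplane switch.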
In a two-node cluster, where the backplane is connected back-to-back, the cluster is operationally DOWN under any of the following conditions:
- One of the nodes is rebooted.
- Backplane interface of one of the nodes is disabled.
Therefore, Citrix recommends that you dedicate a separate switch for the backplane, so that the other cluster node and its traffic are not impacted. Also, you cannot scale out a cluster that uses a back-to-back link; you might encounter downtime in the production environment when you scale out the cluster nodes.
Backplane interfaces of all nodes of a cluster must be connected to the same switch and bound to the same L2 VLAN.
If you have multiple clusters with the same cluster instance ID, make sure that the backplane interfaces of each cluster are bound to a different VLAN.
The backplane interface is always monitored, regardless of the HA monitoring settings of that interface.
The state of MAC spoofing on the different virtualization platforms can affect the steering mechanism on the cluster backplane. Therefore, make sure the appropriate state is configured:
- XenServer - Disable MAC spoofing
- Hyper-V - Enable MAC spoofing
- VMware ESX - Enable MAC spoofing (also make sure “Forged Transmits” is enabled)
The MTU for the cluster backplane is automatically updated. However, if jumbo frames are configured on the cluster, the MTU of the cluster backplane must be explicitly configured. Set the value to 78 + X, where X is the maximum MTU of the client and server data planes. For example, if the MTU of the server data plane is 7500 and that of the client data plane is 8922, the maximum is 8922, so the MTU of the cluster backplane must be set to 78 + 8922 = 9000. To set this MTU, use the following command:
> set interface <backplane_interface> -mtu <value> <!--NeedCopy-->
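Continuing the jumbo-frame example above, and assuming the backplane of node 0 is interface 1/2 (0/1/2 in cluster notation), the command would be:

```
> set interface 0/1/2 -mtu 9000
```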
The MTU for interfaces of the backplane switch must be set to 1578 bytes or greater if the cluster uses features such as MBF, L2 policies, ACLs, routing in CLAG deployments, and vPath.