FAQs

A list of frequently asked questions about clustering.

How many Citrix ADC appliances can be included in a single Citrix ADC cluster?

A Citrix ADC cluster can include one appliance or as many as 32 Citrix ADC nCore hardware or virtual appliances. Each of these nodes must satisfy the criteria specified in Prerequisites for Cluster Nodes.

Can a Citrix ADC appliance be a part of multiple clusters?

No. A Citrix ADC appliance can belong to one cluster only.

What is a cluster IP address? What is its subnet mask?

The cluster IP address is the management address of a Citrix ADC cluster. All cluster configurations must be performed by accessing the cluster through this address. The subnet mask of the cluster IP address is fixed at 255.255.255.255.
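For example, a minimal sketch of adding the cluster IP address from the first node of the cluster; the IP address shown is a placeholder:

add ns ip 10.102.29.61 255.255.255.255 -type clip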

How can I make a specific cluster node the cluster configuration coordinator?

To manually set a specific node as the cluster configuration coordinator, you must set the priority of that node to the lowest numeric value (the highest priority). For example, consider a cluster with three nodes that have the following priorities:

n1 - 29, n2 - 30, n3 - 31

Here, n1 is the configuration coordinator. If you want to make n2 the configuration coordinator, set its priority to a value that is lower than that of n1, for example, 28. On saving the configuration, n2 becomes the configuration coordinator.
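The following is a minimal sketch of that priority change, run from the cluster IP address. The node ID of 1 for n2 is an assumption for illustration:

set cluster node 1 -priority 28
save ns config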

Note

If the configuration coordinator goes down, the node with the next lowest priority value is selected. For example, if n1 goes down, n2 (with its original priority value of 30) becomes the configuration coordinator.

Why are the network interfaces of a cluster represented in 3-tuple (n/u/c) notation instead of the regular 2-tuple (u/c) notation?

When a Citrix ADC appliance is part of a cluster, you must be able to identify the node to which an interface belongs. Therefore, the network interface naming convention for cluster nodes is modified from u/c to n/u/c, where n denotes the node ID.

How can I set the host name for a cluster node?

The host name of a cluster node must be specified by running the set ns hostname command through the cluster IP address. For example, to set the host name of the cluster node with ID 2, the command is:

set ns hostname hostName1 -ownerNode 2

Can I automatically detect Citrix ADC appliances so that I can add them to a cluster?

Yes. The configuration utility allows you to discover appliances that are present in the same subnet as the NSIP address of the configuration coordinator. For more information, see Discovering NetScaler Appliances.

Is the traffic serving capability of a cluster affected if a node is removed, disabled, rebooted, shut down, or made inactive?

Yes. When any of these operations are performed on an active node of the cluster, the cluster has one less node to serve traffic. Also, existing connections on this node are terminated.

I have multiple standalone appliances, each of which has different configurations. Can I add them to a single cluster?

Yes. You can add appliances that have different configurations to a single cluster. However, when the appliance is added to the cluster, the existing configurations are cleared. To use the configurations that are available on each of the individual appliances, you must:

  1. Create a single *.conf file for all the configurations.
  2. Edit the configuration file to remove features that are not supported in a cluster environment.
  3. Update the naming convention of interfaces from 2-tuple (u/c) format to 3-tuple (n/u/c) format.
  4. Apply the configurations to the configuration coordinator node of the cluster by using the batch command.
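For example, a minimal sketch of step 4, where the consolidated file is applied through the cluster IP address; the file path is a placeholder:

batch -fileName /nsconfig/cluster_config.conf -outfile /var/tmp/batch_output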

Can I migrate the configurations of a standalone Citrix ADC appliance or an HA setup to the clustered setup?

No. When a node is added to a clustered setup, its configurations are implicitly cleared by using the clear ns config command (with the extended option). In addition, the SNIP addresses and all VLAN configurations (except default VLAN and NSVLAN) are cleared. Therefore, it is recommended that you back up the configurations before adding the appliance to a cluster. Before using the backed-up configuration file for the cluster, you must:

  1. Edit the configuration file to remove features that are not supported in a cluster environment.
  2. Update the naming convention of interfaces from two-tuple (x/y) format to three-tuple (x/y/z) format.
  3. Apply the configurations to the configuration coordinator node of the cluster by using the batch command.
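For example, a sketch of the interface renaming in step 2, assuming a hypothetical VLAN binding on interface 1/2 of the appliance that becomes cluster node 0:

Standalone: bind vlan 10 -ifnum 1/2
Cluster: bind vlan 10 -ifnum 0/1/2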

Are backplane interfaces part of the L3 VLANs?

Yes, by default, backplane interfaces have presence on all the L3 VLANs that are configured on the cluster.

How can I configure a cluster that includes nodes from different networks?

Note

Supported from NetScaler 11.0 onwards.

A cluster that includes nodes from different networks is called an L3 cluster (sometimes referred to as a cluster in INC mode). In an L3 cluster, all nodes that belong to a single network must be grouped in a single node group. Therefore, if a cluster includes two nodes from each of three different networks, you must create three node groups (one for each network) and associate each node group with the nodes that belong to that network. For configuration information, see the steps to set up a cluster.
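The following is a minimal sketch of the node group association for one of those networks; the cluster instance ID, node group name, node ID, NSIP address, and backplane interface are placeholders:

add cluster instance 1 -inc ENABLED
add cluster nodegroup ng_net1
add cluster node 1 10.102.29.70 -state ACTIVE -backplane 1/1/1 -nodegroup ng_net1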

How can I configure/un-configure the NSVLAN on a cluster?

Do either one of the following:

  • To make the NSVLAN available in a cluster, make sure that each appliance has the same NSVLAN configured before it is added to a cluster.

  • To remove the NSVLAN from a cluster node, first remove the node from the cluster and then delete the NSVLAN from the appliance.
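For example, a sketch of configuring the NSVLAN on an appliance before adding it to the cluster, and removing it after the node is removed from the cluster; the VLAN ID and interface are placeholders:

set ns config -nsvlan 100 -ifnum 1/1 -tagged NO
unset ns config -nsvlan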

I have a cluster set up where some Citrix ADC nodes are not connected to the external network. Can the cluster still function normally?

Yes. The cluster supports a mechanism called linksets, which allows unconnected nodes to serve traffic by using the interfaces of connected nodes. The unconnected nodes communicate with the connected nodes through the cluster backplane. For more information, see Using Linksets.
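For example, a minimal sketch of creating a linkset and binding the interfaces of the connected nodes, run on the cluster IP address; the linkset name and interfaces are placeholders:

add linkset LS/1
bind linkset LS/1 -ifnum 0/1/2 1/1/2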

How can deployments that require MAC-Based Forwarding (MBF) be supported in a clustered setup?

Deployments that use MBF must use linksets. For more information, see Using Linksets.

Can I run commands from the NSIP address of a cluster node?

No. Access to individual cluster nodes through the NSIP addresses is read-only. Therefore, when you log on to the NSIP address of a cluster node you can only view the configurations and the statistics. You cannot configure anything. However, there are some operations you can run from the NSIP address of a cluster node. For more information, see Operations Supported on Individual Nodes.

Can I disable configuration propagation among cluster nodes?

No, you cannot explicitly disable the propagation of cluster configurations among cluster nodes. However, during a software upgrade or downgrade, a version mismatch error can automatically disable configuration propagation.

Can I change the NSIP address or change the NSVLAN of a Citrix ADC appliance when it is a part of the cluster?

No. To make such changes you must first remove the appliance from the cluster, perform the changes, and then add the appliance to the cluster.

Does the Citrix ADC cluster support L2 and L3 VLANs?

Yes. A cluster supports VLANs between cluster nodes. The VLANs must be configured on the cluster IP address.

  • L2 VLAN. You can create a Layer 2 VLAN by binding interfaces that belong to different nodes of the cluster.
  • L3 VLAN. You can create a Layer 3 VLAN by binding IP addresses that belong to different nodes of the cluster. The IP addresses must belong to the same subnet. Make sure that one of the following criteria is satisfied. Otherwise, the L3 VLAN bindings can fail.

    • All nodes have an IP address on the same subnet as the one bound to the VLAN.
    • The cluster has a striped IP address and the subnet of that IP address is bound to the VLAN.

When you add a node to a cluster that has only spotted IP addresses, synchronization happens before the spotted IP addresses are assigned to that node. In such cases, L3 VLAN bindings can be lost. To avoid this loss, either add a striped IP address or add the L3 VLAN bindings on the NSIP of the newly added node.
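The following is a minimal sketch of creating an L2 VLAN and an L3 VLAN from the cluster IP address; the VLAN IDs, interfaces, and IP addresses are placeholders:

add vlan 10
bind vlan 10 -ifnum 0/1/2 1/1/2

add vlan 20
add ns ip 192.0.2.10 255.255.255.0 -type SNIP
bind vlan 20 -IPAddress 192.0.2.10 255.255.255.0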

How can I configure SNMP on a Citrix ADC cluster?

SNMP monitors the cluster, and all the nodes of the cluster, in the same way that it monitors a standalone appliance. The only difference is that SNMP on a cluster must be configured through the cluster IP address. When hardware-specific traps are generated, two more varbinds are included to identify the node of the cluster: the node ID and the NSIP address of the node.
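For example, a minimal sketch of configuring an SNMP community and a trap destination through the cluster IP address; the community name and destination address are placeholders:

add snmp community public GET
add snmp trap generic 10.102.29.3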

How can I gather technical support data for a cluster?

The Citrix ADC appliance provides the show techsupport -scope cluster command, which extracts configuration data, statistical information, and logs of all the cluster nodes. Run this command on the cluster IP address.

The output of this command is saved in a file named collector_cluster_<nsip_CCO>_P_<date-timestamp>.tar.gz which is available in the /var/tmp/support/cluster/ directory of the configuration coordinator.

Send this archive to the technical support team to debug the issue.

Can I use striped IP addresses as the default gateway of servers?

Yes. In cluster deployments, make sure the default gateway of the server points to a striped IP address (if you are using a Citrix ADC-owned IP address). For example, in the case of LB deployments with USIP enabled, the default gateway must be a striped SNIP address.

Can I view the routing configurations of a specific cluster node from the cluster IP address?

Yes. You can view and clear the configurations specific to a node by specifying the owner node while entering the VTYSH shell.

For example, to view the output of a command on nodes 0 and 1, the command is as follows:

> vtysh
ns# owner-node 0 1
ns(node-0 1)# show cluster state
ns(node-0 1)# exit-cluster-node
ns#

How can I specify the node for which I want to set the LACP system priority?

Note

Supported from NetScaler 10.1 onwards.

In a cluster, you must specify the node as the owner node by using the ownerNode parameter of the set lacp command.

For example, to set the LACP system priority for a node with ID 2:

set lacp -sysPriority 5 -ownerNode 2

How are IP tunnels configured in a cluster setup?

Note

Supported from NetScaler 10.1 onwards.

Configuring IP tunnels in a cluster is the same as on a standalone appliance. The only difference is that in a cluster setup, the local IP address must be a striped SNIP address.
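For example, a sketch of creating an IP tunnel from the cluster IP address, assuming 10.102.29.65 is a striped SNIP address; the tunnel name and remote endpoint are placeholders:

add ipTunnel cluster_tun 203.0.113.20 255.255.255.255 10.102.29.65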

How can I add a failover interface set (FIS) on the nodes of a Citrix ADC cluster?

Note

Supported from NetScaler 10.5 onwards.

On the cluster IP address, specify the ID of the cluster node on which the FIS must be added, by using the following command:

add fis <name> -ownerNode <nodeId>

Notes

  • The FIS name for each cluster node must be unique.
  • A cluster LA channel can be added to a FIS. Make sure that the cluster LA channel has a local interface as a member interface.
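For example, a minimal sketch of adding a FIS on node 2 and binding one of that node's interfaces to it; the FIS name and interface are placeholders:

add fis fis_node2 -ownerNode 2
bind fis fis_node2 2/1/3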

For more information on FIS, see Configuring failover interface set.

How are net profiles configured in a cluster setup?

Note

Supported from NetScaler 10.5 onwards.

You can bind spotted IP addresses to a net profile. This net profile can then be bound to a spotted load balancing virtual server or service (that is, one defined using a node group). If the following recommendations are not followed, the net profile configurations are not honored and the USIP/USNIP settings are used:

  • If the strict parameter of the node group is set to Yes, the net profile must contain a minimum of one IP address from each node group member.
  • If the strict parameter of the node group is set to No, the net profile must include at least one IP address from each of the cluster nodes.
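For example, a minimal sketch that groups spotted SNIP addresses into an IP set, binds the IP set to a net profile, and attaches the net profile to a load balancing virtual server; all names and addresses are placeholders:

add ipset ipset_spotted
bind ipset ipset_spotted 10.102.29.71
bind ipset ipset_spotted 10.102.29.72
add netProfile np_spotted -srcIP ipset_spotted
set lb vserver lb_vs1 -netProfile np_spotted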

How can I configure WIonNS in a cluster setup?

Note

Supported from NetScaler 11.0 Build 62.x onwards.

To use WIonNS on a cluster, you must do the following:

  1. Make sure that the Java package and the WI package are present in the same directory on all the cluster nodes.
  2. Create a load balancing virtual server that has persistency configured.
  3. Create services with IP addresses as the NSIP address of each of the cluster nodes that you want to serve WI traffic. This step can only be configured using the Citrix ADC CLI.
  4. Bind the services to the load balancing virtual server.

Note

If you are using WIonNS over a VPN connection, make sure that the load balancing virtual server is set as WIHOME.
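The following is a minimal sketch of steps 2 through 4; the virtual server address, the NSIP addresses of the cluster nodes, and the assumed WI port of 8080 are placeholders and must match your deployment:

add lb vserver wi_vs HTTP 10.102.29.200 80 -persistenceType SOURCEIP
add service wi_svc_node0 10.102.29.60 HTTP 8080
add service wi_svc_node1 10.102.29.70 HTTP 8080
bind lb vserver wi_vs wi_svc_node0
bind lb vserver wi_vs wi_svc_node1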

Can the cluster LA channel be used for management access?

No. Management access to a cluster node must not be configured on a cluster LA channel (for example, CLA/1) or its member interfaces. When the node is INACTIVE, the corresponding cluster LA interface is marked as powered down and therefore loses management access.

How do cluster nodes communicate with each other, and what types of traffic go through the backplane?

A backplane is a set of interfaces in which one interface of each node is connected to a common switch, which is called the cluster backplane switch. The backplane is used for internode communication, and the following types of traffic go through it:

  • Node to Node Messaging (NNM)
  • Steered traffic
  • Configuration propagation and synchronization

Each node of the cluster uses a special MAC address to communicate with other nodes through the backplane. The cluster special MAC is of the form 0x02 0x00 0x6F <cluster_id> <node_id> <reserved>, where <cluster_id> is the cluster instance ID and <node_id> is the node number of the Citrix ADC appliance that is added to the cluster.

Note

The traffic that is handled by the backplane adds negligible CPU overhead.

What gets routed over the GRE tunnel for the Layer 3 cluster?

Only steered data traffic goes over the GRE tunnel. The packets are steered through the GRE tunnel to the node on the other subnet.

How are Node to Node Messaging (NNM) and heartbeat messages exchanged, and how are they routed?

NNM, heartbeat messages, and the cluster protocol constitute non-steered traffic. These messages are not sent through the tunnel; they are routed directly.

What are the MTU recommendations when Jumbo frames are enabled for layer 3 cluster tunneled traffic?

The following are the recommendations for using a jumbo MTU over the GRE tunnel in a Layer 3 cluster:

  • Configure the jumbo MTU among cluster nodes across the L3 path to accommodate the GRE tunnel overhead.
  • Fragmentation does not happen for full-sized packets that must be steered.
  • Steering of traffic continues to work even if jumbo frames are not allowed, but with more overhead due to fragmentation.

How is the global hash key generated and shared across all nodes?

The rsskey for a standalone appliance is generated at boot time. In a cluster setup, the first node holds the rsskey of the cluster. Every new node joining the cluster synchronizes with this rsskey.

Why is the set rsskeytype -rsskey symmetric command needed for *:*, USIP on, useproxyport off topologies?

This requirement is not specific to a cluster; it applies to a standalone appliance as well. With USIP on and use proxy port disabled, a symmetric rsskey reduces both core-to-core (C2C) steering and node-to-node steering.

What factors contribute to a change of the CCO node?

The first node added to form a cluster setup becomes the configuration coordinator (CCO) node. The following factors can cause the CCO node to change in a cluster setup:

  • When the current CCO node is removed from the cluster setup
  • When the current CCO node crashes
  • When the priority of the non-CCO node is changed (lower priority has higher precedence)
  • In dynamic conditions, such as changes in network reachability between the nodes
  • When there is a change in node states: active, spare, and passive. Active nodes are preferred as CCO.
  • When there is a change in configuration; the node with the latest configuration is preferred as CCO.

Can I form a cluster with nodes of different hardware platforms?

Yes. The NetScaler appliance supports a heterogeneous cluster, which can have a combination of different platforms in the same cluster. For more information, see Support for heterogeneous cluster.
