Adding a node to the cluster
You can seamlessly scale a cluster up to a maximum of 32 nodes. When a Citrix ADC appliance is added to the cluster, its configurations are cleared (by internally running the clear ns config -extended command). The SNIP addresses, the MTU settings of the backplane interface, and all VLAN configurations (except the default VLAN and the NSVLAN) are also cleared from the appliance.
The cluster configurations are then synchronized on this node. There can be an intermittent drop in traffic while the synchronization is in progress.
Before you add a Citrix ADC appliance to a cluster:
- Set up the backplane interface for the node. See the preceding topic.
- Verify that the licenses available on the appliance match those available on the configuration coordinator. The appliance is added only if the licenses match.
- If you want the NSVLAN on the cluster, make sure that the NSVLAN is created on the appliance before it is added to the cluster.
- Citrix recommends that you add the node as a passive node. Then, after joining the node to the cluster, complete the node specific configuration from the cluster IP address. Run the force cluster sync command if the cluster has only spotted IP addresses, has L3 VLAN binding, or has static routes.
- When an appliance with a preconfigured link aggregate (LA) channel is added to a cluster, the LA channel continues to exist in the cluster environment. The LA channel is renamed from LA/x to nodeId/LA/x, where LA/x is the LA channel identifier.
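The prerequisite steps above can be sketched as CLI commands. In this sketch, VLAN ID 100 and interface 1/2 are illustrative values, not requirements; the NSVLAN command runs on the appliance before it joins, and force cluster sync runs from the cluster IP address after the node has joined:

```
set ns config -nsvlan 100 -ifnum 1/2 -tagged YES
force cluster sync
```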
To add a node to the cluster by using the CLI
- Log on to the cluster IP address and, at the command prompt, do the following:
Add the appliance (for example, 10.102.29.70) to the cluster.
For an L3 cluster:
- The nodegroup parameter must be set to a nodegroup that has nodes of the same network.
  - If this node belongs to the same network as the first node that was added, configure the nodegroup that was used for that node.
  - If this node belongs to a different network, create a nodegroup and bind this node to it.
- The backplane parameter is mandatory for nodes that are associated with a nodegroup that has more than one node, so that the nodes within the network can communicate with each other.
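For an L3 cluster, a nodegroup for the new network can be created first and the node bound to it when it is added. The nodegroup name NG1, node ID 2, IP address 10.102.29.80, and backplane interface 2/1/1 below are all illustrative:

```
add cluster nodegroup NG1
add cluster node 2 10.102.29.80 -state PASSIVE -backplane 2/1/1 -nodegroup NG1
```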
add cluster node <nodeId> <IPAddress> -state <state> -backplane <interface_name> -nodegroup <name> <!--NeedCopy-->
Example:
add cluster node 1 10.102.29.70 -state PASSIVE -backplane 1/1/1 <!--NeedCopy-->
Save the configuration.
save ns config <!--NeedCopy-->
Log on to the newly added node (for example, 10.102.29.70) and join the node to the cluster.
join cluster -clip <ip_addr> -password <password> <!--NeedCopy-->
Example:
join cluster -clip 10.102.29.61 -password nsroot <!--NeedCopy-->
Run the following commands on the cluster IP address (CLIP).
Bind VLAN to an interface
bind vlan <id> -ifnum <interface_name> <!--NeedCopy-->
bind vlan 1 -ifnum 2/1/2 <!--NeedCopy-->
Add spotted IP address to the newly added node
add ns ip <IpAddress> <netmask> -ownerNode <positive_integer> <!--NeedCopy-->
add ns ip 188.8.131.52 255.0.0.0 -ownerNode 2 <!--NeedCopy-->
Verify VLAN on NSIP
show vlan <id> <!--NeedCopy-->
show vlan 1 <!--NeedCopy-->
Perform the following configurations:
- If the node is added to a cluster that has only spotted IPs, the configurations are synchronized before the spotted IP addresses are assigned to that node. In such cases, L3 VLAN bindings can be lost. To avoid this loss, either add a striped IP or add the L3 VLAN bindings.
- Define the required spotted configurations.
- Set the MTU for the backplane interface.
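Setting the backplane MTU can be sketched with the set interface command; the interface name 1/1/1 and the MTU value 1600 here are illustrative and must match your own backplane interface and network:

```
set interface 1/1/1 -mtu 1600
```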
Save the configuration.
save ns config <!--NeedCopy-->
Warm reboot the appliance.
reboot -warm <!--NeedCopy-->
After the node is UP and sync is successful, change RPC credentials for the node from the cluster IP address. For more information about changing an RPC node password, see Change an RPC node password.
set rpcNode <node-NSIP> -password <passwd> <!--NeedCopy-->
Example:
set rpcNode 192.0.2.4 -password mypassword <!--NeedCopy-->
Set the cluster node to Active.
set cluster node <nodeID> -state active <!--NeedCopy-->
Example:
set cluster node 1 -state active <!--NeedCopy-->
To add a node to the cluster by using the GUI
- Log on to the cluster IP address.
- Navigate to System > Cluster > Nodes.
- In the details pane, click Add to add the new node (for example, 10.102.29.70).
- In the Create Cluster Node dialog box, configure the new node. For a description of a parameter, hover the mouse cursor over the corresponding text box.
- Click Create. When prompted to perform a warm reboot, click Yes.
- After the node is UP and sync is successful, change RPC credentials for the node from the cluster IP address. For more information about changing an RPC node password, see Change an RPC node password.
- Navigate to System > Cluster > Nodes > Edit.
- Modify the State to ACTIVE and confirm.
To join a previously added node to the cluster by using the GUI
If you have used the command line to add a node to the cluster, but have not joined the node to the cluster, you can use the following procedure.
When a node joins the cluster, it takes over its share of the cluster's traffic, and existing connections might therefore be terminated.
- Log on to the node that you want to join to the cluster (for example, 10.102.29.70).
- Navigate to System > Cluster.
- In the details pane, under Get Started, click the Join Cluster link.
- In the Join to existing cluster dialog box, enter the cluster IP address and the nsroot password of the configuration coordinator. For a description of a parameter, hover the mouse cursor over the corresponding text box.
- Click OK.