
Migrating an HA setup to a cluster setup

Migrating an existing high availability (HA) setup to a cluster setup requires you to first remove the two NetScaler instances from the HA setup and create a backup of the HA configuration file. You can then use these instances to create a cluster and apply the backed-up configuration to the cluster.

Note

  • Before applying the configuration from the backed-up HA configuration file to the cluster, you must modify it to make it cluster compatible.

The preceding approach is a basic migration solution that results in downtime for the deployed application. Therefore, use it only in deployments where application availability is not a concern.

However, in most deployments, the availability of the application is of paramount importance. For such cases, use the approach that migrates an HA setup to a cluster setup with minimal downtime. In this approach, the secondary instance is first removed from the HA setup and used to create a single-node cluster. After the cluster becomes operational and serves traffic, the primary instance of the HA setup is added to the cluster.

To convert an HA setup to a cluster setup by using the CLI

Consider the example of an HA setup with a primary instance, NS1 (198.51.100.131), and a secondary instance, NS2 (198.51.100.132).

  1. Make sure that the configuration of the HA pair is stable.
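
     For example, you can check the HA state and the configuration synchronization status from the CLI of either instance before you proceed. The command is standard; the output depends on your build.

      > show ha node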

  2. Log on to the secondary instance, go to the shell, and create a copy of the ns.conf file (for example, /nsconfig/ns_backup.conf). For the list of backup files supported in a cluster, see Back up a cluster setup.
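
     A minimal sketch of this step from the NS2 command line, using the backup file name from this example:

      > shell
      # cp /nsconfig/ns.conf /nsconfig/ns_backup.conf
      # exit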

  3. Log on to the secondary instance, NS2, and clear the configuration. This operation removes NS2 from the HA setup and makes it a standalone instance.

    > clear ns config full
    

    Note

    • This step is required to make sure that NS2 does not start owning VIP addresses, now that it is a standalone instance.
    • At this stage, the primary instance, NS1, is still active and continues to serve traffic.
  4. Create a cluster on NS2 (now no longer a secondary instance) and configure it as a PASSIVE node.

     > add cluster instance 1
    
     > add cluster node 0 198.51.100.132 -state PASSIVE -backplane 0/1/1
    
     > add ns ip 198.51.100.133 255.255.255.255 -type CLIP
    
     > enable cluster instance 1
    
     > save ns config
    
     > reboot -warm
    
  5. Modify the backed-up configuration file as follows:

    1. (Optional) Remove the features that are not supported on a cluster. For the list of unsupported features, see NetScaler Features Supported by a Cluster. If you do not perform this step, the unsupported commands might fail when you apply the configuration from the backed-up file.

    2. Remove the configurations that include interfaces, or update the interface names from the c/u convention to the n/c/u convention.

      Example

      > add vlan 10 -ifnum 0/1
      

      must be changed to

      > add vlan 10 -ifnum 0/0/1 1/0/1
      
    3. The backup configuration file can have SNIP addresses. These addresses are striped across all the cluster nodes by default. It is recommended that you add spotted IP addresses for each node instead.

      Example

      > add ns ip 1.1.1.1 255.255.255.0 -ownerNode 0
      
      > add ns ip 1.1.1.2 255.255.255.0 -ownerNode 1
      
    4. Update the host name to specify the owner node.

      Example

      > set ns hostname ns0 -ownerNode 0
      
      > set ns hostname ns1 -ownerNode 1
      
    5. Change all other relevant networking configurations that depend on the spotted IP addresses (for example, L3 VLANs, RNAT configurations that use SNIPs as the NAT IP, and INAT rules that refer to SNIPs or MIPs).
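
      For example, if an RNAT rule in the backup file uses the old HA SNIP as the NAT IP, it can be updated to use the new spotted SNIPs. The subnet and the classic set rnat syntax shown here are illustrative only; adjust them to your deployment and NetScaler build.

      > set rnat 192.0.2.0 255.255.255.0 -natIP 1.1.1.1 1.1.1.2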

  6. On the cluster, do the following:

    1. Make the topological changes to the cluster by connecting the cluster backplane, the cluster link aggregation channel, and so on.

    2. Apply configuration from the modified file to the configuration coordinator through the cluster IP address.

      > batch -f /nsconfig/ns_backup.conf -o /nsconfig/batch_output
      
      Note:
      
      The output of the commands is saved in the batch_output file. You must review the output file to ensure that the necessary commands run without errors.
      
    3. Configure external traffic distribution mechanisms like ECMP or cluster link aggregation.
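
      For example, a static cluster link aggregation channel can be created through the cluster IP address. The interface names below are placeholders; include only interfaces of nodes that are currently part of the cluster and extend the channel after more nodes join.

      > add channel CLA/1 -ifnum 0/1/2 0/1/3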

    Note:

    Ensure that you configure the necessary spotted configuration on the cluster nodes. For more information on the list of spotted configuration, see List of spotted configuration and Supportability matrix for NetScaler cluster.

  7. Switch the traffic from the HA setup to the cluster.

    1. Log on to the primary instance, NS1, and disable all the data interfaces on it.

      > disable interface <interface_id>
      
    2. Log on to the cluster IP address and configure NS2 as an ACTIVE node.

      > set cluster node 0 -state ACTIVE
      

    Note

    There might be minimal downtime between disabling the interfaces and making the cluster node active.

  8. Ensure that the cluster and all the services are up.
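
     For example, you can verify the cluster health from the cluster IP address. The commands are standard; the output depends on your configuration.

      > show cluster instance
      
      > show cluster node
      
      > show lb vserver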

  9. Log on to the primary instance, NS1, and remove it from the HA setup.

    1. Clear the configuration. This operation removes NS1 from the HA setup and makes it a standalone instance.

      > clear ns config full
      
    2. Enable all the data interfaces.

      > enable interface <interface_id>
      
  10. Add NS1 to the cluster.

    1. Log on to the cluster IP address and add NS1 to the cluster.

      > add cluster node 1 198.51.100.131 -state PASSIVE -backplane 1/1/1
      
    2. Log on to NS1 and join it to the cluster by sequentially running the following commands:

      > join cluster -clip 198.51.100.133 -password nsroot
      
      > save ns config
      
      > reboot -warm
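
      Optionally, after NS1 restarts, log on to the cluster IP address and verify that the node has joined and is healthy before you continue:

      > show cluster node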
      
  11. Log on to NS1 and perform the required topological and configuration changes.

    Note:

    Ensure that you configure the necessary spotted configuration on the cluster nodes. For more information on the list of spotted configuration, see List of spotted configuration and Supportability matrix for NetScaler cluster.

  12. Log on to the cluster IP address and set NS1 as an ACTIVE node.

      > set cluster node 1 -state ACTIVE
    