NetScaler ingress controller

Policy-based routing support for multiple Kubernetes clusters

When you use a single NetScaler to load balance multiple Kubernetes clusters, the NetScaler Ingress Controller adds static routes to the pod CIDR networks. These routes establish network connectivity between the Kubernetes pods and NetScaler. However, when the pod CIDRs of different clusters overlap, route conflicts can occur. NetScaler supports policy-based routing (PBR) to address such conflicts. With PBR, routing decisions are based on criteria that you specify; typically, you specify a next hop to which the selected packets are sent. In a multi-cluster Kubernetes environment, PBR is implemented by reserving a subnet IP address (SNIP) for each Kubernetes cluster or NetScaler Ingress Controller. Using a net profile, the SNIP is bound to all service groups created by the same NetScaler Ingress Controller, so all traffic generated from service groups belonging to the same cluster uses that SNIP as the source IP address.

The following is a sample topology where PBR is configured for two Kubernetes clusters that are load balanced using a NetScaler VPX or MPX.

Figure: PBR configuration

Configure PBR using the NetScaler Ingress Controller

To configure PBR, you need one or more SNIPs per Kubernetes cluster. You can provide the SNIP values either through the NS_SNIPS environment variable in the NetScaler Ingress Controller deployment YAML file during bootup or through a ConfigMap.

Perform the following steps to deploy the NetScaler Ingress Controller and configure PBR using ConfigMap.

  1. Download the citrix-k8s-ingress-controller.yaml using the following command:

    wget https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/deployment/baremetal/citrix-k8s-ingress-controller.yaml
    
  2. Edit the NetScaler Ingress Controller YAML file:

      - Specify the values of the environment variables as per your requirements. For more information on specifying the environment variables, see the [Deploy NetScaler Ingress Controller](/en-us/netscaler-k8s-ingress-controller/cic-yaml.html) documentation.
    
  3. Deploy the NetScaler Ingress Controller on each cluster using the edited YAML file and the following command:

    kubectl create -f citrix-k8s-ingress-controller.yaml
    
  4. Create a YAML file, cic-configmap.yaml, that defines a ConfigMap with the required SNIP values.

    The following is an example of a ConfigMap with the SNIP values:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: pbr-test
      namespace: default
    data:
      NS_SNIPS: '["192.0.2.2", "192.0.2.1"]'
    
  5. Apply the ConfigMap.

     kubectl create -f cic-configmap.yaml
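
    To confirm that the ConfigMap was created with the expected NS_SNIPS values, you can inspect it with kubectl:

     kubectl get configmap pbr-test -n default -o yaml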
    

You can also specify the SNIPs using the NS_SNIPS environment variable in the NetScaler Ingress Controller deployment YAML file.

     - name: "NS_SNIPS"
       value: '["192.0.2.2", "192.0.2.1"]'
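
For context, the following is a minimal sketch of where NS_SNIPS sits among the other environment variables in the NetScaler Ingress Controller deployment YAML file. The NS_IP value is a placeholder, and the nslogin secret name is an assumption based on the sample deployment file; use the values from your own deployment.

     env:
     - name: "NS_IP"
       value: "192.0.2.10"
     - name: "NS_USER"
       valueFrom:
         secretKeyRef:
           name: nslogin
           key: username
     - name: "NS_PASSWORD"
       valueFrom:
         secretKeyRef:
           name: nslogin
           key: password
     - name: "NS_SNIPS"
       value: '["192.0.2.2", "192.0.2.1"]'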

The following are the usage guidelines for configuring SNIPs using a ConfigMap:

  • Only SNIPs can be added or removed using the ConfigMap. The feature-node-watch argument can be enabled only during bootup.

  • When you add a ConfigMap:

    • If SNIPs are already provided using the environment variable during bootup and you want to retain them, you must specify those SNIPs in the ConfigMap along with the new SNIPs (see the example after this list).
  • When you delete the ConfigMap:

    • All PBRs generated from the ConfigMap SNIPs are deleted. If SNIPs are provided through the NS_SNIPS environment variable, PBRs for those IP addresses are added.

    • If SNIPs are not provided using the NS_SNIPS environment variable, static routes are added since feature-node-watch is enabled.
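
For example, suppose the NetScaler Ingress Controller was started with the NS_SNIPS environment variable set to '["192.0.2.1"]' and you want to retain that SNIP while adding a new one (192.0.2.3 is a hypothetical new SNIP used here for illustration). The updated ConfigMap must list both values; this sketch assumes the pbr-test ConfigMap shown earlier:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: pbr-test
      namespace: default
    data:
      NS_SNIPS: '["192.0.2.1", "192.0.2.3"]'

Apply the updated ConfigMap:

    kubectl apply -f cic-configmap.yaml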

Validate PBR configuration on a NetScaler after deploying the NetScaler Ingress Controller

This validation example uses a two-node Kubernetes cluster with the NetScaler Ingress Controller deployed and the ConfigMap shown earlier, which provides two SNIPs.


You can verify that the NetScaler Ingress Controller adds the following configurations to the ADC:

  1. An IP set of all the NS_SNIPS values provided by the ConfigMap is added.


  2. A net profile is added with srcIP set to the IP set.


  3. The service group added by the NetScaler Ingress Controller has the net profile bound to it.


  4. Finally, the NetScaler Ingress Controller adds PBRs.


    Here:

    • The number of PBRs is equal to (number of SNIPs) * (number of Kubernetes nodes). In this case, four (2 * 2) PBRs are added.
    • The srcIP of each PBR is one of the NS_SNIPS values provided to the NetScaler Ingress Controller through the ConfigMap. The destIP is the CNI overlay subnet range of the Kubernetes node.
    • The nextHop is the IP address of the Kubernetes node.
  5. You can also use the logs of the NetScaler Ingress Controller to validate the configuration; see the command sketch after this list.
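
If you prefer the command line to the NetScaler GUI, the following is a sketch of NetScaler CLI and kubectl commands that display the same entities. The pod name and namespace are placeholders from your deployment, and the trailing comments are explanatory only.

    show ipset            # IP set created from the ConfigMap SNIPs
    show netProfile       # net profile with srcIP set to the IP set
    show serviceGroup     # service groups with the net profile bound
    show ns pbr           # PBRs (srcIP = SNIP, destIP = pod CIDR, nextHop = node IP)
    kubectl logs <netscaler-ingress-controller-pod> -n <namespace>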

Configure PBR using the node controller

You can also configure PBR for multiple Kubernetes clusters using the node controller. When you use a single NetScaler to load balance multiple Kubernetes clusters with the node controller handling the networking, the static routes that it adds to forward packets to the IP address of the VXLAN tunnel interface may cause route conflicts. To support PBR, the node controller must work in conjunction with the NetScaler Ingress Controller, which binds the net profile to the service groups.

Perform the following steps to configure PBR using the node controller:

  1. While starting the node controller, provide the CLUSTER_NAME as an environment variable. Specifying this variable indicates that it is a multi-cluster deployment and the node controller configures PBR instead of static routes.

    Example:

    - name: CLUSTER_NAME 
      value: "dev-cluster"
    
  2. While deploying the NetScaler Ingress Controller, provide the CLUSTER_NAME as an environment variable. This value must be the same as the value provided in the node controller.

    Example:

    - name: CLUSTER_NAME
      value: "dev-cluster"
    
  3. Specify the argument --enable-cnc-pbr as True in the arguments section of the NetScaler Ingress Controller deployment YAML file. This argument informs the NetScaler Ingress Controller that the node controller is configuring PBR on the NetScaler.

    Example:

    args: 
     - --enable-cnc-pbr True          
    

Note:

  • The value provided for CLUSTER_NAME in the node controller and NetScaler Ingress Controller deployment files should match when they are deployed in the same Kubernetes cluster.

  • The CLUSTER_NAME is used while creating the net profile entity and binding it to service groups on NetScaler VPX or MPX.
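
For reference, the following is a minimal sketch of how these settings sit in the two deployment YAML files, using the dev-cluster value from the steps above. Container names, images, and the other environment variables are omitted and depend on your deployment.

    # NetScaler node controller deployment: env section
    env:
    - name: CLUSTER_NAME
      value: "dev-cluster"

    # NetScaler Ingress Controller deployment: env and args sections
    env:
    - name: CLUSTER_NAME
      value: "dev-cluster"
    args:
    - --enable-cnc-pbr True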

Validate PBR configuration on a NetScaler after deploying the node controller

This validation example uses a two-node Kubernetes cluster with the node controller and the NetScaler Ingress Controller deployed.

You can verify that the node controller adds the following configurations to the ADC (a CLI sketch follows this list):

  1. A net profile is added with srcIP set to the SNIP that the node controller added while creating the VXLAN tunnel network between NetScaler and the Kubernetes nodes.


  2. The NetScaler Ingress Controller binds the net profile to the service groups that it creates.


  3. Finally, the node controller adds PBRs.


    Here:

    • The number of PBRs is equal to the number of Kubernetes nodes. In this case, two PBRs are added.
    • The srcIP of each PBR is the SNIP added by the node controller in the tunnel network. The destIP is the CNI overlay subnet range of the Kubernetes node. The nextHop is the IP address of the Kubernetes node's VXLAN tunnel interface.

      Note:

      The node controller adds PBRs instead of static routes. The rest of the VXLAN and bridge table configuration remains the same. For more information, see the node controller configuration documentation.
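
As in the ingress controller case, the following is a sketch of NetScaler CLI commands you can use to inspect these entities from the command line; the VXLAN and bridge table commands are included because the node controller also sets up the tunnel network, and the trailing comments are explanatory only.

    show netProfile       # net profile with srcIP set to the tunnel SNIP
    show serviceGroup     # service groups with the net profile bound
    show ns pbr           # PBRs with nextHop set to the node VXLAN tunnel interface IPs
    show vxlan            # VXLAN created by the node controller
    show bridgetable      # bridge table entries for the tunnel network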
