Beginner’s guide

Overview

This guide uses a sample deployment to show how to load balance north-south traffic to a web application deployed in Kubernetes, and how to load balance east-west traffic between the frontend and backend services of the web application in the Kubernetes cluster.

The sample deployment consists of the following components:

  • A Kubernetes cluster with two nodes: a master node and a worker node.

  • A single instance of NetScaler CPX to load balance east-west traffic in the Kubernetes cluster.

  • A single instance of NetScaler VPX appliance to load balance ingress traffic to the web application.

  • A single instance of the NetScaler MAS server to use as an Ingress controller.

  • A frontend service (hotdrinks.beverages.com) for the web application.

  • Two backend services (coffee and tea) for the web application.

The following table lists the component versions:

Component                  Version
Kubernetes                 v1.11.1
NetScaler CPX              12.0.58.15
NetScaler VPX appliance    12.0.58.15

Prerequisites

Copy the fingerprint of the NetScaler MAS server

In NetScaler Management and Analytics System (MAS), do the following to copy the fingerprint of the NetScaler MAS server:

  1. Log on to NetScaler MAS.

  2. Navigate to System > System Administration, and click View SSL Certificate.

  3. On the Certificate Details page, from the Fingerprint field, copy the fingerprint of the NetScaler MAS server.

    Fingerprint
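
Alternatively, you can retrieve the fingerprint from a shell. The following is a minimal sketch, assuming the NetScaler MAS GUI is reachable on port 443 and that the GUI displays the SHA1 fingerprint:

# Print the SHA1 fingerprint of the certificate served by NetScaler MAS
# (replace 10.106.73.237 with your NetScaler MAS server IP address)
echo | openssl s_client -connect 10.106.73.237:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1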

Configure Kubernetes cluster details in NetScaler MAS

To use NetScaler MAS as an Ingress controller for a Kubernetes environment, you must integrate NetScaler MAS and the Kubernetes cluster by configuring the Kubernetes cluster details in NetScaler MAS.

Prerequisites

Ensure that you have:

  • The kube-apiserver URL. Run the following command on the Kubernetes master node to obtain the kube-apiserver URL:

     root@ubuntu:~# kubectl cluster-info
     Kubernetes master is running at https://10.102.29.101:6443
     KubeDNS is running at https://10.102.29.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    
  • Authentication credentials to authenticate with Kubernetes. You can use basic authentication, token-based authentication, or client-based authentication.

    If you want to use client-based authentication, you need the client certificate and client key details from Kubernetes. The client certificate and client key details are available in the /etc/kubernetes/admin.conf file. Run the following commands to obtain the client certificate and client key details respectively:

     root@ubuntu:~# cat /etc/kubernetes/admin.conf | grep client-certificate
     client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJSlhueXNZSUJsa1V3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T0RBNE1UQXhNakE0TWpsYUZ3MHhPVEE0TVRBeE1qQTRNelphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXp3R3VCemduaUVlTmt2cjYKQjY2RWl2bHN5Vll2SXdSNk5YRm91YXZ3SUtXYkNiLzl3bk5sWHNkY2pMYlZ1M09kOHk4SXZEVlpjZFJVL2RXbQp2ampXa3hzSENJRW12ODJZNE5VeW56V3ZxeTVibDdFeS9pb1Nsb1JGU0wwa1oyZ0o3RkgwbEhSU2IrOEMxVlRWCi9zTDhTMlRseHJKcUQ0RHlNWS9IeDlwMFBIWVN3Y1dXRGlWckNtbnpPSHlIK1MyVURjNllTc0xvYXlxclV2Y08Ka05TMjJPZ3hLUGJRUTFZMStLcFNLODBGR0lGdFJOWFEzaFlrNnNSRk5PMnI1NU5TVjhTUE4yYTAzWGFOTERxVgpWUUpyUGhOVE51VUw3WkNlbmxubExZUXRnVE5RMkQ4TmJ5cTJXdWxaU2xmZTJScmNXZUt2NHNYanN5MDZGK3dpClBiMWkrd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKNlZjSGxJVTMzUVhWWVdDZTFONElITmhNNWxVem5mZnN4egpYSy9oTkc2V0Q3eVhscVhRVEpUalhPbnRISFVvY0Q3UGRUaG5tYUV5a0NHTDdMZVZmVEtjU3BsdnVrbXhZdDJkClh5VHFYeStPdmszRHMwUmY1QnNYRXBxTGN1VDE1Q2xWYW5MaEpuOGdxYkIxbk12bkNIMnFXT3VjNUR4cVNGMFYKYXdxOVZiekE5bkRvdzNLSml2d0ZxWWNpaENxSHQ1cENyQ2VOaFJHTGZnaEc0UUJDRENudnRML1NlR2x5WGMxWgozQWQ4eDZVSmxGS1ZXaklaWThKTGRzZjdicUxyRmswVVVrMHdGeUtrY1RtY1BNaTE1UEY5aDgwOHluLzU3Sy9ICmNHMVdadFE0U3R5czRyTUFwc0RXcStWZXVzSlhQd1VNVVo4TEtwc0sybFF5U2tBdWZCUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    
     root@ubuntu:~# cat /etc/kubernetes/admin.conf | grep client-key
     client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBendHdUJ6Z25pRWVOa3ZyNkI2NkVpdmxzeVZZdkl3UjZOWEZvdWF2d0lLV2JDYi85CnduTmxYc2RjakxiVnUzT2Q4eThJdkRWWmNkUlUvZFdtdmpqV2t4c0hDSUVtdjgyWTROVXlueld2cXk1Ymw3RXkKL2lvU2xvUkZTTDBrWjJnSjdGSDBsSFJTYis4QzFWVFYvc0w4UzJUbHhySnFENER5TVkvSHg5cDBQSFlTd2NXVwpEaVZyQ21uek9IeUgrUzJVRGM2WVNzTG9heXFyVXZjT2tOUzIyT2d4S1BiUVExWTErS3BTSzgwRkdJRnRSTlhRCjNoWWs2c1JGTk8ycjU1TlNWOFNQTjJhMDNYYU5MRHFWVlFKclBoTlROdVVMN1pDZW5sbmxMWVF0Z1ROUTJEOE4KYnlxMld1bFpTbGZlMlJyY1dlS3Y0c1hqc3kwNkYrd2lQYjFpK3dJREFRQUJBb0lCQUNwWUw1OHViM2ZERzBTNwpyald3RDFEV1lOaDJsc0hWQXFLNEJqSWs1OFBsM0djTUxQNS8ySGFnMVYrN2J0RWZmMm5sYnlZQXk4RXJMQStZCmlybFNxeUlBWDFud0FWc3UxVno0ZjVodHhQZUJUaDhqa2tqSGxuSFBzTlNHVEZJU3lDVGRSdWl2T3NYRzRJOS8KQVI5U0I0WHNwOHdUWnZxdzU5b1hqVWhtZVd4OFpDby9LRmdVcTBUQkJIQ1lDYlk1LzhWOFVjaU5tOHN1YVE0LwowQWhnMjV3a0l2S3kwZWp3bEpsYVZPK2doenlTeGQ4L0NPemc2YTJqYXBBeGczR05sS2diR2lVL3lHck1XVnY1Cjc1WmtGS0FRTm1IY296UDhaMGFLc3BFV3ZUZEVZVVFEaWQwUG5OSk9aRE9JQ2l4VUMvR2sycjhsL0tzaVpTSU4KempHeUFwRUNnWUVBMDZEQ08xMUx1THNVQURZUlVOM29FcmJoRndZR2RVKzlNc2pGVnNIeG9xTGE3QUVjQTZYTgpXS09zaHV4dktHMDlqTXJDWGJkNXBpL1dJWEErUytJVitobGVDVWdRV0dLWnFVQXhCak1VMlI1MGJtTWpSOE1wCjMyU3ZRZ1NmVGJvVThTZXNzajFvSVZEeG5IQk9PLzZwRWdVT09WVU5jUWFxR1NtLzhLcEduQjhDZ1lFQSttamQKYXlDN3NLMjlsRU5NTXBoSGFqU0Q0VmV3M3ZJamFZNVhMY2QxOXcwNnhpYmR1bnFVYUVaMThLTmM5NUsrN0NuSApoV2grenpDZDIwbWhJdTZyWm1zK0NQYm1sSDhNb0l1ZHlqOU55M3drdjIrWkw2bnVkT3lPeGpsVWJUMkZaNzJlCk1yTVdkQ2hFMWwySEVGZWlHUXo1enFtQ21kWlpwYTA2Z3NnTTNhVUNnWUFkOXBIcGs5RUh5NzBPTnBtSENKUTIKS2h4K2hRVGZFVFlwZlpHck1mU0RZV2w3cHNDUHA2Y0dXTTR4b0VJd3lCN0IwMmRubTNXbTJQa0piUG4xQm9LMApFV2xtQ1FUL2JwNXcvenl4c3dQTnBlazRRK01YNHdNSHRScTNUeTQ2OUJESkFDUU1iSE5VM0VBSk5VRnVieVVDCi95SS9iZEprWVZ3dUNlSTZNZkdqWXdLQmdRQ1dzREErWFU1VlBkaE50a25PVUpENU9tejZXQmpac1FEYWJvdkwKd3JJY1gxdTFEb0p6eTN3dlcrZHhUZjJPQmtMYVB6SVArQmdIZW93a0FDVDFyb1o2ZGFLNUprc1BwWHpseDk3RwpiRjNXUy9pWk13RU9DOGF4bWdFNURCcmdPaHRqbUZud3pKQ0FpaE1TcE9tNFRlUUFDeXp3emxVSFdsUk1QUGh1CjV3L0crUUtCZ0RzdEN2WVMzNVlBR3IrM25FR0Z4Y2cwdlpQdmtxVGxhUmpIRlcrZDhGYkNLWUZPQWRLUTk4TUwKcThTUzI1dVRqWUlITDJrS04vK0NnVUt2TUYxSFFNbzE0UmEzS2NTT2NoOFhCWitTazllRWRqL2dyaGRvd0FORgpkZFVBSEhMdzY5NWd6ZkZ3bkdTQWgyVWRZdTdVL3RxZlBOUGphRnJVbHVDc3NPaW95bmN5Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
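
    The values in the admin.conf file are base64-encoded PEM data. If your NetScaler MAS version expects the decoded PEM in the Client Certificate and Client Key fields (an assumption; paste the base64 data instead if that is what your version accepts), the following sketch extracts and decodes them:

     # Decode the client certificate and key into PEM files (file names are illustrative)
     grep 'client-certificate-data' /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d > client.crt
     grep 'client-key-data' /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d > client.key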
    

To configure Kubernetes cluster details in NetScaler MAS:

  1. Log on to the NetScaler MAS server GUI by using the administrator credentials. The default administrator credentials are nsroot/nsroot.

  2. Navigate to Orchestration > Container Orchestration.

  3. In the Container Orchestration pane, select Kubernetes Configuration and click Continue.

    Container Orchestration

  4. In the Cluster Settings section, enter the kube-apiserver URL in the API Server URL field. If you are using a digital certificate for communication, select the Server Certificate Validation check box, and enter the certificate passcode in the CA field.

  5. In the User Settings section, select one of the following authentication methods:

    • Basic Authentication: In the Username field, enter the username of the host machine.

    • Token Based Authentication: In the Token field, enter the token. (A sketch for retrieving a service account token follows these steps.)

    • Client Authentication: In the Client Certificate and Client Key fields, enter the client certificate and client key details respectively.

  6. Click OK.

    Container Orchestration Config
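
For token-based authentication, you can read the bearer token of an existing service account from its secret. The following is a minimal sketch, assuming the default service account in the kube-system namespace; any service account with sufficient privileges works:

root@ubuntu:~# kubectl -n kube-system get secret $(kubectl -n kube-system get sa default -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d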

Configure NetScaler CPX to load balance east-west traffic in Kubernetes

You can deploy NetScaler CPX instances as a daemon set in the Kubernetes cluster to load balance containerized applications in the cluster. When you deploy the NetScaler CPX instances as a daemon set, a NetScaler CPX instance is deployed as a pod on each node, and an instance is then automatically deployed as a pod on each new node that joins the Kubernetes cluster.

Before you deploy the NetScaler CPX instance, ensure that you create a service account for the NetScaler CPX instance on the master node. You can use the following YAML file to create a service account named cpx:

Sample: serviceaccount.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cpx
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "ingresses", "pods", "secrets"]
    verbs: ["*"]

  - apiGroups: ["extensions"]
    resources: ["ingresses", "ingresses/status"]
    verbs: ["*"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cpx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cpx
subjects:
- kind: ServiceAccount
  name: cpx
  namespace: default

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: cpx
  namespace: default

Deploy the YAML file to create the service account, cpx, using the kubectl create -f <yaml> command.

Sample:

kubectl create -f serviceaccount.yaml

Verify that the cpx service account is created by using the kubectl get sa command.

Sample:

root@ubuntu:~# kubectl get sa
NAME      SECRETS   AGE
cpx       1         2d
default   1         19d

To deploy a NetScaler CPX instance as a daemon set, you must write a YAML file or a JSON script that specifies the container type, the NetScaler CPX image file name, the NetScaler MAS server IP address, and the NetScaler MAS server fingerprint.

The following is a sample YAML file to deploy a NetScaler CPX instance:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: cpx
spec:
  template:
    metadata:
      name: cpx
      labels:
        app: cpx-daemon
      annotations:
        NETSCALER_AS_APP: "True"
    spec:
      serviceAccountName: cpx
      hostNetwork: true
      containers:
        - name: cpx
          image: "cpx:12.0-58.15"
          securityContext:
             privileged: true
          env:
          - name: "EULA"
            value: "yes"
          - name: "NS_NETMODE"
            value: "HOST"
          - name: "kubernetes_url"
            value: "https://10.102.29.101:6443"
          - name: "NS_MGMT_SERVER"
            value: "10.106.73.237"
          - name: "NS_MGMT_FINGER_PRINT"
            value: "E9:AA:2C:BC:47:DF:AC:A2:02:A6:1C:2F:8B:64:AC:61:10:ED:72:4B"
          - name: "NS_ROUTABLE"
            value: "FALSE"
          - name: "HOST"
            valueFrom:
               fieldRef:
                  fieldPath: status.hostIP
          - name: "KUBERNETES_TASK_ID"
            valueFrom:
               fieldRef:
                  fieldPath: metadata.name
          imagePullPolicy: Always
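
Assuming you saved the sample above as cpx-daemonset.yaml (the file name is illustrative), deploy the daemon set by using the kubectl create -f command:

kubectl create -f cpx-daemonset.yaml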

The following list describes the sections, parameters, and environment variables used in the sample daemon set:

  • name (containers): The name of the NetScaler CPX container.

  • image (containers): Specifies the image used for container creation.

  • privileged: true (securityContext): Specifies that the NetScaler CPX container runs in privileged mode.

  • EULA: A NetScaler CPX-specific environment variable, required to verify that you have read and understand the End User License Agreement (EULA) available at: https://www.citrix.com/products/netscaler-adc/cpx-express.html.

  • NS_NETMODE: A NetScaler CPX-specific environment variable that allows you to specify that the NetScaler CPX instance starts in host mode. After the instance starts in host mode, it configures four default iptables rules on the host machine for management access to the instance. It uses the following ports: 9995 for HTTP, 9996 for HTTPS, 9997 for SSH, and 9998 for SNMP. If you want to specify different ports, use the following environment variables: NS_HTTP_PORT, NS_HTTPS_PORT, NS_SSH_PORT, and NS_SNMP_PORT (see the sketch after this list).

  • kubernetes_url: A NetScaler CPX-specific environment variable that specifies the Kubernetes URL.

  • NS_MGMT_SERVER: A NetScaler CPX-specific environment variable that specifies the NetScaler MAS server IP address. When the NetScaler CPX instance is deployed, it automatically registers with the NetScaler MAS server at this IP address.

  • NS_MGMT_FINGER_PRINT: A NetScaler CPX-specific environment variable that specifies the NetScaler MAS fingerprint.

  • NS_ROUTABLE: A NetScaler CPX-specific environment variable that specifies whether the NetScaler CPX container runs in non-IP-per-container mode. Be sure to set the value to FALSE.

  • KUBERNETES_TASK_ID: Identifies the NetScaler CPX ID in the Kubernetes cluster.

  • imagePullPolicy: Specifies how Kubernetes pulls the image.
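
If the default management ports conflict with services that are already running on the host, one way to override them is to add the port variables to the env section of the YAML file before you deploy it, or to set them on a running daemon set with kubectl set env. The following sketch uses illustrative port values:

# Override the default HTTP and HTTPS management ports of the CPX daemon set
kubectl set env daemonset/cpx NS_HTTP_PORT=9090 NS_HTTPS_PORT=9443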

After you have deployed the NetScaler CPX instance, you can validate the deployment by performing the following:

  1. To check the service account, type the following command:
    root@ubuntu:~# kubectl get serviceaccount
    NAME      SECRETS   AGE
    cpx       1         21h
    default   1         6d
    
  2. To check the NetScaler CPX pod creation, type the following commands:
    root@ubuntu:~# kubectl get pods
    NAME        READY     STATUS    RESTARTS   AGE
    cpx-55tjg   1/1       Running   0          21h
    
    root@ubuntu:~# kubectl get ds
    NAME      DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    cpx       1         1         1         1            1           <none>          3d
    
  3. To verify that the NetScaler CPX is registered with NetScaler MAS, do the following:

    1. Log on to the NetScaler MAS server GUI by using the administrator credentials. The default administrator credentials are nsroot/nsroot.

    2. Navigate to Networks > Instances and click NetScaler CPX. You can view the registered NetScaler CPX instance:

    CPX registration

  4. On the Kubernetes master node, verify that NetScaler MAS has applied the NetScaler-specific configurations to the NetScaler CPX instance, as follows:

    1. Access the bash of the NetScaler CPX pod using the following command:
      kubectl exec -it <POD-NAME> bash
      

      Where POD-NAME is the NetScaler CPX pod name.

    2. Use the cli_script.sh script to view the configuration applied on the NetScaler CPX instance:

      root@ubuntu:~# kubectl exec -it cpx-ph5pm bash
      root@worker:/# cli_script.sh 'sh cs vs'
      exec: sh cs vs
      1)      kube-dns.kube-system.dns.svc-cs (192.168.1.3:25366) - UDP       Type: CONTENT
              State: UP
              Last state change was at Wed Aug 29 06:25:16 2018
              Time since last state change: 0 days, 04:02:17.650
              Client Idle Timeout: 120 sec
              Down state flush: ENABLED
              Disable Primary Vserver On Down : DISABLED
              Appflow logging: ENABLED
              State Update: DISABLED
              Default: kube-dns.kube-system.dns.svc-lb-default-lb     Content Precedence: RULE
              L2Conn: OFF     Case Sensitivity: ON
              Authentication: OFF
              401 Based Authentication: OFF
              Listen Policy: NONE
              IcmpResponse: PASSIVE
              RHIstate:  PASSIVE
              Traffic Domain: 0
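
You can pass any NetScaler show command to cli_script.sh in the same way. For example, to view the load balancing virtual servers on the NetScaler CPX instance:

root@worker:/# cli_script.sh 'sh lb vs'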
      

Create services in Kubernetes

After deploying the NetScaler CPX instance for load balancing east-west traffic, you need to deploy the frontend service and backend services.

The following are the sample YAML files for the coffee and tea backend services:

Sample: coffee.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coffee-backend
  labels:
      name: coffee-backend
spec:
  replicas: 4
  template:
    metadata:
      labels:
        name: coffee-backend
    spec:
      containers:
      - name: coffee-backend
        image: in-docker-reg.eng.citrite.net/cpx-dev/hotdrinks.beverages.com:v1
        ports:
        - name: coffee-backend
          containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-backend
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: coffee-backend

Sample: tea.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tea-backend
  labels:
      name: tea-backend
spec:
  replicas: 4
  template:
    metadata:
      labels:
        name: tea-backend
    spec:
      containers:
      - name: tea-backend
        image: in-docker-reg.eng.citrite.net/cpx-dev/hotdrinks.beverages.com:v1
        ports:
        - name: tea-backend
          containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: tea-backend
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: tea-backend

The following is the sample YAML file for the frontend service:

Sample: hotdrink-frontend.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  labels:
      name: frontend
spec:
  replicas: 4
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: in-docker-reg.eng.citrite.net/cpx-dev/hotdrinks.beverages.com:v1
        ports:
        - name: frontend
          containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: frontend

Now, deploy these services by using the kubectl create -f <yaml-file> command as shown below:

root@ubuntu:~# kubectl create -f coffee.yaml
deployment.extensions/coffee-backend created

root@ubuntu:~# kubectl create -f tea.yaml
deployment.extensions/tea-backend created

root@ubuntu:~# kubectl create -f hotdrink-frontend.yaml
deployment.extensions/frontend created

After you have deployed the services, verify that the pods are created as follows:

root@ubuntu:~# kubectl get pods
NAME                             READY     STATUS    RESTARTS   AGE
coffee-backend-574f4dbfb-4tbkm   1/1       Running   0          7m
coffee-backend-574f4dbfb-j2swl   1/1       Running   0          7m
coffee-backend-574f4dbfb-tph7c   1/1       Running   0          7m
coffee-backend-574f4dbfb-tszln   1/1       Running   0          7m
cpx-ph5pm                        1/1       Running   0          12m
frontend-dc465fb47-87f4z         1/1       Running   0          6m
frontend-dc465fb47-ts88r         1/1       Running   0          6m
frontend-dc465fb47-vndq2         1/1       Running   0          6m
frontend-dc465fb47-x566h         1/1       Running   0          6m
tea-backend-7876c577c4-p8jsf     1/1       Running   0          6m
tea-backend-7876c577c4-r9gvd     1/1       Running   0          6m
tea-backend-7876c577c4-tfm8f     1/1       Running   0          6m
tea-backend-7876c577c4-xrnkq     1/1       Running   0          6m

After the service pods are created, verify that NetScaler MAS has applied the NetScaler-specific configuration to the NetScaler CPX instance for east-west traffic load balancing:

root@ubuntu:~# kubectl exec -it cpx-55tjg bash
root@worker:/# cli_script.sh 'sh cs vs'
exec: sh cs vs
1)      kube-dns.kube-system.53.svc-cs (192.168.1.3:20084) - TCP        Type: CONTENT
        State: UP
        Last state change was at Mon Aug 27 10:34:14 2018
        Time since last state change: 0 days, 00:13:28.870
        Client Idle Timeout: 9000 sec
        Down state flush: ENABLED
        Disable Primary Vserver On Down : DISABLED
        Appflow logging: ENABLED
        State Update: DISABLED
        Default: kube-dns.kube-system.53.svc-lb-default-lb      Content Precedence: RULE
        L2Conn: OFF     Case Sensitivity: ON
        Authentication: OFF
        401 Based Authentication: OFF
        Listen Policy: NONE
        IcmpResponse: PASSIVE
        RHIstate:  PASSIVE
        Traffic Domain: 0
2)      coffee-backend.default.80.svc-cs (192.168.1.3:29867) - TCP      Type: CONTENT
        State: UP
        Last state change was at Mon Aug 27 10:38:21 2018
        Time since last state change: 0 days, 00:09:21.60
        Client Idle Timeout: 9000 sec
        Down state flush: ENABLED
        Disable Primary Vserver On Down : DISABLED
        Appflow logging: ENABLED
        State Update: DISABLED
        Default: coffee-backend.default.80.svc-lb-default-lb    Content Precedence: RULE
        L2Conn: OFF     Case Sensitivity: ON
        Authentication: OFF
        401 Based Authentication: OFF
        Listen Policy: NONE
        IcmpResponse: PASSIVE
        RHIstate:  PASSIVE
        Traffic Domain: 0
3)      tea-backend.default.80.svc-cs (192.168.1.3:21847) - TCP Type: CONTENT
        State: UP
        Last state change was at Mon Aug 27 10:38:37 2018
        Time since last state change: 0 days, 00:09:05.360
        Client Idle Timeout: 9000 sec
        Down state flush: ENABLED
        Disable Primary Vserver On Down : DISABLED
        Appflow logging: ENABLED
        State Update: DISABLED
        Default: tea-backend.default.80.svc-lb-default-lb       Content Precedence: RULE
        L2Conn: OFF     Case Sensitivity: ON
        Authentication: OFF
        401 Based Authentication: OFF
        Listen Policy: NONE
        IcmpResponse: PASSIVE
        RHIstate:  PASSIVE
        Traffic Domain: 0
4)      frontend.default.80.svc-cs (192.168.1.3:27109) - TCP    Type: CONTENT
        State: UP
        Last state change was at Mon Aug 27 10:39:30 2018
        Time since last state change: 0 days, 00:08:12.570
        Client Idle Timeout: 9000 sec
        Down state flush: ENABLED
        Disable Primary Vserver On Down : DISABLED
        Appflow logging: ENABLED
        State Update: DISABLED
        Default: frontend.default.80.svc-lb-default-lb  Content Precedence: RULE
        L2Conn: OFF     Case Sensitivity: ON
        Authentication: OFF
        401 Based Authentication: OFF
        Listen Policy: NONE
        IcmpResponse: PASSIVE
        RHIstate:  PASSIVE
        Traffic Domain: 0
Done

You can also verify the NetScaler-specific configurations that NetScaler MAS pushed to the NetScaler CPX instance. In NetScaler MAS, navigate to Applications > Configurations.

App configurations

Configure NetScaler VPX as an Ingress device

You can configure NetScaler ADCs, such as NetScaler MPX, NetScaler VPX, or NetScaler CPX, as the ingress device. In this sample deployment, a NetScaler VPX appliance is used as the ingress device.

To use the NetScaler VPX appliance as an ingress device, ensure that the appliance is registered as an instance in NetScaler MAS.

Create ingress rules in Kubernetes

You must create Ingress rules in Kubernetes. The NetScaler MAS Ingress controller uses these rules to configure content switching policies on the NetScaler VPX appliance (ingress device).

The following is a sample YAML file to create Ingress rules in Kubernetes. The YAML file uses the following NetScaler-specific annotations:

Annotation               Description
NETSCALER_VIP            The virtual server IP address (VIP). This IP address must be routable.
NETSCALER_HTTP_PORT      The virtual server HTTP port.
NETSCALER_SECURE_PORT    The virtual server HTTPS port.

Sample: ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
   NETSCALER_HTTP_PORT: "8888"
   NETSCALER_VIP: "10.102.29.140"

spec:
  rules:
  - host: hotdrinks.beverages.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80

Now, deploy the ingress rules using the kubectl create -f <yaml-file> command as shown below:

root@ubuntu:~# kubectl create -f ingress.yaml
Ingress/web-ingress created
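
You can also confirm that Kubernetes accepted the Ingress resource by using the kubectl get ingress command:

root@ubuntu:~# kubectl get ingress web-ingress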

After you have deployed the ingress rules, NetScaler MAS creates the content switching configuration and applies it on the NetScaler VPX appliance (ingress device).

You can verify the content switching configuration both in NetScaler MAS and the NetScaler VPX appliance (ingress device).

On NetScaler MAS, navigate to Applications > Configurations.

Ingress configurations

Add a static route in the ingress device

You must add a static route in the NetScaler VPX appliance (ingress device) to route internet traffic to the Kubernetes network, that is, to the services configured on the NetScaler VPX appliance (ingress device).

Before you add the static route, do the following:

  • Use the sh servicegroup <service-group-name> command on the NetScaler VPX appliance to view the configured service IP addresses.

    Service Group

    Note: At this point, the service IP addresses are in the DOWN state.

  • Use the add ip <IP-ADDRESS> <NETMASK> -type SNIP command to configure a SNIP on the NetScaler VPX appliance (ingress device) in the host network. The host network is the network over which the Kubernetes nodes communicate with each other. Usually, the eth0 IP address is from this network.

  • Note down the podCIDR of each Kubernetes node. If you are using an overlay network, use the kubectl get nodes -o yaml | grep podCIDR command to view the podCIDR values as shown below:

    root@ubuntu:~# kubectl get nodes -o yaml | grep podCIDR
     podCIDR: 10.244.0.0/24
     podCIDR: 10.244.1.0/24
    

    If you are using a flannel network, the details are stored in the /run/flannel/subnet.env file on each node, as shown below:

    root@worker:~# cat /run/flannel/subnet.env
    FLANNEL_NETWORK=10.244.0.0/16
    FLANNEL_SUBNET=10.244.1.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true
    

Now, add a static route on the NetScaler VPX appliance (ingress device) using the add route <podCIDR_network> <podCIDR_netmask> <node_HostIP> command.

Sample:

> add route 10.244.1.0 255.255.255.0 10.102.29.102
 Done
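
If your cluster has several worker nodes, you need one route per node. The following hedged helper prints each node's podCIDR next to its host IP, which gives you the arguments for the corresponding add route commands:

# Print "podCIDR  InternalIP" for every node in the cluster
kubectl get nodes -o jsonpath='{range .items[*]}{.spec.podCIDR}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'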

After you have added the static route, verify the ingress configuration pushed by NetScaler MAS and also the state of the service IP addresses:

  • Use the sh servicegroup <service-group-name> command on the NetScaler VPX appliance to view the configured service IP addresses. The service IP addresses must now be in the UP state, as shown below:

    Service Group

  • Use the sh lb vs, sh cs vs, and sh cs policy commands to view the ingress configuration pushed by NetScaler MAS to the NetScaler VPX appliance.

    > sh lb vs
     1)  web-ingress.default.80-frontend.default.80-lb (0.0.0.0:0) - HTTP        Type: ADDRESS
         State: UP
         Last state change was at Wed Aug 29 16:21:39 2018
         Time since last state change: 0 days, 00:14:16.120
         Effective State: UP
         Client Idle Timeout: 180 sec
         Down state flush: ENABLED
         Disable Primary Vserver On Down : DISABLED
         Appflow logging: ENABLED
         Port Rewrite : DISABLED
         No. of Bound Services :  4 (Total)       4 (Active)
         Configured Method: LEASTCONNECTION
         Current Method: Round Robin, Reason: Bound service's state changed to UP        BackupMethod: ROUNDROBIN
         Mode: IP
         Persistence: NONE
         Vserver IP and Port insertion: OFF
         Push: DISABLED  Push VServer:
         Push Multi Clients: NO
         Push Label Rule: none
         L2Conn: OFF
         Skip Persistency: None
         Listen Policy: NONE
         IcmpResponse: PASSIVE
         RHIstate: PASSIVE
         New Service Startup Request Rate: 0 PER_SECOND, Increment Interval: 0
         Mac mode Retain Vlan: DISABLED
         DBS_LB: DISABLED
         Process Local: DISABLED
         Traffic Domain: 0
         TROFS Persistence honored: ENABLED
         Retain Connections on Cluster: NO
      Done
    
     > sh cs vs
     1)  web-ingress.default.80-cs (10.102.29.70:80) - HTTP      Type: CONTENT
         State: UP
         Last state change was at Wed Aug 29 12:11:48 2018
         Time since last state change: 0 days, 04:24:15.110
         Client Idle Timeout: 180 sec
         Down state flush: ENABLED
         Disable Primary Vserver On Down : DISABLED
         Appflow logging: ENABLED
         Port Rewrite : DISABLED
         State Update: DISABLED
         Default:        Content Precedence: RULE
         Vserver IP and Port insertion: OFF
         L2Conn: OFF     Case Sensitivity: ON
         Authentication: OFF
         401 Based Authentication: OFF
         Push: DISABLED  Push VServer:
         Push Label Rule: none
         Listen Policy: NONE
         IcmpResponse: PASSIVE
         RHIstate:  PASSIVE
         Traffic Domain: 0
      Done
    
     > sh cs policy
     1)
         Policy: web-ingress.default.80-cs-frontend.default.80-cspol     Rule: HTTP.REQ.HOSTNAME.SERVER.EQ("hotdrinks.beverages.com") && HTTP.REQ.URL.PATH.STARTSWITH("/")       Action: web-ingress.default.80-cs-frontend.default.80-csaction
    
         Hits: 0
      Done
    
    

Verify the sample deployment

After you have deployed all the components of the sample deployment, ensure that you add a DNS entry (host entry mapping) to map hotdrinks.beverages.com to the virtual server IP address of the NetScaler VPX appliance (ingress device). For the sample deployment, the verification URL is: http://hotdrinks.beverages.com/frontend.php.
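
To test from a client machine without adding a DNS entry, you can pin the host name to the virtual server IP address by using curl. The following is a minimal sketch, assuming the VIP and HTTP port from the sample ingress.yaml (10.102.29.140 and 8888):

# Resolve hotdrinks.beverages.com to the ingress VIP for this request only
curl --resolve hotdrinks.beverages.com:8888:10.102.29.140 http://hotdrinks.beverages.com:8888/frontend.php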

Sample Deployment