Detailed procedures to set up a dual-tier or service mesh lite topology

Software requirements

  • Kubernetes Distribution – Open source

  • Kubernetes Version – v1.16.3

  • Container Network Interface (CNI) – Flannel

  • CPX version – 13.0–41.28

  • CIC version – 1.5.25

  • NetScaler Console version – 13.0–47.22

  • NetScaler agent version – 13.0–47.22

Before you begin

You can view the service graph in the following scenarios:

  • NetScaler Console and Kubernetes cluster on the same network (for example, NetScaler Console and Kubernetes cluster hosted on Citrix Hypervisor).

  • NetScaler Console and Kubernetes cluster on different networks. In this scenario, you must configure an on-prem agent and register it on the network where the Kubernetes cluster is hosted.

To use service graph in NetScaler Console, ensure that your deployment meets the preceding software requirements.


Configure static routes in NetScaler Console

Inside the Kubernetes cluster, all containerized pods use an overlay network, and NetScaler Console cannot reach those private pod IP addresses directly. To enable communication from NetScaler Console to the Kubernetes cluster, you must configure static routes in NetScaler Console.

Note

If you are using an on-prem agent, configure the static routes on the agent instead. Using an SSH client, log on to the NetScaler agent and configure the static routes.

Consider that you have the following IP addresses for your Kubernetes cluster:

  • Kubernetes master – 101.xx.xx.112

  • Kubernetes worker 1 – 101.xx.xx.111

  • Kubernetes worker 2 – 101.xx.xx.110

On the Kubernetes master, run the following command to identify the pod network to do the static routing:

kubectl get nodes -o jsonpath="{range .items[*]}{'podNetwork: '}{.spec.podCIDR}{'\t'}{'gateway: '}{.status.addresses[0].address}{'\n'}{end}"

The following is an example output after you run the command (the masked octets correspond to the example node addresses above):

podNetwork: 192.168.0.0/24    gateway: 101.xx.xx.112
podNetwork: 192.168.1.0/24    gateway: 101.xx.xx.111
podNetwork: 192.168.2.0/24    gateway: 101.xx.xx.110

  1. Using an SSH client, log on to NetScaler Console

  2. Configure the static routing using the command route add -net <pod network CIDR> <Kubernetes node IP address>

    For example:

    route add -net 192.168.0.0/24 101.xx.xx.112

    route add -net 192.168.1.0/24 101.xx.xx.111

    route add -net 192.168.2.0/24 101.xx.xx.110

  3. Verify the configuration by using the netstat -rn command.


  4. Append these route commands to the /mpsconfig/svm.conf file.

    1. In NetScaler Console, access the svm.conf file using the following command:

      vim /mpsconfig/svm.conf

    2. Add the static routes in the svm.conf file.

      For example, route add -net 192.168.0.0/24 101.xx.xx.112.
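The mapping from the kubectl output to the route commands can be scripted. A minimal sketch: the sample input below mirrors the example addresses in this article (octets kept masked); replace it with your actual command output.

```shell
# Sample "podNetwork ... gateway ..." lines, as printed by the
# kubectl command shown earlier (masked octets kept as in this article).
cat <<'EOF' > /tmp/podnets.txt
podNetwork: 192.168.0.0/24  gateway: 101.xx.xx.112
podNetwork: 192.168.1.0/24  gateway: 101.xx.xx.111
podNetwork: 192.168.2.0/24  gateway: 101.xx.xx.110
EOF

# Field 2 is the pod CIDR, field 4 is the node (gateway) address.
awk '{print "route add -net " $2 " " $4}' /tmp/podnets.txt
```

The printed lines are the same route add commands shown in step 2 and can be appended to /mpsconfig/svm.conf.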

Download the sample deployment files from GitHub

  1. Run git clone https://github.com/citrix/citrix-k8s-ingress-controller.git to clone the GitHub repository on the master node.

  2. To access the YAMLs:

    cd citrix-k8s-ingress-controller/example/servicegraph-demo/

Add parameters in CPX YAML file

Note

If you are using CPX 58.x or later, you must use a non-nsroot password while registering with the agent. To ensure security, NetScaler agent 61.x and later releases mandate a password change. If your NetScaler agent is upgraded to 61.x or later, ensure that you use a CPX 58.x or later build.

You must include the following parameters in the cpx.yaml file to ensure CPX registration with NetScaler Console:

- name: "NS_MGMT_SERVER"
  value: "xx.xx.xx.xx"
- name: "NS_MGMT_FINGER_PRINT"
  value: "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
- name: "NS_HTTP_PORT"
  value: "9080"
- name: "NS_HTTPS_PORT"
  value: "9443"
- name: "NS_MGMT_USER"
  value: "nsroot"
- name: "NS_MGMT_PASS"
  value: <your password>
- name: "LOGSTREAM_COLLECTOR_IP"
  value: "xx.xx.xx.xx"

<!--NeedCopy-->
  • NS_MGMT_SERVER – Indicates the NetScaler Console IP address

    Note

    If an agent is used, then this indicates the agent IP address.

  • NS_MGMT_FINGER_PRINT – Indicates the fingerprint that authenticates CPX with NetScaler Console. To get the fingerprint:

    1. In NetScaler Console, navigate to Settings > Administration

    2. Under System Configurations, click View ADM Fingerprint


      Note:

      If you have configured an on-prem agent, navigate to Infrastructure > Instances > Agents, select the agent, and then click View Fingerprint.


  • NS_HTTP_PORT – Indicates the HTTP port for communication

  • NS_HTTPS_PORT – Indicates the HTTPS port for communication

  • NS_MGMT_USER - Indicates the user name

  • NS_MGMT_PASS - Indicates the password. Specify a password of your choice

  • LOGSTREAM_COLLECTOR_IP – Indicates the NetScaler agent IP address, where Logstream protocol must be enabled to transfer log data from CPX to NetScaler Console

Add a VPX, SDX, MPX, or BLX instance in NetScaler Console

To get tier-1 NetScaler instance analytics in the service graph, you must add the VPX, SDX, MPX, or BLX instance in NetScaler Console and enable Web Insight.

  1. Navigate to Infrastructure > Instances > NetScaler

  2. Click the Add option to add the instance. For more information, see Add instances in NetScaler Console

  3. After adding the instance, select the virtual server and enable Web Insight. For more information, see Manage licensing and enable analytics on virtual servers

Add Kubernetes cluster in NetScaler Console

To add the Kubernetes cluster:

  1. Log on to NetScaler Console with administrator credentials.

  2. Navigate to Orchestration > Kubernetes > Cluster. The Clusters page is displayed.

  3. Click Add.

  4. In the Add Cluster page, specify the following parameters:

    1. Name - Specify a name of your choice.

    2. API Server URL - You can get the API Server URL details from the Kubernetes Master node.

      1. On the Kubernetes master node, run the command kubectl cluster-info.


      2. Enter the URL that displays for “Kubernetes master is running at.”

    3. Authentication Token - Specify the authentication token. The authentication token is required to validate access for communication between the Kubernetes cluster and NetScaler Console. To generate an authentication token:

      On the Kubernetes master node:

      1. Use the following YAML to create a service account:

        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: <name>
          namespace: <namespace>
        <!--NeedCopy-->
        
      2. Run kubectl create -f <yaml file>.

        The service account is created.

      3. Run kubectl create clusterrolebinding <name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<name> to bind the cluster role to service account.

        The service account now has cluster-wide access.

        A token is automatically generated when the service account is created.

      4. Run kubectl describe sa <name> to view the token.

      5. To get the secret string, run kubectl describe secret <token-name>.


    4. Select the agent from the list.

      Note

      If you are using an on-prem agent, ensure that you select the same agent that you added in the CPX YAML.

    5. Click Create.

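The token-generation steps above can be condensed into a short shell sequence. This is a sketch that assumes a hypothetical service account named sg-demo-sa in the default namespace (substitute your own names); it requires kubectl access to the cluster, so run it on the master node.

```shell
# Hypothetical names; substitute your own. On Kubernetes v1.16 (the
# version in the requirements table), a token secret is created
# automatically for each service account.
kubectl create serviceaccount sg-demo-sa -n default
kubectl create clusterrolebinding sg-demo-sa \
    --clusterrole=cluster-admin --serviceaccount=default:sg-demo-sa

# Look up the auto-generated token secret for the service account.
TOKEN_NAME=$(kubectl get sa sg-demo-sa -n default \
    -o jsonpath='{.secrets[0].name}')

# Decode and print the token to paste into the Authentication Token field.
kubectl get secret "$TOKEN_NAME" -n default \
    -o jsonpath='{.data.token}' | base64 --decode
```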

Deploy a sample microservice application

On the master node:

  1. Run kubectl create -f namespace.yaml to create a namespace.

  2. Deploy the hotdrink microservices, ingress, and secrets using the following commands:

    kubectl create -f team_hotdrink.yaml -n sg-demo

    kubectl create -f hotdrink-secret.yaml -n sg-demo

Deploy CPX and register CPX in NetScaler Console

  1. Run kubectl create -f rbac.yaml to deploy cluster role and cluster binding.

  2. Run kubectl create -f cpx.yaml -n sg-demo to deploy CPX.

After the deployment, CPX registers with NetScaler Console automatically.

Enable auto select virtual servers for licensing

Note

Ensure you have sufficient virtual server licenses. For more information, see Licensing

After you add the Kubernetes cluster in NetScaler Console, ensure that virtual servers are auto-selected for licensing. Virtual servers must be licensed to display data in the service graph. To auto-select virtual servers:

  1. Navigate to Settings > Licensing & Analytics Configuration.

  2. Under Virtual Server License Summary, enable Auto-select Virtual Servers and Auto-select non addressable Virtual Servers.


Enable Web Transaction and TCP Transaction settings

After you add the Kubernetes cluster and enable the auto-select virtual servers, change the Web Transaction Settings and TCP Transactions Settings to All.

  1. Navigate to Settings > Analytics Settings.

    The Analytics Settings page is displayed.

  2. Click Enable Features for Analytics.

  3. Under Web Transaction Settings, select All.

  4. Under TCP Transactions Settings, select All.


  5. Click OK.

Send traffic to microservices

Next, send traffic to the microservices to populate the service graph in NetScaler Console.

  1. Run kubectl get svc -n sg-demo to identify the NodePort through which CPX is exposed.


  2. Edit the /etc/hosts file and create a domain IP entry for hotdrink.beverages.com

    You can now access the microservice using https://hotdrink.beverages.com
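The demo can then be exercised with a few requests so that the service graph has traffic data to display. This is a sketch; the NodePort is a placeholder that you take from the kubectl get svc output.

```shell
# NODE_PORT is a placeholder; use the HTTPS NodePort shown by
# "kubectl get svc -n sg-demo". -k skips certificate validation in
# case the demo certificate is self-signed.
NODE_PORT="<https-nodeport>"
for i in $(seq 1 10); do
    curl -k -s -o /dev/null "https://hotdrink.beverages.com:${NODE_PORT}/"
done
```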