Detailed procedures to set up service mesh topology

The prerequisites to deploy the service mesh topology are available at Setting up service graph.

Configure NetScaler agent

To enable communication between the Kubernetes cluster and NetScaler Console, you must install and configure an agent. You can configure an agent on a hypervisor, on public cloud services (such as Microsoft Azure and AWS), or use the built-in agent available on NetScaler instances (ideal for HA deployments).

Follow the procedure to configure an agent.

Note

  • You can also use an existing agent.

  • By default, the agents are automatically upgraded to the latest NetScaler Console build. You can view the agent details on the Infrastructure > Instances > Agents page. You can also specify the time when you want the agent upgrades to occur. For more information, see Configuring Agent Upgrade Settings.

Configure static routes in the agent

Inside the Kubernetes cluster, all containerized pods use an overlay network, and NetScaler Console cannot reach those private IP addresses directly. To enable communication from NetScaler Console to the Kubernetes cluster, you must configure static routes in the agent.

Consider that you have the following IP addresses for your Kubernetes cluster:

  • Kubernetes master – 101.xx.xx.112

  • Kubernetes worker 1 – 101.xx.xx.111

  • Kubernetes worker 2 – 101.xx.xx.110

On the Kubernetes master, run the following command to identify the pod networks that require static routes:

kubectl get nodes -o jsonpath="{range .items[*]}{'podNetwork: '}{.spec.podCIDR}{'\t'}{'gateway: '}{.status.addresses[0].address}{'\n'}{end}"

The following is an example output after you run the command:

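The values below are illustrative and assume the sample node addresses used earlier in this section; your pod CIDRs and gateway addresses will differ:

podNetwork: 192.168.0.0/24    gateway: 101.xx.xx.112
podNetwork: 192.168.1.0/24    gateway: 101.xx.xx.111
podNetwork: 192.168.2.0/24    gateway: 101.xx.xx.110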

After successfully configuring an agent:

  1. Using an SSH client, log on to the agent.

  2. Type shell and press Enter to switch to the bash shell.

  3. Configure the static routes using the command route add -net <pod network CIDR> <Kubernetes node IP address>

    For example:

    route add -net 192.168.0.0/24 101.xx.xx.112

    route add -net 192.168.1.0/24 101.xx.xx.111

    route add -net 192.168.2.0/24 101.xx.xx.110

  4. Verify the configuration by using netstat -rn. A sample of the expected routing-table entries is shown after this procedure.


  5. Append these route commands to the /mpsconfig/svm.conf file.

    1. In the agent, access the svm.conf file using the following command:

      vim /mpsconfig/svm.conf

    2. Add the static routes to the svm.conf file.

      For example, route add -net 192.168.0.0/24 101.xx.xx.112.
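After the routes in the example above are added, the routing table reported by netstat -rn contains entries similar to the following sketch (illustrative only; the exact column layout and flag values depend on the agent platform):

    Destination        Gateway          Flags
    192.168.0.0/24     101.xx.xx.112    UGS
    192.168.1.0/24     101.xx.xx.111    UGS
    192.168.2.0/24     101.xx.xx.110    UGS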

Configure the required parameters

On the Kubernetes master:

  1. Create a secret with the agent credentials in every namespace where NetScaler CPX is deployed as an ingress gateway or a sidecar.

    kubectl create secret generic admlogin --from-literal=username=<username> --from-literal=password=<password> -n <namespace>

  2. Add the Citrix Helm chart repository:

    helm repo add citrix https://citrix.github.io/citrix-helm-charts/

  3. Deploy NetScaler CPX as an Ingress Gateway:

    helm install citrix-adc-istio-ingress-gateway citrix/citrix-adc-istio-ingress-gateway --version 1.2.1 --namespace <namespace> --set ingressGateway.EULA=YES,citrixCPX=true,ADMSettings.ADMFingerPrint=XX:00:X1:00:XX:0X:X0,ADMSettings.ADMIP=<xx.xx.xx.xx>,ingressGateway.image=quay.io/citrix/citrix-k8s-cpx-ingress,ingressGateway.tag=13.0-58.30

    The following table lists the configurable parameters in the Helm chart and their default values:

    | Parameter | Description | Default | Optional/Mandatory (Helm) |
    | --- | --- | --- | --- |
    | citrixCPX | NetScaler CPX | FALSE | Mandatory for NetScaler CPX |
    | xDSAdaptor.image | Image of the Citrix xDS adaptor container | quay.io/citrix/citrix-istio-adaptor:1.2.1 | Mandatory |
    | ADMSettings.ADMIP | NetScaler Console IP address | null | Mandatory for NetScaler CPX |
    | ADMSettings.ADMFingerPrint | The NetScaler Console fingerprint. Navigate to Infrastructure > Instances > Agents, select the agent, and click View Fingerprint. | null | Optional |
    | ingressGateway.EULA | End User License Agreement (EULA) terms and conditions. If YES, the user agrees to the EULA terms and conditions. | NO | Mandatory for NetScaler CPX |
    | ingressGateway.image | Image of NetScaler CPX designated to run as Ingress Gateway | quay.io/citrix/citrix-k8s-cpx-ingress:13.0-58.30 | Mandatory for NetScaler CPX |
  4. Deploy the Citrix CPX sidecar injector:

    helm install cpx-sidecar-injector citrix/citrix-cpx-istio-sidecar-injector --version 1.2.1 --namespace <namespace> --set cpxProxy.EULA=YES,ADMSettings.ADMFingerPrint=xx:xx:xx:xx,ADMSettings.ADMIP=<xx.xx.xx.xx>,cpxProxy.image=quay.io/citrix/citrix-k8s-cpx-ingress,cpxProxy.tag=13.0-58.30

    The following table lists the configurable parameters in the Helm chart and their default values:

    | Parameter | Description | Default value |
    | --- | --- | --- |
    | ADMSettings.ADMIP | The NetScaler Console IP address | NIL |
    | cpxProxy.image | NetScaler CPX image used as the sidecar proxy | quay.io/citrix/citrix-k8s-cpx-ingress:13.0-58.30 |
    | cpxProxy.imagePullPolicy | Image pull policy for NetScaler CPX | IfNotPresent |
    | cpxProxy.EULA | End User License Agreement (EULA) terms and conditions. If YES, the user agrees to the EULA terms and conditions. | NO |
    | cpxProxy.cpxSidecarMode | Environment variable for NetScaler CPX that indicates whether NetScaler CPX runs in sidecar mode. | YES |
  5. Set the label on any namespace that needs CPX sidecar injection:

    kubectl label namespace <app-namespace> cpx-injection=enabled

    After performing steps 3 and 5, you can see that NetScaler CPX is registered in NetScaler Console.
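
    To confirm that the deployments from steps 3 and 4 are up, you can check the pod status in the namespace you used. This is a quick sanity check; the exact pod names depend on your Helm release names:

    kubectl get pods -n <namespace>

    The ingress gateway and sidecar injector pods must be in the Running state before you proceed.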

Deploy a sample application

Consider that you want to deploy the Bookinfo sample application, which consists of multiple microservices.


Perform the following procedure to deploy a sample application:

  1. kubectl create namespace citrix-system

  2. kubectl create namespace bookinfo

  3. kubectl label namespace bookinfo cpx-injection=enabled

  4. kubectl create secret generic admlogin --from-literal=username=<username> --from-literal=password=<password> -n citrix-system

    Note

    You can give a user name and a password of your choice.

  5. kubectl create secret generic admlogin --from-literal=username=<username> --from-literal=password=<password> -n bookinfo

    Note

    You can give a user name and a password of your choice.

  6. helm install citrix-adc-istio-ingress-gateway citrix/citrix-adc-istio-ingress-gateway --version 1.2.1 --namespace citrix-system --set ingressGateway.EULA=YES,citrixCPX=true,ADMSettings.ADMFingerPrint=xx:xx:xx:xx,ADMSettings.ADMIP=<ADM agent IP address>,ingressGateway.image=quay.io/citrix/citrix-k8s-cpx-ingress,ingressGateway.tag=13.0-58.30

    Note

    You must provide your NetScaler Console fingerprint and agent IP address.

  7. helm install cpx-sidecar-injector citrix/citrix-cpx-istio-sidecar-injector --namespace citrix-system --set cpxProxy.EULA=YES,ADMSettings.ADMFingerPrint=xx:xx:xx:xx,ADMSettings.ADMIP=<ADM agent IP address>,cpxProxy.image=quay.io/citrix/citrix-k8s-cpx-ingress,cpxProxy.tag=13.0-58.30

    Note

    You must provide your NetScaler Console fingerprint and agent IP address.

  8. helm install bookinfo bookinfo/ --namespace bookinfo --set citrixIngressGateway.namespace=citrix-system
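
    To verify that the sample application is deployed and that sidecar injection took effect, you can check the pods in the bookinfo namespace (a quick check; the exact pod names depend on the Bookinfo chart):

    kubectl get pods -n bookinfo

    Because the namespace is labeled with cpx-injection=enabled, each application pod should show an additional container, which is the injected NetScaler CPX sidecar.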

Add Kubernetes cluster in NetScaler Console

To add the Kubernetes cluster:

  1. Log on to NetScaler Console with administrator credentials.

  2. Navigate to Infrastructure > Orchestration > Kubernetes > Cluster. The Clusters page is displayed.

  3. Click Add.

  4. On the Add Cluster page, specify the following parameters:

    1. Name - Specify a name of your choice.

    2. API Server URL - You can get the API server URL from the Kubernetes master node.

      1. On the Kubernetes master node, run the command kubectl cluster-info.


      2. Enter the URL displayed next to “Kubernetes master is running at.”

    3. Authentication Token - Specify the authentication token. The authentication token is required to validate access for communication between the Kubernetes cluster and NetScaler Console. To generate an authentication token (a consolidated example of these commands is shown after this procedure):

      On the Kubernetes master node:

      1. Use the following YAML to create a service account:

        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: <name>
          namespace: <namespace>
        
      2. Run kubectl create -f <yaml file>.

        The service account is created.

      3. Run kubectl create clusterrolebinding <name> --clusterrole=cluster-admin --serviceaccount=<namespace>:<name> to bind the cluster role to the service account.

        The service account now has cluster-wide access.

        A token is automatically generated while creating the service account.

      4. Run kubectl describe sa <name> to view the token.

      5. To get the secret string, run kubectl describe secret <token-name>.


    4. Select the agent from the list.

      Note

      Ensure that you select the same agent that you specified in the CPX deployment (ADMSettings.ADMIP).

    5. Click Create.

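The token-generation commands referenced under Authentication Token can be run end to end as in the following sketch. The service account name sm-admin, the binding name sm-admin-binding, the namespace default, and the file name sa.yaml are illustrative placeholders only; substitute your own names:

    # Create the service account defined in the YAML file shown earlier (saved here as sa.yaml)
    kubectl create -f sa.yaml

    # Bind the cluster-admin role to the service account
    kubectl create clusterrolebinding sm-admin-binding --clusterrole=cluster-admin --serviceaccount=default:sm-admin

    # Find the name of the automatically generated token secret
    kubectl describe sa sm-admin

    # Display the token to paste into the Authentication Token field
    kubectl describe secret <token-name>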

Enable auto-select virtual servers for licensing

Note

Ensure that you have sufficient virtual server licenses. For more information, see Licensing.

After you add the Kubernetes cluster in NetScaler Console, ensure that virtual servers are auto-selected for licensing. Virtual servers must be licensed to display data in the service graph. To auto-select virtual servers:

  1. Navigate to Settings > NetScaler Console Licensing & Analytics Config.

  2. Under Virtual Server License Summary, enable Auto-select Virtual Servers and Auto-select non addressable Virtual Servers.


Enable Web Transaction and TCP Transaction settings

After you add the Kubernetes cluster and enable auto-select virtual servers, change the Web Transaction Settings and TCP Transactions Settings to All.

  1. Navigate to Settings > Analytics Settings.

    The Settings page is displayed.

  2. Click Enable Features for Analytics.

  3. Under Web Transaction Settings, select All.


  4. Under TCP Transactions Settings, select All.


  5. Click OK.

Send traffic to microservices

Next, you must send traffic to the microservices so that the service graph is populated in NetScaler Console.

  1. Determine the Ingress IP address and ports:

    export INGRESS_HOST=$(kubectl get pods -l app=citrix-ingressgateway -n citrix-system -o 'jsonpath={.items[0].status.hostIP}')

    export INGRESS_PORT=$(kubectl -n citrix-system get service citrix-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

    export SECURE_INGRESS_PORT=$(kubectl -n citrix-system get service citrix-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

  2. Access the Bookinfo front-end application using curl. The productpage service must return a 200 OK response.

    curl -kv https://$INGRESS_HOST:$SECURE_INGRESS_PORT/productpage

    curl -v http://$INGRESS_HOST:$INGRESS_PORT/productpage

  3. Visit https://$INGRESS_HOST:$SECURE_INGRESS_PORT/productpage from a browser.

    The Bookinfo page is displayed.

  4. Ensure that $INGRESS_HOST and $SECURE_INGRESS_PORT are replaced with the actual IP address and port values.

After you send traffic to the microservices, the service graph is populated in approximately 10 minutes.


Using the service graph, you can analyze various service details such as metrics, errors, and so on. For more information, see Service graph.
