VRD Use Case – Using Citrix ADC Dynamic Routing with Kubernetes
Acme Inc. Route Health Injection and BGP integration for Kubernetes Applications
Acme Inc. is a long-time Citrix customer that has a large Citrix ADC footprint. Citrix ADC serves as the main load balancing and business continuity solution for critical Kubernetes applications. Acme Inc. currently has three main data centers.
Acme Inc. wants to provide redundancy and high availability for critical Kubernetes applications so they can provide greater fault tolerance among all deployment racks in the three data centers.
Using route health injection on the Citrix ADC, the solution provides redundancy for Kubernetes services that are accessed via the existing BGP + ECMP routing fabric.
In addition to route health injection, many Kubernetes applications require the back-end server to receive the real client IP. Traditional load balancing with Citrix ADC sources packets destined for back-end servers from the ADC subnet IP address. For applications that require the true client IP address as the source address, Citrix ADC offers multiple methods, including USIP (Use Source IP mode) and DSR (Direct Server Return).
Acme Inc. IT provisions test Citrix ADC VPX instances with test VIPs for the Kubernetes applications. This test environment is used to build the route health injection solution with client IP, and fully test it before rolling it out into the production environment.
Acme Inc. and Citrix identified several different requirements:
- Three Citrix ADC VPX units, one in each data center, with connectivity to the dynamic routing network
- IP addresses for up to three virtual servers (Kubernetes test VIPs) to be configured as /32 routes in Acme Inc. dynamic routing
- A SNIP address in each data center to be used as the next hop for the route health injection VIP. Each Citrix ADC unit must have its own SNIP address with dynamic routing enabled; this SNIP is the gateway for the advertised route health injection VIP.
Identify Kubernetes information including:
- Back-end Kubernetes Pods and test VIPs
- Required ports and load balancing parameters
- SSL certificates (where applicable)
Client IP configuration
- Back-end servers must receive true client IP address
- The multiple options available are discussed in the client IP options section
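As a minimal sketch of the SNIP requirement above, assuming a hypothetical SNIP of 10.10.1.5 in one data center, the address can be added with dynamic routing enabled from the Citrix ADC CLI:

```
add ns ip 10.10.1.5 255.255.255.0 -type SNIP -dynamicRouting ENABLED
```

With dynamic routing enabled on the SNIP, the ZebOS routing suite on the ADC can peer with the upstream routers in that data center.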
Route Health Injection (RHI)
Citrix ADC Dynamic Routing with route health injection
The primary purpose of dynamic routing and Route Health Injection in Citrix ADC is to communicate the state or health of VIPs to the upstream routers. The state of a VIP depends on the virtual servers associated with it and the services bound to those virtual servers. The advertisement of a VIP through route health injection is tied to the states of the virtual servers associated with the virtual IP address.
The virtual IP address must have advertisement enabled. This is accomplished by setting the `-hostRoute` option to ENABLED on the virtual IP address. By default, the `-hostRoute` option is set to DISABLED. The `-hostRoute` option can be enabled when you add an IP address with the `add ns ip` command, or by modifying an existing IP address with the `set ns ip` command.
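As a sketch, assuming a hypothetical VIP of 203.0.113.10, either command can enable the host route:

```
add ns ip 203.0.113.10 255.255.255.255 -type VIP -hostRoute ENABLED
set ns ip 203.0.113.10 -hostRoute ENABLED
```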
Route Health Injection Monitoring
When the `-hostRoute` option is enabled, the NetScaler kernel injects the host route into the ZebOS NSM (Network Services Module), based on the state of the virtual servers associated with the virtual IP address. The `-vserverRHILevel` switch controls the relationship between the state of the virtual servers and the virtual IP host route that is sent to the NSM.
The three options available for the virtual server route health injection level (`-vserverRHILevel`) are:
- ALL_VSERVERS – A host route is injected into NSM only if all the virtual servers associated with the virtual IP are UP.
- ONE_VSERVER – A host route is injected into NSM if at least one of the virtual servers associated with the virtual IP is UP.
- NONE – A host route is injected into NSM irrespective of the state of the virtual servers associated with the virtual IP.
By default, the `-vserverRHILevel` option is set to ONE_VSERVER.
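Continuing the earlier sketch with the hypothetical VIP 203.0.113.10, requiring all associated virtual servers to be UP before the route is advertised could look like:

```
set ns ip 203.0.113.10 -vserverRHILevel ALL_VSERVERS
```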
The following diagram depicts the basic route health injection functionality for a virtual IP address associated to a load balanced virtual server on Citrix ADC:
Route Health Injection Options with Multiple Data Centers
The following describes the route health injection configuration which is selected on a per-application basis, depending on the specific requirements for each application. The available options are:
Active – Active with BGP determining the most efficient route for each client (Anycast or ECMP)
Route Health Injection Active – Active: Anycast or ECMP
Route health injection active-active uses Anycast or ECMP and is a true active-active alternative. In this scenario, the /32 route for the route health injection VIP is advertised in all data centers without cost or local preference being presented to BGP. Three routes are presented to the network, with the Citrix ADC SNIP address specific to each data center acting as the gateway to reach each VIP. The Acme Inc. dynamic routing environment directs client requests to the data centers on an equal cost basis for traffic distribution. In the event of services failing in one of the three data centers, the monitors bound to the load balanced services bring the virtual server down. This in turn removes the route advertisement for the data center with the failure, and all client connections continue working with the remaining data centers.
Important considerations for route health injection active-active configuration:
- Route health injection with ECMP is recommended for TCP- and UDP-based services and requires BGP configuration by the Acme Inc. network team. Route health injection with ECMP does have a limit on the number of routes upstream routers can support (64).
- Route health injection with Anycast supports UDP-based services and is not recommended for TCP-based services.
The following diagram describes the active – active Anycast/ECMP scenario:
Citrix ADC and Client IP Options
One of the important requirements requested by Acme Inc. for a large number of its Kubernetes applications is that back-end servers receive the true client IP address for services being load balanced by Citrix ADC. Typical load balancing on Citrix ADC sources all traffic destined for back-end servers from a SNIP (subnet IP) address owned by the NetScaler. For some applications the real client IP is required, and most of the applications that use route health injection also require the true client IP to be sent to the back-end server.
Citrix ADC has a feature called “Use Source IP” (USIP), which can be enabled either globally or individually on each service requiring the client IP at the back-end server. The issue is that, with the client IP preserved, the back-end server sends its response directly to the client instead of back through the ADC; the client receives the response from an unexpected source address, asymmetric routing occurs, and the packet is dropped. Because of this, other considerations must be evaluated, and extra configuration is required on the back-end servers for USIP to work properly.
An important consideration when implementing Use Source IP mode on Citrix ADC is that Surge Protection must be turned off. More information on USIP mode with Surge Protection is found in the Citrix article here.
Citrix ADC offers multiple methods to accomplish this, and they are described in the following sections. The options available are:
- USIP mode with Citrix ADC SNIP as default gateway
- Direct Server Return Layer 3
  - IP tunneling
  - Client IP insertion in the TOS header
- Direct Server Return Layer 2
- Client IP insertion in the TCP header
USIP Mode with Citrix ADC SNIP As Default Gateway
During multiple meetings with Acme Inc. IT, it was determined that this method is preferred for most of the load balancing services requiring client IP. This method involves changing the default gateway for each back-end server being load balanced and setting it to the SNIP address of the Citrix ADC unit hosting the load balanced VIP. This option supports all Citrix ADC features as opposed to Direct server return options, where the Citrix ADC is only managing incoming client requests. This option also requires the most bandwidth on the ADC units. This method has the following basic requirements:
- The Citrix ADC must have a SNIP address in the same L2 subnet as all back-end servers being load balanced.
- The SNIP address is configured as the default gateway for all back-end servers. Multiple SNIP addresses may be used for back end servers in different L2 subnets.
- USIP mode must be enabled on the services that point to back-end servers.
USIP may also be enabled globally on Citrix ADC units; however, USIP will only apply to services created after enabling USIP mode.
Citrix recommends adding additional network interfaces to back-end servers and configuring static routes for non-client traffic, so that backup routines and other processes that require bandwidth do not have to traverse the Citrix ADC unit.
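As an illustrative sketch, assuming a hypothetical service named svc-k8s-app, USIP can be enabled and Surge Protection disabled per service from the Citrix ADC CLI (or USIP can be enabled globally, which affects only services created afterward):

```
enable ns mode USIP

set service svc-k8s-app -usip YES
set service svc-k8s-app -sp OFF
```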
Direct Server Return: Layer 3 Options
Direct server return with Citrix ADC is another configuration option for obtaining the client IP address on the back-end servers in a load balancing configuration. Direct server return can be configured in Layer 3 mode, permitting the use of back-end servers on other L3 VLANs, as opposed to USIP, which requires L2 connectivity from the ADC to the back-end servers. Direct server return does not support certain Citrix ADC features because response traffic does not traverse the Citrix ADC unit. This option requires the least throughput on the Citrix ADC units.
Direct server return has more complex configurations required for back-end servers, as they must be able to extract the client IP and rewrite the TCP header to respond directly to the client. Citrix currently supports two different methods to configure Layer 3 DSR:
- DSR mode with IP tunneling (IP over IP)
- DSR mode with TOS (the Type of Service IP header field), Layer 3
DSR has the following basic requirements:
- The Citrix ADC must be configured with USIP on the service.
- The back-end servers have a loopback address configured with the Citrix ADC VIP address.
- The back-end server must be configured specifically for each method:
- IP tunneling: the back-end server must de-capsulate the packets from the ADC and extract the client IP for direct response to the client.
- TOS (Type of Service): the back-end server must be able to read the TOS field of the IP header and use this information to reply directly to the client.
- Either method might require custom configurations on back end servers and the use of a third-party application.
Layer 3 DSR might require the configuration of exceptions on firewalls and security devices.
More information on direct server return with Layer 3 can be found here:
- DSR with TOS
- DSR with IP Tunneling
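As an illustrative sketch of the ADC side of a TOS-based Layer 3 DSR configuration, assuming hypothetical names and addresses (v-dsr-tos and svc-dsr as object names, 203.0.113.20 as the VIP, 10.1.1.10 as the back-end server):

```
add lb vserver v-dsr-tos ANY 203.0.113.20 * -m TOS -tosId 5
add service svc-dsr 10.1.1.10 ANY * -usip YES
bind lb vserver v-dsr-tos svc-dsr
```

The back-end server must be configured separately to match the same TOS ID and reply directly to the client, as described in the requirements above.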
Expose services of Type “LoadBalancer”
Services of the type LoadBalancer are natively supported in Kubernetes deployments on public clouds such as AWS, GCP, or Azure. In cloud deployments, when you create a service of type LoadBalancer, a cloud managed load balancer is assigned to the service. The service is then exposed using the load balancer.
For on-premises, bare metal, or public cloud deployments of Kubernetes, you can use a Citrix ADC outside the cluster to load balance the incoming traffic. The Citrix ingress controller provides flexible IP address management that enables multitenancy for Citrix ADCs. The Citrix ingress controller allows you to load balance multiple services using a single ADC and also combines various Ingress functions. Using the Citrix ADC with the Citrix ingress controller, you can maximize the utilization of load balancer resources for your public cloud and significantly reduce your operational expenses.
The Citrix ingress controller supports the services of type LoadBalancer when the Citrix ADC is outside the Kubernetes cluster (Tier-1). When a service of type LoadBalancer is created, updated, or deleted, the Citrix ingress controller configures the Citrix ADC with a load balancing virtual server.
The load balancing virtual server is configured with an IP address (virtual IP address or VIP) that is obtained in one of the following ways:
- By automatically assigning a virtual IP address to the service using the IPAM controller provided by Citrix. The solution is designed in such a way that you can easily integrate the solution with ExternalDNS providers such as Infoblox. For more information, see Interoperability with ExternalDNS.
- By specifying an IP address using the spec.loadBalancerIP field in your service definition. The Citrix ingress controller uses the IP address provided in the spec.loadBalancerIP field as the IP address for the load balancing virtual server that corresponds to the service.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  type: LoadBalancer
  loadBalancerIP: ""
  ports:
  - port: 80
    targetPort: 8080
  selector:
    run: load-balancer-example
```
For a more detailed reference, see Expose services of type LoadBalancer.
As a prerequisite for Citrix ADC Dynamic Routing with Kubernetes, IP Address Management (IPAM) must be configured. IPAM is used to automatically assign and release IP addresses in ADM managed deployments. For a more detailed reference, see Configure IP Address Management.