Autoscaling of Citrix ADC VPX in Microsoft Azure using Citrix ADM
Autoscaling is a cloud computing method that automatically adds or removes resources depending upon the actual usage. Autoscaling is useful whenever your site or application needs an on-demand resource allocation to satisfy the fluctuating number of client requests or processing jobs.
The demand for web applications or services can vary significantly. Maintaining the correct number of Citrix ADC instances for different traffic needs is important. You can increase or decrease the network resources on Microsoft Azure depending on the demand, which provides cost optimization without compromising performance.
Citrix Application Delivery Management (ADM) autoscaling maintains the correct number of Citrix ADC instances as resource consumption fluctuates. Based on the fluctuating resource consumption, Citrix ADM decides to scale the Citrix ADC instances out or in dynamically. Thus, it gives you the flexibility to maintain the correct number of Citrix ADC instances.
Citrix ADM monitors the resource usage of Citrix ADC instances and compares it with the configured threshold values. It triggers the scale-out action if any of the configured resources exceeds its specified threshold value.
Citrix ADM triggers the scale-in action only when the usage of all the configured resources falls below the normal threshold value.
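As a sketch, the asymmetric trigger rule above (scale out when any resource breaches its maximum, scale in only when all resources fall below their minimums) can be expressed as follows. This is illustrative Python, not Citrix ADM code, and the threshold values are made up:

```python
# Illustrative sketch of the scale-out/scale-in decision rule:
# scale out when ANY configured resource exceeds its maximum threshold;
# scale in only when ALL configured resources fall below the minimum.

def autoscale_decision(usage, max_threshold, min_threshold):
    """usage, max_threshold, min_threshold: dicts keyed by resource name,
    e.g. {"cpu": 72.0, "memory": 48.0, "throughput": 30.0} (percent)."""
    if any(usage[r] > max_threshold[r] for r in usage):
        return "scale-out"
    if all(usage[r] < min_threshold[r] for r in usage):
        return "scale-in"
    return "no-action"

maxes = {"cpu": 80, "memory": 85, "throughput": 80}   # example values only
mins = {"cpu": 30, "memory": 40, "throughput": 30}

# One resource above its maximum is enough to scale out.
print(autoscale_decision({"cpu": 90, "memory": 50, "throughput": 40}, maxes, mins))  # scale-out
# Every resource must be below its minimum to scale in.
print(autoscale_decision({"cpu": 20, "memory": 20, "throughput": 10}, maxes, mins))  # scale-in
```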
High availability of applications: Autoscaling ensures that your application always has the right number of Citrix ADC VPX instances to handle the traffic, and that it is up and running at all times irrespective of demand.
Smart scaling decisions and zero-touch configuration: Autoscaling continuously monitors your application and adds or removes Citrix ADC instances dynamically depending on the demand. The instances are automatically added when demand increases for a certain period and automatically removed when demand decreases for a certain period. The addition and removal of Citrix ADC instances happen automatically, making the configuration zero-touch.
Automatic DNS management: The Citrix ADM autoscale feature offers automatic DNS management. Whenever new Citrix ADC instances are added, the domain names are updated automatically.
Graceful connection termination: During a scale-in, the Citrix ADC instances are gracefully removed avoiding the loss of client connections.
Better cost management: Autoscaling dynamically increases or decreases Citrix ADC instances as needed, enabling you to optimize costs. Launching instances only when they are needed and terminating them when they are not reduces operational costs. Thus, you pay only for the resources you use.
Observability: Observability is key for application DevOps or IT personnel to monitor the health of the application. The Citrix ADM autoscale dashboard enables you to visualize the threshold parameter values, autoscale trigger time stamps, events, and the instances participating in autoscale.
Citrix ADCs provisioned by Citrix ADM use Microsoft Azure subscription licenses.
The Citrix ADC instances that are created in the Citrix autoscale group use Citrix ADC Advanced or Premium licenses. The Citrix ADC clustering feature is included in the Advanced and Premium editions.
Supported Citrix ADC Azure virtual machine images for autoscaling
Use an Azure virtual machine image that supports a minimum of three NICs. Autoscaling of Citrix ADC VPX instances is supported only on the Premium and Advanced editions. For more information on Azure virtual machine image types, see VM types and sizes in the Microsoft documentation.
The following are the recommended VM sizes for autoscaling:
Citrix ADM handles the client traffic distribution using Azure DNS or Azure Load Balancer (ALB).
Traffic distribution using Azure DNS
The following diagram illustrates how the DNS based autoscaling occurs using the Azure traffic manager as the traffic distributor:
In DNS based autoscaling, DNS acts as a distribution layer. The Azure traffic manager is the DNS based load balancer in Microsoft Azure. Traffic manager directs the client traffic to the appropriate Citrix ADC instance that is available in the Citrix ADM autoscaling group.
Azure traffic manager resolves the FQDN to the VIP address of the Citrix ADC instance.
In DNS based autoscaling, each Citrix ADC instance in the Citrix ADM autoscale group requires a public IP address.
Citrix ADM triggers the scale-out or scale-in action at the cluster level. When a scale-out is triggered, the registered virtual machines are provisioned and added to the cluster. Similarly, when a scale-in is triggered, the nodes are removed and de-provisioned from the Citrix ADC VPX clusters.
Traffic distribution using Azure Load Balancer
The following diagram illustrates how the autoscaling occurs using Azure Load Balancer as the traffic distributor:
Azure Load Balancer is the distribution tier to the cluster nodes. ALB manages the client traffic and distributes it to Citrix ADC VPX clusters. ALB sends the client traffic to Citrix ADC VPX cluster nodes that are available in the Citrix ADM autoscaling group across availability zones.
A public IP address is allocated to the Azure Load Balancer. The Citrix ADC VPX instances do not require public IP addresses.
Citrix ADM triggers the scale-out or scale-in action at the cluster level. When a scale-out is triggered, the registered virtual machines are provisioned and added to the cluster. Similarly, when a scale-in is triggered, the nodes are removed and de-provisioned from the Citrix ADC VPX clusters.
Citrix ADM autoscale group
An autoscale group is a group of Citrix ADC instances that load balance applications as a single entity and trigger autoscaling based on the configured threshold parameter values.
A resource group contains the resources that are related to Citrix ADC autoscaling. The resource group helps you to manage the resources required for autoscaling. For more information, see Manage resource groups.
Azure back-end virtual machine scale set
An Azure virtual machine scale set is a collection of identical VM instances. The number of VM instances can increase or decrease depending on the client traffic. This set provides high availability to your applications. For more information, see Virtual machine scale sets.
Availability Zones are isolated locations within an Azure region. Each region is made up of several availability zones. Each availability zone belongs to a single region. Each availability zone has one Citrix ADC VPX cluster. For more information, see Availability zones in Azure.
How the autoscaling works
The following flowchart illustrates the autoscaling workflow:
Citrix ADM collects the statistics (CPU, memory, and throughput) from the autoscale-provisioned clusters every minute.
The statistics are evaluated against the configured thresholds. Depending on the statistics, a scale-out or scale-in is triggered. Scale-out is triggered when the statistics exceed the maximum threshold. Scale-in is triggered when the statistics remain below the minimum threshold.
If a scale-out is triggered:
A new node is provisioned.
The node is attached to the cluster and the configuration is synchronized from the cluster to the new node.
The node is registered with Citrix ADM.
The new node's IP address is updated in the Azure traffic manager.
If a scale-in is triggered:
A node is identified for removal.
New connections to the selected node are stopped.
Citrix ADM waits for the specified period for the existing connections to drain. For DNS-based traffic distribution, it also waits for the specified TTL period.
The node is detached from the cluster, deregistered from Citrix ADM, and then de-provisioned from Microsoft Azure.
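The two sequences can be sketched with toy in-memory stand-ins for the cluster, the ADM registry, and the traffic distributor. Nothing here is a Citrix ADM or Azure API, and the drain/TTL wait is only noted as a comment:

```python
# Toy sketch of the scale-out and scale-in sequences above. The cluster,
# ADM registry, and published-IP set are plain in-memory stand-ins.

class Node:
    _next_ip = 1
    def __init__(self):
        self.ip = f"10.0.0.{Node._next_ip}"   # placeholder VIP
        Node._next_ip += 1

def scale_out(cluster_nodes, adm_registry, published_ips):
    node = Node()                      # 1. provision a new node
    cluster_nodes.append(node)         # 2. attach it; config syncs to the node
    adm_registry.add(node.ip)          # 3. register the node with Citrix ADM
    published_ips.add(node.ip)         # 4. publish its IP (traffic manager / ALB)
    return node

def scale_in(cluster_nodes, adm_registry, published_ips):
    node = cluster_nodes[-1]           # 1. identify the node to remove
    published_ips.discard(node.ip)     # 2. stop new connections (IP withdrawn)
    # 3. a real system waits here for the DNS TTL and connection drain
    cluster_nodes.remove(node)         # 4. detach from the cluster,
    adm_registry.discard(node.ip)      #    deregister from Citrix ADM,
    return node                        #    then de-provision in Azure

cluster, adm, ips = [], set(), set()
added = scale_out(cluster, adm, ips)
print(len(cluster), added.ip in ips)   # 1 True
scale_in(cluster, adm, ips)
print(len(cluster), added.ip in ips)   # 0 False
```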
When the application is deployed, an IP set is created on clusters in every availability zone. Then, the domain and instance IP addresses are registered with the Azure traffic manager or ALB. When the application is removed, the domain and instance IP addresses are deregistered from the Azure traffic manager or ALB. Then, the IP set is deleted.
Example autoscaling scenario
Consider that you have created an autoscale group named asg_arn in a single availability zone with the following configuration:
Selected threshold parameters – Memory usage.
Threshold limit set to memory:
Minimum limit: 40
Maximum limit: 85
Watch time – 2 minutes.
Cooldown period – 10 minutes.
Time to wait during de-provision – 10 minutes.
DNS time to live – 10 seconds.
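For reference, the configuration above could be captured as a plain settings dictionary. The key names here are illustrative only, not Citrix ADM API fields:

```python
# The example autoscale group, captured as a settings dictionary.
# Key names are hypothetical, not Citrix ADM API fields.
asg_arn = {
    "name": "asg_arn",
    "availability_zones": 1,
    "threshold_parameters": ["memory"],
    "memory_threshold_percent": {"minimum": 40, "maximum": 85},
    "watch_time_minutes": 2,
    "cooldown_minutes": 10,
    "deprovision_wait_minutes": 10,
    "dns_ttl_seconds": 10,
}

# A reading above the maximum is a scale-out breach candidate; it must
# persist for watch_time_minutes before scale-out actually triggers.
print(90 > asg_arn["memory_threshold_percent"]["maximum"])  # True
```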
After the autoscale group is created, statistics are collected from it. The autoscale policy also evaluates whether an autoscale event is in progress. If autoscaling is in progress, Citrix ADM waits for that event to complete before collecting the statistics.
The sequence of events
Memory usage exceeds the maximum threshold limit at T2. However, the scale-out is not triggered because the breach does not persist for the specified watch time.
Scale-out is triggered at T5 after the maximum threshold is breached continuously for 2 minutes (the watch time).
No action is taken for the breach between T5 and T10 because node provisioning is in progress.
The node is provisioned at T10 and added to the cluster. The cooldown period starts.
No action is taken for the breach between T10 and T20 because of the cooldown period. This period allows the autoscale group to grow organically. Before triggering the next scaling decision, Citrix ADM waits for the current traffic to stabilize and average out on the current set of instances.
Memory usage drops below the minimum threshold limit at T23. However, the scale-in is not triggered because the breach does not persist for the specified watch time.
Scale-in is triggered at T26 after the minimum threshold is breached continuously for 2 minutes (the watch time). A node in the cluster is identified for de-provisioning.
No action is taken for the breach between T26 and T36 because Citrix ADM waits for the existing connections to drain. For DNS-based autoscaling, the TTL is also in effect.
For DNS-based autoscaling, Citrix ADM waits for the specified time-to-live (TTL) period. Then, it waits for the existing connections to drain before initiating node de-provisioning.
No action is taken between T37 and T39 because node de-provisioning is in progress.
The node is removed from the cluster and de-provisioned at T40.
All connections to the selected node were drained before de-provisioning was initiated. Therefore, the cooldown period is skipped after the node is de-provisioned.
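To make the watch-time and cooldown gating concrete, here is a minimal sketch. It assumes 1-minute samples, a single memory metric, and a simple consecutive-breach counter; the exact minute at which Citrix ADM fires a trigger may be counted differently, and only the scale-out side is modeled:

```python
# Minimal sketch of watch-time and cooldown gating for scale-out:
# a breach must persist for the full watch time before a trigger fires,
# and no new decision is made during the cooldown that follows.

def evaluate(samples, max_pct=85, watch=2, cooldown=10):
    """samples: memory % readings, one per minute.
    Returns a list of (minute, action) tuples."""
    actions, breach = [], 0
    cooldown_until = -1
    for t, mem in enumerate(samples):
        if t <= cooldown_until:
            continue                          # cooldown: skip evaluation
        breach = breach + 1 if mem > max_pct else 0
        if breach >= watch:                   # breach persisted for watch time
            actions.append((t, "scale-out"))
            cooldown_until = t + cooldown     # start the cooldown period
            breach = 0
    return actions

# Memory climbs above 85% at minute 2 and stays there: the trigger fires
# only after the breach has persisted for the 2-minute watch time.
print(evaluate([50, 70, 90, 92, 95, 96, 97]))  # [(3, 'scale-out')]
```

A sustained breach produces at most one trigger per cooldown window, which mirrors why no action is taken between T10 and T20 in the timeline above.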