Advanced Concepts

Citrix ADC and Microsoft Azure Validated Reference Design

Citrix ADC on Microsoft Azure ensures organizations have access to secure and optimized applications and assets deployed in the cloud and provides the flexibility to establish a networking foundation that adjusts to the changing needs of an environment. This validated design shows organizations how to configure front-end autoscale in Azure to reliably and cost-effectively deliver applications.

Overview Citrix ADC VPX

Citrix ADC is an all-in-one application delivery controller that accelerates the performance of internal and external web applications. The appliance reduces application ownership costs, optimizes the user experience, and ensures that applications are always available by using:

  • Advanced Layer 4-7 load balancing and traffic management services

  • Proven application acceleration such as HTTP compression and caching

  • An integrated application firewall for application security

  • Server offloading to significantly reduce costs and consolidate servers

As an undisputed leader of service and application delivery, Citrix ADC is deployed in thousands of networks around the world. Citrix ADC is leveraged to optimize, secure, and control the delivery of enterprise and cloud services. The appliance is deployed directly in front of web and database servers. Citrix ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. It’s much simpler to meet SLAs with end-to-end monitoring that transforms network data into actionable business intelligence. Citrix ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.

Overview of Citrix ADC in Microsoft Azure

The Citrix ADC VPX virtual appliance is available as an image in the Microsoft Azure Marketplace. Citrix ADC VPX on Microsoft Azure Resource Manager (ARM) enables customers to use Azure cloud computing capabilities and apply Citrix ADC load balancing and traffic management features for their business needs. You can deploy Citrix ADC VPX instances on ARM either as standalone instances or as high availability pairs in active-active or active-standby modes.

Limitations and usage guidelines

  • The Azure architecture does not support the following features:
    • Clustering
    • IPv6
    • Gratuitous ARP (GARP)
    • L2 Mode
    • Tagged VLAN
    • Dynamic Routing
    • Virtual MAC (vMAC)
    • USIP
    • CloudBridge Connector
  • The Intranet IP (IIP) feature is not supported, because Azure does not provide the pool of IP addresses required for this feature. IIP is frequently used in VOIP, SIP, or server-initiated-connection deployment.

  • If you expect that you might have to shut down and temporarily deallocate the Citrix ADC VPX virtual machine at any time, assign a static Internal IP address while creating the virtual machine. If you do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible.

  • In an Azure deployment, only the following Citrix ADC VPX models are supported: VPX 10, VPX 200, and VPX 1000. These virtual appliances can be deployed on any instance type that has two or more cores and more than 2 GB of memory. See the Citrix ADC VPX data sheet.

  • The deployment ID that Azure generates during virtual machine provisioning is not visible to the user in ARM. You cannot use the deployment ID to deploy Citrix ADC VPX appliance on ARM.

Use cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, Citrix ADC on Azure combines essential application delivery capabilities in a single VPX instance. This includes L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other services conveniently available via the Azure Marketplace. Furthermore, a single policy framework governs everything. Citrix ADC is managed with the same powerful set of tools used to administer on-premises Citrix ADC deployments. The net result is that Citrix ADC on Azure enables several compelling use cases. Citrix ADC not only supports the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.

Production delivery

Many enterprises are actively embracing Azure as an infrastructure-as-a-service (IaaS) offering for production delivery of applications. Now, businesses can front-end those applications with the same cloud networking platform used by the largest websites and cloud service providers in the world. Extensive offload, acceleration, and security capabilities can be leveraged to enhance performance and reduce costs.

Hybrid cloud designs

With Citrix ADC on Azure, hybrid clouds that span enterprise data centers and extend into Azure can benefit from the same Citrix ADC cloud networking platform, significantly easing the transition of applications and workloads back and forth between a private data center and Azure. The full suite of Citrix ADC capabilities, ranging from intelligent database load balancing with Data Stream to unprecedented application visibility with AppFlow® and real-time monitoring and response with Action Analytics, can be leveraged with Citrix ADC on Azure.

Business continuity

Enterprises looking to use Azure as part of their disaster recovery and business continuity plans can rely upon Citrix ADC global server load balancing running both on-premises and within Azure to continuously monitor availability and performance of both enterprise data centers and Azure environments, ensuring users are always sent to the optimal location.

Development and testing

Enterprises running production delivery on-premises but using Azure for development and testing can now include Citrix ADC within their Azure test environments, speeding time-to-production because the test environments better mimic the production implementation. In each use case, network architects can also leverage Citrix CloudBridge (configured either as a standalone instance or as a feature of a Citrix ADC Platinum edition instance) to secure and optimize the connection between the enterprise data centers and the Azure cloud, thereby speeding data transfer and synchronization and minimizing network costs.

Network architecture

In ARM, a Citrix ADC VPX virtual machine (VM) resides in a virtual network. By default, a Citrix ADC VPX provisioned in Azure operates in single-IP mode described in the following section.

A virtual NIC is created on each Citrix ADC VM. The network security group configured in the virtual network is bound to the NIC. Together, they control the traffic flowing into the VM and out of the VM.

The network security group forwards the requests to the Citrix ADC VPX instance, and the VPX instance sends them to the servers. The responses from the servers follow the same path in reverse. You can configure a network security group to control a single VPX VM, or bind it to subnets and virtual networks to control traffic in deployments with multiple VPX VMs.

The NIC contains network configuration details such as the virtual network, subnets, internal IP address, and Public IP address.

On ARM, it is good to know the following IP addresses, which are used to access the VMs:

  • Public IP (PIP) address is the internet-facing IP address configured directly on the virtual NIC of the Citrix ADC VM. The PIP allows you to directly access a VM from the external network without the need to configure inbound and outbound rules on the network security group.
  • Citrix ADC IP (NSIP) address is the internal IP address configured on the VM. It is non-routable.
  • Virtual IP address (VIP) is configured by using the NSIP and a port number. Clients access Citrix ADC services through the PIP address, and when the request reaches the NIC of the Citrix ADC VPX VM or the Azure load balancer, the VIP gets translated to internal IP (NSIP) and internal port number.
  • Internal IP address is the private internal IP address of the VM from the virtual network’s address space pool. This IP address cannot be reached from the external network. This IP address is by default dynamic unless you set it to static. Traffic from the internet is routed to this address according to the rules created on the network security group. The network security group and the NIC selectively send the right type of traffic to the right port on the NIC, which depends on the services configured on the VM.


In this document, PIP, VIP, and Instance Level PIP (ILPIP) mean the same thing and are used interchangeably.

The following figure shows how traffic flows from a client to a server through a Citrix ADC VPX instance provisioned in ARM.


How Citrix ADC VPX works on Azure

In an on-premises deployment, a Citrix ADC VPX instance requires at least three IP addresses:

  • Management IP address, called the Citrix ADC IP (NSIP) address
  • Subnet IP (SNIP) address for communicating with the server farm
  • Virtual server IP (VIP) address for accepting client requests

In an Azure deployment, only one IP address (a private (internal) address) is assigned to an instance during provisioning through DHCP.

To work around this limitation, you can deploy a Citrix ADC VPX instance in Azure with a single IP architecture. This way, the three IP functions of a Citrix ADC appliance are multiplexed onto one IP address. This single IP address uses different port numbers to function as the NSIP, SNIP, and VIP.

The following image illustrates how a single IP address is used to perform the functions of NSIP, SNIP, and VIP.


Traffic flow through port address translation

In an Azure deployment, when you provision the Citrix ADC VPX instance as a virtual machine (VM), Azure assigns a Public IP address and an internal IP address (non-routable) to the Citrix ADC virtual machine. Inbound and Outbound rules are defined on the network security group for the Citrix ADC instance, along with a public port and a private port for each rule defined. The Citrix ADC instance listens on the internal IP address and private port.

Any external request is received on the Citrix ADC VPX VM’s virtual NIC. The NIC is bound to the network security group, whose rules specify the internal IP address and private port to which the request’s destination (the Public IP address and public port) is translated. ARM performs port address translation (PAT) to map the Public IP address and port to the internal IP address and private port of the Citrix ADC virtual machine, and then forwards the traffic to the VM.

The following figure shows how Azure performs PAT to direct traffic to the Citrix ADC internal IP address and private port.


In this example, the Public IP address assigned to the VM is 140.x.x.x, and the internal IP address is 10.x.x.x. When the inbound and outbound rules are defined, public HTTP port 80 is defined as the port on which the client requests are received. A corresponding private port, 10080, is defined as the port on which the Citrix ADC VM listens. The client request is received on the Public IP address 140.x.x.x at port 80. Azure performs PAT to map this address and port to internal IP address 10.x.x.x on private port 10080 and forwards the client request.
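In ARM, this kind of public-to-private port mapping is typically expressed as an inbound NAT rule, for example on an Azure load balancer. The following is a minimal sketch of the 80-to-10080 mapping using the AzureRM PowerShell module; it assumes an existing frontend IP configuration object ($feIpConfig), and the resource names are hypothetical:

```powershell
# Map public (frontend) port 80 to private (backend) port 10080.
# $feIpConfig is assumed to come from New-AzureRmLoadBalancerFrontendIpConfig.
$natRule = New-AzureRmLoadBalancerInboundNatRuleConfig -Name "myHttpNatRule" `
 -FrontendIpConfiguration $feIpConfig `
 -Protocol Tcp `
 -FrontendPort 80 `
 -BackendPort 10080
```

The resulting rule object is then passed to New-AzureRmLoadBalancer (or added to an existing load balancer) to take effect.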

For information about port usage guidelines, see the Port Usage Guidelines section.

For information about network security groups and access control lists, see the Azure network security group documentation.

Traffic flow through network address translation

You can also request a Public IP (PIP) address for your Citrix ADC VM (instance level). If you use this direct PIP at the VM level, you don’t need to define inbound and outbound rules to intercept the network traffic. The incoming request from the Internet is received on the VM directly. Azure performs network address translation (NAT) and forwards the traffic to the internal IP address of the Citrix ADC instance.

The following figure shows how Azure performs network address translation to map the Citrix ADC internal IP address.


In this example, the Public IP address assigned to the VM is 140.x.x.x and the internal IP address is 10.x.x.x. When the inbound and outbound rules are defined, public HTTP port 80 is defined as the port on which the client requests are received. A corresponding private port, 10080, is defined as the port on which the Citrix ADC virtual machine listens. The client request is received on the Public IP address (140.x.x.x). Azure performs network address translation to map the PIP to the internal IP address 10.x.x.x on port 10080, and forwards the client request.


Citrix ADC VPX VMs in high availability are controlled by external or internal load balancers. These load balancers have inbound rules defined on them to control the load balancing traffic. First, the external traffic is intercepted by these load balancers. Then the traffic is diverted according to the load balancing rules configured. This includes back end pools, NAT rules, and health probes defined on the load balancers.
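As an illustrative sketch (not a prescriptive HA configuration), such a load balancer with a backend pool, health probe, and load balancing rule might be created with the AzureRM PowerShell module as follows; all names, ports, and the public IP object ($myPublicIp) are assumptions:

```powershell
# Frontend IP, backend pool, health probe, and a load balancing rule
$fe = New-AzureRmLoadBalancerFrontendIpConfig -Name "myFrontEnd" -PublicIpAddress $myPublicIp
$bePool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "myBackEndPool"
$probe = New-AzureRmLoadBalancerProbeConfig -Name "myHealthProbe" -Protocol Tcp `
 -Port 9000 -IntervalInSeconds 15 -ProbeCount 2
$rule = New-AzureRmLoadBalancerRuleConfig -Name "myHttpsRule" -Protocol Tcp `
 -FrontendPort 443 -BackendPort 443 `
 -FrontendIpConfiguration $fe -BackendAddressPool $bePool -Probe $probe

# Create the load balancer from the pieces above
New-AzureRmLoadBalancer -ResourceGroupName "myResourceGroup" -Name "myLoadBalancer" `
 -Location "EastUs" -FrontendIpConfiguration $fe -BackendAddressPool $bePool `
 -Probe $probe -LoadBalancingRule $rule
```

The VPX instances' NICs are then added to the backend pool so that the probe and rules govern traffic distribution between the HA pair.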

Assign multiple IP addresses

An Azure Virtual Machine (VM) has one or more network interfaces (NIC) attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it. Assigning multiple IP addresses to a VM enables the following capabilities:

  • Hosting multiple websites or services with different IP addresses and SSL certificates on a single server.
  • Serving as a network virtual appliance, such as a firewall or load balancer.
  • Adding any of the private IP addresses for any of the NICs to an Azure Load Balancer back-end pool. In the past, only the primary IP address for the primary NIC could be added to a back-end pool. To learn more about how to load balance multiple IP configurations, read the Load balancing multiple IP configurations article.


In the following example scenario, a VM with a single NIC is created and connected to a virtual network. The VM requires three different private IP addresses and two public IP addresses.

The IP addresses are assigned to the following IP configurations:

  • IPConfig-1: Assigns a dynamic private IP address (default) and a static public IP address.
  • IPConfig-2: Assigns a static private IP address and a static public IP address.
  • IPConfig-3: Assigns a dynamic private IP address and no public IP address.


Every NIC attached to a VM has one or more IP configurations associated to it. Each configuration is assigned one static or dynamic private IP address. Each configuration may also have one public IP address resource associated to it. A public IP address resource has either a dynamic or static public IP address assigned to it. To learn more about IP addresses in Azure, read the IP addresses in Azure article. You can assign up to 250 private IP addresses to each NIC. While you can assign multiple public IP addresses to each NIC, there are limits to how many public IP addresses that can be used in an Azure subscription. See the Azure limits article for details.
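The IP configurations and NIC described above could be sketched with the AzureRM PowerShell module as follows. This is a minimal sketch, assuming the resource group, subnet object ($subnet), static public IP resources ($staticPip1, $staticPip2), and all names are hypothetical stand-ins for your own resources:

```powershell
# IPConfig-1: dynamic private IP (default) plus a static public IP resource
$ipConfig1 = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig-1" -Subnet $subnet `
 -PublicIpAddress $staticPip1 -Primary

# IPConfig-2: static private IP (an example address) plus a static public IP resource
$ipConfig2 = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig-2" -Subnet $subnet `
 -PrivateIpAddress "192.168.1.5" -PublicIpAddress $staticPip2

# IPConfig-3: dynamic private IP, no public IP
$ipConfig3 = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig-3" -Subnet $subnet

# Create the NIC carrying all three IP configurations
New-AzureRmNetworkInterface -Name "myNic" -ResourceGroupName "myResourceGroup" `
 -Location "EastUs" -IpConfiguration $ipConfig1,$ipConfig2,$ipConfig3
```

Specifying -PrivateIpAddress makes the allocation static; omitting it leaves the allocation dynamic.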


Note: Multiple IP addresses cannot be assigned to resources created through the classic deployment model.

Create a VM with multiple NICs

You can create virtual machines (VMs) in Azure and attach multiple network interfaces (NICs) to each of your VMs. Multi NIC is a requirement for many network virtual appliances, such as application delivery and WAN optimization solutions. Multi NIC also provides more network traffic management functionality. Examples include isolation of traffic between a front end NIC and back end NICs, and separation of data plane traffic from management plane traffic.


This document walks through a deployment that uses multiple NICs in VMs in a specific scenario. In this scenario, you have a two-tiered IaaS workload hosted in Azure. Each tier is deployed in its own subnet in a virtual network (VNet). The front end tier is composed of several web servers, grouped in a load balancer set for high availability. The back end tier is composed of several database servers. These database servers are deployed with two NICs each: one for database access, the other for management. The scenario also includes network security groups to control what traffic is allowed to each subnet and NIC in the deployment. The following figure shows the basic architecture of this scenario.


Port usage guidelines

You can configure additional inbound and outbound rules in the network security group when creating the Citrix ADC VM or after the VM is provisioned. Each inbound and outbound rule is associated with a public port and a private port.

Before configuring network security group rules, note the following guidelines regarding the port numbers you can use:

  1. The following ports are reserved by the Citrix ADC virtual machine. You cannot define these as private ports when using the Public IP address for requests from the Internet.

    Ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000.

    However, if you want internet-facing services such as the VIP to use a standard port (for example, port 443), you have to create port mapping by using the network security group. The standard port is then mapped to a different port that is configured on the Citrix ADC for this VIP service.

    For example, a VIP service might be running on port 8443 on the Citrix ADC instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443.

  2. Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG.

  3. The Azure load balancer does not work with an instance-level Public IP address. High availability does not work for traffic that uses a PIP associated with the VPX instance instead of the PIP configured on the Azure load balancer. For more information about configuring Citrix ADC VPX HA in ARM, see Configuring Citrix ADC VPX in High Availability Mode in Azure.

  4. In a Citrix ADC Gateway deployment, you need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured.

Note: You must configure the VIP address by using the NSIP address and a nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url:port).

Note: In ARM, a Citrix ADC VPX VM is associated with two IP addresses: a Public IP address and an internal IP address. While the external traffic connects to the PIP, the internal IP address, or NSIP, is non-routable. To configure a VIP in VPX, use this combination of internal IP address and port number. For example, if the VPN virtual server is running on port 8443, the call-back URL is the VPN virtual server FQDN followed by port 8443 (url:8443).

Configuration steps

The following steps outline how to configure the necessary resource groups, security groups, virtual networks, and translation configurations required to have a functional ADC.

Provisioning the resource group

On the Microsoft Azure Portal page, log into the Azure Resource Manager portal with your user name and password. (In the ARM portal, clicking an option in one pane opens a new pane to the right. Navigate from one pane to another to configure your device.)

Create a resource group to serve as a container for all your resources. Use the resource group to deploy, manage, and monitor your resources as a group.

Create a network security group

Create a network security group to assign inbound and outbound rules that control the incoming and outgoing traffic within the virtual network. Network security groups allow you to define security rules for a single virtual machine and for a virtual network subnet.
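As a sketch, the same network security group could be created with the AzureRM PowerShell module; the rule name, priority, and port below are illustrative only:

```powershell
# Example rule allowing inbound SSH from the internet
$sshRule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-ssh" -Description "Allow SSH" `
 -Access Allow -Protocol Tcp -Direction Inbound -Priority 100 `
 -SourceAddressPrefix Internet -SourcePortRange * `
 -DestinationAddressPrefix * -DestinationPortRange 22

# Create the network security group containing the rule
New-AzureRmNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Location "EastUs" `
 -Name "myNsg" -SecurityRules $sshRule
```

Additional rules can be passed in the same -SecurityRules list, or added later with Add-AzureRmNetworkSecurityRuleConfig.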

Configuring a virtual network and subnets

Virtual networks in ARM provide a layer of security and isolation to your services. VMs and services that are part of the same virtual network can access each other.

For example, create a virtual network with a reserved CIDR block and two subnets, each with its own CIDR block.

  1. In the Create virtual network pane, enter the following values, and then click Create.

    • Name of the virtual network
    • Address space – type the reserved IP address block for the virtual network
    • Subnet – type the name of the first subnet (you create the second subnet later in this step)
    • Subnet address range – type the reserved IP address block of the subnet
    • Resource group – select the resource group created earlier from the drop-down list

Configuring the second subnet

  1. Select the newly created virtual network from the All Resources pane and in the Settings pane, click Subnets.

  2. Click +Subnet and create the second subnet by entering the following details.

    • Name of the second subnet
    • Address range – type the Reserved IP address block of the second subnet
    • Network security group – select the network security group from the drop-down list

Configuring a storage account

The ARM IaaS infrastructure storage includes all services where you can store data in the form of blobs, tables, queues, and files. You can also create applications using these forms of storage data in ARM.

Create a storage account to store all your data.

  1. Click +New > Storage > Storage account.

  2. In the Create Storage account pane, enter the following details:

    • Name of the account
    • Deployment mode – ensure that you select Resource Manager
    • Account kind – select General purpose from the drop-down list
    • Replication – select Locally redundant storage from the drop-down list
    • Resource group – select the newly created resource group from the drop-down list
  3. Click Create.
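The same storage account could alternatively be created with the AzureRM PowerShell module. In this sketch, the account name is a hypothetical placeholder; Standard_LRS corresponds to the Locally redundant storage option, and the Storage kind corresponds to General purpose:

```powershell
# Create a general-purpose, locally redundant storage account
New-AzureRmStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageacct" `
 -Location "EastUs" -SkuName "Standard_LRS" -Kind "Storage"
```

Storage account names must be globally unique and lowercase alphanumeric, so substitute your own name.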

Configuring an availability set

An availability set guarantees that at least one VM is kept up and running during planned or unplanned maintenance. Two or more VMs in the same availability set are placed on different fault domains to provide redundant services.

  1. Click +New and search for availability set.

  2. Select Availability set entity from the list. Click Create.

  3. In the Create availability set pane, enter the following details:
    • Name of the set
    • Resource group – select the newly created resource group from the drop-down list
  4. Click Create.
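The equivalent availability set can also be created with the AzureRM PowerShell module; the names here are hypothetical:

```powershell
# Create an availability set in the resource group from the earlier step
New-AzureRmAvailabilitySet -ResourceGroupName "myResourceGroup" `
 -Name "myAvailabilitySet" -Location "EastUs"
```

VMs are then placed into the set at creation time by passing its Id to the VM configuration.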

Provisioning the Citrix ADC instance

Create an instance of Citrix ADC VPX in the virtual network. Obtain the Citrix ADC VPX image from the Azure Marketplace, and then use the Azure Resource Manager portal to create the instance.

Before you begin creating the Citrix ADC VPX instance, ensure that you have created a virtual network with required subnets in which the instance resides. You can create virtual networks during VM provisioning, but without the flexibility to create different subnets. For more information, see Create a Virtual Network Using the Azure Portal article.

Optional: Configure DNS server and VPN connectivity to allow a VM to access Internet resources.

Note: Citrix recommends that you create resource group, network security group, virtual network, and other entities before you provision the Citrix ADC VPX VM. This way the network information is available during provisioning.

  1. Click +New > Networking.
  2. Click See All and in the Networking pane, click Citrix ADC VPX Bring Your Own License.
  3. Click Create.

    Note: As a quick way to find any entity on ARM portal, you can also type the name of the entity in the Azure Marketplace search box and press <Enter>. Type Citrix ADC in the search box to find the Citrix ADC images.

  4. Select Citrix ADC 12.0 VPX Bring Your Own License.
  5. Enter your details.
  6. After the validation passes, purchase and deploy your Citrix ADC.
  7. Setting your IP addresses to Static is recommended.

Note: Ensure that you select the latest image. Your Citrix ADC image might have the release number in the name.

Create a VM with multiple IP addresses using PowerShell

The steps that follow explain how to create an example VM with multiple IP addresses, as described in the scenario. Change variable names and IP address types as required for your implementation.

The configuration steps covered are:

  1. Create a VM with multiple IP addresses

  2. Add IP addresses to a VM

  3. Add IP addresses to a VM operating system

  4. Validation (Windows)

  5. Validation (Linux)

Please refer to the following Microsoft Azure documentation: Assign multiple IP addresses to virtual machines using PowerShell.

Configure the Citrix ADC port translation

  1. Click Network Interfaces for your Citrix ADC VM.
  2. Click your network security group.
  3. Click Inbound security rules.
  4. In the security group, allow SSH and HTTP for inbound connections.
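If you prefer PowerShell, the same kind of inbound rule can be added to an existing network security group with the AzureRM module. This sketch uses hypothetical names and shows the HTTP rule; an SSH rule on port 22 follows the same pattern:

```powershell
# Fetch the existing network security group, add an inbound HTTP rule,
# and write the updated group back to Azure
$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "myNsg"
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "allow-http" -Access Allow -Protocol Tcp `
 -Direction Inbound -Priority 110 -SourceAddressPrefix Internet -SourcePortRange * `
 -DestinationAddressPrefix * -DestinationPortRange 80 | Set-AzureRmNetworkSecurityGroup
```

Note that the rule change only takes effect once Set-AzureRmNetworkSecurityGroup writes the modified group back.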

At this point, you can log into the Citrix ADC instance and configure the features and settings desired for your Azure environment.

Note: When logging into the Citrix ADC for the first time, the wizard may ask for a Subnet IP address. This is not required on Citrix ADC Azure instances as they only use a single IP address for all functions. Skip this step when prompted and continue to the default configuration login page.

Microsoft Azure resource manager portal

The infrastructure for your application is typically made up of many components – maybe a virtual machine, storage account, and virtual network, or a web app, database, database server, and third party services. You do not see these components as separate entities. Instead, you see them as related and interdependent parts of a single entity. You want to deploy, manage, and monitor them as a group. Azure Resource Manager enables you to work with the resources in your solution as a group. You can deploy, update, or delete all the resources for your solution in a single, coordinated operation. You use a template for deployment and that template can work for different environments such as testing, staging, and production. Resource Manager provides security, auditing, and tagging features to help you manage your resources after deployment.


If you are new to Azure Resource Manager, there are some terms you might not be familiar with:

  • resource - A manageable item that is available through Azure. Some common resources are a virtual machine, storage account, web app, database, and virtual network, but there are many more.
  • resource group - A container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. See Resource groups.
  • resource provider - A service that supplies the resources you can deploy and manage through Resource Manager. Each resource provider offers operations for working with the resources that are deployed. Some common resource providers are Microsoft.Compute, which supplies the virtual machine resource, Microsoft.Storage, which supplies the storage account resource, and Microsoft.Web, which supplies resources related to web apps. See Resource providers.
  • Resource Manager template - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group. It also defines the dependencies between the deployed resources. The template can be used to deploy the resources consistently and repeatedly. See Template deployment.
  • declarative syntax - Syntax that lets you state “Here is what I intend to create” without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure.

Reference: Azure Resource Manager overview

Citrix ADC Multi-NIC overview

Citrix ADC virtual instances running on the Azure platform are capable of attaching multiple virtual NICs to a standalone virtual Citrix ADC appliance.

This is a common scenario when distributed architectures are desired in Azure, such as Application Tiers and Database Tiers.

A second common Citrix ADC use case for multiple NICs is the desire to segregate network zones in the Azure environment. For example, internet-sourced traffic can terminate on one interface (DMZ or public), while the internal web and application services remain private.

In this two-arm networking scenario, the Citrix ADC virtual appliance requires a minimum of two virtual NICs: one for the public network and one for the private network.

Additionally, the multi-NIC configuration in Azure requires the use of multiple subnetworks to accommodate the segregated NICs. This component is configured using the VNET component of the Azure Portal.

Example: Multiple vNICs attached to an Azure VM


Note: Different VM sizes support a varying number of NICs, so size your VM accordingly.

Citrix ADC Multi-NIC Configuration

The following configuration steps describe how to create the necessary subnets and virtual networks for a multi-NIC configuration on an ADC.

Create the resource group


New-AzureRmResourceGroup -Name "myResourceGroup" -Location "EastUS"


Create the VNET and subnets

A common scenario is for a virtual network to have two or more subnets. One subnet may be for front-end traffic, the other for back-end traffic. To connect to both subnets, you then use multiple NICs on your VM.

Define two virtual network subnets using New-AzureRmVirtualNetworkSubnetConfig. The following example defines the subnets mySubnetFrontEnd and mySubnetBackEnd, with example address prefixes that you can replace with your own:


$mySubnetFrontEnd = New-AzureRmVirtualNetworkSubnetConfig -Name "mySubnetFrontEnd" `
 -AddressPrefix "192.168.1.0/24"
$mySubnetBackEnd = New-AzureRmVirtualNetworkSubnetConfig -Name "mySubnetBackEnd" `
 -AddressPrefix "192.168.2.0/24"

Create your virtual network and subnets with New-AzureRmVirtualNetwork. The following example creates a virtual network named myVnet:


$myVnet = New-AzureRmVirtualNetwork -ResourceGroupName "myResourceGroup" `
 -Location "EastUs" `
 -Name "myVnet" `
 -AddressPrefix "192.168.0.0/16" `
 -Subnet $mySubnetFrontEnd,$mySubnetBackEnd


Create the network security group

Typically you also create a network security group to filter network traffic to the VM and a load balancer to distribute traffic across multiple VMs.

Create and configure vNICs

Create two NICs using New-AzureRmNetworkInterface. Attach one NIC to the front-end subnet and one NIC to the back-end subnet. The following example creates NICs named myNic1 and myNic2:


$frontEnd = $myVnet.Subnets|?{$_.Name -eq 'mySubnetFrontEnd'}
$myNic1 = New-AzureRmNetworkInterface -ResourceGroupName "myResourceGroup" `
 -Name "myNic1" `
 -Location "EastUs" `
 -SubnetId $frontEnd.Id
$backEnd = $myVnet.Subnets|?{$_.Name -eq 'mySubnetBackEnd'}
$myNic2 = New-AzureRmNetworkInterface -ResourceGroupName "myResourceGroup" `
 -Name "myNic2" `
 -Location "EastUs" `
 -SubnetId $backEnd.Id


Create VM and attach vNICs

Now start to build your VM configuration. Each VM size has a limit for the total number of NICs that you can add to a VM. For more information, see Windows VM sizes.

Attach the two NICs that you previously created with Add-AzureRmVMNetworkInterface:


$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $myNic1.Id -Primary
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $myNic2.Id

Finally, create your VM with New-AzureRmVM:


New-AzureRmVM -VM $vmConfig -ResourceGroupName "myResourceGroup" -Location "EastUs"


Citrix ADC multiple IP addresses overview

In this use case, a standalone Citrix ADC VPX appliance can be configured with one or more vNICs connected to a virtual network (VNET). Each vNIC is associated with three IP configurations (ipconfigs), each serving a different purpose.

Example: Multiple VIPs attached to a vNIC


When you assign multiple IP configurations to a NIC, exactly one configuration must be designated as primary (with the -Primary switch).

$MyNIC.IpConfigurations | Format-Table Name, PrivateIPAddress, PublicIPAddress, Primary


Public IP addresses have a nominal fee. To learn more about IP address pricing, read the IP address pricing page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the Azure limits article.

Add a private IP address

To add a private IP address to a NIC, you must create an IP configuration. The following command creates a configuration with a static IP address. When specifying a static IP address, it must be an unused address in the subnet. We recommend that you first test the address to ensure it is available by running the Test-AzureRmPrivateIPAddressAvailability -IPAddress <address> -VirtualNetwork $myVnet command. If the IP address is available, the output returns True. If the address is not available, the output returns False and includes a list of addresses that are available.

Add-AzureRmNetworkInterfaceIpConfig -Name IPConfig-4 -NetworkInterface `
$MyNIC -Subnet $Subnet -PrivateIpAddress
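
As a sketch of how the availability test and the configuration command fit together, assuming a hypothetical address of 10.0.0.10 in the subnet:

```powershell
# 10.0.0.10 is an illustrative address; substitute an unused one from your subnet
Test-AzureRmPrivateIPAddressAvailability -IPAddress "10.0.0.10" -VirtualNetwork $myVnet
Add-AzureRmNetworkInterfaceIpConfig -Name "IPConfig-4" -NetworkInterface $MyNIC `
 -Subnet $Subnet -PrivateIpAddress "10.0.0.10"
# Commit the change to Azure
Set-AzureRmNetworkInterface -NetworkInterface $MyNIC
```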


Add a public IP address

A public IP address is added by associating a public IP address resource to either a new IP configuration or an existing IP configuration. Complete the steps in one of the sections that follow, as you require.

$MyPublicIp3 = New-AzureRmPublicIpAddress `
 -Name "MyPublicIp3" `
 -ResourceGroupName $RgName `
 -Location $Location -AllocationMethod Static


Associate the public IP address resource to an existing VM

A public IP address resource can only be associated with an IP configuration that doesn't already have one associated.

Set-AzureRmNetworkInterfaceIpConfig `
 -Name IpConfig-3 `
 -NetworkInterface $mynic `
 -Subnet $Subnet `
 -PublicIpAddress $myPublicIp3
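
Note that Set-AzureRmNetworkInterfaceIpConfig updates only the local PowerShell object. A sketch of writing the change back to Azure:

```powershell
# Commit the updated IP configuration to the NIC in Azure
Set-AzureRmNetworkInterface -NetworkInterface $mynic
```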


Citrix ADC HA overview

You can deploy a pair of Citrix ADC virtual appliances with multiple NICs in an active-passive high availability (HA) setup on Azure. Each NIC can contain multiple IP addresses. An active-passive deployment requires:

  • An HA Independent Network Configuration (INC) configuration
  • The Azure Load Balancer (ALB) in Direct Server Return (DSR) mode

All traffic goes through the primary node. The secondary node remains in standby mode until the primary node fails.

In an active-passive deployment, the ALB floating public IP (PIP) addresses are added as the VIP addresses on each Citrix ADC node. In the HA-INC configuration, the VIP addresses are floating and the SNIP addresses are instance specific. The ALB monitors each Citrix ADC instance by sending a health probe every 5 seconds, and redirects traffic only to the instance that responds to health probes at regular intervals. So in an HA setup, the primary node responds to health probes and the secondary node does not. If the primary instance misses two consecutive health probes, the ALB stops redirecting traffic to that instance. On failover, the new primary starts responding to health probes and the ALB redirects traffic to it. The standard Citrix ADC HA failover time is three seconds; the total failover time for traffic switching can be a maximum of 13 seconds.
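
The probe behavior described above can be sketched in AzureRM PowerShell; the probe name and port are illustrative:

```powershell
# Illustrative probe: TCP check every 5 seconds, instance marked down
# after 2 consecutive failures, matching the behavior described above
$probe = New-AzureRmLoadBalancerProbeConfig -Name "adcHealthProbe" `
 -Protocol Tcp -Port 9000 -IntervalInSeconds 5 -ProbeCount 2
```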

You can deploy a Citrix ADC pair in active-passive HA mode in two ways by using:

  • Citrix ADC Standard HA template: use this option to configure an HA pair with the default option of three subnets and six NICs.
  • Windows PowerShell commands: use this option to configure an HA pair according to your subnet and NIC requirements.

Citrix ADC HA configuration - PowerShell

See Configuring an HA Setup with Multiple IP Addresses and NICs by Using PowerShell Commands for the Azure PowerShell commands.

Citrix ADC HA configuration - Azure portal

You can quickly and efficiently deploy a pair of Citrix ADC instances in HA-INC mode by using the standard template. The template creates two nodes, with three subnets and six NICs. The subnets are for management, client, and server-side traffic, and each subnet has two NICs, one for each VPX instance.

The Citrix ADC 12.0 HA Pair template is available at the Azure Marketplace.

To use the template:

  1. From Azure Marketplace, select and initiate the Citrix solution template. The template appears.
  2. Ensure deployment type is Resource Manager and select Create.
  3. The Basics page appears. Create a Resource Group and select OK.


  4. The General Settings page appears. Type the details and select OK.


  5. The Network Setting page appears. Check the vnet and subnet configurations, edit the required settings, and select OK.


  6. The Summary page appears. Review the configuration and edit accordingly. Select OK to confirm.

  7. The Buy page appears. Select Purchase to complete the deployment.

It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, and health probes. The HA pair appears as VPX0 and VPX1.

If further modifications are required for your HA setup, such as creating more security rules and ports, you can do that from the Azure portal.


Next, you need to configure the load-balancing vserver with the ALB public IP (PIP) address, on each node. To find the ALB PIP, select ALB > Frontend IP configuration.


Citrix ADC GSLB and domain based services back-end autoscale with cloud load balancer

GSLB and DBS Overview

Citrix ADC GSLB supports using DBS (Domain Based Services) for cloud load balancers. This allows for the autodiscovery of dynamic cloud services using a cloud load balancer solution. This configuration allows the Citrix ADC to implement Global Server Load Balancing Domain-Name Based Services (GSLB DBS) in an active-active environment. DBS allows the scaling of back-end resources in Amazon Web Services (AWS) and Microsoft Azure environments through DNS discovery. This section covers integrations between Citrix ADC and the AWS and Azure autoscaling environments. The final section of the document details the ability to set up an HA pair of Citrix ADCs that spans two different Availability Zones (AZs) within an AWS or Azure region.


The prerequisites for the Citrix ADC GSLB Service Groups include a functioning Amazon Web Services / Microsoft Azure environment with the knowledge and ability to configure security groups, Linux web servers, Citrix ADC instances, Elastic IPs, and Elastic Load Balancers.

GSLB DBS Service integration requires Citrix ADC version 12.0.57 for AWS ELB and Microsoft Azure ALB load balancer instances.

Citrix ADC GSLB Service Group Feature Enhancements

GSLB Service Group entity: Citrix ADC version 12.0.57

A GSLB Service Group entity is introduced that supports autoscale using DBS dynamic discovery.

The DBS (domain based service) feature components are bound to the GSLB service group:


 > add server sydney_server
 > add gslb serviceGroup sydney_sg HTTP -autoscale DNS -siteName sydney
 > bind gslb serviceGroup sydney_sg sydney_server 80

Domain-Name based services – Azure ALB

GSLB DBS utilizes the FQDN of your Azure Load Balancer to dynamically update the GSLB Service Groups to include the back-end servers that are created and deleted within Azure. To configure this feature, you point the Citrix ADC to your Azure Load Balancer so traffic can be dynamically routed to different servers in Azure, without having to manually update the Citrix ADC every time an instance is created or deleted. The Citrix ADC DBS feature for GSLB Service Groups uses DNS-aware service discovery to determine the member service resources of the DBS namespace identified in the autoscale group.
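
This flow can be sketched in the Citrix ADC CLI; the server name, service group name, site name, and ALB FQDN below are illustrative:

```
> add server alb_server myalb.eastus.cloudapp.azure.com
> add gslb serviceGroup azure_sg HTTP -autoscale DNS -siteName azure_east
> bind gslb serviceGroup azure_sg alb_server 80
```

Because the server is defined by its FQDN, the ADC resolves it through DNS and keeps the service group membership in sync with the instances behind the ALB.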

Diagram: Citrix ADC GSLB DBS autoscale components with cloud load balancers


Configuring Azure components

  1. Log in to your Azure Portal and create a new virtual machine from a Citrix ADC template


  2. Create an Azure Load Balancer


  3. Add the created Citrix ADC Backend Pools


  4. Create a Health Probe for port 80.

    Create a Load Balancing Rule utilizing the Frontend IP created from the Load Balancer.

    1. Protocol: TCP
    2. Backend Port: 80
    3. Backend pool: Citrix ADC created in step 1
    4. Health Probe: Created in step 4
    5. Session Persistence: None


Configure Citrix ADC GSLB domain based service

The following configurations summarize what is required to enable domain based services for autoscaling ADCs in a GSLB enabled environment.

Traffic management configurations

Note: You must configure the Citrix ADC with either a nameserver or a DNS virtual server through which the ELB/ALB domains are resolved for the DBS Service Groups.
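
For example, a nameserver can be added from the ADC CLI. The address below is Azure's well-known virtual public DNS IP, shown here as one possible resolver:

```
> add dns nameServer 168.63.129.16
```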

  1. Navigate to Traffic Management -> Load Balancing -> Servers


  2. Click Add to create a server, provide a name and FQDN corresponding to the A record (domain name) in Azure for the Azure Load Balancer (ALB)


  3. Repeat step 2 to add the second ALB from the second resource in Azure.

GSLB configurations

  1. Click the Add button to configure a GSLB Site
  2. Name the Site.

    Type is configured as Remote or Local based on which Citrix ADC you are configuring the site on. The Site IP Address is the IP address for the GSLB site; the GSLB site uses this IP address to communicate with the other GSLB sites. The Public IP address is required when using a cloud service where a particular IP is hosted on an external firewall or NAT device. Configure the site as a Parent Site. Ensure the Trigger Monitors setting is ALWAYS, and check the three boxes at the bottom for Metric Exchange, Network Metric Exchange, and Persistence Session Entry Exchange.

    Note: The recommendation is to set the Trigger Monitors setting to MEPDOWN; refer to Configuring a GSLB service group.



  3. Click Create, then repeat the previous steps to configure the GSLB site for the other resource location in Azure (this can be configured on the same Citrix ADC)

  4. Navigate to Traffic Management -> GSLB -> Service Groups


    Click Add to add a new service group. Name the Service Group, use HTTP protocol, and then under Site Name choose the respective site that was created in the previous steps. Be sure to configure autoscale Mode as DNS and check off the boxes for State and Health Monitoring. Click OK to create the Service Group


  5. Click Service Group Members and select Server Based. Select the respective Elastic Load Balancing Server that was configured in the start of the run guide. Configure the traffic to go over port 80. Click Create.


  6. The Service Group Member Binding should populate with the two instances it is receiving from the Elastic Load Balancer.


  7. Repeat steps 5 & 6 to configure the Service Group for the second resource location in Azure. (This can be done from the same Citrix ADC GUI).

  8. The final step is to set up a GSLB Virtual Server. Navigate to Traffic Management -> GSLB -> Virtual Servers.

  9. Click Add to create the virtual server. Name the server, DNS Record Type is set as A, Service Type is set as HTTP, and check the boxes for Enable after Creating and AppFlow Logging. Click OK to create the GSLB Virtual Server.


  10. Once the GSLB Virtual Server is created, click No GSLB Virtual Server ServiceGroup Binding.


  11. Under ServiceGroup Binding use Select Service Group Name to select and add the Service Groups that were created in the previous steps.


  12. Next configure the GSLB Virtual Server Domain Binding by clicking No GSLB Virtual Server Domain Binding. Configure the FQDN and bind it; the rest of the settings can be left as the defaults.


  13. Configure the ADNS Service by clicking No Service. Add a Service Name, click New Server, and enter the IP Address of the ADNS server. Additionally, if your ADNS is already configured, you can select Existing Server and then choose your ADNS from the drop-down menu. Make sure the Protocol is ADNS and the traffic is over Port 53.


  14. Configure the Method as LEASTCONNECTION and Backup Method as ROUNDROBIN.

  15. Click Done and verify that your GSLB Virtual Server is shown as Up.
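
The GSLB configuration above can be summarized as a CLI sketch; the site, virtual server, service group, and service names, the domain, and the IP addresses are all illustrative:

```
> add gslb site site_azure 10.0.1.10 -publicIP 52.0.0.10
> add gslb vserver gslb_vs HTTP -appflowLog ENABLED
> set gslb vserver gslb_vs -lbMethod LEASTCONNECTION -backupLBMethod ROUNDROBIN
> bind gslb vserver gslb_vs -serviceGroupName azure_sg
> bind gslb vserver gslb_vs -domainName app.example.com -TTL 5
> add service adns_svc 10.0.1.10 ADNS 53
```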


Front End autoscale in Azure with VPX using the autoscale groups in Citrix Application Delivery Management

Autoscaling is a cloud computing method that automatically adds or removes resources depending upon actual usage. Autoscaling is useful whenever your site or application needs on-demand resource allocation to satisfy the fluctuating number of client requests or processing jobs.

The demand for web applications or services can vary significantly. Maintaining the correct number of Citrix ADC instances for different traffic needs is important. You can increase or decrease the network resources on Microsoft Azure depending on the demand, providing cost optimization without compromising performance.

Citrix Application Delivery Management (ADM) autoscaling maintains the right number of Citrix ADC instances for fluctuating resource consumption. Citrix ADM monitors traffic flow and resource consumption, and dynamically decides to scale the Citrix ADC instances out or in.

Create an autoscale group

  • Inside the ADC management console utility, navigate to Networks > autoscale.
  • Select Add.


Choose the cloud

In the Create autoscale Group tab, click Microsoft Azure, and click Add.


Basic parameters

Configure the basic parameters for the autoscale Group:

  1. Name: autoscale_demo
  2. Site: autoscale_demo_101
  3. Agent:
  4. Cloud Access Profile: autoscale_demo_cap
  5. Device: profile_for_azure
  6. Traffic Distribution Mode: DNS using Azure-DNS


Do not select Next yet. Add Availability Zones 1 and 2 by clicking the plus sign and set the tags to Testbed.


Select Next.

Autoscale parameters

Configure the autoscale parameters for the autoscale Group:

  1. CPU Usage: 10–30
  2. Memory Usage: 10–30
  3. Throughput Usage: 10–30
  4. Minimum Instances: 4
  5. Maximum Instances: 6
  6. Watch Time: 2
  7. Cooldown Period: 1
  8. Time to wait during Deprovision: 1
  9. DNS Time To Live: 10


Click Next.

Cloud parameters

Configure the cloud parameters for the autoscale group:

  1. Resource Group: LakshProv
  2. Product / License: Citrix ADC VPX Enterprise Edition - 1000 Mbps
  3. Azure VM Size: vCPUs: 4 Memory(GB): 14 Standard_DS3_V2
  4. Cloud Access Profile for ADC: autoscale_demo_cap
  5. Image: default
  6. Management: mgmt_demo_sq
  7. Client: client_demo_sq
  8. Server: server_demo_sq
  9. Management Subnet: mgmt_subnet_demo
  10. Client Subnet: client_subnet_demo
  11. Server Subnet: server_subnet_demo


Select Next.

A dialog box appears. Ensure the autoscale group gets created successfully.

Autoscale group successful

Configure the autoscale group

  1. Inside the ADC management console utility, navigate to Networks > Autoscale.
  2. Select the autoscale group that was created and ensure that it is enabled.
  3. Click Configure.
  4. On the Choose StyleBook page select HTTP/SSL LoadBalancing StyleBook.

Configuration details

Enter the configuration details to be created with the StyleBook:

  1. Application Name: demo_app
  2. Name for the domain: demo_app
  3. Zone of the domain:
  4. Load Balanced App Virtual Port: 80
  5. Load Balanced App Protocol: HTTP


Check Backend Server Configuration.

  1. Autoscale Type: None

Check Backend Configuration for autoscale NONE


Click the plus sign next to Server IPs and Ports.

  1. Application Server IP Address:
  2. Application Server Port: 80
  3. Weight: 1

Click Create.


Repeat the previous step:

  1. Application Server IP Address:
  2. Application Server Port: 80
  3. Weight: 1

Click Create.


On the main StyleBook configuration page, click Create.

A dialog box appears. Ensure the StyleBook configuration gets created successfully.


Additional references

Deploy a Citrix ADC VPX instance on Microsoft Azure

Microsoft Azure Documentation

Assign multiple IP addresses to virtual machines using PowerShell

Create a VM with multiple NICs using PowerShell

autoscale backend-service VMs in Azure Public Cloud w/ Citrix ADC VPX

Citrix product documentation

Configure multiple IP addresses for a Citrix ADC VPX standalone instance

Configure multiple IP addresses for a Citrix ADC VPX instance in standalone mode by using PowerShell commands

Configure multiple Azure VIPs for a standalone VPX instance