Product Documentation

Citrix ADC and Microsoft Azure Validated Reference Design

Overview of Citrix ADC VPX

Citrix ADC is an all-in-one application delivery controller that makes applications run up to five times better, reduces application ownership costs, optimizes the user experience, and ensures that applications are always available by using:

  • Advanced L4-7 load balancing and traffic management

  • Proven application acceleration such as HTTP compression and caching

  • An integrated application firewall for application security

  • Server offloading to significantly reduce costs and consolidate servers

As an undisputed leader of service and application delivery, Citrix ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, Citrix ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. Citrix ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.

Overview of Citrix ADC in Microsoft Azure

The Citrix ADC VPX virtual appliance is available as an image in the Microsoft Azure Marketplace. Citrix ADC VPX on Microsoft Azure Resource Manager (ARM) enables customers to leverage Azure cloud computing capabilities and use Citrix ADC load balancing and traffic management features for their business needs. You can deploy Citrix ADC VPX instances on ARM either as standalone instances or as high availability pairs in active-active or active-standby modes.

Limitations and usage guidelines

  • The Azure architecture does not accommodate support for the following features:
    • Clustering
    • IPv6
    • Gratuitous ARP (GARP)
    • L2 Mode
    • Tagged VLAN
    • Dynamic Routing
    • Virtual MAC (vMAC)
    • USIP
    • CloudBridge Connector
  • The Intranet IP (IIP) feature is not supported, because Azure does not provide the pool of IP addresses required for this feature. IIP is frequently used in VoIP, SIP, or server-initiated-connection deployments.

  • If you expect that you might have to shut down and temporarily deallocate the Citrix ADC VPX virtual machine at any time, assign a static Internal IP address while creating the virtual machine. If you do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible.

  • In an Azure deployment, only the following Citrix ADC VPX models are supported: VPX 10, VPX 200, and VPX 1000. These virtual appliances can be deployed on any instance type that has two or more cores and more than 2 GB memory. See the Citrix ADC VPX data sheet.

  • The deployment ID that is generated by Azure during virtual machine provisioning is not visible to the user in ARM. You cannot use the deployment ID to deploy Citrix ADC VPX appliance on ARM.

Use cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, Citrix ADC on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premise Citrix ADC deployments. The net result is that Citrix ADC on Azure enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud datacenters.

Production delivery

Enterprises actively embracing Azure as an infrastructure-as-a-service (IaaS) offering for production delivery of applications can now front-end those applications with the same cloud networking platform used by the largest websites and cloud service providers in the world. Extensive offload, acceleration, and security capabilities can be leveraged to enhance performance and reduce costs.

Hybrid Cloud designs

With Citrix ADC on Azure, hybrid clouds that span enterprise datacenters and extend into Azure can benefit from the same Citrix ADC cloud networking platform, significantly easing the transition of applications and workloads back and forth between a private datacenter and Azure. The full suite of Citrix ADC capabilities, ranging from intelligent database load balancing with DataStream to unprecedented application visibility with AppFlow® and real-time monitoring and response with Action Analytics, can be leveraged with Citrix ADC on Azure.

Business continuity

Enterprises looking to use Azure as part of their disaster recovery and business continuity plans can rely upon Citrix ADC global server load balancing running both on-premise and within Azure to continuously monitor availability and performance of both enterprise datacenters and Azure environments, ensuring users are always sent to the optimal location.

Development and testing

Enterprises running production delivery on-premise but using Azure for development and testing can now include Citrix ADC within their Azure test environments, speeding time-to-production due to better mimicry of the production implementation within their test environments. In each use case, network architects can also leverage Citrix CloudBridge—configured either as a standalone instance or as a feature of a Citrix ADC platinum edition instance—to secure and optimize the connection between the enterprise datacenters and the Azure Cloud, thereby speeding data transfer/synchronization and minimizing network costs.

Network architecture

In ARM, a Citrix ADC VPX virtual machine (VM) resides in a virtual network. By default, a Citrix ADC VPX provisioned in Azure operates in the single-IP mode described in the following section.

A virtual Network Interface Card (NIC) is created on each Citrix ADC VM. The network security group (NSG) configured in the virtual network is bound to the NIC, and together they control the traffic flowing into the VM and out of the VM.

The NSG forwards the requests to the Citrix ADC VPX instance, and the VPX instance sends them to the servers. The responses from the servers follow the same path in reverse. You can configure the NSG to control a single VPX VM, or apply it to subnets and virtual networks to control traffic in multi-VPX deployments.

The NIC contains network configuration details such as the virtual network, subnets, internal IP address, and Public IP address.

In ARM, it is good to know the following IP addresses, which are used to access the VMs:

  • Public IP (PIP) address is the Internet-facing IP address configured directly on the virtual NIC of the Citrix ADC VM. This allows you to directly access a VM from the external network without the need to configure inbound and outbound rules on the NSG.
  • Citrix ADC IP (NSIP) address is an internal IP address configured on the VM. It is non-routable.
  • Virtual IP address (VIP) is configured by using the NSIP and a port number. Clients access Citrix ADC services through the PIP address, and when the request reaches the NIC of the Citrix ADC VPX VM or the Azure load balancer, the VIP gets translated to internal IP (NSIP) and internal port number.
  • Internal IP address is the private internal IP address of the VM from the virtual network’s address space pool. This IP address cannot be reached from the external network. This IP address is by default dynamic unless you set it to static. Traffic from the internet is routed to this address according to the rules created on the NSG. The NSG works with the NIC to selectively send the right type of traffic to the right port on the NIC, which depends on the services configured on the VM.

Note:

In this document, PIP, VIP, and Instance Level PIP (ILPIP) mean the same thing and are used interchangeably.

The following figure shows how traffic flows from a client to a server through a Citrix ADC VPX instance provisioned in ARM.

image-citrix-adc-and-microsoft-azure-01

How Citrix ADC VPX works on Azure

In an on-premise deployment, a Citrix ADC VPX instance requires at least three IP addresses:

  • Management IP address, called the Citrix ADC IP (NSIP) address
  • Subnet IP (SNIP) address for communicating with the server farm
  • Virtual server IP (VIP) address for accepting client requests

In an Azure deployment, only one IP address (a private (internal) address) is assigned to an instance during provisioning through DHCP.

To work around this limitation, you can deploy a Citrix ADC VPX instance in Azure with a single-IP architecture, where the three IP functions of a Citrix ADC appliance are multiplexed onto one IP address. This single IP address uses different port numbers to function as the NSIP, SNIP, and VIP.
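
For illustration, a minimal Citrix ADC CLI sketch of single-IP mode, assuming 10.0.0.4 is the instance’s single internal IP address, 10.0.1.10 is a hypothetical back-end server, and 10080 is a free, non-reserved private port:

 > add service web_svc 10.0.1.10 HTTP 80
 > add lb vserver azure_vip HTTP 10.0.0.4 10080
 > bind lb vserver azure_vip web_svc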

The following image illustrates how a single IP address is used to perform the functions of NSIP, SNIP, and VIP.

image-citrix-adc-and-microsoft-azure-02

Traffic flow through port address translation

In an Azure deployment, when you provision the Citrix ADC VPX instance as a virtual machine (VM), Azure assigns a Public IP address and an internal IP address (non-routable) to the Citrix ADC virtual machine. Inbound and Outbound rules are defined on the NSG for the Citrix ADC instance, along with a public port and a private port for each rule defined. The Citrix ADC instance listens on the internal IP address and private port.

Any external request is received on the Citrix ADC VPX VM’s virtual NIC. The NIC is bound to the NSG, which specifies the internal IP address and private port to which the request’s destination address and port (the Public IP address and public port) are translated. ARM performs the port address translation (PAT) to map the Public IP address and port to the internal IP address and private port of the Citrix ADC virtual machine, and then forwards the traffic to the VM.

The following figure shows how Azure performs port address translation to direct traffic to the Citrix ADC internal IP address and private port.

image-citrix-adc-and-microsoft-azure-03

In this example, the Public IP address assigned to the VM is 140.x.x.x, and the internal IP address is 10.x.x.x. When the inbound and outbound rules are defined, public HTTP port 80 is defined as the port on which the client requests are received, and a corresponding private port, 10080, is defined as the port on which the Citrix ADC VM listens. The client request is received on the Public IP address 140.x.x.x at port 80. Azure performs port address translation to map this address and port to internal IP address 10.x.x.x on private port 10080 and forwards the client request.

For information about port usage guidelines, see the Port Usage Guidelines section.

For information about NSGs and access control lists, see the Microsoft Azure documentation.

Traffic flow through network address translation

You can also request a Public IP (PIP) address for your Citrix ADC VM (instance level). If you use this direct PIP at the VM level, you don’t need to define inbound and outbound rules to intercept the network traffic. The incoming request from the Internet is received on the VM directly. Azure performs network address translation (NAT) and forwards the traffic to the internal IP address of the Citrix ADC instance.

The following figure shows how Azure performs network address translation to map the Citrix ADC internal IP address.

image-citrix-adc-and-microsoft-azure-04

In this example, the Public IP address assigned to the VM is 140.x.x.x and the internal IP address is 10.x.x.x. When the inbound and outbound rules are defined, public HTTP port 80 is defined as the port on which the client requests are received, and a corresponding private port, 10080, is defined as the port on which the Citrix ADC virtual machine listens. The client request is received on the Public IP address (140.x.x.x). Azure performs network address translation to map the PIP to the internal IP address 10.x.x.x on port 10080, and forwards the client request.

Note:

Citrix ADC VPX VMs in high availability are controlled by external or internal load balancers that have inbound rules defined on them to control the load-balancing traffic. The external traffic is first intercepted by these load balancers and diverted according to the configured load-balancing rules, which have back-end pools, NAT rules, and health probes defined on the load balancers.

Assign multiple IP addresses

An Azure Virtual Machine (VM) has one or more network interfaces (NICs) attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it. Assigning multiple IP addresses to a VM enables the following capabilities:

  • Hosting multiple websites or services with different IP addresses and SSL certificates on a single server.
  • Serving as a network virtual appliance, such as a firewall or load balancer.
  • Adding any of the private IP addresses for any of the NICs to an Azure Load Balancer back-end pool. In the past, only the primary IP address for the primary NIC could be added to a back-end pool. To learn more about how to load balance multiple IP configurations, read the Load balancing multiple IP configurations article.

Scenario

A VM with a single NIC is created and connected to a virtual network. The VM requires three different private IP addresses and two public IP addresses.

The IP addresses are assigned to the following IP configurations:

  • IPConfig-1: Assigns a dynamic private IP address (default) and a static public IP address.
  • IPConfig-2: Assigns a static private IP address and a static public IP address.
  • IPConfig-3: Assigns a dynamic private IP address and no public IP address.

image-citrix-adc-and-microsoft-azure-05

Every NIC attached to a VM has one or more IP configurations associated with it. Each configuration is assigned one static or dynamic private IP address. Each configuration may also have one public IP address resource associated with it. A public IP address resource has either a dynamic or static public IP address assigned to it. To learn more about IP addresses in Azure, read the IP addresses in Azure article. You can assign up to 250 private IP addresses to each NIC. While you can assign multiple public IP addresses to each NIC, there are limits to the number of public IP addresses that can be used in an Azure subscription. See the Azure limits article for details.
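
A minimal PowerShell sketch of the scenario above, assuming $myVnet references the VM’s virtual network and that the resource group, names, and private address shown are illustrative:

PowerShell

$subnet = $myVnet.Subnets[0]

# Two static public IP address resources, for IPConfig-1 and IPConfig-2
$pip1 = New-AzureRmPublicIpAddress -Name "myPublicIp1" -ResourceGroupName "myResourceGroup" `
 -Location "EastUS" -AllocationMethod Static
$pip2 = New-AzureRmPublicIpAddress -Name "myPublicIp2" -ResourceGroupName "myResourceGroup" `
 -Location "EastUS" -AllocationMethod Static

# IPConfig-1: dynamic private IP (default) plus a static public IP, marked as primary
$ipConfig1 = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig-1" -Subnet $subnet `
 -PublicIpAddress $pip1 -Primary

# IPConfig-2: static private IP plus a static public IP
$ipConfig2 = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig-2" -Subnet $subnet `
 -PrivateIpAddress "192.168.1.10" -PublicIpAddress $pip2

# IPConfig-3: dynamic private IP only, no public IP
$ipConfig3 = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig-3" -Subnet $subnet

# One NIC that carries all three IP configurations
$myNic = New-AzureRmNetworkInterface -Name "myNic" -ResourceGroupName "myResourceGroup" `
 -Location "EastUS" -IpConfiguration $ipConfig1,$ipConfig2,$ipConfig3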

Note:

Multiple IP addresses cannot be assigned to resources created through the classic deployment model.

Create a VM with multiple NICs

You can create virtual machines (VMs) in Azure and attach multiple network interfaces (NICs) to each of your VMs. Multi NIC is a requirement for many network virtual appliances, such as application delivery and WAN optimization solutions. Multi NIC also provides more network traffic management functionality, including isolation of traffic between a front end NIC and back end NIC(s), or separation of data plane traffic from management plane traffic.

Scenario

This document will walk through a deployment that uses multiple NICs in VMs in a specific scenario. In this scenario, you have a two-tiered IaaS workload hosted in Azure. Each tier is deployed in its own subnet in a virtual network (VNet). The front end tier is composed of several web servers, grouped together in a load balancer set for high availability. The back end tier is composed of several database servers. These database servers will be deployed with two NICs each, one for database access, the other for management. The scenario also includes Network Security Groups (NSGs) to control what traffic is allowed to each subnet and NIC in the deployment. The figure below shows the basic architecture of this scenario.

image-citrix-adc-and-microsoft-azure-06

Port usage guidelines

You can configure additional inbound and outbound rules in NSG when creating the Citrix ADC VM or after the VM is provisioned. Each inbound and outbound rule is associated with a public port and a private port.

Before configuring NSG rules, note the following guidelines regarding the port numbers you can use:

  1. The following ports are reserved by the Citrix ADC virtual machine. You cannot define these as private ports when using the Public IP address for requests from the Internet.

    Ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000.

    However, if you want Internet-facing services such as the VIP to use a standard port (for example, port 443), you have to create port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the Citrix ADC for this VIP service.

    For example, a VIP service might be running on port 8443 on the Citrix ADC instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is actually directed to private port 8443.

  2. The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG.

  3. The Azure load balancer does not work with the instance-level Public IP address. High availability does not work for traffic that uses a PIP associated with the VPX instance instead of the PIP configured on the Azure load balancer. For more information about configuring Citrix ADC VPX HA in ARM, see Configuring Citrix ADC VPX in High Availability Mode in Azure.

  4. In a Citrix ADC Gateway deployment, you need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured.

Note: You must configure the VIP address by using the NSIP address and a nonstandard port number. For callback configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url:port).

Note: In ARM, a Citrix ADC VPX VM is associated with two IP addresses: a Public IP address and an internal IP address. While the external traffic connects to the PIP, the internal IP address (the NSIP) is non-routable. To configure a VIP on the VPX, use the combination of the internal IP address and a port number. Example: if the VPN virtual server FQDN is vip.test.com, and the VPN virtual server is running on port 8443, the callback URL is: https://vip.test.com:8443.
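
As an illustration only (the internal NSIP 10.0.0.4 and port 8443 are hypothetical values), the VPN virtual server from the example above could be added on the internal IP address and a nonstandard port from the Citrix ADC CLI as follows:

 > add vpn vserver gw_vs SSL 10.0.0.4 8443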


Configuration steps

Provisioning the resource group

On the Microsoft Azure Portal page, log into the Azure Resource Manager portal with your user name and password. (In the ARM portal, clicking an option in one pane opens a new pane to the right. Navigate from one pane to another to configure your device.)

Create a new resource group to serve as a container for all your resources. Use the resource group to deploy, manage, and monitor your resources as a group.

Create a network security group

Create a network security group (NSG) to assign inbound and outbound rules to control the incoming and outgoing traffic within the virtual network. NSG allows you to define security rules for a single virtual machine and also to define security rules for a virtual network subnet.

Configuring a virtual network and subnets

Virtual networks in ARM provide a layer of security and isolation to your services. VMs and services that are part of the same virtual network can access each other.

For example, create a virtual network with a reserved CIDR block of 192.168.0.0/16 and two subnets with CIDR blocks 192.168.1.0/24 and 192.168.2.0/24, respectively.

  1. In the Create virtual network pane, enter the following values, and then click Create.

    • Name of the virtual network
    • Address space – type the reserved IP address block for the virtual network
    • Subnet – type the name of the first subnet (you will create the second subnet later in this step)
    • Subnet address range – type the reserved IP address block of the subnet
    • Resource group – select the resource group created earlier from the drop-down list

Configuring the second subnet

  1. Select the newly created virtual network from the All Resources pane and in the Settings pane, click Subnets.

  2. Click +Subnet and create the second subnet by entering the following details.

    • Name of the second subnet
    • Address range – type the Reserved IP address block of the second subnet
    • Network security group – select the NSG from the drop-down list

Configuring a storage account

The ARM IaaS infrastructure storage includes all services where you can store data in the form of blobs, tables, queues, and files. You can also create applications using these forms of storage data in ARM.

Create a storage account to store all your data

  1. Click +New > Storage > Storage account.

  2. In the Create Storage account pane, enter the following details:

    • Name of the account
    • Deployment mode – make sure to select Resource Manager
    • Account kind – select General purpose from the drop-down list
    • Replication – select Locally redundant storage from the drop-down list
    • Resource group – select the newly created resource group from the drop-down list
  3. Click Create.
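
If you prefer to script this step, a minimal PowerShell sketch, assuming the resource group created earlier (the storage account name is illustrative and must be globally unique):

PowerShell

New-AzureRmStorageAccount -ResourceGroupName "myResourceGroup" `
 -Name "mystorageaccount01" `
 -Location "EastUS" `
 -SkuName "Standard_LRS" `
 -Kind "Storage"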

Configuring an availability set

An availability set guarantees that at least one VM is kept up and running in case of planned or unplanned maintenance. Two or more VMs under the same ‘availability set’ are placed on different fault domains to achieve redundant services.

  1. Click +New and search for availability set.

  2. Select Availability set entity from the list. Click Create.

  3. In the Create availability set pane, enter the following details:
    • Name of the set
    • Resource group – select the newly created resource group from the drop-down list
  4. Click Create.
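
The availability set can also be created with PowerShell; a minimal sketch, assuming the resource group created earlier:

PowerShell

New-AzureRmAvailabilitySet -ResourceGroupName "myResourceGroup" `
 -Name "myAvailabilitySet" `
 -Location "EastUS"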

Provisioning the Citrix ADC instance

Create an instance of Citrix ADC VPX in the virtual network. Obtain the Citrix ADC VPX image from the Azure marketplace, and then use the Azure Resource Manager portal to create a Citrix ADC VPX instance.

Before you begin creating the Citrix ADC VPX instance, make sure that you have created a virtual network with required subnets in which the instance will reside. You can create virtual networks during VM provisioning, but without the flexibility to create different subnets. For more information, see Create a Virtual Network Using the Azure Portal article.

Optional: Configure DNS server and VPN connectivity to allow a VM to access Internet resources.

Note: Citrix recommends that you create the resource group, network security group, virtual network, and other entities before you provision the Citrix ADC VPX VM, so that the network information is available during provisioning.

  1. Click +New > Networking.
  2. Click See All and in the Networking pane, click Citrix ADC VPX Bring Your Own License.
  3. Click Create.

    Note: As a quick way to find any entity on ARM portal, you can also type the name of the entity in the Azure Marketplace search box and press <Enter>. Type Citrix ADC in the search box to find the Citrix ADC images.

  4. Select Citrix ADC 12.0 VPX Bring Your Own License.
  5. Populate your details.
  6. After the validation passes, select Purchase and Deploy my Citrix ADC.
  7. Citrix recommends that you set your IP addresses to Static.

Note: Ensure that you select the latest image. Your Citrix ADC image might have the release number in the name.

Create a VM with multiple IP addresses using PowerShell

The steps that follow explain how to create an example VM with multiple IP addresses, as described in the scenario. Change variable names and IP address types as required for your implementation.

The configuration steps covered are:

  1. Create a VM with multiple IP addresses

  2. Add IP addresses to a VM

  3. Add IP addresses to a VM operating system

  4. Validation (Windows)

  5. Validation (Linux)

Please refer to the following Microsoft Azure documentation: Assign multiple IP addresses to virtual machines using PowerShell.

Configure the Citrix ADC port translation

  1. Click Network Interfaces for your Citrix ADC VM.
  2. Click your Network security group.
  3. Click Inbound security rules.
  4. Allow SSH and HTTP in the Security Group for Inbound connections, as shown in the sketch after these steps.
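
A minimal PowerShell sketch of these two inbound rules, assuming the NSG is named myNsg and resides in the resource group created earlier (rule names and priorities are illustrative):

PowerShell

$nsg = Get-AzureRmNetworkSecurityGroup -Name "myNsg" -ResourceGroupName "myResourceGroup"

# Allow SSH (TCP 22) inbound
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-SSH" -Protocol Tcp `
 -Direction Inbound -Priority 100 -SourceAddressPrefix "*" -SourcePortRange "*" `
 -DestinationAddressPrefix "*" -DestinationPortRange 22 -Access Allow

# Allow HTTP (TCP 80) inbound
Add-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-HTTP" -Protocol Tcp `
 -Direction Inbound -Priority 110 -SourceAddressPrefix "*" -SourcePortRange "*" `
 -DestinationAddressPrefix "*" -DestinationPortRange 80 -Access Allow

# Write the updated rule set back to Azure
Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg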

At this point, you can log into the Citrix ADC instance and configure the features and settings desired for your Azure environment.

Note: When logging into the Citrix ADC for the first time, the wizard may ask for a Subnet IP address. This is not required on Citrix ADC Azure instances as they only use a single IP address for all functions. Skip this step when prompted and continue to the default configuration login page.

Microsoft Azure Resource Manager portal

The infrastructure for your application is typically made up of many components – maybe a virtual machine, storage account, and virtual network, or a web app, database, database server, and third party services. You do not see these components as separate entities; instead, you see them as related and interdependent parts of a single entity. You want to deploy, manage, and monitor them as a group. Azure Resource Manager enables you to work with the resources in your solution as a group. You can deploy, update, or delete all the resources for your solution in a single, coordinated operation. You use a template for deployment and that template can work for different environments such as testing, staging, and production. Resource Manager provides security, auditing, and tagging features to help you manage your resources after deployment.

Terminology

If you are new to Azure Resource Manager, there are some terms you might not be familiar with:

  • resource - A manageable item that is available through Azure. Some common resources are a virtual machine, storage account, web app, database, and virtual network, but there are many more.
  • resource group - A container that holds related resources for an Azure solution. The resource group can include all the resources for the solution, or only those resources that you want to manage as a group. You decide how you want to allocate resources to resource groups based on what makes the most sense for your organization. See Resource groups.
  • resource provider - A service that supplies the resources you can deploy and manage through Resource Manager. Each resource provider offers operations for working with the resources that are deployed. Some common resource providers are Microsoft.Compute, which supplies the virtual machine resource, Microsoft.Storage, which supplies the storage account resource, and Microsoft.Web, which supplies resources related to web apps. See Resource providers.
  • Resource Manager template - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group. It also defines the dependencies between the deployed resources. The template can be used to deploy the resources consistently and repeatedly. See Template deployment.
  • declarative syntax - Syntax that lets you state “Here is what I intend to create” without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure.

Reference: Azure Resource Manager overview

Citrix ADC Multi-NIC overview

Citrix ADC virtual instances running on the Azure platform can attach multiple virtual network interface cards (NICs) to a standalone virtual Citrix ADC appliance.

This is a common scenario when distributed architectures are desired in Azure, such as Application Tiers and Database Tiers.

A second common Citrix ADC use case for multiple NICs is the desire to segregate network zones in the Azure environment. An example of the segregation could be to allow internet-sourced traffic to terminate on one interface (DMZ or Public), and the internal Web and Application services to be Private.

In this two-leg networking scenario, the Citrix ADC virtual appliance requires a minimum of two virtual NICs: one virtual NIC for the Public network and one virtual NIC for the Private network.

Additionally, the multi-NIC configuration in Azure requires the use of multiple subnets to accommodate the segregated NICs. This component is configured using the VNET component of the Azure Portal.

Example: Multiple vNICs attached to an Azure VM

image-citrix-adc-and-microsoft-azure-07

Note: Different VM sizes support a varying number of NICs, so size your VM accordingly. https://docs.microsoft.com/en-us/azure/virtual-machines/linux/sizes

Citrix ADC Multi-NIC configuration

Create the resource group

PowerShell

New-AzureRmResourceGroup -Name "myResourceGroup" -Location "EastUS"

image-citrix-adc-and-microsoft-azure-08

Create the VNET and Subnets

A common scenario is for a virtual network to have two or more subnets. One subnet may be for front-end traffic, the other for back-end traffic. To connect to both subnets, you then use multiple NICs on your VM.

Define two virtual network subnets with New-AzureRmVirtualNetworkSubnetConfig. The following example defines the subnets for mySubnetFrontEnd and mySubnetBackEnd:

PowerShell

$mySubnetFrontEnd = New-AzureRmVirtualNetworkSubnetConfig -Name "mySubnetFrontEnd" `
 -AddressPrefix "192.168.1.0/24"
$mySubnetBackEnd = New-AzureRmVirtualNetworkSubnetConfig -Name "mySubnetBackEnd" `
 -AddressPrefix "192.168.2.0/24"

Create your virtual network and subnets with New-AzureRmVirtualNetwork. The following example creates a virtual network named myVnet:

PowerShell

$myVnet = New-AzureRmVirtualNetwork -ResourceGroupName "myResourceGroup" `
 -Location "EastUs" `
 -Name "myVnet" `
 -AddressPrefix "192.168.0.0/16" `
 -Subnet $mySubnetFrontEnd,$mySubnetBackEnd

image-citrix-adc-and-microsoft-azure-09

Create the network security group

Typically you also create a network security group to filter network traffic to the VM and a load balancer to distribute traffic across multiple VMs.
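
A minimal PowerShell sketch of such an NSG for the front-end subnet, with a single inbound rule that admits HTTP (the names, priority, and rule scope are illustrative):

PowerShell

# One inbound rule that allows HTTP to the front-end tier
$webRule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTP" -Protocol Tcp -Direction Inbound `
 -Priority 100 -SourceAddressPrefix "*" -SourcePortRange "*" `
 -DestinationAddressPrefix "*" -DestinationPortRange 80 -Access Allow

# The NSG itself, which can then be associated with the subnet or with individual NICs
$myNsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName "myResourceGroup" `
 -Location "EastUs" -Name "myNsgFrontEnd" -SecurityRules $webRule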

Create and configure vNICs

Create two NICs with New-AzureRmNetworkInterface. Attach one NIC to the front-end subnet and one NIC to the back-end subnet. The following example creates NICs named myNic1 and myNic2:

PowerShell

$frontEnd = $myVnet.Subnets|?{$_.Name -eq 'mySubnetFrontEnd'}
$myNic1 = New-AzureRmNetworkInterface -ResourceGroupName "myResourceGroup" `
 -Name "myNic1" `
 -Location "EastUs" `
 -SubnetId $frontEnd.Id
$backEnd = $myVnet.Subnets|?{$_.Name -eq 'mySubnetBackEnd'}
$myNic2 = New-AzureRmNetworkInterface -ResourceGroupName "myResourceGroup" `
 -Name "myNic2" `
 -Location "EastUs" `
 -SubnetId $backEnd.Id

image-citrix-adc-and-microsoft-azure-10

Create VM and attach vNICs

Now start to build your VM configuration. Each VM size has a limit for the total number of NICs that you can add to a VM. For more information, see Windows VM sizes.
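
The commands below assume a $vmConfig object already exists. A minimal sketch of creating one is shown here; the VM size, credentials, and source image are illustrative (a Citrix ADC VPX marketplace image uses different publisher, offer, and plan values):

PowerShell

$cred = Get-Credential

# Base VM configuration; pick a size that supports at least two NICs
$vmConfig = New-AzureRmVMConfig -VMName "myVM" -VMSize "Standard_DS3_v2"
$vmConfig = Set-AzureRmVMOperatingSystem -VM $vmConfig -Linux -ComputerName "myVM" -Credential $cred
$vmConfig = Set-AzureRmVMSourceImage -VM $vmConfig -PublisherName "Canonical" `
 -Offer "UbuntuServer" -Skus "16.04-LTS" -Version "latest"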

Attach the two NICs that you previously created with Add-AzureRmVMNetworkInterface:

PowerShell

$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $myNic1.Id -Primary
$vmConfig = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $myNic2.Id

Finally, create your VM with New-AzureRmVM:

PowerShell

New-AzureRmVM -VM $vmConfig -ResourceGroupName "myResourceGroup" -Location "EastUs"

image-citrix-adc-and-microsoft-azure-11

Citrix ADC multiple IP addresses overview

In this use case, a standalone Citrix ADC VPX appliance is configured with a single vNIC (or multiple vNICs) connected to a virtual network (VNET). The vNIC is associated with three IP configurations (ipconfigs), each serving a different purpose.

Example: Multiple VIPs attached to a vNIC

image-citrix-adc-and-microsoft-azure-12

When you assign multiple IP configurations to a NIC, one configuration must be assigned as the -Primary.

$MyNIC.IpConfigurations | Format-Table Name, PrivateIPAddress, PublicIPAddress, Primary

Note:

Public IP addresses have a nominal fee. To learn more about IP address pricing, read the IP address pricing page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the Azure limits article.

Add a private IP address

To add a private IP address to a NIC, you must create an IP configuration. The following command creates a configuration with a static IP address of 10.0.0.7. When specifying a static IP address, it must be an unused address for the subnet. It’s recommended that you first test the address to ensure it’s available by entering the Test-AzureRmPrivateIPAddressAvailability -IPAddress 10.0.0.7 -VirtualNetwork $myVnet command. If the IP address is available, the output returns True. If it’s not available, the output returns False, and a list of addresses that are available.

Add-AzureRmNetworkInterfaceIpConfig -Name IPConfig-4 -NetworkInterface `
$MyNIC -Subnet $Subnet -PrivateIpAddress 10.0.0.7

image-citrix-adc-and-microsoft-azure-13

Add a public IP address

A public IP address is added by associating a public IP address resource to either a new IP configuration or an existing IP configuration. Complete the steps in one of the sections that follow, as you require.

$MyPublicIp3 = New-AzureRmPublicIpAddress `
 -Name "MyPublicIp3" `
 -ResourceGroupName $RgName `
 -Location $Location -AllocationMethod Static

image-citrix-adc-and-microsoft-azure-14

Associate the public IP address resource to an existing VM

A public IP address resource can be associated only with an IP configuration that doesn’t already have one associated with it.

Set-AzureRmNetworkInterfaceIpConfig `
 -Name IpConfig-3 `
 -NetworkInterface $MyNIC `
 -Subnet $Subnet `
 -PublicIpAddress $MyPublicIp3

image-citrix-adc-and-microsoft-azure-15
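
Note that Add-AzureRmNetworkInterfaceIpConfig and Set-AzureRmNetworkInterfaceIpConfig only modify the local NIC object; the change takes effect once the NIC is written back to Azure, for example:

PowerShell

# Commit the IP configuration changes to Azure
$MyNIC | Set-AzureRmNetworkInterface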

Citrix ADC HA overview

You can deploy a pair of Citrix ADC virtual appliances with multiple NICs in an active-passive high availability (HA) setup on Azure. Each NIC can contain multiple IP addresses. An active-passive deployment requires:

  • An HA Independent Network Configuration (INC)
  • The Azure Load Balancer (ALB) in Direct Server Return (DSR) mode

All traffic goes through the primary node. The secondary node remains in standby mode until the primary node fails.

In an active-passive deployment, the ALB floating public IP (PIP) addresses are added as the VIP addresses in each Citrix ADC node. In the HA-INC configuration, the VIP addresses are floating and the SNIP addresses are instance specific. The ALB monitors each Citrix ADC instance by sending a health probe every 5 seconds, and redirects traffic only to an instance that responds to the health probes at regular intervals. So, in an HA setup, the primary node responds to health probes and the secondary does not. If the primary instance misses two consecutive health probes, the ALB stops redirecting traffic to that instance. On failover, the new primary starts responding to health probes, and the ALB redirects traffic to it. The standard Citrix ADC HA failover time is three seconds. The total failover time for traffic switching can be a maximum of 13 seconds.
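
For reference, an ALB probe matching this behavior can be sketched in PowerShell as follows (the probe name and port are illustrative; the Citrix standard HA template creates its own probe and load-balancing rules):

PowerShell

# TCP health probe: sent every 5 seconds, instance marked down after 2 consecutive failures
$probe = New-AzureRmLoadBalancerProbeConfig -Name "adc-health-probe" -Protocol Tcp -Port 9000 `
 -IntervalInSeconds 5 -ProbeCount 2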

You can deploy a Citrix ADC pair in active-passive HA mode in two ways by using:

  • Citrix ADC Standard HA template: use this option to configure an HA pair with the default option of three subnets and six NICs.
  • Windows PowerShell commands: use this option to configure an HA pair according to your subnet and NIC requirements.

Citrix ADC HA configuration - PowerShell

See Configuring an HA Setup with Multiple IP Addresses and NICs by Using PowerShell Commands for the Azure PowerShell commands.

Citrix ADC HA configuration - Azure portal

You can quickly and efficiently deploy a pair of Citrix ADC instances in HA-INC mode by using the standard template. The template creates two nodes, with three subnets and six NICs. The subnets are for management, client, and server-side traffic, and each subnet has two NICs for both the VPX instances.

The Citrix ADC 12.0 HA Pair template is available at the Azure Marketplace.

To use the template:

  1. From Azure Marketplace, select and initiate the Citrix solution template. The template appears.
  2. Ensure deployment type is Resource Manager and select Create.
  3. The Basics page appears. Create a Resource Group and select OK.

    image-citrix-adc-and-microsoft-azure-16

  4. The General Settings page appears. Type the details and select OK.

    image-citrix-adc-and-microsoft-azure-17

  5. The Network Setting page appears. Check the vnet and subnet configurations, edit the required settings, and select OK.

    image-citrix-adc-and-microsoft-azure-18

  6. The Summary page appears. Review the configuration and edit accordingly. Select OK to confirm.

  7. The Buy page appears. Select Purchase to complete the deployment.

It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group to see the configuration details, such as LB rules, back-end pools, and health probes, in the Azure portal. The HA pair appears as VPX0 and VPX1.

If further modifications are required for your HA setup, such as creating more security rules and ports, you can do that from the Azure portal.

image-citrix-adc-and-microsoft-azure-19

Next, you need to configure the load-balancing vserver with the ALB public IP (PIP) address on each node. To find the ALB PIP, select ALB > Frontend IP configuration.

image-citrix-adc-and-microsoft-azure-20
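
On each node, the ALB front-end PIP is added as a VIP-type address and the load-balancing vserver is created on it; a minimal Citrix ADC CLI sketch, where 52.179.0.10 stands in for your ALB front-end IP address:

 > add ns ip 52.179.0.10 255.255.255.255 -type VIP
 > add lb vserver v_web HTTP 52.179.0.10 80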


Citrix ADC GSLB and DBS Backend Autoscale with Cloud Load Balancer

GSLB and DBS Overview

Citrix ADC GSLB support using DBS (Domain Based Services) for cloud load balancers allows for the automatic discovery of dynamic cloud services using a cloud load balancer solution. This configuration allows the Citrix ADC to implement Global Server Load Balancing Domain-Name Based Services (GSLB DBS) in an Active-Active environment. DBS allows the scaling of back-end resources in Amazon Web Services (AWS) and Microsoft Azure environments through DNS discovery. This section covers integrations between Citrix ADC in the AWS and Azure Auto Scaling environments. The final section of the document details the ability to set up an HA pair of Citrix ADCs that spans two different Availability Zones (AZs) specific to an AWS or Azure region.

Required prerequisites:

The prerequisites for the Citrix ADC GSLB Service Groups include a functioning Amazon Web Services / Microsoft Azure environment with the knowledge and ability to configure Security Groups, Linux Web Servers, Citrix ADCs within AWS, Elastic IPs, and Elastic Load Balancers.

GSLB DBS Service integration requires Citrix ADC version 12.0.57 for AWS ELB and Microsoft Azure ALB load balancer instances.

Citrix ADC GSLB Service Group Feature Enhancements

GSLB Service Group entity: Citrix ADC version 12.0.57

A GSLB Service Group is introduced that supports autoscale using DBS dynamic discovery.

DBS feature components (domain-based services) are bound to the GSLB service group.

Example:

 > add server sydney_server LB-Sydney-xxxxxxxxxx.ap-southeast-2.elb.amazonaws.com
 > add gslb serviceGroup sydney_sg HTTP -autoScale DNS -siteName sydney
 > bind gslb serviceGroup sydney_sg sydney_server 80

Domain-Name based services – Azure ALB

GSLB DBS utilizes the FQDN of your Azure Load Balancer to dynamically update the GSLB Service Groups to include the back-end servers that are being created and deleted within Azure. To configure this feature, you point the Citrix ADC to your Azure Load Balancer so that traffic is dynamically routed to different servers in Azure without having to manually update the Citrix ADC every time an instance is created or deleted within Azure. The Citrix ADC DBS feature for GSLB Service Groups uses DNS-aware service discovery to determine the member service resources of the DBS namespace identified in the AutoScaler group.
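
For example, the Azure-side counterpart of the earlier AWS CLI snippet looks like the following, where the ALB DNS label and site name are illustrative:

 > add server azure_alb myservice-alb.eastus.cloudapp.azure.com
 > add gslb serviceGroup azure_sg HTTP -autoScale DNS -siteName azure_east
 > bind gslb serviceGroup azure_sg azure_alb 80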

Diagram: Citrix ADC GSLB DBS AutoScale components with Cloud Load Balancers

image-citrix-adc-and-microsoft-azure-21

Configuring Azure components

  1. Log in to your Azure Portal and create a new virtual machine from a Citrix ADC template

    image-citrix-adc-and-microsoft-azure-22

  2. Create an Azure Load Balancer

    image-citrix-adc-and-microsoft-azure-23

  3. Add the created Citrix ADC Backend Pools

    image-citrix-adc-and-microsoft-azure-24

  4. Create a Health Probe for port 80.

  5. Create a Load Balancing Rule utilizing the Frontend IP created from the Load Balancer, with the following settings:

    • Protocol: TCP
    • Backend Port: 80
    • Backend pool: the Citrix ADC created in step 1
    • Health Probe: created in step 4
    • Session Persistence: None

    image-citrix-adc-and-microsoft-azure-25

Configure Citrix ADC GSLB domain based service

Traffic management configurations

Note: You must configure the Citrix ADC with either a name server or a DNS virtual server through which the ELB/ALB domains are resolved for the DBS Service Groups.
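
For example, a name server can be added from the Citrix ADC CLI as follows (168.63.129.16 is the Azure-provided recursive resolver; any DNS server reachable from the VPX works):

 > add dns nameServer 168.63.129.16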

  1. Navigate to Traffic Management -> Load Balancing -> Servers

    image-citrix-adc-and-microsoft-azure-26

  2. Click Add to create a server. Provide a name and the FQDN corresponding to the A record (domain name) in Azure for the Azure Load Balancer (ALB).

    image-citrix-adc-and-microsoft-azure-27

  3. Repeat step 2 to add the second ALB from the second resource in Azure.

GSLB configurations

  1. Click the Add button to configure a GSLB Site
  2. Name the Site.

    Type is configured as Remote or Local based on which Citrix ADC you are configuring the site on. The Site IP Address is the IP address for the GSLB site. The GSLB site uses this IP address to communicate with the other GSLB sites. The Public IP address is required when using a cloud service where a particular IP is hosted on an external firewall or NAT device. The site should be configured as a Parent Site. Ensure the Trigger Monitors are set to ALWAYS and be sure to check off the three boxes at the bottom for Metric Exchange, Network Metric Exchange, and Persistence Session Entry Exchange.

    The recommendation is to set the Trigger monitor setting to MEPDOWN; refer to Configuring a GSLB service group.

    image-citrix-adc-and-microsoft-azure-28

    image-citrix-adc-and-microsoft-azure-29

  3. Click Create, and then repeat steps 3 and 4 to configure the GSLB site for the other resource location in Azure (this can be configured on the same Citrix ADC).

  4. Navigate to Traffic Management -> GSLB -> Service Groups

    image-citrix-adc-and-microsoft-azure-30

    Click Add to add a new service group. Name the Service Group, use HTTP protocol, and then under Site Name choose the respective site that was created in the previous steps. Be sure to configure AutoScale Mode as DNS and check off the boxes for State and Health Monitoring. Click OK to create the Service Group

    image-citrix-adc-and-microsoft-azure-31

  5. Click Service Group Members and select Server Based. Select the respective Elastic Load Balancing Server that was configured at the start of this guide. Configure the traffic to go over port 80. Click Create.

    image-citrix-adc-and-microsoft-azure-32

  6. The Service Group Member Binding should be populated with the two instances it receives from the Elastic Load Balancer.

    image-citrix-adc-and-microsoft-azure-33

  7. Repeat steps 5 & 6 to configure the Service Group for the second resource location in Azure. (This can be done from the same Citrix ADC GUI).

  8. The final step is to set up a GSLB Virtual Server. Navigate to Traffic Management -> GSLB -> Virtual Servers.

  9. Click Add to create the virtual server. Name the server, DNS Record Type is set as A, Service Type is set as HTTP, and check the boxes for Enable after Creating and AppFlow Logging. Click OK to create the GSLB Virtual Server.

    image-citrix-adc-and-microsoft-azure-34

  10. Once the GSLB Virtual Server is created, click No GSLB Virtual Server ServiceGroup Binding.

    image-citrix-adc-and-microsoft-azure-35

  11. Under ServiceGroup Binding use Select Service Group Name to select and add the Service Groups that were created in the previous steps.

    image-citrix-adc-and-microsoft-azure-36

  12. Next, configure the GSLB Virtual Server Domain Binding by clicking No GSLB Virtual Server Domain Binding. Configure the FQDN and bind it; the rest of the settings can be left as the defaults.

    image-citrix-adc-and-microsoft-azure-37

  13. Configure the ADNS Service by clicking No Service. Add a Service Name, click New Server, and enter the IP Address of the ADNS server. Additionally, if your ADNS is already configured, you can select Existing Server and then choose your ADNS from the drop-down menu. Make sure the Protocol is ADNS and the traffic is over Port 53.

    image-citrix-adc-and-microsoft-azure-38

  14. Configure the Method as LEASTCONNECTION and Backup Method as ROUNDROBIN.

  15. Click Done and verify that your GSLB Virtual Server is shown as Up.

    image-citrix-adc-and-microsoft-azure-39

Additional references

Deploy a Citrix ADC VPX instance on Microsoft Azure

Microsoft Azure Documentation

Assign multiple IP addresses to virtual machines using PowerShell

Create a VM with multiple NICs using PowerShell

AutoScale Backend-Service VM’s in Azure Public Cloud w/ Citrix ADC VPX

Citrix product documentation

Configure multiple IP addresses for a Citrix ADC VPX standalone instance

Configure multiple IP addresses for a Citrix ADC VPX instance in standalone mode by using PowerShell commands

Configure multiple Azure VIPs for a standalone VPX instance