
Citrix Tech Zone - Document History

Showing documents in Tech Briefs, PoC Guides, Citrix Features Explained, Citrix Demo Series, Tech Insights, Design Decisions, Diagrams and Posters, Reference Architectures, Deployment Guides, and Tech Papers posted in the last 365 days.


Overview

The Citrix Virtual Delivery Agent for macOS (Citrix VDA for macOS) enables HDX access to a macOS remote desktop from any device with the Citrix Workspace app installed. The Citrix VDA for macOS is designed and engineered as "yet another VDA" backed by Citrix's leading HDX technologies within the DaaS/CVAD product family. It adheres to the existing Citrix product architecture, follows the common roadmap of HDX features, and implements the interfaces defined between critical components in DaaS/CVAD, so that our customers' knowledge and experience can be fully reused with this new VDA.

This deployment guide provides the steps to install and configure the Citrix VDA for macOS on a non-Active Directory-joined macOS desktop. The guide covers the following steps:

  - Prepare the installation of the non-domain-joined VDA.
  - Install the Citrix VDA for macOS.
  - Create the Delivery Group.
  - Connect via Citrix Gateway Service to the Citrix VDA for macOS.

Note: The Citrix VDA for macOS is currently in public tech preview. This guide will be updated once this feature is Generally Available.

Prerequisites

  - Any Apple Silicon (M1, M2, and M3 families) based macOS device.
  - macOS Ventura 13 or Sonoma 14.
  - The following network ports must be open on the macOS device:
      - 443 (TCP/UDP, outbound traffic to Citrix Cloud).
      - 1494 and 2598 (TCP/UDP, inbound; optional. Not required while Citrix Workspace app connects via Citrix Gateway Service; only required when it connects to the Citrix VDA for macOS directly or via an on-premises NetScaler Gateway.)
  - An existing Citrix DaaS subscription.
  - Citrix Workspace app 2402 or later (Windows, Linux, Mac).
  - The Citrix VDA for macOS installer, downloaded to your macOS device from Citrix Downloads.

Note: The EAR for Citrix VDA for macOS should not be used in any production environment under any circumstances.

Prepare the Installation

1. Log in to Citrix DaaS, open Web Studio, and select Machine Catalogs.
2. Click Create Machine Catalog.
3. Select Remote PC Access and click Next.
(The Single-session OS option is also supported.)

4. Select "I want users to connect to the same (static) desktop each time they log on" and click Next.
5. Select the minimum functional level for this catalog and click Next on the Machine Accounts page.
6. Click Next on the Scopes page.
7. Click Next on the Workspace Environment Management (Optional) page.
8. Leave "Enable VDA upgrade" unchecked and click Next.
9. Name your Machine Catalog and click Finish.
10. The Machine Catalog is now created.
11. Right-click the Citrix VDA for macOS Machine Catalog and select Manage Enrollment Tokens.
12. Click Generate.
13. Enter your token name, select Use current date and time for the start, enter 100 in "Specify how many times the token can register VDAs", choose an appropriate end date for the token to allow VDA registrations, and click Generate.
14. Click Copy.
15. Optionally, click Download to save this token for later use.
16. You now see your active enrollment token. Click Close.

Install the VDA

1. On your macOS device, download .NET 6.0 from https://dotnet.microsoft.com/en-us/download/dotnet/6.0.
2. Select the .NET Runtime v6.0.29 and choose the macOS Arm64 download link.
3. Install the Arm64 .NET Runtime package for macOS and check the installation directory path using the command: which dotnet
4. To begin the VDA installation, double-click the Citrix VDA for macOS installer.
5. Click Continue.
6. Click Continue on the License Agreement.
7. Click Agree.
8. Select Install for all users of this computer and click Continue.
9. Click Install.
10. Enter the administrator password and click Install Software if prompted.
11. The Citrix VDA for macOS installs.
12. Choose the required option in the vdaconfig UI and click Open Screen Recording Preference to enable the Citrix Graphics Service. Then click Open System Settings.
13. Enable the Citrix Graphics Service and close the window.
14. Click Open Accessibility Preferences.
15.
Enable the Citrix Input Service.
16. Validate that .NET 6.0 is installed correctly.
17. Copy the enrollment token from Notepad, paste it into Enroll with Token, then click Enroll.
18. Enter the administrator password if prompted and click OK.
19. Once your enrollment is successful, you receive the "enrolled successfully" message.
20. Click Close. Your Citrix VDA for macOS has been installed.
21. Return to the Citrix DaaS Web Studio console and verify that the macOS device is registered with the Machine Catalog.

Create the Delivery Group

1. Within Citrix DaaS Web Studio, select Delivery Groups, then click Create Delivery Group.
2. Select your Citrix VDA for macOS Machine Catalog and click Next.
3. Select which users can access the Delivery Group and click Next.
4. Add the required Desktop Assignment Rule and click Next.
5. Click Next on the App Protection screen.
6. Click Next on the Scopes window.
7. Select the appropriate License Assignment for your Citrix DaaS deployment and click Next.
8. Click Next on the Policy Set window.
9. Provide a name for your Delivery Group and click Finish.
10. Your Delivery Group is now created and ready for user access.

Enable Rendezvous

Our deployment contains non-domain-joined macOS devices. The Rendezvous V2 protocol is used in this scenario, so no Cloud Connectors are required. However, we must enable the Rendezvous Citrix policy for our deployment to work.

1. Within Citrix Web Studio, select Policies and then click Create Policy.
2. Select ICA within View by Category, then scroll to and select Rendezvous Protocol. Click Allow.
3. Click Next.
4. Select Filtered users and computers and expand Delivery Group.
5. Select Allow in the Mode drop-down menu, choose your Citrix VDA for macOS Delivery Group, select Enable, and click Save.
6. Review your filters, then click Next.
7. Select Enable policy, provide a policy name, then click Finish.
8. The Rendezvous policy is now active and enabled.
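Before the first launch, it can help to restate the network prerequisites from the beginning of this guide as a quick reference. The following is a sketch only - the required_ports helper is our own illustration, not a Citrix tool - showing which ports each connection mode needs:

```shell
# Sketch only: port requirements from the Prerequisites section of this guide.
# "gateway" = Citrix Workspace app connects via Citrix Gateway Service;
# "direct"  = direct connection, or via an on-premises NetScaler Gateway.
required_ports() {
  case "$1" in
    gateway) echo "443/tcp-udp outbound" ;;
    direct)  echo "443/tcp-udp outbound 1494/tcp-udp inbound 2598/tcp-udp inbound" ;;
    *)       echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

required_ports gateway
required_ports direct
```

In the Gateway Service scenario used in this guide, only the outbound 443 connection to Citrix Cloud is needed; the inbound ICA ports matter only for direct or NetScaler Gateway connections.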
Launch the macOS VDA

1. Open your Citrix Workspace app or browser, enter your Citrix Workspace URL, and log in. Your Citrix VDA for macOS desktop is available for launch.
2. Launch the desktop.
3. Your remote session via Citrix HDX to your macOS device begins.

Summary

This guide walked you through the installation and configuration of the Citrix VDA for macOS. This consisted of preparing the Citrix DaaS environment by creating a Machine Catalog and capturing an enrollment token, installing and registering the Citrix VDA on the macOS device, and creating a Delivery Group with user assignments. As a reminder, the Citrix VDA for macOS is currently in public tech preview. This deployment guide will be updated during the preview period and when the feature goes GA. For more information, visit the Citrix VDA for macOS product documentation.
Overview

This guide provides an overview of using Terraform to create a complete Citrix DaaS Resource Location on Google Cloud Platform (GCP). At the end of the process, you will have created:

  - A new Citrix Cloud Resource Location (RL) running on Google Cloud Platform (GCP)
  - 2 Cloud Connector virtual machines registered with the domain and the Resource Location
  - A Hypervisor Connection and a Hypervisor Pool pointing to the new Resource Location in Google Cloud Platform (GCP)
  - A Machine Catalog based on the uploaded Master Image VHD or on a Google Cloud Platform (GCP)-based Master Image
  - A Delivery Group based on the Machine Catalog with full Autoscale support
  - Example policies and policy scopes bound to the Delivery Group

What is Terraform

Terraform is an Infrastructure-as-Code (IaC) tool that defines cloud and on-premises resources in easily readable configuration files rather than through a GUI. IaC allows you to build, change, and manage your infrastructure safely and consistently by defining resource configurations. These configurations can be versioned, reused, and shared, and are written in Terraform's native declarative configuration language, known as HashiCorp Configuration Language (HCL), or optionally in JSON. Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Terraform providers are compatible with virtually any platform or service with an accessible API. More information about Terraform can be found at https://developer.hashicorp.com/terraform/intro.

Installation

HashiCorp distributes Terraform as a binary package. You can also install Terraform using popular package managers. In this example, we use Chocolatey for Windows to deploy Terraform. Chocolatey is a free and open-source package management system for Windows. Install the Terraform package from the CLI.
Installation of Chocolatey

Open a PowerShell shell with administrative rights and paste the following command:

    Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

Chocolatey downloads and installs all necessary components automatically:

    PS C:\TACG> Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
    Forcing web requests to allow TLS v1.2 (Required for requests to Chocolatey.org)
    Getting latest version of the Chocolatey package for download.
    Not using proxy.
    Getting Chocolatey from https://community.chocolatey.org/api/v2/package/chocolatey/2.2.2.
    Downloading https://community.chocolatey.org/api/v2/package/chocolatey/2.2.2 to C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip
    Not using proxy.
    Extracting C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip to C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall
    Installing Chocolatey on the local machine
    Creating ChocolateyInstall as an environment variable (targeting 'Machine')
    Setting ChocolateyInstall to 'C:\ProgramData\chocolatey'
    WARNING: It's very likely you will need to close and reopen your shell before you can use choco.
    Restricting write permissions to Administrators
    We are setting up the Chocolatey package repository.
    The packages themselves go to 'C:\ProgramData\chocolatey\lib' (i.e. C:\ProgramData\chocolatey\lib\yourPackageName).
    A shim file for the command line goes to 'C:\ProgramData\chocolatey\bin' and points to an executable in 'C:\ProgramData\chocolatey\lib\yourPackageName'.
    Creating Chocolatey folders if they do not already exist.
    chocolatey.nupkg file not installed in lib. Attempting to locate it from bootstrapper.
    PATH environment variable does not have C:\ProgramData\chocolatey\bin in it. Adding...
    WARNING: Not setting tab completion: Profile file does not exist at 'C:\TACG\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'.
    Chocolatey (choco.exe) is now ready.
    You can call choco from anywhere, command line or powershell by typing choco.
    Run choco /? for a list of functions.
    You may need to shut down and restart powershell and/or consoles first prior to using choco.
    Ensuring Chocolatey commands are on the path
    Ensuring chocolatey.nupkg is in the lib folder
    PS C:\TACG>

Run choco --v to check if Chocolatey was installed successfully:

    PS C:\TACG> choco --v
    Chocolatey v2.2.2
    PS C:\TACG>

Installation of Terraform

After the successful installation of Chocolatey, you can install Terraform by running this command in the PowerShell session:

    choco install terraform

    PS C:\TACG> choco install terraform
    Chocolatey v2.2.2
    Installing the following packages:
    terraform
    By installing, you accept licenses for the packages.
    Progress: Downloading terraform 1.7.4... 100%

    terraform v1.7.4 [Approved]
    terraform package files install completed. Performing other installation steps.
    The package terraform wants to run 'chocolateyInstall.ps1'.
    Note: If you don't run this script, the installation will fail.
    Note: To confirm automatically next time, use '-y' or consider: choco feature enable -n allowGlobalConfirmation
    Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): A
    Removing old terraform plug-ins
    Downloading terraform 64 bit from 'https://releases.hashicorp.com/terraform/1.7.4/terraform_1.7.4_windows_amd64.zip'
    Progress: 100% - Completed download of C:\TACG\AppData\Local\Temp\chocolatey\terraform\1.7.4\terraform_1.7.4_windows_amd64.zip (25.05 MB).
    Download of terraform_1.7.4_windows_amd64.zip (25.05 MB) completed.
    Hashes match.
    Extracting C:\TACG\AppData\Local\Temp\chocolatey\terraform\1.7.4\terraform_1.7.4_windows_amd64.zip to C:\ProgramData\chocolatey\lib\terraform\tools...
    C:\ProgramData\chocolatey\lib\terraform\tools
    ShimGen has successfully created a shim for terraform.exe
    The install of terraform was successful.
    Software installed to 'C:\ProgramData\chocolatey\lib\terraform\tools'
    Chocolatey installed 1/1 packages.
    See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
    PS C:\TACG>

Run terraform -version to check if Terraform was installed successfully:

    PS C:\TACG> terraform -version
    Terraform v1.7.4
    on windows_amd64
    PS C:\TACG>

The installation of Terraform is now completed.

Terraform - Basics and Commands

Terraform Block

The terraform {} block contains Terraform settings, including the required providers to provision your infrastructure. Terraform installs providers from the Terraform Registry.

Providers

The provider block configures the specified provider. A provider is a plug-in that Terraform uses to create and manage your resources.
Providing multiple provider blocks in the Terraform configuration enables managing resources from different providers.

Resources

Resource blocks define the components of the infrastructure - physical, virtual, or logical. These blocks contain arguments to configure the resource. The provider's reference lists the required and optional arguments for each resource.

The core Terraform workflow consists of three stages:

  - Write: You define the resources that are deployed, altered, or deleted.
  - Plan: Terraform creates an execution plan describing the infrastructure it creates, updates, or destroys based on the existing infrastructure and your configuration.
  - Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies.

Terraform not only adds complete configurations, it also allows you to change previously added configurations. For example, changing the DNS servers of a NIC of a Google Cloud Platform (GCP) VM does not require redeploying the whole configuration - Terraform only alters the needed resources.

Terraform Provider for Citrix

Citrix has developed a custom Terraform provider for automating Citrix product deployments and configurations. You can use Terraform with the Citrix provider to manage your Citrix products via Infrastructure as Code. Terraform provides higher efficiency and consistency in infrastructure management and better reusability in infrastructure configuration. The provider defines individual units of infrastructure and currently supports both Citrix Virtual Apps and Desktops and Citrix DaaS solutions. You can automate the creation of a site setup, including host connections, machine catalogs, and delivery groups. You can deploy resources in Google Cloud Platform (GCP), AWS, and Azure, as well as on supported on-premises hypervisors.

Terraform expects to be invoked from a working directory that contains configuration files written in the Terraform language.
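As a minimal illustration of such a working directory (the tf-demo directory name and the hashicorp/local example provider are ours, not part of this deployment), a single configuration file is enough:

```shell
# Illustrative sketch: create a tiny Terraform working directory.
# Uses the hashicorp/local provider for the example, not the Citrix provider.
mkdir -p tf-demo
cat > tf-demo/main.tf <<'EOF'
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
  }
}

# A single resource: Terraform manages a local text file.
resource "local_file" "hello" {
  filename = "hello.txt"
  content  = "managed by terraform"
}
EOF

# Running `terraform init` inside tf-demo would then download the provider.
ls tf-demo
```
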
Terraform uses configuration content from this directory and also uses the directory to store settings, cached plug-ins and modules, and state data. A working directory must be initialized before Terraform can do any operations. Initialize the working directory by using the command: terraform init

    PS C:\TACG> terraform init

    Initializing the backend...

    Successfully configured the backend "local"! Terraform will automatically
    use this backend unless the backend configuration changes.

    Initializing provider plugins...
    - Finding citrix/citrix versions matching ">= 0.5.4"...
    - Installing citrix/citrix v0.5.4...
    - Installed citrix/citrix v0.5.4 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)

    Partner and community providers are signed by their developers.
    If you'd like to know more about provider signing, you can read about it here:
    https://www.terraform.io/docs/cli/plugins/signing.html

    Terraform has created a lock file .terraform.lock.hcl to record the provider
    selections it made above. Include this file in your version control repository
    so that Terraform can guarantee to make the same selections by default when
    you run "terraform init" in the future.

    Terraform has been successfully initialized!

    You may now begin working with Terraform. Try running "terraform plan" to see
    any changes that are required for your infrastructure. All Terraform commands
    should now work. If you ever set or change modules or backend configuration
    for Terraform, rerun this command to reinitialize your working directory.
    If you forget, other commands will detect it and remind you to do so if necessary.

    PS C:\TACG>

The provider defines how Terraform can interact with the underlying API.
Configurations must declare which providers they require so Terraform can install and use them. Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry or load them from a local mirror or cache.

Example: Terraform configuration files - provider.tf

The file provider.tf contains the information on the target site where the configuration is applied. Depending on whether it is a Citrix Cloud site or a Citrix on-premises site, the provider needs to be configured differently:

    terraform {
      required_version = ">= 1.7.4"
      required_providers {
        citrix = {
          source  = "citrix/citrix"
          version = ">=0.5.4"
        }
      }
    }

    # Configure the Citrix Cloud Provider
    provider "citrix" {
      customer_id   = "${var.CC_CustomerID}"
      client_id     = "${var.CC_APIKey-ClientID}"
      client_secret = "${var.CC_APIKey-ClientSecret}"
    }

A guide for creating a secure API client can be found in the Citrix Developer Docs and will be shown later.
    terraform {
      required_version = ">= 1.7.4"
      required_providers {
        citrix = {
          source  = "citrix/citrix"
          version = ">=0.5.4"
        }
      }
    }

    # Configure the Citrix On-Premises Provider
    provider "citrix" {
      hostname      = "${var.CVAD_DDC_HostName}"
      client_id     = "${var.CVAD_Admin_Account_UN}"       # Domain\\Username
      client_secret = "${var.CVAD_Admin_Account_Password}"
    }

Example - Schema used for the provider configuration

client_id (String): Client ID for Citrix DaaS service authentication.
  - For Citrix on-premises customers: use this variable to specify the domain admin username.
  - For Citrix Cloud customers: use this variable to specify the Cloud API key client ID.
  - Can be set via the environment variable CITRIX_CLIENT_ID.

client_secret (String, Sensitive): Client secret for Citrix DaaS service authentication.
  - For Citrix on-premises customers: use this variable to specify the domain admin password.
  - For Citrix Cloud customers: use this variable to specify the Cloud API key client secret.
  - Can be set via the environment variable CITRIX_CLIENT_SECRET.

customer_id (String): Citrix Cloud customer ID.
  - Only applicable for Citrix Cloud customers.
  - Can be set via the environment variable CITRIX_CUSTOMER_ID.

disable_ssl_verification (Boolean): Disable SSL verification against the target DDC.
  - Only applicable to on-premises customers. Citrix Cloud customers do not need this option.
  - Set to true to skip SSL verification only when the target DDC does not have a valid SSL certificate issued by a trusted CA. When set to true, make sure that your provider config is set for a known DDC hostname. It is recommended to configure a valid certificate for the target DDC.
  - Can be set via the environment variable CITRIX_DISABLE_SSL_VERIFICATION.

environment (String): Citrix Cloud environment of the customer.
  - Only applicable for Citrix Cloud customers. Available options: Production, Staging, Japan, JapanStaging.
  - Can be set via the environment variable CITRIX_ENVIRONMENT.

hostname (String): Hostname/base URL of the Citrix DaaS service.
  - For Citrix on-premises customers (required): use this variable to specify the Delivery Controller hostname.
  - For Citrix Cloud customers (optional): use this variable to override the Citrix DaaS service hostname.
  - Can be set via the environment variable CITRIX_HOSTNAME.

Deploying a Citrix Cloud Resource Location on Google Cloud Platform (GCP) using Terraform

Overview

This guide shows how to create a complete Citrix Cloud Resource Location on Google Cloud Platform (GCP) using Terraform. We want to reduce manual interventions to the absolute minimum. All Terraform configuration files can be found later on GitHub - we will update this guide as soon as the GitHub repository is ready.

In this guide, we use an existing domain and do not deploy a new domain. For further instructions on deploying a new domain, refer to the guide Citrix DaaS and Terraform - Automatic Deployment of a Resource Location on Microsoft Azure. Please note that this guide will be reworked soon.

The AD deployment used for this guide follows a hub-and-spoke model - each Resource Location running on a hypervisor/hyperscaler is connected to the main domain controller using an IPsec-based site-to-site VPN. Each Resource Location has its own sub-domain.
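As the provider schema above notes, the Citrix Cloud credentials can also be supplied through environment variables instead of being stored in variable files. A sketch with placeholder values (shown in POSIX shell; in PowerShell, set them with $env:CITRIX_CLIENT_ID = "..." and so on):

```shell
# Placeholder values only - substitute your own Citrix Cloud API client details.
# The variable names come from the provider schema described above.
export CITRIX_CUSTOMER_ID="your-customer-id"
export CITRIX_CLIENT_ID="your-api-client-id"
export CITRIX_CLIENT_SECRET="your-api-client-secret"
echo "Citrix provider credentials exported for customer: $CITRIX_CUSTOMER_ID"
```

With these set, the provider "citrix" block can omit the corresponding arguments, which keeps secrets out of the .tfvars files.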
The Terraform flow is split into different parts:

Part One - this part can be run on any computer where Terraform is installed. Creating the initially needed resources on Google Cloud Platform (GCP):

  - Creating all needed firewall rules and tags on Google Cloud Platform (GCP)
  - Creating all needed PowerShell scripts on Google Cloud Platform (GCP)
  - Creating all needed IP and DHCP configurations on Google Cloud Platform (GCP)
  - Creating a needed Storage Bucket and its configuration on Google Cloud Platform (GCP)
  - Creating a Windows Server 2022-based Master Image VM used for deploying the Machine Catalog in part 3
  - Creating two Windows Server 2022-based VMs, which are used as Cloud Connector VMs in part 2
  - Creating a Windows Server 2022-based VM acting as an administrative workstation for running Terraform parts 2 and 3 - this is necessary because WinRM is used for further configuration and deployment in parts 2 and 3
  - Creating all necessary scripts for joining the VMs to the existing sub-domain
  - Putting the VMs into the existing sub-domain

Part Two - this part can only be run on the previously created administrative VM, as the deployment in parts 2 and 3 relies heavily on WinRM. Configuring the three previously created virtual machines on Google Cloud Platform (GCP):

  - Installing the needed software on the Cloud Connectors
  - Installing the needed software on the Admin-VM

Creating the necessary resources in Citrix Cloud:

  - Creating a Resource Location in Citrix Cloud
  - Uploading all relevant configurations to the Cloud Connector VMs
  - Configuring the 2 Cloud Connectors
  - Registering the 2 Cloud Connectors in the newly created Resource Location

Part Three - creating the Machine Catalog and Delivery Group in Citrix Cloud:

  - Retrieving the Site and Zone ID of the Resource Location
  - Creating a dedicated Hypervisor Connection to Google Cloud Platform (GCP)
  - Creating a dedicated Hypervisor Resource Pool
  - Creating a Machine Catalog (MC) in the newly created Resource Location
  - Creating a Delivery Group (DG) based on the MC in the newly created Resource Location
  - Setting the Autoscale configuration for the created Delivery Group
  - Deploying some sample policies and binding them to the created Delivery Group

Determine if WinRM connections/communications are functioning

We strongly recommend a quick check of the communication before starting the Terraform scripts.

Open a PowerShell console and type:

    Test-WSMan -ComputerName <IP address of the computer you want to reach> -Credential <IP address of the computer you want to reach>\administrator -Authentication Basic

The response should look like:

    wsmid           : http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd
    ProtocolVersion : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
    ProductVendor   : Microsoft Corporation
    ProductVersion  : OS: 0.0.0 SP: 0.0 Stack: 3.0

Another possibility is to open a PowerShell console and type:

    Enter-PSSession -ComputerName <IP address of the computer you want to reach> -Credential <IP address of the computer you want to reach>\administrator

The response should look like:

    [10.156.0.5]: PS C:\Users\Administrator\Documents>

A short Terraform script also checks whether the communication via WinRM between the Admin-VM and, in this example, the CC1-VM is working as intended:
    locals {
      #### Test the WinRM communication
      #### Need to invoke PowerShell as a domain user as the provisioner does not allow running in a domain user's context
      TerraformTestWinRMScript = <<-EOT
        $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
        $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
        $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
        $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
        Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
          $FileNameForData = 'C:\temp\xdinst\Processes.txt'
          If (Test-Path $FileNameForData) { Remove-Item -Path $FileNameForData -Force }
          Get-Process | Out-File -FilePath 'C:\temp\xdinst\Processes.txt'
        }
      EOT
    }

    #### Write script into local data-directory
    resource "local_file" "WriteWinRMTestScriptIntoDataDirectory" {
      filename = "${path.module}/data/Terraform-Test-WinRM.ps1"
      content  = local.TerraformTestWinRMScript
    }

    resource "null_resource" "CreateTestScriptOnCC1" {
      connection {
        type     = var.Provisioner_Type
        user     = var.Provisioner_Admin-Username
        password = var.Provisioner_Admin-Password
        host     = var.Provisioner_CC1-IP
        timeout  = var.Provisioner_Timeout
      }
      provisioner "file" {
        source      = "${path.module}/data/Terraform-Test-WinRM.ps1"
        destination = "C:/temp/xdinst/Terraform-Test-WinRM.ps1"
      }
      provisioner "remote-exec" {
        inline = [
          "powershell -File 'C:/temp/xdinst/Terraform-Test-WinRM.ps1'"
        ]
      }
    }

If you can see something like the output below...

    null_resource.CreateTestScriptOnCC1: Creating...
    null_resource.CreateTestScriptOnCC1: Provisioning with 'remote-exec'...
    null_resource.CreateTestScriptOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CreateTestScriptOnCC1 (remote-exec):   Host: 10.156.0.5
null_resource.CreateTestScriptOnCC1 (remote-exec):   Port: 5985
null_resource.CreateTestScriptOnCC1 (remote-exec):   User: administrator
null_resource.CreateTestScriptOnCC1 (remote-exec):   Password: true
null_resource.CreateTestScriptOnCC1 (remote-exec):   HTTPS: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   Insecure: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   NTLM: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   CACert: false
null_resource.CreateTestScriptOnCC1 (remote-exec): Connected!
#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
null_resource.CreateTestScriptOnCC1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/Terraform-Test-WinRM.ps1
#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
null_resource.CreateTestScriptOnCC1: Creation complete after 3s [id=394829982371734209]

...then you can be sure that the provisioning using WinRM is working as intended!

Configuration using variables

All required configuration settings are stored in corresponding variables that must be set. Some configuration settings are propagated throughout the whole Terraform configuration. You need to start each of the three modules manually by running the Terraform workflow (terraform init, terraform plan, and terraform apply) in the corresponding module directory. Terraform then completes the necessary configuration steps of that module.
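The per-module sequence can be sketched as a small shell loop. This is only a sketch: the echo prints the commands instead of executing them, and terraform's -chdir option assumes Terraform 0.14 or later on the PATH.

```shell
# Sketch: run the full Terraform workflow per module, in order.
# 'echo' is used so this only prints the commands that would run.
for module in _CConGCP-Creation _CConGCP-Install _CConGCP-CCStuff; do
  echo "terraform -chdir=$module init"
  echo "terraform -chdir=$module plan"
  echo "terraform -chdir=$module apply"
done
```

Remove the echo (or pipe the output into a shell) once the variable files described below are filled in.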
File System structure

Root directory

Module 1: _CConGCP-Creation:

_CConGCP-Creation-Create.tf: Resource configuration and primary flow definition
_CConGCP-Creation-Create-variables.tf: Definition of variables
_CConGCP-Creation-Create.auto.tfvars.json: Setting the values of the variables
_CConGCP-Creation-Provider.tf: Provider definition and configuration
_CConGCP-Creation-Provider-variables.tf: Definition of variables
_CConGCP-Creation-Provider.auto.tfvars.json: Setting the values of the variables
TF-Domain-Join-Script.ps1: PowerShell script for joining the Admin-VM to the domain
DATA directory: All other needed files are placed in here (see later)

Module 2: _CConGCP-Install:

_CConGCP-Install-CreatePreReqs.tf: Resource configuration and primary flow definition
_CConGCP-Install-CreatePreReqs-variables.tf: Definition of variables
_CConGCP-Install-CreatePreReqs.auto.tfvars.json: Setting the values of the variables
_CConGCP-Install-Provider.tf: Provider definition and configuration
_CConGCP-Install-Provider-variables.tf: Definition of variables
_CConGCP-Install-Provider.auto.tfvars.json: Setting the values of the variables

Module 3: _CConGCP-CCStuff:

_CConGCP-CCStuff-CreateCCEntities.tf: Resource configuration and primary flow definition
_CConGCP-CCStuff-CreateCCEntities-variables.tf: Definition of variables
_CConGCP-CCStuff-CreateCCEntities.auto.tfvars.json: Setting the values of the variables
_CConGCP-CCStuff-CreateCCEntities-GCP.auto.tfvars.json: Extracted Service Account details for further use
_CConGCP-CCStuff-Provider.tf: Provider definition and configuration
_CConGCP-CCStuff-Provider-variables.tf: Definition of variables
_CConGCP-CCStuff-Provider.auto.tfvars.json: Setting the values of the variables

var.CC_Install_LogPath-based directory:

CitrixPosHSDK.exe: DaaS Remote PoSH SDK installer
cwc.json: Configuration file for unattended Cloud Connector setup
<gcp-project-id>.json: Service Account used for authentication
GetBT.ps1, CreateRL.ps1, GetSiteID.ps1, GetZoneID.ps1, ...: Various PowerShell scripts needed for the deployment
GetBT.txt, GetRLID.txt, GetSiteID.txt, GetZoneid.txt, ...: Text files containing the results of the PowerShell scripts

Change the settings in the .json files according to your needs. Before changing the corresponding settings or running the Terraform workflow, the following prerequisites must be fulfilled to ensure a smooth and error-free build.

Prerequisites

Installing Google Cloud Tools for PowerShell and the gcloud CLI

In this guide, we use the gcloud CLI and PowerShell cmdlets to determine further needed information. More information about the gcloud CLI and its installation and configuration can be found at gcloud Command Line Interface; more information about the PowerShell cmdlets for Google Cloud can be found at Cloud Tools for PowerShell. After installing the gcloud CLI, you need to configure it:

PS C:\TACG> gcloud init
Welcome! This command will take you through the configuration of gcloud. Settings from your current configuration [default] are: accessibility: screen_reader: 'False' compute: region: us-east1 zone: us-east1-b core: account: XXXXXXXXXX@the-austrian-citrix-guy.at disable_usage_reporting: 'True' project: tacg-gcp-XXXXXXXXXXXX Pick configuration to use: [1] Re-initialize this configuration [default] with new settings [2] Create a new configuration [3] Switch to and re-initialize existing configuration: [tacg-gcp-n] Please enter your numeric choice: 1 Your current configuration has been set to: [default] You can skip diagnostics next time by using the following flag: gcloud init --skip-diagnostics Network diagnostic detects and fixes local network connection issues. Checking network connection...done. Reachability Check passed.
Network diagnostic passed (1/1 checks passed). Choose the account you would like to use to perform operations for this configuration: [1] XXXXXXXXXX@XXXXXXXXXX [2] XXXXXXXXXX@the-austrian-citrix-guy.at [3] Log in with a new account Please enter your numeric choice: 2 You are logged in as: [XXXXXXXXXX@the-austrian-citrix-guy.at]. Reauthentication required. Please enter your password: Pick cloud project to use: [1] tacg-gcp-XXXXXXXXXXXX [2] Enter a project ID [3] Create a new project Please enter numeric choice or text value (must exactly match list item): 1 Your current project has been set to: [tacg-gcp-XXXXXXXXXXXX]. Do you want to configure a default Compute Region and Zone? (Y/n)? Y Which Google Compute Engine zone would you like to use as project default? If you do not specify a zone via a command line flag while working with Compute Engine resources, the default is assumed. [1] us-east1-b [2] us-east1-c [3] us-east1-d [4] us-east4-c [5] us-east4-b [6] us-east4-a [7] us-central1-c [8] us-central1-a [9] us-central1-f [10] us-central1-b [11] us-west1-b [12] us-west1-c [13] us-west1-a [14] europe-west4-a [15] europe-west4-b [16] europe-west4-c [17] europe-west1-b [18] europe-west1-d [19] europe-west1-c [20] europe-west3-c [21] europe-west3-a [22] europe-west3-b [23] europe-west2-c [24] europe-west2-b [25] europe-west2-a [26] asia-east1-b [27] asia-east1-a [28] asia-east1-c [29] asia-southeast1-b [30] asia-southeast1-a [31] asia-southeast1-c [32] asia-northeast1-b [33] asia-northeast1-c [34] asia-northeast1-a [35] asia-south1-c [36] asia-south1-b [37] asia-south1-a [38] australia-southeast1-b [39] australia-southeast1-c [40] australia-southeast1-a [41] southamerica-east1-b [42] southamerica-east1-c [43] southamerica-east1-a [44] africa-south1-a [45] africa-south1-b [46] africa-south1-c [47] asia-east2-a [48] asia-east2-b [49] asia-east2-c [50] asia-northeast2-a Did not print [72] options. Too many options [122]. Enter "list" at prompt to print choices fully. 
Please enter numeric choice or text value (must exactly match list item): 20 Your project default Compute Engine zone has been set to [europe-west3-c]. You can change it by running [gcloud config set compute/zone NAME]. Your project default Compute Engine region has been set to [europe-west3]. You can change it by running [gcloud config set compute/region NAME]. Your Google Cloud SDK is configured and ready to use! * Commands that require authentication will use XXXXXXXXXX@the-austrian-citrix-guy.at by default * Commands will reference project `tacg-gcp-XXXXXXXXXX` by default * Compute Engine commands will use region `europe-west3` by default * Compute Engine commands will use zone `europe-west3-c` by default Run `gcloud help config` to learn how to change individual settings This gcloud configuration is called [default]. You can create additional configurations if you work with multiple accounts and/or projects. Run `gcloud topic configurations` to learn more. Some things to try next: * Run `gcloud --help` to see the Cloud Platform services you can interact with. And run `gcloud help COMMAND` to get help on any gcloud command. * Run `gcloud topic --help` to learn about advanced features of the SDK like arg files and output formatting * Run `gcloud cheat-sheet` to see a roster of go-to `gcloud` commands. PS C:\TACG> Enabling all needed APIs Citrix Cloud interacts with your Google Cloud project by using several different APIs. These APIs aren't necessarily enabled by default, but they're necessary for Citrix Virtual Delivery Agent (VDA) fleet creation and lifecycle management. 
For Citrix Cloud to function, the following Google APIs must be enabled on your project:

Compute Engine API
Cloud Resource Manager API
Identity and Access Management (IAM) API
Cloud Build API
Cloud Domain Name System (DNS) API

To enable the APIs, paste the following commands into the gcloud CLI:

gcloud services enable compute.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable cloudbuild.googleapis.com
gcloud services enable dns.googleapis.com

Existing Google Cloud Platform (GCP) entities

We assume that the following resources already exist and are configured on Google Cloud Platform (GCP):

A working tenant
All needed rights for the IAM user on the tenant
A working network structure with at least one subnet in the VPC
A security group configured to allow inbound connections from the subnet and partially from the Internet: WinRM-HTTP, WinRM-HTTPS, UDP, DNS (UDP and TCP), ICMP (for testing purposes), HTTP, HTTPS, TCP (for testing purposes), RDP.
No blocking rules for outbound connections should be in place
An access key with its secret (see a description of how to create the key later on)
No bottlenecks/quotas that might block the deployment

You can get the needed information about the network configuration by using gcloud:

PS C:\TACG> gcloud compute networks list
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
default AUTO REGIONAL
PS C:\TACG>
PS C:\TACG> gcloud compute networks subnets list --regions europe-west3
NAME REGION NETWORK RANGE STACK_TYPE IPV6_ACCESS_TYPE INTERNAL_IPV6_PREFIX EXTERNAL_IPV6_PREFIX
default europe-west3 default 10.156.0.0/20 IPV4_ONLY
PS C:\TACG>
PS C:\TACG> gcloud compute networks list --uri
https://www.googleapis.com/compute/v1/projects/tacg-gcp-XXXXXXXXXX/global/networks/default
PS C:\TACG>
PS C:\TACG> gcloud compute networks subnets list --uri --regions europe-west3
https://www.googleapis.com/compute/v1/projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default
PS C:\TACG>

Note the Network-Name (in this example default), the Subnet-Name, the Network-URI, and the Subnet-URI of the network configuration you want to use and put them into the corresponding .auto.tfvars.json file.
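Before continuing, it can be worth double-checking that the Google APIs enabled earlier are actually active. Below is a minimal offline sketch of such a comparison; in practice the enabled list would come from gcloud services list --enabled --format='value(config.name)', here it is stand-in example data:

```shell
# Sketch: compare the required APIs against a list of enabled services.
# 'enabled' is example stand-in data; fill it from gcloud in a real check.
required="compute.googleapis.com cloudresourcemanager.googleapis.com iam.googleapis.com cloudbuild.googleapis.com dns.googleapis.com"
enabled="compute.googleapis.com iam.googleapis.com dns.googleapis.com"
for api in $required; do
  case " $enabled " in
    *" $api "*) echo "enabled: $api" ;;
    *)          echo "MISSING: $api" ;;
  esac
done
```

Any line flagged as MISSING points to an API that still needs gcloud services enable.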
Using gcloud to enable Private Google Access and check whether it is enabled:

PS C:\TACG> gcloud compute networks subnets list --filter=default
NAME REGION NETWORK RANGE STACK_TYPE IPV6_ACCESS_TYPE INTERNAL_IPV6_PREFIX EXTERNAL_IPV6_PREFIX
default us-central1 default 10.128.0.0/20 IPV4_ONLY
default europe-west1 default 10.132.0.0/20 IPV4_ONLY
default us-west1 default 10.138.0.0/20 IPV4_ONLY
default asia-east1 default 10.140.0.0/20 IPV4_ONLY
default us-east1 default 10.142.0.0/20 IPV4_ONLY
default asia-northeast1 default 10.146.0.0/20 IPV4_ONLY
default asia-southeast1 default 10.148.0.0/20 IPV4_ONLY
default us-east4 default 10.150.0.0/20 IPV4_ONLY
default australia-southeast1 default 10.152.0.0/20 IPV4_ONLY
default europe-west2 default 10.154.0.0/20 IPV4_ONLY
default europe-west3 default 10.156.0.0/20 IPV4_ONLY
default southamerica-east1 default 10.158.0.0/20 IPV4_ONLY
default asia-south1 default 10.160.0.0/20 IPV4_ONLY
default northamerica-northeast1 default 10.162.0.0/20 IPV4_ONLY
default europe-west4 default 10.164.0.0/20 IPV4_ONLY
default europe-north1 default 10.166.0.0/20 IPV4_ONLY
default us-west2 default 10.168.0.0/20 IPV4_ONLY
default asia-east2 default 10.170.0.0/20 IPV4_ONLY
default europe-west6 default 10.172.0.0/20 IPV4_ONLY
default asia-northeast2 default 10.174.0.0/20 IPV4_ONLY
default asia-northeast3 default 10.178.0.0/20 IPV4_ONLY
default us-west3 default 10.180.0.0/20 IPV4_ONLY
default us-west4 default 10.182.0.0/20 IPV4_ONLY
default asia-southeast2 default 10.184.0.0/20 IPV4_ONLY
default europe-central2 default 10.186.0.0/20 IPV4_ONLY
default northamerica-northeast2 default 10.188.0.0/20 IPV4_ONLY
default asia-south2 default 10.190.0.0/20 IPV4_ONLY
default australia-southeast2 default 10.192.0.0/20 IPV4_ONLY
default southamerica-west1 default 10.194.0.0/20 IPV4_ONLY
default us-east7 default 10.196.0.0/20 IPV4_ONLY
default europe-west8 default 10.198.0.0/20 IPV4_ONLY
default europe-west9 default 10.200.0.0/20 IPV4_ONLY
default us-east5 default 10.202.0.0/20 IPV4_ONLY
default europe-southwest1 default 10.204.0.0/20 IPV4_ONLY
default us-south1 default 10.206.0.0/20 IPV4_ONLY
default me-west1 default 10.208.0.0/20 IPV4_ONLY
default europe-west12 default 10.210.0.0/20 IPV4_ONLY
default me-central1 default 10.212.0.0/20 IPV4_ONLY
default europe-west10 default 10.214.0.0/20 IPV4_ONLY
default me-central2 default 10.216.0.0/20 IPV4_ONLY
default africa-south1 default 10.218.0.0/20 IPV4_ONLY
default us-west8 default 10.220.0.0/20 IPV4_ONLY
PS C:\TACG>
PS C:\TACG> gcloud compute networks subnets update default --region=europe-west3 --enable-private-ip-google-access
Updated [https://www.googleapis.com/compute/v1/projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default].
PS C:\TACG>
PS C:\TACG> gcloud compute networks subnets describe default --region=europe-west3 --format="get(privateIpGoogleAccess)"
True

Using gcloud to determine all used internal IPs:

PS C:\TACG> gcloud asset search-all-resources --asset-types='compute.googleapis.com/Instance' --query="networks/default" --format=json | jq ".[].additionalAttributes.internalIPs"
[
"10.156.0.2"
]
PS C:\TACG>

As you must be aware of any default quotas that might break the deployment of the Worker VMs, you can get the quota information by using gcloud:

PS C:\TACG> gcloud compute project-info describe --project
tacg-gcp-XXXXXXXXXX commonInstanceMetadata: fingerprint: mhAFevpXe_g= items: - key: serial-port-enable value: 'TRUE' kind: compute#metadata creationTimestamp: '2023-12-01T04:09:39.536-08:00' defaultNetworkTier: PREMIUM defaultServiceAccount: XXXXXXXXXXXX-compute@developer.gserviceaccount.com id: '826031XXXXXXXXXXXX' kind: compute#project name: tacg-gcp-XXXXXXXXXX quotas: - limit: 1000.0 metric: SNAPSHOTS usage: 0.0 - limit: 5.0 metric: NETWORKS usage: 1.0 - limit: 100.0 metric: FIREWALLS usage: 8.0 - limit: 100.0 metric: IMAGES usage: 0.0 - limit: 8.0 metric: STATIC_ADDRESSES usage: 0.0 - limit: 200.0 metric: ROUTES usage: 0.0 - limit: 15.0 metric: FORWARDING_RULES usage: 0.0 - limit: 50.0 metric: TARGET_POOLS usage: 0.0 - limit: 75.0 metric: HEALTH_CHECKS usage: 0.0 - limit: 8.0 metric: IN_USE_ADDRESSES usage: 0.0 - limit: 50.0 metric: TARGET_INSTANCES usage: 0.0 - limit: 10.0 metric: TARGET_HTTP_PROXIES usage: 0.0 - limit: 10.0 metric: URL_MAPS usage: 0.0 - limit: 50.0 metric: BACKEND_SERVICES usage: 0.0 - limit: 100.0 metric: INSTANCE_TEMPLATES usage: 2.0 - limit: 5.0 metric: TARGET_VPN_GATEWAYS usage: 0.0 - limit: 10.0 metric: VPN_TUNNELS usage: 0.0 - limit: 3.0 metric: BACKEND_BUCKETS usage: 0.0 - limit: 10.0 metric: ROUTERS usage: 0.0 - limit: 10.0 metric: TARGET_SSL_PROXIES usage: 0.0 - limit: 10.0 metric: TARGET_HTTPS_PROXIES usage: 0.0 - limit: 10.0 metric: SSL_CERTIFICATES usage: 0.0 - limit: 100.0 metric: SUBNETWORKS usage: 0.0 - limit: 10.0 metric: TARGET_TCP_PROXIES usage: 0.0 - limit: 32.0 metric: CPUS_ALL_REGIONS usage: 0.0 - limit: 10.0 metric: SECURITY_POLICIES usage: 0.0 - limit: 100.0 metric: SECURITY_POLICY_RULES usage: 0.0 - limit: 1000.0 metric: XPN_SERVICE_PROJECTS usage: 0.0 - limit: 20.0 metric: PACKET_MIRRORINGS usage: 0.0 - limit: 100.0 metric: NETWORK_ENDPOINT_GROUPS usage: 0.0 - limit: 6.0 metric: INTERCONNECTS usage: 0.0 - limit: 5000.0 metric: GLOBAL_INTERNAL_ADDRESSES usage: 0.0 - limit: 5.0 metric: VPN_GATEWAYS usage: 0.0 - limit: 
100.0 metric: MACHINE_IMAGES usage: 0.0 - limit: 20.0 metric: SECURITY_POLICY_CEVAL_RULES usage: 0.0 - limit: 0.0 metric: GPUS_ALL_REGIONS usage: 0.0 - limit: 5.0 metric: EXTERNAL_VPN_GATEWAYS usage: 0.0 - limit: 1.0 metric: PUBLIC_ADVERTISED_PREFIXES usage: 0.0 - limit: 10.0 metric: PUBLIC_DELEGATED_PREFIXES usage: 0.0 - limit: 128.0 metric: STATIC_BYOIP_ADDRESSES usage: 0.0 - limit: 10.0 metric: NETWORK_FIREWALL_POLICIES usage: 0.0 - limit: 15.0 metric: INTERNAL_TRAFFIC_DIRECTOR_FORWARDING_RULES usage: 0.0 - limit: 15.0 metric: GLOBAL_EXTERNAL_MANAGED_FORWARDING_RULES usage: 0.0 - limit: 50.0 metric: GLOBAL_INTERNAL_MANAGED_BACKEND_SERVICES usage: 0.0 - limit: 50.0 metric: GLOBAL_EXTERNAL_MANAGED_BACKEND_SERVICES usage: 0.0 - limit: 50.0 metric: GLOBAL_EXTERNAL_PROXY_LB_BACKEND_SERVICES usage: 0.0 - limit: 250.0 metric: GLOBAL_INTERNAL_TRAFFIC_DIRECTOR_BACKEND_SERVICES usage: 0.0 selfLink: https://www.googleapis.com/compute/v1/projects/tacg-gcp-XXXXXXXXXX vmDnsSetting: ZONAL_ONLY xpnProjectStatus: UNSPECIFIED_XPN_PROJECT_STATUS
PS C:\TACG>

You can filter the quotas or change them using the corresponding Quota pages in the Google Cloud console: Google Cloud Console - Quotas

Getting the available Compute Images from Google Cloud Platform (GCP)

We want to automatically deploy the virtual machines necessary for the Domain Controller and the Cloud Connectors - so we need detailed configuration settings. Get the available Compute images:

PS C:\TACG> gcloud compute images list --filter 'family ~ windows'
NAME PROJECT FAMILY DEPRECATED STATUS
windows-server-2016-dc-core-v20240313 windows-cloud windows-2016-core READY
windows-server-2016-dc-v20240313 windows-cloud windows-2016 READY
windows-server-2019-dc-core-v20240313 windows-cloud windows-2019-core
READY
windows-server-2019-dc-v20240313 windows-cloud windows-2019 READY
windows-server-2022-dc-core-v20240313 windows-cloud windows-2022-core READY
windows-server-2022-dc-v20240313 windows-cloud windows-2022 READY
PS C:\TACG>

Note the NAME value of the image you want to use - for example windows-server-2022-dc-v20240313.

Getting the available VM sizes from Google Cloud Platform (GCP)

We need to determine the available VM sizes. A gcloud call helps us to list the available instance types on Google Cloud Platform (GCP):

PS C:\TACG> gcloud compute machine-types list --filter="zone:( europe-west3-c )"
NAME ZONE CPUS MEMORY_GB DEPRECATED
c2-standard-16 europe-west3-c 16 64.00 c2-standard-30 europe-west3-c 30 120.00 c2-standard-4 europe-west3-c 4 16.00 c2-standard-60 europe-west3-c 60 240.00 c2-standard-8 europe-west3-c 8 32.00 c2d-highcpu-112 europe-west3-c 112 224.00 c2d-highcpu-16 europe-west3-c 16 32.00 c2d-highcpu-2 europe-west3-c 2 4.00 c2d-highcpu-32 europe-west3-c 32 64.00 c2d-highcpu-4 europe-west3-c 4 8.00 c2d-highcpu-56 europe-west3-c 56 112.00 c2d-highcpu-8 europe-west3-c 8 16.00 c2d-highmem-112 europe-west3-c 112 896.00 c2d-highmem-16 europe-west3-c 16 128.00 c2d-highmem-2 europe-west3-c 2 16.00 c2d-highmem-32 europe-west3-c 32 256.00 c2d-highmem-4 europe-west3-c 4 32.00 c2d-highmem-56 europe-west3-c 56 448.00 c2d-highmem-8 europe-west3-c 8 64.00 c2d-standard-112 europe-west3-c 112 448.00 c2d-standard-16 europe-west3-c 16 64.00 c2d-standard-2 europe-west3-c 2 8.00 c2d-standard-32 europe-west3-c 32 128.00 c2d-standard-4 europe-west3-c 4 16.00 c2d-standard-56 europe-west3-c 56 224.00 c2d-standard-8 europe-west3-c 8 32.00 e2-highcpu-16 europe-west3-c 16 16.00 e2-highcpu-2 europe-west3-c 2 2.00 e2-highcpu-32 europe-west3-c 32 32.00 e2-highcpu-4
europe-west3-c 4 4.00 e2-highcpu-8 europe-west3-c 8 8.00 e2-highmem-16 europe-west3-c 16 128.00 e2-highmem-2 europe-west3-c 2 16.00 e2-highmem-4 europe-west3-c 4 32.00 e2-highmem-8 europe-west3-c 8 64.00 e2-medium europe-west3-c 2 4.00 e2-micro europe-west3-c 2 1.00 e2-small europe-west3-c 2 2.00 e2-standard-16 europe-west3-c 16 64.00 e2-standard-2 europe-west3-c 2 8.00 e2-standard-32 europe-west3-c 32 128.00 e2-standard-4 europe-west3-c 4 16.00 e2-standard-8 europe-west3-c 8 32.00 f1-micro europe-west3-c 1 0.60 g1-small europe-west3-c 1 1.70 m1-megamem-96 europe-west3-c 96 1433.60 m1-ultramem-160 europe-west3-c 160 3844.00 m1-ultramem-40 europe-west3-c 40 961.00 m1-ultramem-80 europe-west3-c 80 1922.00 n1-highcpu-16 europe-west3-c 16 14.40 n1-highcpu-2 europe-west3-c 2 1.80 n1-highcpu-32 europe-west3-c 32 28.80 n1-highcpu-4 europe-west3-c 4 3.60 n1-highcpu-64 europe-west3-c 64 57.60 n1-highcpu-8 europe-west3-c 8 7.20 n1-highcpu-96 europe-west3-c 96 86.40 n1-highmem-16 europe-west3-c 16 104.00 n1-highmem-2 europe-west3-c 2 13.00 n1-highmem-32 europe-west3-c 32 208.00 n1-highmem-4 europe-west3-c 4 26.00 n1-highmem-64 europe-west3-c 64 416.00 n1-highmem-8 europe-west3-c 8 52.00 n1-highmem-96 europe-west3-c 96 624.00 n1-megamem-96 europe-west3-c 96 1433.60 DEPRECATED n1-standard-1 europe-west3-c 1 3.75 n1-standard-16 europe-west3-c 16 60.00 n1-standard-2 europe-west3-c 2 7.50 n1-standard-32 europe-west3-c 32 120.00 n1-standard-4 europe-west3-c 4 15.00 n1-standard-64 europe-west3-c 64 240.00 n1-standard-8 europe-west3-c 8 30.00 n1-standard-96 europe-west3-c 96 360.00 n1-ultramem-160 europe-west3-c 160 3844.00 DEPRECATED n1-ultramem-40 europe-west3-c 40 961.00 DEPRECATED n1-ultramem-80 europe-west3-c 80 1922.00 DEPRECATED n2-highcpu-16 europe-west3-c 16 16.00 n2-highcpu-2 europe-west3-c 2 2.00 n2-highcpu-32 europe-west3-c 32 32.00 n2-highcpu-4 europe-west3-c 4 4.00 n2-highcpu-48 europe-west3-c 48 48.00 n2-highcpu-64 europe-west3-c 64 64.00 n2-highcpu-8 europe-west3-c 8 
8.00 n2-highcpu-80 europe-west3-c 80 80.00 n2-highcpu-96 europe-west3-c 96 96.00 n2-highmem-128 europe-west3-c 128 864.00 n2-highmem-16 europe-west3-c 16 128.00 n2-highmem-2 europe-west3-c 2 16.00 n2-highmem-32 europe-west3-c 32 256.00 n2-highmem-4 europe-west3-c 4 32.00 n2-highmem-48 europe-west3-c 48 384.00 n2-highmem-64 europe-west3-c 64 512.00 n2-highmem-8 europe-west3-c 8 64.00 n2-highmem-80 europe-west3-c 80 640.00 n2-highmem-96 europe-west3-c 96 768.00 n2-standard-128 europe-west3-c 128 512.00 n2-standard-16 europe-west3-c 16 64.00 n2-standard-2 europe-west3-c 2 8.00 n2-standard-32 europe-west3-c 32 128.00 n2-standard-4 europe-west3-c 4 16.00 n2-standard-48 europe-west3-c 48 192.00 n2-standard-64 europe-west3-c 64 256.00 n2-standard-8 europe-west3-c 8 32.00 n2-standard-80 europe-west3-c 80 320.00 n2-standard-96 europe-west3-c 96 384.00 n2d-highcpu-128 europe-west3-c 128 128.00 n2d-highcpu-16 europe-west3-c 16 16.00 n2d-highcpu-2 europe-west3-c 2 2.00 n2d-highcpu-224 europe-west3-c 224 224.00 n2d-highcpu-32 europe-west3-c 32 32.00 n2d-highcpu-4 europe-west3-c 4 4.00 n2d-highcpu-48 europe-west3-c 48 48.00 n2d-highcpu-64 europe-west3-c 64 64.00 n2d-highcpu-8 europe-west3-c 8 8.00 n2d-highcpu-80 europe-west3-c 80 80.00 n2d-highcpu-96 europe-west3-c 96 96.00 n2d-highmem-16 europe-west3-c 16 128.00 n2d-highmem-2 europe-west3-c 2 16.00 n2d-highmem-32 europe-west3-c 32 256.00 n2d-highmem-4 europe-west3-c 4 32.00 n2d-highmem-48 europe-west3-c 48 384.00 n2d-highmem-64 europe-west3-c 64 512.00 n2d-highmem-8 europe-west3-c 8 64.00 n2d-highmem-80 europe-west3-c 80 640.00 n2d-highmem-96 europe-west3-c 96 768.00 n2d-standard-128 europe-west3-c 128 512.00 n2d-standard-16 europe-west3-c 16 64.00 n2d-standard-2 europe-west3-c 2 8.00 n2d-standard-224 europe-west3-c 224 896.00 n2d-standard-32 europe-west3-c 32 128.00 n2d-standard-4 europe-west3-c 4 16.00 n2d-standard-48 europe-west3-c 48 192.00 n2d-standard-64 europe-west3-c 64 256.00 n2d-standard-8 europe-west3-c 8 32.00 
n2d-standard-80 europe-west3-c 80 320.00 n2d-standard-96 europe-west3-c 96 384.00 t2d-standard-1 europe-west3-c 1 4.00 t2d-standard-16 europe-west3-c 16 64.00 t2d-standard-2 europe-west3-c 2 8.00 t2d-standard-32 europe-west3-c 32 128.00 t2d-standard-4 europe-west3-c 4 16.00 t2d-standard-48 europe-west3-c 48 192.00 t2d-standard-60 europe-west3-c 60 240.00 t2d-standard-8 europe-west3-c 8 32.00
PS C:\TACG>

We need to filter the results to narrow down usable instances for the Cloud Connectors and the Admin-VM - we want to use instances with a maximum of 2 vCPUs:

Filtered for CCs and Admin-VM:
PS C:\TACG> gcloud compute machine-types list --filter="(CPUS<=2) AND (zone:europe-west3-c)"
NAME ZONE CPUS MEMORY_GB DEPRECATED
c2d-highcpu-2 europe-west3-c 2 4.00 c2d-highmem-2 europe-west3-c 2 16.00 c2d-standard-2 europe-west3-c 2 8.00 e2-highcpu-2 europe-west3-c 2 2.00 e2-highmem-2 europe-west3-c 2 16.00 e2-medium europe-west3-c 2 4.00 e2-micro europe-west3-c 2 1.00 e2-small europe-west3-c 2 2.00 e2-standard-2 europe-west3-c 2 8.00 f1-micro europe-west3-c 1 0.60 g1-small europe-west3-c 1 1.70 n1-highcpu-2 europe-west3-c 2 1.80 n1-highmem-2 europe-west3-c 2 13.00 n1-standard-1 europe-west3-c 1 3.75 n1-standard-2 europe-west3-c 2 7.50 n2-highcpu-2 europe-west3-c 2 2.00 n2-highmem-2 europe-west3-c 2 16.00 n2-standard-2 europe-west3-c 2 8.00 n2d-highcpu-2 europe-west3-c 2 2.00 n2d-highmem-2 europe-west3-c 2 16.00 n2d-standard-2 europe-west3-c 2 8.00 t2d-standard-1 europe-west3-c 1 4.00 t2d-standard-2 europe-west3-c 2 8.00
PS C:\TACG>

We need to filter the results to narrow down usable instances for the Worker VMs - we want to use instances with a maximum of 8 vCPUs:

Filtered for Worker:
PS C:\TACG> gcloud compute machine-types list --filter="(CPUS<=8) AND (zone:europe-west3-c)"
NAME ZONE CPUS MEMORY_GB DEPRECATED
c2-standard-4 europe-west3-c 4 16.00 c2-standard-8 europe-west3-c 8 32.00 c2d-highcpu-2 europe-west3-c 2 4.00 c2d-highcpu-4 europe-west3-c 4 8.00 c2d-highcpu-8 europe-west3-c 8 16.00 c2d-highmem-2 europe-west3-c 2 16.00 c2d-highmem-4 europe-west3-c 4 32.00 c2d-highmem-8 europe-west3-c 8 64.00 c2d-standard-2 europe-west3-c 2 8.00 c2d-standard-4 europe-west3-c 4 16.00 c2d-standard-8 europe-west3-c 8 32.00 e2-highcpu-2 europe-west3-c 2 2.00 e2-highcpu-4 europe-west3-c 4 4.00 e2-highcpu-8 europe-west3-c 8 8.00 e2-highmem-2 europe-west3-c 2 16.00 e2-highmem-4 europe-west3-c 4 32.00 e2-highmem-8 europe-west3-c 8 64.00 e2-medium europe-west3-c 2 4.00 e2-micro europe-west3-c 2 1.00 e2-small europe-west3-c 2 2.00 e2-standard-2 europe-west3-c 2 8.00 e2-standard-4 europe-west3-c 4 16.00 e2-standard-8 europe-west3-c 8 32.00 f1-micro europe-west3-c 1 0.60 g1-small europe-west3-c 1 1.70 n1-highcpu-2 europe-west3-c 2 1.80 n1-highcpu-4 europe-west3-c 4 3.60 n1-highcpu-8 europe-west3-c 8 7.20 n1-highmem-2 europe-west3-c 2 13.00 n1-highmem-4 europe-west3-c 4 26.00 n1-highmem-8 europe-west3-c 8 52.00 n1-standard-1 europe-west3-c 1 3.75 n1-standard-2 europe-west3-c 2 7.50 n1-standard-4 europe-west3-c 4 15.00 n1-standard-8 europe-west3-c 8 30.00 n2-highcpu-2 europe-west3-c 2 2.00 n2-highcpu-4 europe-west3-c 4 4.00 n2-highcpu-8 europe-west3-c 8 8.00 n2-highmem-2 europe-west3-c 2 16.00 n2-highmem-4 europe-west3-c 4 32.00 n2-highmem-8 europe-west3-c 8 64.00 n2-standard-2 europe-west3-c 2 8.00 n2-standard-4 europe-west3-c 4 16.00 n2-standard-8 europe-west3-c 8 32.00 n2d-highcpu-2 europe-west3-c 2 2.00 n2d-highcpu-4
europe-west3-c 4 4.00 n2d-highcpu-8 europe-west3-c 8 8.00 n2d-highmem-2 europe-west3-c 2 16.00 n2d-highmem-4 europe-west3-c 4 32.00 n2d-highmem-8 europe-west3-c 8 64.00 n2d-standard-2 europe-west3-c 2 8.00 n2d-standard-4 europe-west3-c 4 16.00 n2d-standard-8 europe-west3-c 8 32.00 t2d-standard-1 europe-west3-c 1 4.00 t2d-standard-2 europe-west3-c 2 8.00 t2d-standard-4 europe-west3-c 4 16.00 t2d-standard-8 europe-west3-c 8 32.00 PS C:\TACG> Note the Name parameter of the Google Cloud Platform (GCP) instances you want to use. Creating a Service Account for Google Cloud Platform (GCP) Authentication or checking the rights of an existing Service Account Google Identity and Access Management (IAM) grants you granular access to specific Google Cloud resources. It is important to define WHO has access to WHICH resources and WHAT they can do with them. Service accounts live inside projects, similar to other resources you deploy on Google Cloud. A Google Cloud Build Service Account is provisioned automatically once the Google APIs are enabled. The Cloud Build service account is identifiable with an email address that begins with the Project Number. The Cloud Build Account requires the following three roles: Cloud Build Service Account (assigned by default) Compute Instance Admin Service Account User To enable Citrix Cloud to access all needed entities, you need to add the following roles: Compute Admin Storage Admin Cloud Build Editor Service Account User Cloud Datastore User You can find detailed information about assigning these roles in the Tech Zone article Getting Started with Citrix DaaS on Google Cloud. 
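If any of the roles listed above are missing, they could be granted with gcloud projects add-iam-policy-binding. Below is a hedged sketch: the project ID and service-account address are the masked placeholders used throughout this guide, and the loop only echoes the commands rather than executing them.

```shell
# Sketch: print the add-iam-policy-binding commands for the required roles.
PROJECT_ID="tacg-gcp-XXXXXXXXXX"                      # masked placeholder
SA="XXXXXXXXXX@XXXXXXXXXX.iam.gserviceaccount.com"    # masked placeholder
for role in roles/compute.admin roles/storage.admin roles/cloudbuild.builds.editor roles/iam.serviceAccountUser roles/datastore.user; do
  echo "gcloud projects add-iam-policy-binding $PROJECT_ID --member=serviceAccount:$SA --role=$role"
done
```

Replace the placeholders with your real project ID and service-account address and remove the echo to apply the bindings.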
Use another gcloud call to check whether the Service Account you plan to use is assigned to the project and has the required IAM roles:

PS C:\TACG> gcloud projects list --impersonate-service-account=XXXXXXXXXX@XXXXXXXXXX.iam.gserviceaccount.com
PROJECT_ID NAME PROJECT_NUMBER
tacg-gcp-XXXXXXXXXX TACG-GCP XXXXXXXXXXXX
PS C:\TACG>
PS C:\TACG> gcloud projects get-iam-policy tacg-gcp-XXXXXXXXXX --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members:XXXXXXXXXX@XXXXXXXXXX.iam.gserviceaccount.com"
ROLE
roles/compute.admin
roles/iam.securityAdmin
roles/iam.cloudbuild.builds.editor
roles/iam.storage.admin
roles/iam.serviceAccountUser
roles/iam.datastore.user
...
roles/owner
PS C:\TACG>

Further Software Components for Configuration and Deployment

The Terraform deployment needs current versions of the following software components:

Citrix Cloud Connector Installer: cwcconnector.exe. Download the Citrix Cloud Connector Installer
Citrix Remote PowerShell SDK Installer: CitrixPoshSdk.exe. Download the Citrix Remote PowerShell SDK Installer

These components are required during the workflow; the Terraform engine looks for these files. In this guide we assume that the necessary software can be downloaded from a storage repository - Terraform creates a Storage Bucket to which all necessary software is uploaded.
The URIs of the Storage Repository can be set in the corresponding variables:

For the Cloud Connector: "CC_Install_CWCURI":"https://storage.googleapis.com/XXXXXXXXXX/cwcconnector.exe"
For the Remote PowerShell SDK: "CC_Install_RPoSHURI":"https://storage.googleapis.com/XXXXXXXXXX/CitrixPoshSdk.exe"

Creating a Secure Client in Citrix Cloud

The Secure Client in Citrix Cloud is the equivalent of an Access Key in Google Cloud Platform (GCP); it is used for authentication. API clients in Citrix Cloud are always tied to one administrator and one customer and are not visible to other administrators. If you want to access more than one customer, you must create API clients within each customer. API clients are automatically restricted to the rights of the administrator that created them. For example, if an administrator is restricted to access only notifications, then the administrator's API clients have the same restrictions:

Reducing an administrator's access also reduces the access of the API clients owned by that administrator.
Removing an administrator's access also removes the administrator's API clients.

Select the Identity and Access Management option from the menu to create an API client. If this option does not appear, you do not have adequate permissions to create an API client; contact your administrator to get the required permissions. Open Identity and Access Management in Web Studio: click API Access, then Secure Clients, and enter a name in the text box next to the Create Client button. After entering a name, click Create Client. After the Secure Client is created, copy and note the displayed ID and Secret: the Secret is only visible during creation - after closing the window, you cannot retrieve it anymore. The client-id and client-secret fields are needed by the Citrix Terraform provider. The customer-id field, which is also needed, can be found in your Citrix Cloud account details.
Put the values in the corresponding .auto.tfvars.json file:

{
  ...
  "CC_APIKey-ClientID":"f4xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxc15",
  "CC_APIKey-ClientSecret":"VxxxxxxxxxxxxxxxxxxA==",
  "CC_CustomerID": "uxxxxxxxxxxxxx",
  "CC_APIKey_Type": "client_credentials",
  ...
}

Creating a Bearer Token in Citrix Cloud

The Bearer Token is needed to authorize some REST API calls in Citrix Cloud. As the Citrix provider has not implemented all functionality yet, some REST API calls are still needed, and the Bearer Token authorizes these calls. It is important to set the URI to call and the required parameters correctly. The URI must follow this syntax: https://api-us.cloud.com/cctrustoauth2/{customerid}/tokens/clients, where {customerid} is the Customer ID you obtained from the Account Settings page. If your Customer ID is, for example, 1234567890, the URI is https://api-us.cloud.com/cctrustoauth2/1234567890/tokens/clients. In this example, we use the Postman application to create a Bearer Token: paste the correct URI into Postman's address bar and select POST as the method. Verify the correct settings of the API call.
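The same token request can also be issued from a shell. The sketch below builds the URI with the example Customer ID 1234567890 from the text; the curl call is commented out because it needs a real Secure Client ID and Secret, and the form-field names are taken from the Citrix Cloud trust-service documentation:

```shell
#!/bin/sh
# Example Customer ID from the text above - replace with your own.
CUSTOMER_ID="1234567890"
URL="https://api-us.cloud.com/cctrustoauth2/${CUSTOMER_ID}/tokens/clients"
echo "$URL"

# Uncomment and fill in your Secure Client credentials to request the token:
# curl -s -X POST "$URL" \
#   -H "Content-Type: application/x-www-form-urlencoded" \
#   --data "grant_type=client_credentials&client_id=<client-id>&client_secret=<client-secret>"
```

The response body contains the token in the access_token field, which is what goes into the CC_APIKey_Bearer variable.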
![Creating a Bearer Token using Postman](/en-us/tech-zone/build/media/deployment-guides_citrix-daas-terraform-gcp-create-bearertoken.png)

If everything is set correctly, Postman shows a response containing a JSON-formatted body with the Bearer Token in the access_token field. The token is normally valid for 3600 seconds. Put the access_token value in the corresponding .auto.tfvars.json file:

{
  ...
  "CC_APIKey-ClientID":"f4xxxxxx-xxxx-xxxx-xxxx-Xxxxxxxxxc15",
  "CC_APIKey-ClientSecret":"VxxxxxxxxxxxxxxxxxxA==",
  "CC_CustomerID": "uxxxxxxxxxxxxx",
  "CC_APIKey_Type": "client_credentials",
  "CC_APIKey_Bearer": "CWSAuth bearer=eyJhbGciOiJSUzI1NiIsI...0q0IW7SZFVzeBittWnEwTYOZ7Q",
  ...
}

You can also use PowerShell to request a Bearer Token; for this, you need a valid Secure Client stored in Citrix Cloud:

asnp Citrix*
$key = "f4eXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
$secret = "VJCXXXXXXXXXXXXXX"
$customer = "uXXXXXXXX"
$XDStoredCredentials = Set-XDCredentials -StoreAs default -ProfileType CloudApi -CustomerId $customer -APIKey $key -SecretKey $secret
$auth = Get-XDAuthentication
$GLOBAL:XDAuthToken | Out-File "<Path where to store the Token>\BT.txt" -NoNewline -Encoding Ascii

Module 1: Create the initially needed Resources on Google Cloud Platform (GCP)

This module is split into the following configuration parts:

Creating the initially needed resources on Google Cloud Platform (GCP):
Creating all needed firewall rules on Google Cloud Platform (GCP)
Creating all needed internal IP addresses on Google Cloud Platform (GCP)
Creating all needed IAM policies on Google Cloud Platform (GCP)
Creating the Storage Bucket and uploading all required software items on Google Cloud Platform (GCP)
Creating a Windows Server 2022-based Master Image VM used for deploying the Machine Catalog in Step 3
Creating two Windows Server 2022-based VMs, which will be used as Cloud Connector VMs in Step 2
Creating a Windows Server 2022-based VM acting as an administrative workstation for running Terraform Steps 2 and 3 - this is necessary because WinRM is used for further configuration and deployment in Steps 2 and 3
Creating all necessary scripts for joining the VMs to the existing sub-domain
Putting the VMs into the existing sub-domain
Fetching and saving a valid Bearer Token

Terraform does all these steps automatically. Please make sure you have configured the variables according to your needs.
The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and, if no errors occur, terraform apply.

PS C:\TACG\_CCOnGCP\_CCOnGCP-Creation> terraform init

Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/template...
- Finding hashicorp/google versions matching ">= 5.21.0"...
- Finding mastercard/restapi versions matching "1.18.2"...
- Finding citrix/citrix versions matching ">= 0.5.4"...
- Installing mastercard/restapi v1.18.2...
- Installed mastercard/restapi v1.18.2 (self-signed, key ID DCB8C431D71C30AB)
- Installing citrix/citrix v0.5.4...
- Installed citrix/citrix v0.5.4 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/google v5.21.0...
- Installed hashicorp/google v5.21.0 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory.
If you forget, other commands will detect it and remind you to do so if necessary. PS C:\TACG\_CCOnGCP\_CCOnGCP-Creation> terraform plan data.template_file.Initial-Windows-BootScript-NoDC: Reading... data.template_file.Initial-Windows-BootScript-NoDC: Read complete after 0s [id=d6adcc3a41a5c143a357d5555931b532da28a66cb82bfb58fea2801950cb2844] data.google_compute_network.VPC: Reading... data.google_compute_subnetwork.Subnet: Reading... data.google_compute_subnetwork.Subnet: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default] data.google_compute_network.VPC: Read complete after 1s [id=projects/tacg-gcp-XXXXXXXXXX/global/networks/default] Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # google_compute_address.internal-ip-adminvm will be created + resource "google_compute_address" "internal-ip-adminvm" { + address = "10.156.0.7" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-avm" + network_tier = (known after apply) + prefix_length = (known after apply) + project = "tacg-gcp-XXXXXXXXXX" + purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_address.internal-ip-cc1 will be created + resource "google_compute_address" "internal-ip-cc1" { + address = "10.156.0.5" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-cc1" + network_tier = (known after apply) + prefix_length = 
(known after apply) + project = "tacg-gcp-XXXXXXXXXX" + purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_address.internal-ip-cc2 will be created + resource "google_compute_address" "internal-ip-cc2" { + address = "10.156.0.6" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-cc2" + network_tier = (known after apply) + prefix_length = (known after apply) + project = "tacg-gcp-XXXXXXXXXX" + purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_address.internal-ip-wmi will be created + resource "google_compute_address" "internal-ip-wmi" { + address = "10.156.0.8" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-wmi" + network_tier = (known after apply) + prefix_length = (known after apply) + project = "tacg-gcp-XXXXXXXXXX" + purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_firewall.Allow-All-ICMP will be created + resource "google_compute_firewall" "Allow-All-ICMP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = 
(known after apply) + name = "tacg-gcp-tf-fw-allow-all-icmp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "all-icmp", ] + allow { + ports = [] + protocol = "icmp" } } # google_compute_firewall.Allow-All-TCP will be created + resource "google_compute_firewall" "Allow-All-TCP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-all-tcp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "all-tcp", ] + allow { + ports = [ + "0-65534", ] + protocol = "tcp" } } # google_compute_firewall.Allow-All-UDP will be created + resource "google_compute_firewall" "Allow-All-UDP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-all-udp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "all-udp", ] + allow { + ports = [ + "0-65534", ] + protocol = "udp" } } # google_compute_firewall.Allow-HTTP will be created + resource "google_compute_firewall" "Allow-HTTP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-http" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "0.0.0.0/0", ] + target_tags = [ + "http", ] + allow { + ports = [ + "80", ] + protocol = 
"tcp" } } # google_compute_firewall.Allow-HTTPS will be created + resource "google_compute_firewall" "Allow-HTTPS" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-https" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "0.0.0.0/0", ] + target_tags = [ + "https", ] + allow { + ports = [ + "443", ] + protocol = "tcp" } } # google_compute_firewall.Allow-RDP will be created + resource "google_compute_firewall" "Allow-RDP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-rdp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "0.0.0.0/0", ] + target_tags = [ + "rdp", ] + allow { + ports = [ + "3389", ] + protocol = "tcp" } } # google_compute_firewall.Allow-SSH will be created + resource "google_compute_firewall" "Allow-SSH" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-ssh" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "ssh", ] + allow { + ports = [ + "22", ] + protocol = "tcp" } } # google_compute_firewall.Allow-WinRM will be created + resource "google_compute_firewall" "Allow-WinRM" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = 
"tacg-gcp-tf-fw-allow-winrm" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "winrm", ] + allow { + ports = [ + "5985", + "5986", ] + protocol = "tcp" } } # google_compute_instance.AdminVM will be created + resource "google_compute_instance" "AdminVM" { + can_ip_forward = false + cpu_platform = (known after apply) + current_status = (known after apply) + deletion_protection = false + effective_labels = (known after apply) + guest_accelerator = (known after apply) + hostname = "tacg-gcp-tf-avm.gcp.the-austrian-citrix-guy.at" + id = (known after apply) + instance_id = (known after apply) + label_fingerprint = (known after apply) + machine_type = "n2-standard-2" + metadata = { + "sysprep-specialize-script-ps1" = <<-EOT Start-Sleep -Seconds 60 #Set DNS Server address Set-DNSClientServerAddress -ServerAddresses '10.156.0.2' -InterfaceIndex (Get-NetAdapter).InterfaceIndex netdom.exe join $env:COMPUTERNAME /domain:gcp.the-austrian-citrix-guy.at /UserD:XXXXXXXXXX@gcp.the-austrian-citrix-guy.at /PasswordD:XXXXXXXXXXXX /reboot:5 EOT } + metadata_fingerprint = (known after apply) + min_cpu_platform = (known after apply) + name = "tacg-gcp-tf-avm" + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + tags = [ + "all-icmp", + "all-tcp", + "all-udp", + "http", + "https", + "rdp", + "winrm", ] + tags_fingerprint = (known after apply) + terraform_labels = (known after apply) + zone = "europe-west3-c" + boot_disk { + auto_delete = true + device_name = (known after apply) + disk_encryption_key_sha256 = (known after apply) + kms_key_self_link = (known after apply) + mode = "READ_WRITE" + source = (known after apply) + initialize_params { + image = "windows-server-2022-dc-v20240313" + labels = (known after apply) + provisioned_iops = (known after apply) + provisioned_throughput = (known after apply) + size = (known after apply) + type = (known after 
apply) } } + network_interface { + internal_ipv6_prefix_length = (known after apply) + ipv6_access_type = (known after apply) + ipv6_address = (known after apply) + name = (known after apply) + network = "default" + network_ip = "10.156.0.7" + stack_type = (known after apply) + subnetwork = "default" + subnetwork_project = (known after apply) + access_config { + nat_ip = (known after apply) + network_tier = (known after apply) } } } # google_compute_instance.CC1 will be created + resource "google_compute_instance" "CC1" { + can_ip_forward = false + cpu_platform = (known after apply) + current_status = (known after apply) + deletion_protection = false + effective_labels = (known after apply) + guest_accelerator = (known after apply) + hostname = "tacg-gcp-tf-cc1.gcp.the-austrian-citrix-guy.at" + id = (known after apply) + instance_id = (known after apply) + label_fingerprint = (known after apply) + machine_type = "n2-standard-2" + metadata = { + "sysprep-specialize-script-ps1" = <<-EOT Start-Sleep -Seconds 60 #Set DNS Server address Set-DNSClientServerAddress -ServerAddresses '10.156.0.2' -InterfaceIndex (Get-NetAdapter).InterfaceIndex netdom.exe join $env:COMPUTERNAME /domain:gcp.the-austrian-citrix-guy.at /UserD:XXXXXXXXXX@gcp.the-austrian-citrix-guy.at /PasswordD:XXXXXXXXXXXX /reboot:5 EOT } + metadata_fingerprint = (known after apply) + min_cpu_platform = (known after apply) + name = "tacg-gcp-tf-cc1" + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + tags = [ + "all-icmp", + "all-tcp", + "all-udp", + "http", + "https", + "rdp", + "winrm", ] + tags_fingerprint = (known after apply) + terraform_labels = (known after apply) + zone = "europe-west3-c" + boot_disk { + auto_delete = true + device_name = (known after apply) + disk_encryption_key_sha256 = (known after apply) + kms_key_self_link = (known after apply) + mode = "READ_WRITE" + source = (known after apply) + initialize_params { + image = "windows-server-2022-dc-v20240313" + labels = (known 
after apply) + provisioned_iops = (known after apply) + provisioned_throughput = (known after apply) + size = (known after apply) + type = (known after apply) } } + network_interface { + internal_ipv6_prefix_length = (known after apply) + ipv6_access_type = (known after apply) + ipv6_address = (known after apply) + name = (known after apply) + network = "default" + network_ip = "10.156.0.5" + stack_type = (known after apply) + subnetwork = "default" + subnetwork_project = (known after apply) + access_config { + nat_ip = (known after apply) + network_tier = (known after apply) } } } # google_compute_instance.CC2 will be created + resource "google_compute_instance" "CC2" { + can_ip_forward = false + cpu_platform = (known after apply) + current_status = (known after apply) + deletion_protection = false + effective_labels = (known after apply) + guest_accelerator = (known after apply) + hostname = "tacg-gcp-tf-cc2.gcp.the-austrian-citrix-guy.at" + id = (known after apply) + instance_id = (known after apply) + label_fingerprint = (known after apply) + machine_type = "n2-standard-2" + metadata = { + "sysprep-specialize-script-ps1" = <<-EOT Start-Sleep -Seconds 60 #Set DNS Server address Set-DNSClientServerAddress -ServerAddresses '10.156.0.2' -InterfaceIndex (Get-NetAdapter).InterfaceIndex netdom.exe join $env:COMPUTERNAME /domain:gcp.the-austrian-citrix-guy.at /UserD:XXXXXXXXXX@gcp.the-austrian-citrix-guy.at /PasswordD:XXXXXXXXXXXX /reboot:5 EOT } + metadata_fingerprint = (known after apply) + min_cpu_platform = (known after apply) + name = "tacg-gcp-tf-cc2" + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + tags = [ + "all-icmp", + "all-tcp", + "all-udp", + "http", + "https", + "rdp", + "winrm", ] + tags_fingerprint = (known after apply) + terraform_labels = (known after apply) + zone = "europe-west3-c" + boot_disk { + auto_delete = true + device_name = (known after apply) + disk_encryption_key_sha256 = (known after apply) + kms_key_self_link = 
(known after apply) + mode = "READ_WRITE" + source = (known after apply) + initialize_params { + image = "windows-server-2022-dc-v20240313" + labels = (known after apply) + provisioned_iops = (known after apply) + provisioned_throughput = (known after apply) + size = (known after apply) + type = (known after apply) } } + network_interface { + internal_ipv6_prefix_length = (known after apply) + ipv6_access_type = (known after apply) + ipv6_address = (known after apply) + name = (known after apply) + network = "default" + network_ip = "10.156.0.6" + stack_type = (known after apply) + subnetwork = "default" + subnetwork_project = (known after apply) + access_config { + nat_ip = (known after apply) + network_tier = (known after apply) } } } # google_compute_instance.WMI will be created + resource "google_compute_instance" "WMI" { + can_ip_forward = false + cpu_platform = (known after apply) + current_status = (known after apply) + deletion_protection = false + effective_labels = (known after apply) + guest_accelerator = (known after apply) + hostname = "tacg-gcp-tf-wmi.gcp.the-austrian-citrix-guy.at" + id = (known after apply) + instance_id = (known after apply) + label_fingerprint = (known after apply) + machine_type = "n2-standard-2" + metadata = { + "sysprep-specialize-script-ps1" = <<-EOT Start-Sleep -Seconds 60 #Set DNS Server address Set-DNSClientServerAddress -ServerAddresses '10.156.0.2' -InterfaceIndex (Get-NetAdapter).InterfaceIndex netdom.exe join $env:COMPUTERNAME /domain:gcp.the-austrian-citrix-guy.at /UserD:XXXXXXXXXX@gcp.the-austrian-citrix-guy.at /PasswordD:XXXXXXXXXXXX /reboot:5 EOT } + metadata_fingerprint = (known after apply) + min_cpu_platform = (known after apply) + name = "tacg-gcp-tf-wmi" + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + tags = [ + "all-icmp", + "all-tcp", + "all-udp", + "http", + "https", + "rdp", + "winrm", ] + tags_fingerprint = (known after apply) + terraform_labels = (known after apply) + zone = 
"europe-west3-c" + boot_disk { + auto_delete = true + device_name = (known after apply) + disk_encryption_key_sha256 = (known after apply) + kms_key_self_link = (known after apply) + mode = "READ_WRITE" + source = (known after apply) + initialize_params { + image = "windows-server-2022-dc-v20240313" + labels = (known after apply) + provisioned_iops = (known after apply) + provisioned_throughput = (known after apply) + size = (known after apply) + type = (known after apply) } } + network_interface { + internal_ipv6_prefix_length = (known after apply) + ipv6_access_type = (known after apply) + ipv6_address = (known after apply) + name = (known after apply) + network = "default" + network_ip = "10.156.0.8" + stack_type = (known after apply) + subnetwork = "default" + subnetwork_project = (known after apply) + access_config { + nat_ip = (known after apply) + network_tier = (known after apply) } } } Plan: 24 to add, 0 to change, 0 to destroy. Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now. PS C:\TACG\_CCOnGCP\_CCOnGCP-Creation> terraform apply data.template_file.Initial-Windows-BootScript-NoDC: Reading... data.template_file.Initial-Windows-BootScript-NoDC: Read complete after 0s [id=d6adcc3a41a5c143a357d5555931b532da28a66cb82bfb58fea2801950cb2844] data.google_compute_network.VPC: Reading... data.google_compute_subnetwork.Subnet: Reading... data.google_compute_subnetwork.Subnet: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default] data.google_compute_network.VPC: Read complete after 1s [id=projects/tacg-gcp-XXXXXXXXXX/global/networks/default] Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # google_compute_address.internal-ip-adminvm will be created + resource "google_compute_address" "internal-ip-adminvm" { + address = "10.156.0.7" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-avm" + network_tier = (known after apply) + prefix_length = (known after apply) + project = "tacg-gcp-XXXXXXXXXX" + purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_address.internal-ip-cc1 will be created + resource "google_compute_address" "internal-ip-cc1" { + address = "10.156.0.5" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-cc1" + network_tier = (known after apply) + prefix_length = (known after apply) + project = "tacg-gcp-XXXXXXXXXX" + purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_address.internal-ip-cc2 will be created + resource "google_compute_address" "internal-ip-cc2" { + address = "10.156.0.6" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-cc2" + network_tier = (known after apply) + prefix_length = (known after apply) + project = "tacg-gcp-XXXXXXXXXX" + 
purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_address.internal-ip-wmi will be created + resource "google_compute_address" "internal-ip-wmi" { + address = "10.156.0.8" + address_type = "INTERNAL" + creation_timestamp = (known after apply) + effective_labels = (known after apply) + id = (known after apply) + label_fingerprint = (known after apply) + name = "internal-ip-wmi" + network_tier = (known after apply) + prefix_length = (known after apply) + project = "tacg-gcp-XXXXXXXXXX" + purpose = (known after apply) + region = "europe-west3" + self_link = (known after apply) + subnetwork = "projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/subnetworks/default" + terraform_labels = (known after apply) + users = (known after apply) } # google_compute_firewall.Allow-All-ICMP will be created + resource "google_compute_firewall" "Allow-All-ICMP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-all-icmp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "all-icmp", ] + allow { + ports = [] + protocol = "icmp" } } # google_compute_firewall.Allow-All-TCP will be created + resource "google_compute_firewall" "Allow-All-TCP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-all-tcp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + 
"10.156.0.0/20", ] + target_tags = [ + "all-tcp", ] + allow { + ports = [ + "0-65534", ] + protocol = "tcp" } } # google_compute_firewall.Allow-All-UDP will be created + resource "google_compute_firewall" "Allow-All-UDP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-all-udp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "all-udp", ] + allow { + ports = [ + "0-65534", ] + protocol = "udp" } } # google_compute_firewall.Allow-HTTP will be created + resource "google_compute_firewall" "Allow-HTTP" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-http" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "0.0.0.0/0", ] + target_tags = [ + "http", ] + allow { + ports = [ + "80", ] + protocol = "tcp" } } # google_compute_firewall.Allow-HTTPS will be created + resource "google_compute_firewall" "Allow-HTTPS" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-https" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "0.0.0.0/0", ] + target_tags = [ + "https", ] + allow { + ports = [ + "443", ] + protocol = "tcp" } } # google_compute_firewall.Allow-RDP will be created + resource "google_compute_firewall" "Allow-RDP" { + creation_timestamp = (known after apply) + destination_ranges = (known after 
apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-rdp" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "0.0.0.0/0", ] + target_tags = [ + "rdp", ] + allow { + ports = [ + "3389", ] + protocol = "tcp" } } # google_compute_firewall.Allow-SSH will be created + resource "google_compute_firewall" "Allow-SSH" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-ssh" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "ssh", ] + allow { + ports = [ + "22", ] + protocol = "tcp" } } # google_compute_firewall.Allow-WinRM will be created + resource "google_compute_firewall" "Allow-WinRM" { + creation_timestamp = (known after apply) + destination_ranges = (known after apply) + direction = (known after apply) + enable_logging = (known after apply) + id = (known after apply) + name = "tacg-gcp-tf-fw-allow-winrm" + network = "default" + priority = 1000 + project = "tacg-gcp-XXXXXXXXXX" + self_link = (known after apply) + source_ranges = [ + "10.156.0.0/20", ] + target_tags = [ + "winrm", ] + allow { + ports = [ + "5985", + "5986", ] + protocol = "tcp" } } # google_storage_bucket.Prereqs will be created + resource "google_storage_bucket" "Prereqs" { + effective_labels = (known after apply) + force_destroy = true + id = (known after apply) + location = "EU" + name = "XXXXXXXXXX" + project = (known after apply) + public_access_prevention = (known after apply) + rpo = (known after apply) + self_link = (known after apply) + storage_class = "STANDARD" + terraform_labels = (known after apply) + uniform_bucket_level_access = (known after 
apply) + url = (known after apply) } # google_storage_bucket_access_control.Prereqs will be created + resource "google_storage_bucket_access_control" "Prereqs" { + bucket = "XXXXXXXXXX" + domain = (known after apply) + email = (known after apply) + entity = "allUsers" + id = (known after apply) + role = "READER" } # google_storage_bucket_iam_binding.iam will be created + resource "google_storage_bucket_iam_binding" "iam" { + bucket = "XXXXXXXXXX" + etag = (known after apply) + id = (known after apply) + members = [ + "allUsers", ] + role = "roles/storage.objectViewer" } # google_storage_bucket_object.Prereqs will be created + resource "google_storage_bucket_object" "Prereqs" { + bucket = "XXXXXXXXXX" + content = (sensitive value) + content_type = (known after apply) + crc32c = (known after apply) + detect_md5hash = "different hash" + id = (known after apply) + kms_key_name = (known after apply) + md5hash = (known after apply) + media_link = (known after apply) + name = "domain-join-script.ps1" + output_name = (known after apply) + self_link = (known after apply) + source = "./TF-Domain-Join-Script.ps1" + storage_class = (known after apply) } Plan: 24 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes google_storage_bucket.Prereqs: Creating... google_storage_bucket.Prereqs: Creation complete after 2s [id=XXXXXXXXXX] google_storage_bucket_access_control.Prereqs: Creating... google_storage_bucket_object.Prereqs: Creating... google_storage_bucket_object.Prereqs: Creation complete after 1s [id=XXXXXXXXXX-domain-join-script.ps1] google_storage_bucket_access_control.Prereqs: Creation complete after 2s [id=XXXXXXXXXX/allUsers] google_compute_address.internal-ip-adminvm: Creating... google_compute_firewall.Allow-All-UDP: Creating... google_compute_firewall.Allow-SSH: Creating... google_compute_firewall.Allow-HTTPS: Creating... 
google_compute_firewall.Allow-All-TCP: Creating... google_compute_firewall.Allow-HTTP: Creating... google_compute_firewall.Allow-All-ICMP: Creating... google_compute_firewall.Allow-WinRM: Creating... google_compute_firewall.Allow-RDP: Creating... google_compute_address.internal-ip-cc2: Creating... google_compute_address.internal-ip-adminvm: Still creating... [10s elapsed] google_compute_firewall.Allow-All-TCP: Still creating... [10s elapsed] google_compute_firewall.Allow-All-ICMP: Still creating... [10s elapsed] google_compute_firewall.Allow-SSH: Still creating... [10s elapsed] google_compute_firewall.Allow-RDP: Still creating... [10s elapsed] google_compute_firewall.Allow-All-UDP: Still creating... [10s elapsed] google_compute_firewall.Allow-HTTPS: Still creating... [10s elapsed] google_compute_firewall.Allow-WinRM: Still creating... [10s elapsed] google_compute_firewall.Allow-HTTP: Still creating... [10s elapsed] google_storage_bucket.Prereqs: Creating... google_storage_bucket.Prereqs: Creation complete after 3s [id=XXXXXXXXXX] google_storage_bucket_access_control.Prereqs: Creating... google_storage_bucket_object.Prereqs: Creating... google_storage_bucket_iam_binding.iam: Creating... google_storage_bucket_object.Prereqs: Creation complete after 0s [id=XXXXXXXXXX-domain-join-script.ps1] google_storage_bucket_access_control.Prereqs: Creation complete after 1s [id=XXXXXXXXXX/allUsers] google_storage_bucket_iam_binding.iam: Creation complete after 10s [id=b/XXXXXXXXXX/roles/storage.objectViewer] google_compute_address.internal-ip-cc2: Still creating... [10s elapsed] google_compute_address.internal-ip-cc2: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-cc2] google_compute_address.internal-ip-wmi: Creating... google_compute_address.internal-ip-adminvm: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-avm] google_compute_address.internal-ip-cc1: Creating... 
google_compute_firewall.Allow-All-ICMP: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-all-icmp] google_compute_instance.CC2: Creating... ... **Output shortened ** ... google_compute_firewall.Allow-WinRM: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-winrm] google_compute_instance.AdminVM: Creating... google_compute_firewall.Allow-All-UDP: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-all-udp] google_compute_firewall.Allow-SSH: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-ssh] google_compute_firewall.Allow-RDP: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-rdp] google_compute_firewall.Allow-HTTP: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-http] google_compute_firewall.Allow-HTTPS: Creation complete after 12s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-https] google_compute_firewall.Allow-All-TCP: Creation complete after 12s [id=projects/tacg-gcp-XXXXXXXXXX/global/firewalls/tacg-gcp-tf-fw-allow-all-tcp] google_compute_address.internal-ip-wmi: Still creating... [10s elapsed] google_compute_address.internal-ip-cc1: Still creating... [10s elapsed] google_compute_instance.CC2: Still creating... [10s elapsed] google_compute_instance.AdminVM: Still creating... [10s elapsed] google_compute_address.internal-ip-wmi: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-wmi] google_compute_instance.WMI: Creating... google_compute_address.internal-ip-cc1: Creation complete after 11s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-cc1] google_compute_instance.CC1: Creating... 
google_compute_instance.CC2: Creation complete after 13s [id=projects/tacg-gcp-XXXXXXXXXX/zones/europe-west3-c/instances/tacg-gcp-tf-cc2] google_compute_instance.AdminVM: Creation complete after 13s [id=projects/tacg-gcp-XXXXXXXXXX/zones/europe-west3-c/instances/tacg-gcp-tf-avm] google_compute_instance.WMI: Still creating... [10s elapsed] google_compute_instance.CC1: Still creating... [10s elapsed] google_compute_instance.WMI: Creation complete after 13s [id=projects/tacg-gcp-XXXXXXXXXX/zones/europe-west3-c/instances/tacg-gcp-tf-wmi] google_compute_instance.CC1: Creation complete after 13s [id=projects/tacg-gcp-XXXXXXXXXX/zones/europe-west3-c/instances/tacg-gcp-tf-cc1] Apply complete! Resources: 24 added, 0 changed, 0 destroyed. PS C:\TACG\_CCOnGCP\_CCOnGCP-Creation>

As no errors occurred, Terraform has completed the creation and partial configuration of the required prerequisites on Google Cloud Platform (GCP).

Example of a successful creation: all VMs were created successfully.

If errors occur during VM creation or the domain join, inspect the serial console output of the affected VM. Example: the startup script could not run:

PS C:\TACG\_CCOnGCP\_CCOnGCP-Creation> gcloud compute instances get-serial-port-output tacg-gcp-tf-avm --zone europe-west3-c CSM BBS Table full. BdsDxe: loading Boot0001 "UEFI Google PersistentDisk " from PciRoot(0x0)/Pci(0x3,0x0)/Scsi(0x1,0x0) BdsDxe: starting Boot0001 "UEFI Google PersistentDisk " from PciRoot(0x0)/Pci(0x3,0x0)/Scsi(0x1,0x0) UEFI: Attempting to start image. Description: UEFI Google PersistentDisk FilePath: PciRoot(0x0)/Pci(0x3,0x0)/Scsi(0x1,0x0) OptionNumber: 1.
2024/03/22 12:24:08 GCEGuestAgent: GCE Agent Started (version 20240109.00) 2024/03/22 12:24:09 GCEGuestAgent: Adding route to metadata server on adapter with index 5 2024/03/22 12:24:09 GCEGuestAgent: Starting the scheduler to run jobs 2024/03/22 12:24:09 GCEGuestAgent: Scheduler - start: [] 2024/03/22 12:24:09 GCEGuestAgent: Skipping scheduling credential generation job, failed to reach client credentials endpoint(instance/credentials/certs) with error: error connecting to metadata server, status code: 404 2024/03/22 12:24:09 GCEGuestAgent: Failed to schedule job MTLS_MDS_Credential_Boostrapper with error: ShouldEnable() returned false, cannot schedule job MTLS_MDS_Credential_Boostrapper 2024/03/22 12:24:09 GCEGuestAgent: Starting the scheduler to run jobs 2024/03/22 12:24:10 GCEGuestAgent: Scheduling job: telemetryJobID 2024/03/22 12:24:10 GCEGuestAgent: Scheduling job "telemetryJobID" to run at 24.000000 hr interval 2024/03/22 12:24:10 GCEGuestAgent: Successfully scheduled job telemetryJobID 2024/03/22 12:25:01 GCEInstanceSetup: Enable google_osconfig_agent during the specialize configuration pass. 2024/03/22 12:25:04 GCEInstanceSetup: Starting sysprep specialize phase. 2024/03/22 12:25:05 GCEInstanceSetup: All networks set to DHCP. 2024/03/22 12:25:05 GCEInstanceSetup: VirtIO network adapter detected. 2024/03/22 12:25:08 GCEInstanceSetup: MTU set to 1460 for IPv4 and IPv6 using PowerShell for interface 5 - Google VirtIO Ethernet Adapter. Build 20348 2024/03/22 12:25:08 GCEInstanceSetup: Running 'route' with arguments '/p add 169.254.169.254 mask 255.255.255.255 0.0.0.0 if 5 metric 1' 2024/03/22 12:25:08 GCEInstanceSetup: --> OK! 2024/03/22 12:25:08 GCEInstanceSetup: Added persistent route to metadata netblock to netkvm adapter. 2024/03/22 12:25:08 GCEInstanceSetup: Getting hostname from metadata server. 2024/03/22 12:25:08 GCEInstanceSetup: Renamed from WIN-6LJJ4HG704O to tacg-gcp-tf-avm. 2024/03/22 12:25:08 GCEInstanceSetup: Configuring WinRM... 
2024/03/22 12:25:09 GCEInstanceSetup: Running 'C:\Program Files\Google\Compute Engine\tools\certgen.exe' with arguments '-outDir C:\Windows\TEMP\cert -hostname tacg-gcp-tf-avm' 2024/03/22 12:25:10 GCEInstanceSetup: --> written C:\Windows\TEMP\cert\cert.p12 2024/03/22 12:25:10 GCEInstanceSetup: Waiting for WinRM to be running... 2024/03/22 12:25:11 GCEInstanceSetup: Setup of WinRM complete. 2024/03/22 12:25:11 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: Starting specialize scripts (version dev). 2024/03/22 12:25:11 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: Found sysprep-specialize-script-url in metadata. 2024/03/22 12:25:15 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: Failed to download GCS path: error reading object "domain-join-script.ps1": Get "https://storage.googleapis.com/XXXXXXXXXX/domain-join-script.ps1": metadata: GCE metadata "instance/service-accounts/default/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdevstorage.full_control%2Chttps%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform" not defined 2024/03/22 12:25:15 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: Trying unauthenticated download 2024/03/22 12:25:28 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: sysprep-specialize-script-url: The network path was not found. 2024/03/22 12:25:28 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: sysprep-specialize-script-url: 2024/03/22 12:25:28 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: sysprep-specialize-script-url: The command failed to complete successfully. 
2024/03/22 12:25:28 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: sysprep-specialize-script-url: 2024/03/22 12:25:28 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: sysprep-specialize-script-url exit status 0 2024/03/22 12:25:28 C:\Program Files\Google\Compute Engine\metadata_scripts\GCEMetadataScripts.exe: Finished running specialize scripts. 2024/03/22 12:25:28 GCEInstanceSetup: Finished with sysprep specialize phase, restarting... 2024/03/22 12:25:44 GCEGuestAgent: Error watching metadata: context canceled 2024/03/22 12:25:44 GCEGuestAgent: GCE Agent Stopped CSM BBS Table full. BdsDxe: loading Boot0003 "Windows Boot Manager" from HD(2,GPT,AB360148-CB4B-41C0-88C9-F1583EB09E72,0x8000,0x32000)/\EFI\Microsoft\Boot\bootmgfw.efi BdsDxe: starting Boot0003 "Windows Boot Manager" from HD(2,GPT,AB360148-CB4B-41C0-88C9-F1583EB09E72,0x8000,0x32000)/\EFI\Microsoft\Boot\bootmgfw.efi UEFI: Attempting to start image. Description: Windows Boot Manager FilePath: HD(2,GPT,AB360148-CB4B-41C0-88C9-F1583EB09E72,0x8000,0x32000)/\EFI\Microsoft\Boot\bootmgfw.efi OptionNumber: 3. 2024/03/22 12:26:46 GCEGuestAgent: GCE Agent Started (version 20240109.00) 2024/03/22 12:26:47 GCEGuestAgent: Starting the scheduler to run jobs 2024/03/22 12:26:47 GCEGuestAgent: Scheduler - start: [] 2024/03/22 12:26:47 GCEGuestAgent: Skipping scheduling credential generation job, failed to reach client credentials endpoint(instance/credentials/certs) with error: error connecting to metadata server, status code: 404 2024/03/22 12:26:47 GCEGuestAgent: Failed to schedule job MTLS_MDS_Credential_Boostrapper with error: ShouldEnable() returned false, cannot schedule job MTLS_MDS_Credential_Boostrapper 2024/03/22 12:26:47 GCEGuestAgent: Starting the scheduler to run jobs 2024-03-22T12:26:48.0295Z OSConfigAgent Info: OSConfig Agent (version 20231207.01.0+win@1) started. 
2024/03/22 12:26:48 GCEGuestAgent: Scheduling job: telemetryJobID 2024/03/22 12:26:48 GCEGuestAgent: Scheduling job "telemetryJobID" to run at 24.000000 hr interval 2024/03/22 12:26:48 GCEGuestAgent: Successfully scheduled job telemetryJobID 2024/03/22 12:26:48 GCEGuestAgent: Scheduler - added: [now 2024-03-22 12:26:48.2174847 +0000 GMT entry 1 next 2024-03-23 12:26:48 +0000 GMT] Specify --start=6654 in the next get-serial-port-output invocation to get only the new output starting from here. PS C:\TACG\_CCOnGCP\_CCOnGCP-Creation

Now the next step can be started.

Module 2: Install and Configure all Resources in Google Cloud Platform (GCP)

This module is split into the following configuration parts:

Configuring the three previously created virtual machines on Google Cloud Platform (GCP):
- Installing the needed software on the Cloud Connectors
- Installing the needed software on the Admin-VM

Creating the necessary resources in Citrix Cloud:
- Creating a Resource Location in Citrix Cloud
- Configuring the two Cloud Connectors
- Registering the two Cloud Connectors in the newly created Resource Location

The Citrix Terraform provider does not currently support creating a Resource Location in Citrix Cloud, so a PowerShell script creates one through a REST API call. Make sure you have configured the variables according to your needs in the corresponding .auto.tfvars.json file.

Before generating the Cloud Connectors' configuration, Terraform runs several scripts to determine the required information, such as the Site ID, the Zone ID, and the Resource Location ID. These IDs are used in other scripts and files - for example, the parameter file for deploying the Cloud Connector needs the ID of the Resource Location that Terraform creates automatically.
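The REST call that creates the Resource Location can be sketched as follows. This is a minimal illustration, not the exact script from this configuration: the request body, the placeholder customer ID, and the Resource Location name are assumptions, while the endpoint and header names mirror the GET call used elsewhere in this module:

```
#### Minimal sketch: create a Resource Location through the Citrix Cloud REST API
#### Assumes a bearer token has already been obtained and stored (as the GetBT script does)
$CCBearerToken = Get-Content -Path C:\temp\xdinst\DATA\GetBT.txt -Force

$requestUri = "https://api-eu.cloud.com/resourcelocations"
$headers    = @{
    "Accept"            = "application/json"
    "Authorization"     = $CCBearerToken
    "Citrix-CustomerId" = "<your-customer-id>"        # illustrative placeholder
}
# The POST body only needs the name of the new Resource Location
$body = @{ "name" = "<your-resource-location-name>" } | ConvertTo-Json

$response = Invoke-RestMethod -Uri $requestUri -Method POST -Headers $headers -ContentType "application/json" -Body $body
# The id returned here is what the provider-based creation does not expose,
# which is why the configuration re-reads it later with a GET call
$response.id
```

Running such a call requires valid Citrix Cloud API credentials; in this configuration the equivalent logic is wrapped in script files that Terraform generates and executes on the Admin-VM.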
Unfortunately, the REST API provider does not return the ID of the newly created Resource Location, so a PowerShell script must run after the Resource Location has been created.

Example scripts - creating the configuration file for the unattended installation of the Cloud Connector software:

First, Terraform writes the configuration file without the Resource Location ID, as the Resource Location is created later:

### Create CWC-Installer configuration file based on variables and save it into Transfer directory
resource "local_file" "CWC-Configuration" {
  depends_on = [restapi_object.CreateRL]
  content = jsonencode(
    {
      "customerName"         = "${var.CC_CustomerID}",
      "clientId"             = "${var.CC_APIKey-ClientID}",
      "clientSecret"         = "${var.CC_APIKey-ClientSecret}",
      "resourceLocationId"   = "XXXXXXXXXX",
      "acceptTermsOfService" = true
    }
  )
  filename = "${var.CC_Install_LogPath}/DATA/cwc.json"
}

After installing further prerequisites and creating the Resource Location, Terraform runs a PowerShell script to determine the needed ID and updates the configuration file for the CWC installer:

#### Change RL-ID in CWC-JSON file to the valid Resource Location ID
resource "local_file" "CreateValidCWCOnAVM" {
  depends_on = [terraform_data.ZoneID2]
  content    = <<-EOT
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nCWC-Script started."
    #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context
    $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
    $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
    $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
    $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
    Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
        $path = "${var.CC_Install_LogPath}"
        # Correct the Resource Location ID in the cwc.json file
        $requestUri = "https://api-eu.cloud.com/resourcelocations"
        $CCBearerToken = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetBT.txt -Force
        $CCSiteID = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetSiteID.txt -Force
        $CCZoneID = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetZoneID.txt -Force
        $headers = @{ "Accept" = "application/json"; "Authorization" = $CCBearerToken; "Citrix-CustomerId" = "${var.CC_CustomerID}" }
        $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | ConvertTo-Json
        Add-Content ${var.CC_Install_LogPath}/log.txt "`nCWC-Response: $response"
        $RLs = ConvertFrom-Json $response
        $RLFiltered = $RLs.items | Where-Object name -in "${var.CC_RestRLName}"
        Add-Content ${var.CC_Install_LogPath}/log.txt $RLFiltered
        $RLID = $RLFiltered.id
        $OrigContent = Get-Content ${var.CC_Install_LogPath}/DATA/cwc.json
        Add-Content ${var.CC_Install_LogPath}/log.txt $RLID
        Add-Content ${var.CC_Install_LogPath}/log.txt $OrigContent
        $OrigContent.Replace('XXXXXXXXXX', $RLID) | Out-File -FilePath ${var.CC_Install_LogPath}/DATA/cwc.json -NoNewline -Encoding Ascii
        $PathCompl = "${var.CC_Install_LogPath}/DATA/GetRLID.txt"
        Set-Content -Path $PathCompl -Value $RLID -NoNewline -Encoding Ascii
        Add-Content ${var.CC_Install_LogPath}/log.txt "`ncwc.json corrected."
        Add-Content ${var.CC_Install_LogPath}/log.txt "`nCWC-Script completed."
    }
  EOT
  filename = "${var.CC_Install_LogPath}/DATA/CreateValidCWCOnAVM.ps1"
}

The Terraform configuration contains some idle time slots (time_sleep resources) to make sure that background operations on Google Cloud Platform (GCP) or on the VMs can complete before the next configuration steps run. We have seen varying elapsed configuration times depending on the load on the Google Cloud Platform (GCP) systems.

Before running Terraform, the Resource Location is not yet visible:

The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and, if no errors occur, terraform apply.

PS C:\TACG\_CCOnGCP\_CCOnGCP-Install> terraform init Initializing the backend... Initializing provider plugins... - terraform.io/builtin/terraform is built in to Terraform - Finding latest version of hashicorp/time... - Finding latest version of hashicorp/null... - Finding hashicorp/google versions matching ">= 5.21.0"... - Finding citrix/citrix versions matching ">= 0.5.4"... - Finding latest version of hashicorp/local... - Installing hashicorp/time v0.11.1... - Installed hashicorp/time v0.11.1 (signed by HashiCorp) - Installing hashicorp/null v3.2.2... - Installed hashicorp/null v3.2.2 (signed by HashiCorp) - Installing hashicorp/google v5.22.0... - Installed hashicorp/google v5.22.0 (signed by HashiCorp) - Installing citrix/citrix v0.5.4...
- Installed citrix/citrix v0.5.4 (signed by a HashiCorp partner, key ID 25D62DD8407EA386) - Installing hashicorp/local v2.5.1... - Installed hashicorp/local v2.5.1 (signed by HashiCorp) Partner and community providers are signed by their developers. If you'd like to know more about provider signing, you can read about it here: https://www.terraform.io/docs/cli/plugins/signing.html Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future. Terraform has been successfully initialized! You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary. PS C:\TACG\_CCOnGCP\_CCOnGCP-Install> terraform plan data.google_compute_address.IPOfCC1: Reading... data.google_compute_address.IPOfCC2: Reading... data.google_compute_address.IPOfCC1: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-cc1] data.google_compute_address.IPOfCC2: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-cc2] Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # local_file.CWC-Configuration will be created + resource "local_file" "CWC-Configuration" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/cwc.json" + id = (known after apply) } # local_file.CreateRLScript will be created + resource "local_file" "CreateRLScript" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/CreateRL.ps1" + id = (known after apply) } # local_file.CreateValidCWCOnAVM will be created + resource "local_file" "CreateValidCWCOnAVM" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/CreateValidCWCOnAVM.ps1" + id = (known after apply) } # local_file.GetBearerToken will be created + resource "local_file" "GetBearerToken" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission 
= "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/GetBT.ps1" + id = (known after apply) } # local_file.GetSiteIDScript will be created + resource "local_file" "GetSiteIDScript" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/GetSiteID.ps1" + id = (known after apply) } # local_file.GetZoneIDScript will be created + resource "local_file" "GetZoneIDScript" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/GetZoneID.ps1" + id = (known after apply) } # local_file.InstallCWCOnCC will be created + resource "local_file" "InstallCWCOnCC" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/InstallCWCOnCC.ps1" + id = (known after apply) } # local_file.InstallPoSHSDKOnAVM will be created + resource "local_file" "InstallPoSHSDKOnAVM" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = 
"0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/InstallPoSHSDKOnAVM.ps1" + id = (known after apply) } # local_file.Log will be created + resource "local_file" "Log" { + content = "Directory created." + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/log.txt" + id = (known after apply) } # local_file.LogData will be created + resource "local_file" "LogData" { + content = "Directory created." + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/log.txt" + id = (known after apply) } # local_file.RestartCC will be created + resource "local_file" "RestartCC" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/RestartCC.ps1" + id = (known after apply) } # null_resource.CallRebootScriptOnCC1 will be created + resource "null_resource" "CallRebootScriptOnCC1" { + id = (known after apply) } # null_resource.CallRebootScriptOnCC2 will be created + resource "null_resource" "CallRebootScriptOnCC2" { + id = (known after apply) } # null_resource.CallRequiredScriptsOnCC1 will be created + resource "null_resource" "CallRequiredScriptsOnCC1" { + id = (known after apply) } # 
null_resource.CallRequiredScriptsOnCC2 will be created + resource "null_resource" "CallRequiredScriptsOnCC2" { + id = (known after apply) } # null_resource.ExecuteInstallPoSHSDKOnAVM will be created + resource "null_resource" "ExecuteInstallPoSHSDKOnAVM" { + id = (known after apply) } # null_resource.UploadRequiredComponentsToCC1 will be created + resource "null_resource" "UploadRequiredComponentsToCC1" { + id = (known after apply) } # null_resource.UploadRequiredComponentsToCC2 will be created + resource "null_resource" "UploadRequiredComponentsToCC2" { + id = (known after apply) } # terraform_data.ExecuteCreateValidCWCOnAVM will be created + resource "terraform_data" "ExecuteCreateValidCWCOnAVM" { + id = (known after apply) } # terraform_data.GetBT will be created + resource "terraform_data" "GetBT" { + id = (known after apply) } # terraform_data.ResourceLocation will be created + resource "terraform_data" "ResourceLocation" { + id = (known after apply) } # terraform_data.SiteID will be created + resource "terraform_data" "SiteID" { + id = (known after apply) } # terraform_data.ZoneID will be created + resource "terraform_data" "ZoneID" { + id = (known after apply) } # terraform_data.ZoneID2 will be created + resource "terraform_data" "ZoneID2" { + id = (known after apply) } # time_sleep.wait_1800_seconds_CC1 will be created + resource "time_sleep" "wait_1800_seconds_CC1" { + create_duration = "1800s" + id = (known after apply) } # time_sleep.wait_1800_seconds_CC2 will be created + resource "time_sleep" "wait_1800_seconds_CC2" { + create_duration = "1800s" + id = (known after apply) } # time_sleep.wait_60_seconds will be created + resource "time_sleep" "wait_60_seconds" { + create_duration = "60s" + id = (known after apply) } # time_sleep.wait_900_seconds will be created + resource "time_sleep" "wait_900_seconds" { + create_duration = "900s" + id = (known after apply) } Plan: 30 to add, 0 to change, 0 to destroy. 
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

PS C:\TACG\_CCOnGCP\_CCOnGCP-Install> terraform apply

data.google_compute_address.IPOfCC2: Reading...
data.google_compute_address.IPOfCC1: Reading...
data.google_compute_address.IPOfCC1: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-cc1]
data.google_compute_address.IPOfCC2: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/regions/europe-west3/addresses/internal-ip-cc2]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.CWC-Configuration will be created
  + resource "local_file" "CWC-Configuration" {
      + content              = (sensitive value)
      ... ** Output shortened ** ...
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "c:/temp/xdinst/DATA/cwc.json"
      + id                   = (known after apply)
    }

... ** Output shortened - terraform apply re-prints the same 30-resource execution plan shown by "terraform plan" above ** ...

Plan: 30 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

null_resource.ExecuteInstallPoSHSDKOnAVM: Creating...
null_resource.ExecuteInstallPoSHSDKOnAVM: Provisioning with 'local-exec'...
null_resource.ExecuteInstallPoSHSDKOnAVM (local-exec): Executing: ["PowerShell" "-Command" " c:/temp/xdinst/DATA/InstallPoSHSDKOnAVM.ps1"]
local_file.GetBearerToken: Creating...
local_file.Log: Creating...
local_file.InstallPoSHSDKOnAVM: Creating...
local_file.Log: Creation complete after 0s [id=d725ce92ca8335439a5d83acf47f3ca2c957a515]
local_file.CWC-Configuration: Creating...
local_file.InstallPoSHSDKOnAVM: Creation complete after 0s [id=e384e2f2f26dd62c79ac8e6f0c25b98c1a82008c]
local_file.GetBearerToken: Creation complete after 0s [id=d97ec7cb0f4a1247b43864df53c69c7956507eb5]
local_file.CWC-Configuration: Creation complete after 0s [id=0257d2b92f197fa4d043b6cd4c4959be284dddc2]
local_file.LogData: Creating...
local_file.LogData: Creation complete after 0s [id=d725ce92ca8335439a5d83acf47f3ca2c957a515]
null_resource.ExecuteInstallPoSHSDKOnAVM: Still creating... [10s elapsed]
null_resource.ExecuteInstallPoSHSDKOnAVM: Still creating... [20s elapsed]
... ** Output shortened ** ...
null_resource.ExecuteInstallPoSHSDKOnAVM: Still creating... [4m0s elapsed]
null_resource.ExecuteInstallPoSHSDKOnAVM: Creation complete after 4m4s [id=963532241712303576]
terraform_data.GetBT: Creating...
terraform_data.GetBT: Provisioning with 'local-exec'...
terraform_data.GetBT (local-exec): Executing: ["PowerShell" "-File" "c:/temp/xdinst/DATA/GetBT.ps1"]
terraform_data.GetBT: Creation complete after 10s [id=60c49067-1da2-caf5-1893-d4978527619f]
local_file.CreateRLScript: Creating...
local_file.CreateRLScript: Creation complete after 0s [id=e71a25d1aa907068420b19c3c61b58f54e256512]
terraform_data.ResourceLocation: Creating...
terraform_data.ResourceLocation: Provisioning with 'local-exec'...
terraform_data.ResourceLocation (local-exec): Executing: ["PowerShell" "-File" "c:/temp/xdinst/DATA/CreateRL.ps1"]
terraform_data.ResourceLocation: Creation complete after 1s [id=3d058474-6e42-69d2-a211-ce552bef2775]
time_sleep.wait_900_seconds: Creating...
time_sleep.wait_60_seconds: Creating...
... ** Output shortened ** ...
time_sleep.wait_60_seconds: Creation complete after 1m1s [id=2024-03-27T14:27:37Z]
local_file.GetSiteIDScript: Creating...
local_file.GetSiteIDScript: Creation complete after 0s [id=9d4d1c344ddddcdeacfbbf4e84525c1cbc6a8059]
terraform_data.SiteID: Creating...
terraform_data.SiteID: Provisioning with 'local-exec'...
terraform_data.SiteID (local-exec): Executing: ["PowerShell" "-File" "c:/temp/xdinst/DATA/GetSiteID.ps1"]
terraform_data.SiteID: Creation complete after 1s [id=6a5d6c03-cfa2-8b27-0ca4-c144fb1b44ee]
time_sleep.wait_900_seconds: Still creating... [1m11s elapsed]
... ** Output shortened ** ...
time_sleep.wait_900_seconds: Creation complete after 15m0s [id=2024-03-27T14:41:37Z]
local_file.GetZoneIDScript: Creating...
local_file.GetZoneIDScript: Creation complete after 0s [id=fe4217fc0b2887ee7d949cbbd15e3f338ce5ec13]
terraform_data.ZoneID: Creating...
terraform_data.ZoneID: Provisioning with 'local-exec'...
terraform_data.ZoneID (local-exec): Executing: ["PowerShell" "-File" "c:/temp/xdinst/DATA/GetZoneID.ps1"]
terraform_data.ZoneID: Creation complete after 1s [id=605cf1d3-df21-50fd-eea4-2b378bbc75de]
terraform_data.ZoneID2: Creating...
terraform_data.ZoneID2: Provisioning with 'local-exec'...
terraform_data.ZoneID2 (local-exec): Executing: ["PowerShell" "-File" "c:/temp/xdinst/DATA/GetZoneID.ps1"]
terraform_data.ZoneID2: Creation complete after 1s [id=8407eaeb-8b82-c81f-8f18-91e1c17c856e]
local_file.CreateValidCWCOnAVM: Creating...
local_file.CreateValidCWCOnAVM: Creation complete after 0s [id=8589dd5c009bc8ec60afe66f005270a5d6716790]
terraform_data.ExecuteCreateValidCWCOnAVM: Creating...
terraform_data.ExecuteCreateValidCWCOnAVM: Provisioning with 'local-exec'...
terraform_data.ExecuteCreateValidCWCOnAVM (local-exec): Executing: ["PowerShell" "-File" "c:/temp/xdinst/DATA/CreateValidCWCOnAVM.ps1"]
terraform_data.ExecuteCreateValidCWCOnAVM: Creation complete after 1s [id=fe29d6ab-b636-20c4-2546-36ee395f6ced]
local_file.RestartCC: Creating...
local_file.InstallCWCOnCC: Creating...
local_file.Log: Creating...
local_file.InstallCWCOnCC: Creation complete after 0s [id=83c2658962b3ca50411ecd6d7d3ab22958819dd1]
local_file.RestartCC: Creation complete after 0s [id=c33d06fc1d6b6fec7b65e720320d1cbc71f02225]
local_file.Log: Creation complete after 0s [id=d725ce92ca8335439a5d83acf47f3ca2c957a515]
local_file.LogData: Creating...
local_file.LogData: Creation complete after 0s [id=d725ce92ca8335439a5d83acf47f3ca2c957a515]
null_resource.UploadRequiredComponentsToCC1: Creating...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Creating...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
... ** Output shortened - repeated CLIXML progress records ("Preparing modules for first use.") emitted by the remote PowerShell sessions, interleaved with further "Provisioning with 'file'..." messages for both Cloud Connectors, are omitted here ** ...
null_resource.UploadRequiredComponentsToCC1: Still creating... [10s elapsed]
null_resource.UploadRequiredComponentsToCC2: Still creating... [10s elapsed]
null_resource.UploadRequiredComponentsToCC1: Creation complete after 13s [id=3185822947916041730]
null_resource.CallRequiredScriptsOnCC1: Creating...
null_resource.CallRequiredScriptsOnCC1: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Host: 10.156.0.5
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connected!
null_resource.UploadRequiredComponentsToCC2: Creation complete after 15s [id=979308782134187951]
null_resource.CallRequiredScriptsOnCC2: Creating...
null_resource.CallRequiredScriptsOnCC2: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Host: 10.156.0.6 null_resource.CallRequiredScriptsOnCC2 (remote-exec): Port: 5985 null_resource.CallRequiredScriptsOnCC2 (remote-exec): User: administrator null_resource.CallRequiredScriptsOnCC2 (remote-exec): Password: true null_resource.CallRequiredScriptsOnCC2 (remote-exec): HTTPS: false null_resource.CallRequiredScriptsOnCC2 (remote-exec): Insecure: false null_resource.CallRequiredScriptsOnCC2 (remote-exec): NTLM: false null_resource.CallRequiredScriptsOnCC2 (remote-exec): CACert: false null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connected! #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs> null_resource.CallRequiredScriptsOnCC1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/InstallCWCOnCC.ps1 #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs> null_resource.CallRequiredScriptsOnCC2 (remote-exec): 
C:\Users\Administrator.TACG-GCP-TF-CC2>powershell -File c:/temp/xdinst/DATA/InstallCWCOnCC.ps1 null_resource.CallRequiredScriptsOnCC1: Still creating... [10s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [10s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [20s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [20s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [30s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [30s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [40s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [40s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [50s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [50s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [1m0s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [1m0s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [1m10s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [1m10s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [1m20s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... [1m20s elapsed] null_resource.CallRequiredScriptsOnCC1: Still creating... [1m30s elapsed] null_resource.CallRequiredScriptsOnCC2: Still creating... 
[1m30s elapsed] #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>null_resource.CallRequiredScriptsOnCC1: Still creating... [1m40s elapsed] #< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj><Obj S="progress" RefId="1"><TNRef RefId="0" /><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>null_resource.CallRequiredScriptsOnCC1: Creation complete after 1m41s [id=1683817289421392956] time_sleep.wait_1800_seconds_CC1: Creating... 
... ** Output shortened ** ...
null_resource.CallRequiredScriptsOnCC2: Creation complete after 1m39s [id=5495494345692724821]
time_sleep.wait_1800_seconds_CC2: Creating...
time_sleep.wait_1800_seconds_CC1: Still creating... [10s elapsed]
time_sleep.wait_1800_seconds_CC2: Still creating... [11s elapsed]
time_sleep.wait_1800_seconds_CC1: Still creating... [20s elapsed]
time_sleep.wait_1800_seconds_CC2: Still creating... [21s elapsed]
time_sleep.wait_1800_seconds_CC1: Still creating... [30s elapsed]
time_sleep.wait_1800_seconds_CC2: Still creating... [31s elapsed]
time_sleep.wait_1800_seconds_CC1: Still creating... [40s elapsed]
time_sleep.wait_1800_seconds_CC2: Still creating... [41s elapsed]
time_sleep.wait_1800_seconds_CC1: Still creating... [50s elapsed]
time_sleep.wait_1800_seconds_CC2: Still creating... [51s elapsed]
... ** Output shortened ** ...
time_sleep.wait_1800_seconds_CC2: Still creating... [29m42s elapsed]
time_sleep.wait_1800_seconds_CC1: Still creating... [29m52s elapsed]
time_sleep.wait_1800_seconds_CC2: Still creating... [29m52s elapsed]
time_sleep.wait_1800_seconds_CC1: Creation complete after 30m0s [id=2024-03-28T10:41:32Z]
null_resource.CallRebootScriptOnCC1: Creating...
null_resource.CallRebootScriptOnCC1: Provisioning with 'remote-exec'...
null_resource.CallRebootScriptOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRebootScriptOnCC1 (remote-exec):   Host: 10.156.0.5
null_resource.CallRebootScriptOnCC1 (remote-exec):   Port: 5985
null_resource.CallRebootScriptOnCC1 (remote-exec):   User: administrator
null_resource.CallRebootScriptOnCC1 (remote-exec):   Password: true
null_resource.CallRebootScriptOnCC1 (remote-exec):   HTTPS: false
null_resource.CallRebootScriptOnCC1 (remote-exec):   Insecure: false
null_resource.CallRebootScriptOnCC1 (remote-exec):   NTLM: false
null_resource.CallRebootScriptOnCC1 (remote-exec):   CACert: false
time_sleep.wait_1800_seconds_CC2: Creation complete after 30m1s [id=2024-03-28T10:41:32Z]
null_resource.CallRebootScriptOnCC2: Creating...
null_resource.CallRebootScriptOnCC2: Provisioning with 'remote-exec'...
null_resource.CallRebootScriptOnCC2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRebootScriptOnCC2 (remote-exec):   Host: 10.156.0.6
null_resource.CallRebootScriptOnCC2 (remote-exec):   Port: 5985
null_resource.CallRebootScriptOnCC2 (remote-exec):   User: administrator
null_resource.CallRebootScriptOnCC2 (remote-exec):   Password: true
null_resource.CallRebootScriptOnCC2 (remote-exec):   HTTPS: false
null_resource.CallRebootScriptOnCC2 (remote-exec):   Insecure: false
null_resource.CallRebootScriptOnCC2 (remote-exec):   NTLM: false
null_resource.CallRebootScriptOnCC2 (remote-exec):   CACert: false
null_resource.CallRebootScriptOnCC1 (remote-exec): Connected!
null_resource.CallRebootScriptOnCC2 (remote-exec): Connected!
... ** Output shortened ** ...
null_resource.CallRebootScriptOnCC1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/RebootCC.ps1
null_resource.CallRebootScriptOnCC1 (remote-exec): Windows PowerShell
null_resource.CallRebootScriptOnCC1 (remote-exec): Copyright (C) Microsoft Corporation. All rights reserved.
null_resource.CallRebootScriptOnCC1 (remote-exec): Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
null_resource.CallRebootScriptOnCC2: Still creating... [10s elapsed]
... ** Output shortened ** ...
Apply complete! Resources: 30 added, 0 changed, 0 destroyed.
PS C:\TACG\_CCOnGCP\_CCOnGCP-Install>

This configuration completes the creation and configuration of all initial resources:

Installing the required software on the Cloud Connectors
Creating a Resource Location in Citrix Cloud
Configuring the two Cloud Connectors
Registering the two Cloud Connectors in the newly created Resource Location

After all scripts have run successfully, the new Resource Location appears in Citrix Cloud with the two Cloud Connectors bound to it. The environment is now ready to deploy a Machine Catalog and a Delivery Group using Module 3.

Module 3: Create all Resources in Google Cloud Platform (GCP) and Citrix Cloud

This module is split into the following configuration parts:

Creating a Hypervisor Connection to Google Cloud Platform (GCP) and a Hypervisor Resource Pool
Creating a Machine Catalog (MC) in the newly created Resource Location
Creating a Delivery Group (DG) based on the MC in the newly created Resource Location
Deploying some example policies using Terraform

The Terraform configuration contains some idle time slots to ensure that background operations on Google Cloud Platform (GCP) or on the VMs can complete before the next configuration step starts. Elapsed configuration times vary noticeably with the current load on the GCP systems.
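The idle time slots are implemented with the time_sleep resource from the hashicorp/time provider, which also appears in the logs above (time_sleep.wait_1800_seconds_CC1) and below (time_sleep.wait_60_seconds). A minimal sketch of the pattern; the depends_on wiring shown here is an assumption about how the modules chain these steps, and the resource bodies are abbreviated:

```hcl
# Pause after the Cloud Connector scripts have run so that background
# operations on GCP and in Citrix Cloud can settle before continuing.
resource "time_sleep" "wait_1800_seconds_CC1" {
  depends_on      = [null_resource.CallRequiredScriptsOnCC1]
  create_duration = "1800s"
}

# The next step (rebooting the Cloud Connector) only starts
# once the pause has elapsed.
resource "null_resource" "CallRebootScriptOnCC1" {
  depends_on = [time_sleep.wait_1800_seconds_CC1]
  # remote-exec provisioner connecting via WinRM omitted here
}
```

Because time_sleep always waits for the full create_duration, the 30-minute slots account for most of the elapsed time seen in the log above.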
Before Terraform can create the Hypervisor Connection and the Hypervisor Resource Pool, it must retrieve the Site-ID and Zone-ID of the newly created Resource Location. As the Citrix Terraform Provider currently implements no Cloud-level functionality, Terraform relies on PowerShell scripts to retrieve these IDs; Module 2 already generated these scripts with all needed variables, saved them, and executed them.

After retrieving the IDs, Terraform configures a Hypervisor Connection to Google Cloud Platform (GCP) and a Hypervisor Resource Pool associated with that connection. As soon as these prerequisites are completed, the Machine Catalog is created. After the Hypervisor Connection, the Hypervisor Resource Pool, and the Machine Catalog have been created successfully, the last step of the deployment process starts: creating the Delivery Group. The Terraform configuration assumes that all machines in the created Machine Catalog are used in the Delivery Group and that Autoscale is configured for this Delivery Group. More information about Autoscale can be found here: https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/autoscale.html

The deployment of Citrix Policies is a new feature introduced in version 0.5.2. We need to know the internal policy names, as the localized policy names and descriptions cannot be used. Therefore, we use a PowerShell script to determine all internal names; some prerequisites are necessary for the script to work. You can use any machine except the Cloud Connectors!
Install the Citrix Supportability Pack
Install the Citrix Group Policy Management module - scroll down to Group Policy

After installing the prerequisites, open a PowerShell console:

Import-Module "C:\TACG\Supportability Pack\Tools\Scout\Current\Utilities\Citrix.GroupPolicy.commands.psm1" -Force
New-PSDrive -Name LocalFarmGpo -PSProvider CitrixGroupPolicy -Root \ -Controller localhost
Get-PSDrive
cd LocalFarmGpo:
Get-CtxGroupPolicyConfiguration

Get-CtxGroupPolicyConfiguration lists the internal names of all policy settings:

Type: User
ProfileLoadTimeMonitoring_Threshold ICALatencyMonitoring_Enable ICALatencyMonitoring_Period ICALatencyMonitoring_Threshold EnableLossless ProGraphics FRVideos_Part FRVideosPath_Part FRStartMenu_Part FRStartMenuPath_Part FRSearches_Part FRSearchesPath_Part FRSavedGames_Part FRSavedGamesPath_Part FRPictures_Part FRPicturesPath_Part FRMusic_Part FRMusicPath_Part FRLinks_Part FRLinksPath_Part FRFavorites_Part FRFavoritesPath_Part FRDownloads_Part FRDownloadsPath_Part FRDocuments_Part FRDocumentsPath_Part FRDesktop_Part FRDesktopPath_Part FRContacts_Part FRContactsPath_Part FRAdminAccess_Part FRIncDomainName_Part FRAppData_Part FRAppDataPath_Part StorefrontAccountsList AllowFidoRedirection AllowWIARedirection ClientClipboardWriteAllowedFormats ClipboardRedirection ClipboardSelectionUpdateMode DesktopLaunchForNonAdmins DragDrop LimitClipboardTransferC2H LimitClipboardTransferH2C LossTolerantModeAvailable NonPublishedProgramLaunching PrimarySelectionUpdateMode ReadonlyClipboard RestrictClientClipboardWrite RestrictSessionClipboardWrite SessionClipboardWriteAllowedFormats FramesPerSecond PreferredColorDepthForSimpleGraphics VisualQuality ExtraColorCompression ExtraColorCompressionThreshold LossyCompressionLevel LossyCompressionThreshold ProgressiveHeavyweightCompression MinimumAdaptiveDisplayJpegQuality
MovingImageCompressionConfiguration ProgressiveCompressionLevel ProgressiveCompressionThreshold TargetedMinimumFramesPerSecond ClientUsbDeviceOptimizationRules UsbConnectExistingDevices UsbConnectNewDevices UsbDeviceRedirection UsbDeviceRedirectionRules USBDeviceRulesV2 UsbPlugAndPlayRedirection TwainCompressionLevel TwainRedirection LocalTimeEstimation RestoreServerTime SessionTimeZone EnableSessionWatermark WatermarkStyle WatermarkTransparency WatermarkCustomText WatermarkIncludeClientIPAddress WatermarkIncludeConnectTime WatermarkIncludeLogonUsername WatermarkIncludeVDAHostName WatermarkIncludeVDAIPAddress EnableRemotePCDisconnectTimer SessionConnectionTimer SessionConnectionTimerInterval SessionDisconnectTimer SessionDisconnectTimerInterval SessionIdleTimer SessionIdleTimerInterval LossTolerantThresholds EnableServerConnectionTimer EnableServerDisconnectionTimer EnableServerIdleTimer ServerConnectionTimerInterval ServerDisconnectionTimerInterval ServerIdleTimerInterval MinimumEncryptionLevel AutoCreationEventLogPreference ClientPrinterRedirection DefaultClientPrinter PrinterAssignments SessionPrinters WaitForPrintersToBeCreated UpsPrintStreamInputBandwidthLimit DPILimit EMFProcessingMode ImageCompressionLimit UniversalPrintingPreviewPreference UPDCompressionDefaults InboxDriverAutoInstallation UniversalDriverPriority UniversalPrintDriverUsage AutoCreatePDFPrinter ClientPrinterAutoCreation ClientPrinterNames DirectConnectionsToPrintServers GenericUniversalPrinterAutoCreation PrinterDriverMappings PrinterPropertiesRetention ClientComPortRedirection ClientComPortsAutoConnection ClientLptPortRedirection ClientLptPortsAutoConnection MaxSpeexQuality MSTeamsRedirection MultimediaOptimization UseGPUForMultimediaOptimization VideoLoadManagement VideoQuality WebBrowserRedirectionAcl WebBrowserRedirectionAuthenticationSites WebBrowserRedirectionBlacklist WebBrowserRedirectionIwaSupport WebBrowserRedirectionProxy WebBrowserRedirectionProxyAuth MultiStream AutoKeyboardPopUp 
ComboboxRemoting MobileDesktop TabletModeToggle ClientKeyboardLayoutSyncAndIME EnableUnicodeKeyboardLayoutMapping HideKeyboardLayoutSwitchPopupMessageBox AllowVisuallyLosslessCompression DisplayLosslessIndicator OptimizeFor3dWorkload ScreenSharing UseHardwareEncodingForVideoCodec UseVideoCodecForCompression EnableFramehawkDisplayChannel AllowFileDownload AllowFileTransfer AllowFileUpload AsynchronousWrites AutoConnectDrives ClientDriveLetterPreservation ClientDriveRedirection ClientFixedDrives ClientFloppyDrives ClientNetworkDrives ClientOpticalDrives ClientRemoveableDrives HostToClientRedirection ReadOnlyMappedDrive SpecialFolderRedirection AeroRedirection DesktopWallpaper GraphicsQuality MenuAnimation WindowContentsVisibleWhileDragging AllowLocationServices AllowBidirectionalContentRedirection BidirectionalRedirectionConfig ClientURLs VDAURLs AudioBandwidthLimit AudioBandwidthPercent ClipboardBandwidthLimit ClipboardBandwidthPercent ComPortBandwidthLimit ComPortBandwidthPercent FileRedirectionBandwidthLimit FileRedirectionBandwidthPercent HDXMultimediaBandwidthLimit HDXMultimediaBandwidthPercent LptBandwidthLimit LptBandwidthLimitPercent OverallBandwidthLimit PrinterBandwidthLimit PrinterBandwidthPercent TwainBandwidthLimit TwainBandwidthPercent USBBandwidthLimit USBBandwidthPercent AllowRtpAudio AudioPlugNPlay AudioQuality ClientAudioRedirection EnableAdaptiveAudio MicrophoneRedirection FlashAcceleration FlashBackwardsCompatibility FlashDefaultBehavior FlashEventLogging FlashIntelligentFallback FlashLatencyThreshold FlashServerSideContentFetchingWhitelist FlashUrlColorList FlashUrlCompatibilityList HDXFlashLoadManagement HDXFlashLoadManagementErrorSwf Type: Computer WemCloudConnectorList VirtualLoopbackPrograms VirtualLoopbackSupport EnableAutoUpdateOfControllers AppFailureExclusionList EnableProcessMonitoring EnableResourceMonitoring EnableWorkstationVDAFaultMonitoring SelectedFailureLevel CPUUsageMonitoring_Enable CPUUsageMonitoring_Period 
CPUUsageMonitoring_Threshold VdcPolicyEnable EnableClipboardMetadataCollection EnableVdaDiagnosticsCollection XenAppOptimizationDefinitionPathData XenAppOptimizationEnabled ExclusionList_Part IncludeListRegistry_Part LastKnownGoodRegistry DefaultExclusionList ExclusionDefaultReg01 ExclusionDefaultReg02 ExclusionDefaultReg03 PSAlwaysCache PSAlwaysCache_Part PSEnabled PSForFoldersEnabled PSForPendingAreaEnabled PSPendingLockTimeout PSUserGroups_Part StreamingExclusionList_Part ApplicationProfilesAutoMigration DeleteCachedProfilesOnLogoff LocalProfileConflictHandling_Part MigrateWindowsProfilesToUserStore_Part ProfileDeleteDelay_Part TemplateProfileIsMandatory TemplateProfileOverridesLocalProfile TemplateProfileOverridesRoamingProfile TemplateProfilePath DisableConcurrentAccessToOneDriveContainer DisableConcurrentAccessToProfileContainer EnableVHDAutoExtend EnableVHDDiskCompaction GroupsToAccessProfileContainer_Part PreventLoginWhenMountFailed_Part ProfileContainerExclusionListDir_Part ProfileContainerExclusionListFile_Part ProfileContainerInclusionListDir_Part ProfileContainerInclusionListFile_Part ProfileContainerLocalCache DebugFilePath_Part DebugMode LogLevel_ActiveDirectoryActions LogLevel_FileSystemActions LogLevel_FileSystemNotification LogLevel_Information LogLevel_Logoff LogLevel_Logon LogLevel_PolicyUserLogon LogLevel_RegistryActions LogLevel_RegistryDifference LogLevel_UserName LogLevel_Warnings MaxLogSize_Part LargeFileHandlingList_Part LogonExclusionCheck_Part AccelerateFolderMirroring MirrorFoldersList_Part ProfileContainer_Part SyncDirList_Part SyncFileList_Part ExclusionListSyncDir_Part ExclusionListSyncFiles_Part DefaultExclusionListSyncDir ExclusionDefaultDir01 ExclusionDefaultDir02 ExclusionDefaultDir03 ExclusionDefaultDir04 ExclusionDefaultDir05 ExclusionDefaultDir06 ExclusionDefaultDir07 ExclusionDefaultDir08 ExclusionDefaultDir09 ExclusionDefaultDir10 ExclusionDefaultDir11 ExclusionDefaultDir12 ExclusionDefaultDir13 ExclusionDefaultDir14 
ExclusionDefaultDir15 ExclusionDefaultDir16 ExclusionDefaultDir17 ExclusionDefaultDir18 ExclusionDefaultDir19 ExclusionDefaultDir20 ExclusionDefaultDir21 ExclusionDefaultDir22 ExclusionDefaultDir23 ExclusionDefaultDir24 ExclusionDefaultDir25 ExclusionDefaultDir26 ExclusionDefaultDir27 ExclusionDefaultDir28 ExclusionDefaultDir29 ExclusionDefaultDir30 SharedStoreFileExclusionList_Part SharedStoreFileInclusionList_Part SharedStoreProfileContainerFileSizeLimit_Part CPEnable CPMigrationFromBaseProfileToCPStore CPPathData CPSchemaPathData CPUserGroups_Part DATPath_Part ExcludedGroups_Part MigrateUserStore_Part OfflineSupport ProcessAdmins ProcessedGroups_Part PSMidSessionWriteBack PSMidSessionWriteBackReg PSMidSessionWriteBackSessionLock ServiceActive AppAccessControl_Part CEIPEnabled CredBasedAccessEnabled DisableDynamicConfig EnableVolumeReattach FreeRatio4Compaction_Part FSLogixProfileContainerSupport LoadRetries_Part LogoffRatherThanTempProfile MultiSiteReplication_Part NDefrag4Compaction NLogoffs4Compaction_Part OneDriveContainer_Part OrderedGroups_Part OutlookEdbBackupEnabled OutlookSearchRoamingConcurrentSession OutlookSearchRoamingConcurrentSession_Part OutlookSearchRoamingEnabled ProcessCookieFiles SyncGpoStateEnabled UserGroupLevelConfigEnabled UserStoreSelection_Part UwpAppsRoaming VhdAutoExpansionIncrement_Part VhdAutoExpansionLimit_Part VhdAutoExpansionThreshold_Part VhdContainerCapacity_Part VhdStorePath_Part UplCustomizedUserLayerSizeInGb UplGroupsUsingCustomizedUserLayerSize UplRepositoryPath UplUserExclusions UplUserLayerSizeInGb ConcurrentLogonsTolerance CPUUsage CPUUsageExcludedProcessPriority DiskUsage MaximumNumberOfSessions MemoryUsage MemoryUsageBaseLoad ApplicationLaunchWaitTimeout HDXAdaptiveTransport HDXDirect HDXDirectMode HDXDirectPortRange IcaListenerPortNumber IcaListenerTimeout LogoffCheckerStartupDelay RemoteCredentialGuard RendezvousProtocol RendezvousProxy SecureHDX VdaUpgradeProxy VirtualChannelWhiteList VirtualChannelWhiteListLogging 
VirtualChannelWhiteListLogThrottling AcceptWebSocketsConnections WebSocketsPort WSTrustedOriginServerList SessionReliabilityConnections SessionReliabilityPort SessionReliabilityTimeout IdleTimerInterval LoadBalancedPrintServers PrintServersOutOfServiceThreshold UpcHttpConnectTimeout UpcHttpReceiveTimeout UpcHttpSendTimeout UpcSslCgpPort UpcSslCipherSuite UpcSslComplianceMode UpcSslEnable UpcSslFips UpcSslHttpsPort UpcSslProtocolVersion UpsCgpPort UpsEnable UpsHttpPort HTML5VideoRedirection MultimediaAcceleration MultimediaAccelerationDefaultBufferSize MultimediaAccelerationEnableCSF MultimediaAccelerationUseDefaultBufferSize MultimediaConferencing WebBrowserRedirection MultiPortPolicy MultiStreamAssignment MultiStreamPolicy RtpAudioPortRange UDPAudioOnServer AllowLocalAppAccess URLRedirectionBlackList URLRedirectionWhiteList IcaKeepAlives IcaKeepAliveTimeout DisplayDegradePreference DisplayDegradeUserNotification DisplayMemoryLimit DynamicPreview ImageCaching LegacyGraphicsMode MaximumColorDepth QueueingAndTossing FramehawkDisplayChannelPortRange PersistentCache EnhancedDesktopExperience IcaRoundTripCalculation IcaRoundTripCalculationInterval IcaRoundTripCalculationWhenIdle ACRTimeout AutoClientReconnect AutoClientReconnectAuthenticationRequired AutoClientReconnectLogging ReconnectionUiTransparencyLevel AppProtectionPostureCheck AdvanceWarningFrequency AdvanceWarningMessageTitle AdvanceWarningPeriod AgentTaskInterval FinalForceLogoffMessageBody FinalForceLogoffMessageTitle ForceLogoffGracePeriod ForceLogoffMessageTitle ImageProviderIntegrationEnabled RebootMessageBody

Using these internal names allows policies to be deployed through the Terraform provider. Make sure the variables are configured according to your needs.
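A minimal sketch of how such an internal setting name can be referenced in a policy deployed via the provider's citrix_policy_set resource. The resource, policy, and setting values below are illustrative, and the attribute layout follows the provider documentation at the time of writing, so verify it against your provider version:

```hcl
# Example policy set using an internal setting name from the list above
# (sketch only - check the citrix provider documentation for the exact schema)
resource "citrix_policy_set" "CreatePolicySet" {
  name        = "TACG-GCP-TF-Policies"          # illustrative name
  description = "Policy set deployed via Terraform"
  type        = "DeliveryGroupPolicies"
  policies = [
    {
      name    = "Clipboard-Policy"
      enabled = true
      policy_settings = [
        {
          name        = "ClipboardRedirection"  # internal name, not the localized display name
          value       = "Disabled"
          use_default = false
        },
      ]
    },
  ]
}
```

Note that the name passed in policy_settings must be the internal name returned by Get-CtxGroupPolicyConfiguration; the localized display names shown in Web Studio are not accepted.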
Caution: Before running Terraform, no Terraform-related entities were available:

No Terraform-related Hypervisor Connection to Google Cloud Platform exists on Citrix Cloud
No Terraform-related Machine Catalog for Google Cloud Platform exists on Citrix Cloud
No Terraform-related Delivery Group for Google Cloud Platform exists on Citrix Cloud

The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and, if no errors occur, terraform apply.

PS C:\TACG\_CCOnGCP\_CCOnGCP-CCStuff> terraform init

Initializing the backend...

Initializing provider plugins...
- Finding citrix/citrix versions matching ">= 0.5.4"...
- Finding hashicorp/google versions matching ">= 5.21.0"...
- Finding latest version of hashicorp/time...
- Finding latest version of hashicorp/local...
- Installing citrix/citrix v0.5.4...
- Installed citrix/citrix v0.5.4 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)
- Installing hashicorp/google v5.23.0...
- Installed hashicorp/google v5.23.0 (signed by HashiCorp)
- Installing hashicorp/time v0.11.1...
- Installed hashicorp/time v0.11.1 (signed by HashiCorp)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)

Partner and community providers are signed by their developers. If you'd like to know more about provider signing, you can read about it here: https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future.

Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

PS C:\TACG\_CCOnGCP\_CCOnGCP-CCStuff> terraform plan
data.local_file.LoadZoneID: Reading...
data.local_file.GCPCredentials: Reading...
data.local_file.LoadZoneID: Read complete after 0s [id=fec7404d513834378890c62d7e34d9d692f11957]
data.local_file.GCPCredentials: Read complete after 0s [id=e153c83ee291d1531ec30503aacbb705602bde95]
data.google_compute_network.GCPVPC: Reading...
data.google_compute_network.GCPSubnet: Reading...
data.google_compute_network.GCPSubnet: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/global/networks/default]
data.google_compute_network.GCPVPC: Read complete after 0s [id=projects/tacg-gcp-XXXXXXXXXX/global/networks/default]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # citrix_delivery_group.CreateDG will be created
  + resource "citrix_delivery_group" "CreateDG" {
      + associated_machine_catalogs = [
          + {
              + machine_catalog = (known after apply)
              + machine_count = 1
            },
        ]
      + autoscale_settings = {
          + autoscale_enabled = true
          + disconnect_off_peak_idle_session_after_seconds = 0
          + disconnect_peak_idle_session_after_seconds = 300
          + log_off_off_peak_disconnected_session_after_seconds = 0
          + log_off_peak_disconnected_session_after_seconds = 300
          + off_peak_buffer_size_percent = 0
          + off_peak_disconnect_action = "Nothing"
          + off_peak_disconnect_timeout_minutes = 0
          + off_peak_extended_disconnect_action = "Nothing"
          + off_peak_extended_disconnect_timeout_minutes = 0
          + off_peak_log_off_action = "Nothing"
          + peak_buffer_size_percent = 0
          + peak_disconnect_action = "Nothing"
          + peak_disconnect_timeout_minutes = 0
          + peak_extended_disconnect_action = "Nothing"
          + peak_extended_disconnect_timeout_minutes = 0
          + peak_log_off_action = "Nothing"
          + power_off_delay_minutes = 30
          + power_time_schemes = [
              + {
                  + days_of_week = [
                      + "Monday",
                      + "Tuesday",
                      + "Wednesday",
                      + "Thursday",
                      + "Friday",
                    ]
                  + display_name = "TACG-GCP-TF-AS-Weekdays"
                  + peak_time_ranges = [
                      + "09:00-17:00",
                    ]
                  + pool_size_schedules = [
                      + {
                          + pool_size = 1
                          + time_range = "09:00-17:00"
                        },
                    ]
                  + pool_using_percentage = false
                },
            ]
        }
      + desktops = [
          + {
              + description = "Terraform-based Delivery Group running on GCP"
              + enable_session_roaming = true
              + enabled = true
              + published_name = "DG-TF-TACG-GCP"
              + restricted_access_users = {
                  + allow_list = [
                      + "TACG-GCP\\vdaallowed",
                    ]
                }
            },
        ]
      + id = (known after apply)
      + minimum_functional_level = "L7_20"
      + name = "DG-TF-TACG-GCP"
      + reboot_schedules = [
          + {
              + days_in_week = [
                  + "Sunday",
                ]
              + frequency = "Weekly"
              + frequency_factor = 1
              + ignore_maintenance_mode = true
              + name = "TACG-GCP-Reboot Schedule"
              + natural_reboot_schedule = false
              + reboot_duration_minutes = 0
              + reboot_schedule_enabled = true
              + start_date = "2024-01-01"
              + start_time = "02:00"
            },
        ]
      + restricted_access_users = {
          + allow_list = [
              + "TACG-GCP\\vdaallowed",
            ]
        }
      + total_machines = (known after apply)
    }

  # citrix_gcp_hypervisor.CreateHypervisorConnection will be created
  + resource "citrix_gcp_hypervisor" "CreateHypervisorConnection" {
      + id = (known after apply)
      + name = "TACG-GCP-TF-HypConn"
      + service_account_credentials = (sensitive value)
      + service_account_id = "XXXXXXXXXX@XXXXXXXXXX.iam.gserviceaccount.com"
      + zone = "XXXXXXXX-XXXX-XXXX-XXXX-f6a1f864d69a"
    }

  # citrix_gcp_hypervisor_resource_pool.CreateHypervisorPool will be created
  + resource "citrix_gcp_hypervisor_resource_pool" "CreateHypervisorPool" {
      + hypervisor = (known after apply)
      + id = (known after apply)
      + name = "TACG-GCP-TF-HypConnPool"
      + project_name = "TACG-GCP"
      + region = "europe-west3"
      + subnets = [
          + "default",
        ]
      + vpc = "default"
    }

  # citrix_machine_catalog.CreateMCSCatalog will be created
  + resource "citrix_machine_catalog" "CreateMCSCatalog" {
      + allocation_type = "Random"
      + description = "Terraform-based Machine Catalog"
      + id = (known after apply)
      + is_power_managed = true
      + is_remote_pc = false
      + minimum_functional_level = "L7_20"
      + name = "MC-TACG-GCP-TF"
      + provisioning_scheme = {
          + gcp_machine_config = {
              + master_image = "tacg-gcp-tf-wmi"
              + storage_type = "pd-standard"
            }
          + hypervisor = (known after apply)
          + hypervisor_resource_pool = (known after apply)
          + identity_type = "ActiveDirectory"
          + machine_account_creation_rules = {
              + naming_scheme = "TACG-GCP-W###"
              + naming_scheme_type = "Numeric"
            }
          + machine_domain_identity = {
              + domain = "gcp.the-austrian-citrix-guy.at"
              + domain_ou = "CN=Computers,DC=gcp,DC=the-austrian-citrix-guy,DC=at"
              + service_account = "Administrator"
              + service_account_password = (sensitive value)
            }
          + number_of_total_machines = 1
        }
      + provisioning_type = "MCS"
      + session_support = "MultiSession"
      + zone = "XXXXXXXX-XXXX-XXXX-XXXX-f6a1f864d69a"
    }

  # time_sleep.wait_60_seconds will be created
  + resource "time_sleep" "wait_60_seconds" {
      + create_duration = "60s"
      + id = (known after apply)
    }

  # time_sleep.wait_60_seconds_1 will be created
  + resource "time_sleep" "wait_60_seconds_1" {
      + create_duration = "60s"
      + id = (known after apply)
    }

Plan: 6 to add, 0 to change, 0 to destroy.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

PS C:\TACG\_CCOnGCP\_CCOnGCP-CCStuff> terraform apply
data.local_file.GCPCredentials: Reading...
data.local_file.LoadZoneID: Reading...
data.local_file.LoadZoneID: Read complete after 0s [id=fec7404d513834378890c62d7e34d9d692f11957]
data.local_file.GCPCredentials: Read complete after 0s [id=e153c83ee291d1531ec30503aacbb705602bde95]
data.google_compute_network.GCPSubnet: Reading...
data.google_compute_network.GCPVPC: Reading...
data.google_compute_network.GCPSubnet: Read complete after 1s [id=projects/tacg-gcp-XXXXXXXXXX/global/networks/default]
data.google_compute_network.GCPVPC: Read complete after 1s [id=projects/tacg-gcp-XXXXXXXXXX/global/networks/default]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # citrix_delivery_group.CreateDG will be created + resource "citrix_delivery_group" "CreateDG" { + associated_machine_catalogs = [ + { + machine_catalog = (known after apply) + machine_count = 1 }, ] + autoscale_settings = { + autoscale_enabled = true + disconnect_off_peak_idle_session_after_seconds = 0 + disconnect_peak_idle_session_after_seconds = 300 + log_off_off_peak_disconnected_session_after_seconds = 0 + log_off_peak_disconnected_session_after_seconds = 300 + off_peak_buffer_size_percent = 0 + off_peak_disconnect_action = "Nothing" + off_peak_disconnect_timeout_minutes = 0 + off_peak_extended_disconnect_action = "Nothing" + off_peak_extended_disconnect_timeout_minutes = 0 + off_peak_log_off_action = "Nothing" + peak_buffer_size_percent = 0 + peak_disconnect_action = "Nothing" + peak_disconnect_timeout_minutes = 0 + peak_extended_disconnect_action = "Nothing" + peak_extended_disconnect_timeout_minutes = 0 + peak_log_off_action = "Nothing" + power_off_delay_minutes = 30 + power_time_schemes = [ + { + days_of_week = [ + "Monday", + "Tuesday", + "Wednesday", + "Thursday", + "Friday", ] + display_name = "TACG-GCP-TF-AS-Weekdays" + peak_time_ranges = [ + "09:00-17:00", ] + pool_size_schedules = [ + { + pool_size = 1 + time_range = "09:00-17:00" }, ] + pool_using_percentage = false }, ] } + desktops = [ + { + description = "Terraform-based Delivery Group running on GCP" + enable_session_roaming = true + enabled = true + published_name = "DG-TF-TACG-GCP" + restricted_access_users = { + allow_list = [ + "TACG-GCP\\vdaallowed", ] } }, ] + id = (known after apply) + minimum_functional_level = "L7_20" + name = "DG-TF-TACG-GCP" + reboot_schedules = [ + { + days_in_week = [ + "Sunday", ] + frequency = "Weekly" + frequency_factor = 1 + ignore_maintenance_mode = true + name = "TACG-GCP-Reboot Schedule" + natural_reboot_schedule = false + reboot_duration_minutes 
          = 0
          + reboot_schedule_enabled = true
          + start_date              = "2024-01-01"
          + start_time              = "02:00"
          },
      ]
      + restricted_access_users = {
          + allow_list = [
              + "TACG-GCP\\vdaallowed",
            ]
        }
      + total_machines          = (known after apply)
    }

  # citrix_gcp_hypervisor.CreateHypervisorConnection will be created
  + resource "citrix_gcp_hypervisor" "CreateHypervisorConnection" {
      + id                          = (known after apply)
      + name                        = "TACG-GCP-TF-HypConn"
      + service_account_credentials = (sensitive value)
      + service_account_id          = "XXXXXXXXXX@XXXXXXXXXX.iam.gserviceaccount.com"
      + zone                        = "XXXXXXXX-XXXX-XXXX-XXXX-f6a1f864d69a"
    }

  # citrix_gcp_hypervisor_resource_pool.CreateHypervisorPool will be created
  + resource "citrix_gcp_hypervisor_resource_pool" "CreateHypervisorPool" {
      + hypervisor   = (known after apply)
      + id           = (known after apply)
      + name         = "TACG-GCP-TF-HypConnPool"
      + project_name = "TACG-GCP"
      + region       = "europe-west3"
      + subnets      = [
          + "default",
        ]
      + vpc          = "default"
    }

  # citrix_machine_catalog.CreateMCSCatalog will be created
  + resource "citrix_machine_catalog" "CreateMCSCatalog" {
      + allocation_type          = "Random"
      + description              = "Terraform-based Machine Catalog"
      + id                       = (known after apply)
      + is_power_managed         = true
      + is_remote_pc             = false
      + minimum_functional_level = "L7_20"
      + name                     = "MC-TACG-GCP-TF"
      + provisioning_scheme      = {
          + gcp_machine_config = {
              + master_image = "tacg-gcp-tf-wmi"
              + storage_type = "pd-standard"
            }
          + hypervisor               = (known after apply)
          + hypervisor_resource_pool = (known after apply)
          + identity_type            = "ActiveDirectory"
          + machine_account_creation_rules = {
              + naming_scheme      = "TACG-GCP-W###"
              + naming_scheme_type = "Numeric"
            }
          + machine_domain_identity = {
              + domain                   = "gcp.the-austrian-citrix-guy.at"
              + domain_ou                = "CN=Computers,DC=gcp,DC=the-austrian-citrix-guy,DC=at"
              + service_account          = "XXXXXXXXXX"
              + service_account_password = (sensitive value)
            }
          + number_of_total_machines = 1
        }
      + provisioning_type = "MCS"
      + session_support   = "MultiSession"
      + zone              = "XXXXXXXX-XXXX-XXXX-XXXX-f6a1f864d69a"
    }

  # time_sleep.wait_60_seconds will be created
  + resource "time_sleep" "wait_60_seconds" {
      + create_duration = "60s"
      + id              = (known after apply)
    }

  # time_sleep.wait_60_seconds_1 will be created
  + resource "time_sleep" "wait_60_seconds_1" {
      + create_duration = "60s"
      + id              = (known after apply)
    }

Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

citrix_gcp_hypervisor.CreateHypervisorConnection: Creating...
citrix_gcp_hypervisor.CreateHypervisorConnection: Still creating... [10s elapsed]
citrix_gcp_hypervisor.CreateHypervisorConnection: Still creating... [20s elapsed]
citrix_gcp_hypervisor.CreateHypervisorConnection: Still creating... [30s elapsed]
citrix_gcp_hypervisor.CreateHypervisorConnection: Still creating... [40s elapsed]
citrix_gcp_hypervisor.CreateHypervisorConnection: Creation complete after 41s [id=ceb89ce1-b09f-467d-8fe8-93e59e31bff6]
citrix_gcp_hypervisor_resource_pool.CreateHypervisorPool: Creating...
citrix_gcp_hypervisor_resource_pool.CreateHypervisorPool: Still creating... [10s elapsed]
citrix_gcp_hypervisor_resource_pool.CreateHypervisorPool: Creation complete after 11s [id=506a698f-22bf-4b5f-a910-4618b41890f7]
time_sleep.wait_60_seconds: Creating...
time_sleep.wait_60_seconds: Still creating... [10s elapsed]
time_sleep.wait_60_seconds: Still creating... [20s elapsed]
time_sleep.wait_60_seconds: Still creating... [30s elapsed]
time_sleep.wait_60_seconds: Still creating... [40s elapsed]
time_sleep.wait_60_seconds: Still creating... [51s elapsed]
time_sleep.wait_60_seconds: Creation complete after 1m0s [id=2024-04-04T07:33:16Z]
citrix_machine_catalog.CreateMCSCatalog: Creating...
citrix_machine_catalog.CreateMCSCatalog: Still creating... [10s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [20s elapsed]
... ** Output shortened ** ...
citrix_machine_catalog.CreateMCSCatalog: Still creating... [20m20s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [20m30s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Creation complete after 20m35s [id=1410b841-877a-4c7d-bf57-15e310865f36]
time_sleep.wait_60_seconds_1: Creating...
time_sleep.wait_60_seconds_1: Still creating... [10s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [20s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [30s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [40s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [50s elapsed]
time_sleep.wait_60_seconds_1: Creation complete after 1m0s [id=2024-04-04T07:54:51Z]
citrix_delivery_group.CreateDG: Creating...
citrix_delivery_group.CreateDG: Creation complete after 5s [id=9f458d5f-594f-49b5-b2c2-cb920442dc27]

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

PS C:\TACG\_CCOnGCP\_CCOnGCP-CCStuff>

This configuration completes the full deployment of a Citrix Cloud Resource Location in Google Cloud Platform (GCP).
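The Machine Catalog in the plan above uses `naming_scheme = "TACG-GCP-W###"` with `naming_scheme_type = "Numeric"`, where each `#` is replaced by a digit of a zero-padded counter when MCS creates machine accounts. The following is a minimal sketch of that expansion for illustration only; the function name is ours, not part of any Citrix or Terraform API.

```python
# Hypothetical illustration of a numeric MCS naming scheme: each "#" in the
# scheme is one digit of a zero-padded counter (e.g. "TACG-GCP-W###" -> "TACG-GCP-W001").
def expand_naming_scheme(scheme: str, count: int, start: int = 1) -> list[str]:
    width = scheme.count("#")                    # number of wildcard digits
    prefix = scheme.replace("#" * width, "{}")   # keep the fixed part of the name
    return [prefix.format(str(n).zfill(width)) for n in range(start, start + count)]

print(expand_naming_scheme("TACG-GCP-W###", 3))
```

With `number_of_total_machines = 1`, the catalog above would therefore create a single machine account named `TACG-GCP-W001`.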
The environment created by Terraform is now ready for use, and all entities are in place (screenshots omitted):

- The Resource Location
- The Hypervisor Connection and the Hypervisor Pool
- The Machine Catalog
- The Delivery Group and the Worker-VM
- The Worker-VM
- The AutoScale settings of the Delivery Group
- The Desktop in the Library
- The Desktop in Workspace app
- The connection to the Worker-VM's desktop

Appendix

Examples of the Terraform scripts

Module 1: CConGCP-Creation

These are the Terraform configuration files for Module 1 (excerpts):

provider.tf

# Terraform deployment of Citrix DaaS on Google Cloud Platform (GCP)
## Definition of all required Terraform providers
terraform {
  required_version = ">= 1.7.5"
  required_providers {
    restapi = {
      source  = "Mastercard/restapi"
      version = ">=1.18.2"
    }
    citrix = {
      source  = "citrix/citrix"
      version = ">=0.5.4"
    }
    google = {
      source  = "hashicorp/google"
      version = ">=5.21.0"
    }
  }
}

# Configure the Google GCP Provider
provider "google" {
  credentials = file(var.CCOnGCP-Creation-Provider-GCPAuthFileJSON)
  project     = var.CCOnGCP-Creation-Provider-GCPProject
  region      = var.CCOnGCP-Creation-Provider-GCPRegion
  zone        = var.CCOnGCP-Creation-Provider-GCPZone
}

# Configure the Citrix Provider
provider "citrix" {
  customer_id   = "${var.CC_CustomerID}"
  client_id     = "${var.CC_APIKey-ClientID}"
  client_secret = "${var.CC_APIKey-ClientSecret}"
}

GetBearerToken.tf
### Create PowerShell file for determining the Bearer Token
resource "local_file" "GetBearerToken" {
  content  = <<-EOT
    asnp Citrix*
    $key      = "${var.CC_APIKey-ClientID}"
    $secret   = "${var.CC_APIKey-ClientSecret}"
    $customer = "${var.CC_CustomerID}"
    $XDStoredCredentials = Set-XDCredentials -StoreAs default -ProfileType CloudApi -CustomerId $customer -APIKey $key -SecretKey $secret
    $auth = Get-XDAuthentication
    $BT = $GLOBAL:XDAuthToken | Out-File "${path.module}/GetBT.txt" -Encoding Ascii -NoNewline
  EOT
  filename = "${path.module}/GetBT.ps1"
}

### Running the GetBearerToken script to retrieve the Bearer Token
resource "terraform_data" "GetBT" {
  depends_on = [ local_file.GetBearerToken ]
  provisioner "local-exec" {
    command     = "${path.module}/GetBT.ps1"
    interpreter = ["PowerShell", "-File"]
  }
}

### Retrieving the Bearer Token
data "local_file" "Retrieve_BT" {
  depends_on = [ terraform_data.GetBT ]
  filename   = "${path.module}/GetBT.txt"
}

output "terraform_data_BR_Read" {
  value = data.local_file.Retrieve_BT.content
}

create.tf
# Terraform deployment of Citrix DaaS on Google Cloud Platform
## Creation of all required entities - Networking

### Get VPC
data "google_compute_network" "VPC" {
  name = "${lower(var.GCP_VPC_Name)}"
}

### Get Subnet
data "google_compute_subnetwork" "Subnet" {
  name = "${lower(var.GCP_VPC_Subnet_Name)}"
}

### Create Firewall Rule - allow global HTTP
resource "google_compute_firewall" "Allow-HTTP" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-http"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "tcp"
    ports    = ["80"]
  }
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["http"]
}

### Create Firewall Rule - allow global HTTPS
resource "google_compute_firewall" "Allow-HTTPS" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-https"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "tcp"
    ports    = ["443"]
  }
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["https"]
}

### Create Firewall Rule - allow Subnet SSH
resource "google_compute_firewall" "Allow-SSH" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-ssh"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
  source_ranges = ["${data.google_compute_subnetwork.Subnet.ip_cidr_range}"]
  target_tags   = ["ssh"]
}

### Create Firewall Rule - allow global RDP
resource "google_compute_firewall" "Allow-RDP" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-rdp"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "tcp"
    ports    = ["3389"]
  }
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["rdp"]
}

### Create Firewall Rule - allow Subnet WinRM
resource "google_compute_firewall" "Allow-WinRM" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-winrm"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "tcp"
    ports    = ["5985","5986"]
  }
  source_ranges = ["${data.google_compute_subnetwork.Subnet.ip_cidr_range}"]
  target_tags   = ["winrm"]
}

### Create Firewall Rule - allow Subnet All TCP
resource "google_compute_firewall" "Allow-All-TCP" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-all-tcp"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "tcp"
    ports    = ["0-65534"]
  }
  source_ranges = ["${data.google_compute_subnetwork.Subnet.ip_cidr_range}"]
  target_tags   = ["all-tcp"]
}

### Create Firewall Rule - allow Subnet All UDP
resource "google_compute_firewall" "Allow-All-UDP" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-all-udp"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "udp"
    ports    = ["0-65534"]
  }
  source_ranges = ["${data.google_compute_subnetwork.Subnet.ip_cidr_range}"]
  target_tags   = ["all-udp"]
}

### Create Firewall Rule - allow Subnet All ICMP
resource "google_compute_firewall" "Allow-All-ICMP" {
  name    = "${var.GCP_App_Global_Name}-fw-allow-all-icmp"
  network = data.google_compute_network.VPC.name
  allow {
    protocol = "icmp"
  }
  source_ranges = ["${data.google_compute_subnetwork.Subnet.ip_cidr_range}"]
  target_tags   = ["all-icmp"]
}

### Create static internal IP address for CC1
resource "google_compute_address" "internal-ip-cc1" {
  name         = "internal-ip-cc1"
  subnetwork   = data.google_compute_subnetwork.Subnet.id
  address_type = "INTERNAL"
  address      = "${var.GCP_VPC_InternalIP_CC1}"
  region       = "${var.CCOnGCP-Creation-Provider-GCPRegion}"
}

### Create static internal IP address for CC2
resource "google_compute_address" "internal-ip-cc2" {
  name         = "internal-ip-cc2"
  subnetwork   = data.google_compute_subnetwork.Subnet.id
  address_type = "INTERNAL"
  address      = "${var.GCP_VPC_InternalIP_CC2}"
  region       = "${var.CCOnGCP-Creation-Provider-GCPRegion}"
}

### Create static internal IP address for Admin-VM
resource "google_compute_address" "internal-ip-adminvm" {
  name         = "internal-ip-avm"
  subnetwork   = data.google_compute_subnetwork.Subnet.id
  address_type = "INTERNAL"
  address      = "${var.GCP_VPC_InternalIP_AdminVM}"
  region       = "${var.CCOnGCP-Creation-Provider-GCPRegion}"
}

### Create static internal IP address for WMI
resource "google_compute_address" "internal-ip-wmi" {
  name         = "internal-ip-wmi"
  subnetwork   = data.google_compute_subnetwork.Subnet.id
  address_type = "INTERNAL"
  address      = "${var.GCP_VPC_InternalIP_WMI}"
  region       = "${var.CCOnGCP-Creation-Provider-GCPRegion}"
}

### Create PowerShell file for joining the domain
resource "local_file" "CreateDomainJoinScript" {
  content  = <<-EOT
    Start-Sleep -Seconds 10
    net user administrator /active:yes
    net user administrator ${var.GCP_App_Global_LocalAdminPW}
    netdom.exe join $env:COMPUTERNAME /domain:'${var.GCP_App_Global_FullDomainName}' /UserD:'${var.GCP_App_Global_DomainAdminUPN}' /PasswordD:'${var.GCP_App_Global_DomainAdminPW}' /reboot:5
  EOT
  filename = "${path.module}/TF-Domain-Join-Script.ps1"
}

### Create a Storage Bucket for all required components
resource "google_storage_bucket" "Prereqs" {
  depends_on    = [ local_file.CreateDomainJoinScript ]
  name          = "${var.GCP_App_Global_Name}-storagebucket"
  location      = "EU"
  storage_class = "STANDARD"
  force_destroy = true
}

resource "google_storage_bucket_access_control" "Prereqs" {
  bucket = google_storage_bucket.Prereqs.name
  role   = "READER"
  entity = "allUsers"
}

resource "google_storage_bucket_iam_binding" "iam" {
  depends_on = [ google_storage_bucket.Prereqs ]
  bucket     = google_storage_bucket.Prereqs.name
  members    = [ "allUsers" ]
  role       = "roles/storage.objectViewer"
}

#### Upload all required Software
resource "google_storage_bucket_object" "Prereqs" {
  depends_on = [ google_storage_bucket.Prereqs ]
  name       = "domain-join-script.ps1"
  source     = "${path.module}/TF-Domain-Join-Script.ps1"
  bucket     = google_storage_bucket.Prereqs.name
}

resource "google_storage_bucket_object" "CC" {
  depends_on = [ google_storage_bucket.Prereqs ]
  name       = "cwcconnector.exe"
  source     = "${path.module}/DATA/cwcconnector.exe"
  bucket     = google_storage_bucket.Prereqs.name
}

resource "google_storage_bucket_object" "PoSH" {
  depends_on = [ google_storage_bucket.Prereqs ]
  name       = "CitrixPoSHSDK.exe"
  source     = "${path.module}/DATA/CitrixPoSHSDK.exe"
  bucket     = google_storage_bucket.Prereqs.name
}

resource "google_storage_bucket_object" "CHC" {
  depends_on = [ google_storage_bucket.Prereqs ]
  name       = "CloudHealthCheckInstaller_x64.msi"
  source     = "${path.module}/DATA/CloudHealthCheckInstaller_x64.msi"
  bucket     = google_storage_bucket.Prereqs.name
}

## Create the VMs
### Create the CC1-VM
resource "google_compute_instance" "CC1" {
  depends_on   = [ google_compute_address.internal-ip-cc1, google_storage_bucket_object.Prereqs ]
  name         = "${var.GCP_VM_CC1_Name}"
  hostname     = "${var.GCP_VM_CC1_Name}.${var.GCP_App_Global_FullDomainName}"
  machine_type = "${var.GCP_VM_CC_InstanceType}"
  zone         = "${var.CCOnGCP-Creation-Provider-GCPZone}"
  tags         = ["rdp","http","https","winrm", "all-tcp", "all-udp", "all-icmp","http-server","https-server"]
  boot_disk {
    initialize_params {
      image = "${var.GCP_VM_CC_ImageName}"
    }
  }
  metadata = {
    sysprep-specialize-script-url = "https://storage.googleapis.com/tacg-gcp-tf-storagebucket/domain-join-script.ps1"
  }
  network_interface {
    network    = data.google_compute_network.VPC.name
    subnetwork = data.google_compute_subnetwork.Subnet.name
    network_ip = "${var.GCP_VPC_InternalIP_CC1}"
    access_config {
    }
  }
  service_account {
    email  = "${var.GCP_VM_ServiceAccount}"
    scopes = ["cloud-platform"]
  }
}

### Create the CC2-VM
resource "google_compute_instance" "CC2" {
  depends_on   = [ google_compute_address.internal-ip-cc2, google_storage_bucket_object.Prereqs ]
  name         = "${var.GCP_VM_CC2_Name}"
  hostname     = "${var.GCP_VM_CC2_Name}.${var.GCP_App_Global_FullDomainName}"
  machine_type = "${var.GCP_VM_CC_InstanceType}"
  zone         = "${var.CCOnGCP-Creation-Provider-GCPZone}"
  tags         = ["rdp","http","https","winrm", "all-tcp", "all-udp", "all-icmp","http-server","https-server"]
  boot_disk {
    initialize_params {
      image = "${var.GCP_VM_CC_ImageName}"
    }
  }
  metadata = {
    sysprep-specialize-script-url = "https://storage.googleapis.com/tacg-gcp-tf-storagebucket/domain-join-script.ps1"
  }
  network_interface {
    network    = data.google_compute_network.VPC.name
    subnetwork = data.google_compute_subnetwork.Subnet.name
    network_ip = "${var.GCP_VPC_InternalIP_CC2}"
    access_config {
    }
  }
  service_account {
    email  = "${var.GCP_VM_ServiceAccount}"
    scopes = ["cloud-platform"]
  }
}

### Create the Admin-VM
resource "google_compute_instance" "AdminVM" {
  depends_on   = [ google_compute_address.internal-ip-adminvm, google_storage_bucket_object.Prereqs ]
  name         = "${var.GCP_VM_AdminVM_Name}"
  hostname     = "${var.GCP_VM_AdminVM_Name}.${var.GCP_App_Global_FullDomainName}"
  machine_type = "${var.GCP_VM_CC_InstanceType}"
  zone         = "${var.CCOnGCP-Creation-Provider-GCPZone}"
  tags         = ["rdp","http","https","winrm", "all-tcp", "all-udp", "all-icmp","http-server","https-server"]
  boot_disk {
    initialize_params {
      image = "${var.GCP_VM_CC_ImageName}"
    }
  }
  metadata = {
    sysprep-specialize-script-url = "https://storage.googleapis.com/tacg-gcp-tf-storagebucket/domain-join-script.ps1"
    #sysprep-specialize-script-url = google_storage_bucket_object.Prereqs.name
  }
  network_interface {
    network    = data.google_compute_network.VPC.name
    subnetwork = data.google_compute_subnetwork.Subnet.name
    network_ip = "${var.GCP_VPC_InternalIP_AdminVM}"
    access_config {
    }
  }
  service_account {
    email  = "${var.GCP_VM_ServiceAccount}"
    scopes = ["cloud-platform"]
  }
}

### Create the WMI-VM
resource "google_compute_instance" "WMI" {
  depends_on   = [ google_compute_address.internal-ip-wmi, google_storage_bucket_object.Prereqs ]
  name         = "${var.GCP_VM_WMI_Name}"
  hostname     = "${var.GCP_VM_WMI_Name}.${var.GCP_App_Global_FullDomainName}"
  machine_type = "${var.GCP_VM_CC_InstanceType_WMI}"
  zone         = "${var.CCOnGCP-Creation-Provider-GCPZone}"
  tags         = ["rdp","http","https","winrm", "all-tcp", "all-udp", "all-icmp","http-server","https-server"]
  boot_disk {
    initialize_params {
      image = "${var.GCP_VM_CC_ImageName}"
    }
  }
  metadata = {
    sysprep-specialize-script-url = "https://storage.googleapis.com/tacg-gcp-tf-storagebucket/domain-join-script.ps1"
  }
  network_interface {
    network    = data.google_compute_network.VPC.name
    subnetwork = data.google_compute_subnetwork.Subnet.name
    network_ip = "${var.GCP_VPC_InternalIP_WMI}"
    access_config {
    }
  }
  service_account {
    email  = "${var.GCP_VM_ServiceAccount}"
    scopes = ["cloud-platform"]
  }
}

Module 2: CConGCP-Install

These are the Terraform configuration files for Module 2 (excerpts):

provider.tf

# Terraform deployment of Citrix DaaS on Google Cloud Platform (GCP)
## Definition of all required Terraform providers
terraform {
  required_version = ">= 1.7.5"
  required_providers {
    restapi = {
      source  = "Mastercard/restapi"
      version = ">=1.18.2"
    }
    citrix = {
      source  = "citrix/citrix"
      version = ">=0.5.4"
    }
    google = {
      source  = "hashicorp/google"
      version = ">=5.21.0"
    }
  }
}

# Configure the Google GCP Provider
provider "google" {
  credentials = file(var.CCOnGCP-Creation-Provider-GCPAuthFileJSON)
  project     = var.CCOnGCP-Creation-Provider-GCPProject
  region      = var.CCOnGCP-Creation-Provider-GCPRegion
  zone        = var.CCOnGCP-Creation-Provider-GCPZone
}

# Configure the Citrix Provider
provider "citrix" {
  customer_id   = "${var.CC_CustomerID}"
  client_id     = "${var.CC_APIKey-ClientID}"
  client_secret = "${var.CC_APIKey-ClientSecret}"
}

CreatePreReqs.tf
# Terraform deployment of Citrix DaaS on Google Cloud Platform
locals {
}

## Create all Pre-requisites
### Create local directory
resource "local_file" "Log" {
  content  = "Directory created."
  filename = "${var.CC_Install_LogPath}/log.txt"
}

resource "local_file" "LogData" {
  depends_on = [ local_file.Log ]
  content    = "Directory created."
  filename   = "${var.CC_Install_LogPath}/DATA/log.txt"
}

data "google_compute_address" "IPOfCC1" {
  name = "internal-ip-cc1"
}

data "google_compute_address" "IPOfCC2" {
  name = "internal-ip-cc2"
}

### Create PowerShell file for installing the Citrix Remote PoSH SDK on AVM
resource "local_file" "InstallPoSHSDKOnAVM" {
  content  = <<-EOT
    #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain User's context
    $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
    $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
    $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
    $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
    Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
      $path = "${var.CC_Install_LogPath}"
      If (!(Test-Path -PathType container $path)) {
        New-Item -ItemType Directory -Path $path
      }
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript started."
      # Download Citrix Remote PowerShell SDK
      Invoke-WebRequest '${var.CC_Install_RPoSHURI}' -OutFile '${var.CC_Install_LogPath}/DATA/CitrixPoshSdk.exe'
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nPowerShell SDK downloaded."
      # Install Citrix Remote PowerShell SDK
      Start-Process -Filepath "${var.CC_Install_LogPath}/DATA/CitrixPoshSdk.exe" -ArgumentList "-quiet"
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nPowerShell SDK installed."
      # Timeout to settle all processes
      Start-Sleep -Seconds 60
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nTimeout elapsed."
    }
  EOT
  filename = "${var.CC_Install_LogPath}/DATA/InstallPoSHSDKOnAVM.ps1"
}

#### Execute Pre-Reqs-Script on AVM
resource "null_resource" "ExecuteInstallPoSHSDKOnAVM" {
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/InstallPoSHSDKOnAVM.ps1"
    interpreter = ["PowerShell", "-Command"]
  }
}

### Create PowerShell file for extracting the GCP credentials into a Terraform variable file on AVM
resource "local_file" "ExtractGCPCredentialsToVarFile" {
  content  = <<-EOT
    #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain User's context
    $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
    $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
    $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
    $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
    Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nGCP-Extraction Script started."
      $json = Get-Content '${var.CC_Install_LogPath}/${var.CCOnGCP-Creation-Provider-GCPAuthFileJSON}' | Out-String | ConvertFrom-Json
      $key = "`"tv_private_key`"" + ":" + "`"" + $json.private_key + "`""
      $keycorr = [string]::join("\n", ($key.Split("`n")))
      $email = "`"tv_client_email`"" + ":" + "`"" + $json.client_email + "`""
      $Path = 'c:/TACG/_CCOnGCP/_CCOnGCP-CCStuff/_CCOnGCP-CCStuff-CreateCCEntities-GCP.auto.tfvars.json'
      Set-Content -Path $Path -Encoding Ascii -Value "{"
      Add-Content -Path $Path -Encoding Ascii -Value ($keycorr + ",")
      Add-Content -Path $Path -Encoding Ascii -Value $email
      Add-Content -Path $Path -NoNewline -Encoding Ascii -Value '}'
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nGCP-Extraction Script stopped."
    }
  EOT
  filename = "${var.CC_Install_LogPath}/DATA/ExtractGCPCredentialsToVarFile.ps1"
}

#### Execute Pre-Reqs-Script on AVM
resource "null_resource" "ExecuteExtractGCPCredentialsToVarFile" {
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/ExtractGCPCredentialsToVarFile.ps1"
    interpreter = ["PowerShell", "-Command"]
  }
}

### Create CWC-Installer configuration file based on variables and save it into the Transfer directory
resource "local_file" "CWC-Configuration" {
  content  = jsonencode({
    "customerName"         = "${var.CC_CustomerID}",
    "clientId"             = "${var.CC_APIKey-ClientID}",
    "clientSecret"         = "${var.CC_APIKey-ClientSecret}",
    "resourceLocationId"   = "XXXXXXXXXX",
    "acceptTermsOfService" = true
  })
  filename = "${var.CC_Install_LogPath}/DATA/cwc.json"
}

### Retrieving the Bearer Token
#### Create PowerShell script to download the Bearer Token
resource "local_file" "GetBearerToken" {
  content  = <<-EOT
    asnp Citrix*
    $key      = "${var.CC_APIKey-ClientID}"
    $secret   = "${var.CC_APIKey-ClientSecret}"
    $customer = "${var.CC_CustomerID}"
    $XDStoredCredentials = Set-XDCredentials -StoreAs default -ProfileType CloudApi -CustomerId $customer -APIKey $key -SecretKey $secret
    $auth = Get-XDAuthentication
    $BT = $GLOBAL:XDAuthToken | Out-File
 "${var.CC_Install_LogPath}/DATA/GetBT.txt" -NoNewline -Encoding Ascii
  EOT
  filename = "${var.CC_Install_LogPath}/DATA/GetBT.ps1"
}

#### Running the GetBearerToken script to retrieve the Bearer Token
resource "terraform_data" "GetBT" {
  depends_on = [ local_file.GetBearerToken, null_resource.ExecuteInstallPoSHSDKOnAVM ]
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/GetBT.ps1"
    interpreter = ["PowerShell", "-File"]
  }
}

### Create a dedicated Resource Location in Citrix Cloud
#### Create the script to create a dedicated Resource Location in Citrix Cloud
resource "local_file" "CreateRLScript" {
  depends_on = [ terraform_data.GetBT ]
  content    = <<-EOT
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nCreateRL-Script started."
    $CCCustomerID = "${var.CC_CustomerID}"
    $CCBearerToken = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetBT.txt -Force
    $CCName = "${var.CC_RestRLName}"
    $CCGuid = New-Guid
    $requestUri = "https://api-eu.cloud.com/resourcelocations"
    $headers = @{
      "Accept"            = "application/json";
      "Authorization"     = $CCBearerToken;
      "Citrix-CustomerId" = $CCCustomerID;
      "Content-Type"      = "application/json"
    }
    $Body = @{
      "id"           = $CCGuid;
      "name"         = $CCName;
      "internalOnly" = $false;
      "timeZone"     = "GMT Standard Time";
      "readOnly"     = $false
    }
    $Bodyjson = $Body | ConvertTo-Json -Depth 3
    $response = Invoke-RestMethod -Uri $requestUri -Method POST -Headers $headers -Body $Bodyjson -ContentType "application/json"
    Add-Content ${var.CC_Install_LogPath}/log.txt "`n$response"
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nCreateRL-Script finished."
  EOT
  filename   = "${var.CC_Install_LogPath}/DATA/CreateRL.ps1"
}

#### Running the Resource Location script to generate the Resource Location
resource "terraform_data" "ResourceLocation" {
  depends_on = [ local_file.CreateRLScript ]
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/CreateRL.ps1"
    interpreter = ["PowerShell", "-File"]
  }
}

#### Wait 15 mins (900s) after RL creation to let Zone creation settle
resource "time_sleep" "wait_900_seconds" {
  depends_on      = [ terraform_data.ResourceLocation ]
  create_duration = "900s"
}

#### Wait 1 min after RL creation to let Zone creation settle
resource "time_sleep" "wait_60_seconds" {
  depends_on      = [ terraform_data.ResourceLocation ]
  create_duration = "60s"
}

### Create PowerShell file for determining the SiteID
resource "local_file" "GetSiteIDScript" {
  depends_on = [ time_sleep.wait_60_seconds ]
  content    = <<-EOT
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nSiteID-Script started."
    $requestUri = "https://api-eu.cloud.com/cvad/manage/me"
    $CCBearerToken = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetBT.txt -Force
    $headers = @{
      "Accept"            = "application/json";
      "Authorization"     = $CCBearerToken;
      "Citrix-CustomerId" = "${var.CC_CustomerID}"
    }
    $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | Select-Object Customers
    $responsetojson = $response | ConvertTo-Json -Depth 3
    $responsekorr = $responsetojson -replace("null","""empty""")
    $responsefromjson = $responsekorr | ConvertFrom-Json
    $SitesObj = $responsefromjson.Customers[0].Sites[0]
    $Export1 = $SitesObj -replace("@{Id=","")
    $SplittedString = $Export1.Split(";")
    $SiteID = $SplittedString[0]
    $PathCompl = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt"
    Set-Content -Path $PathCompl -Value $SiteID -NoNewline -Encoding Ascii
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nSiteID-Script successfully completed."
  EOT
  filename   = "${var.CC_Install_LogPath}/DATA/GetSiteID.ps1"
}

#### Running the SiteID-Script to generate the SiteID
resource "terraform_data" "SiteID" {
  depends_on = [ local_file.GetSiteIDScript ]
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/GetSiteID.ps1"
    interpreter = ["PowerShell", "-File"]
  }
}

### Create PowerShell file for determining the ZoneID
resource "local_file" "GetZoneIDScript" {
  depends_on = [ time_sleep.wait_900_seconds, terraform_data.SiteID ]
  content    = <<-EOT
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nZoneID-Script started."
    $requestUri = "https://api-eu.cloud.com/cvad/manage/Zones"
    $CCBearerToken = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetBT.txt -Force
    $CCSiteID = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetSiteID.txt -Force
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nBearer-Token: $CCBearerToken"
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nSiteID: $CCSiteID"
    $headers = @{
      "Accept"            = "application/json";
      "Authorization"     = $CCBearerToken;
      "Citrix-CustomerId" = "${var.CC_CustomerID}";
      "Citrix-InstanceId" = $CCSiteID
    }
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nHeader: $headers"
    $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | ConvertTo-Json
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nResponse: $response"
    $responsedejson = $response | ConvertFrom-Json
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nResponseDeJSON: $responsedejson"
    $ZoneId = $responsedejson.Items | Where-Object { $_.Name -eq "${var.CC_RestRLName}" } | Select-Object id
    $Export1 = $ZoneId -replace("@{Id=","")
    $ZoneID = $Export1 -replace("}","")
    $PathCompl = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt"
    Set-Content -Path $PathCompl -Value $ZoneID -NoNewline -Encoding Ascii
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nZoneID-Script completed."
  EOT
  filename   = "${var.CC_Install_LogPath}/DATA/GetZoneID.ps1"
}

#### Running the ZoneID-Script to generate the ZoneID
resource "terraform_data" "ZoneID" {
  depends_on = [ local_file.GetZoneIDScript ]
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/GetZoneID.ps1"
    interpreter = ["PowerShell", "-File"]
  }
}

resource "terraform_data" "ZoneID2" {
  depends_on = [ terraform_data.ZoneID ]
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/GetZoneID.ps1"
    interpreter = ["PowerShell", "-File"]
  }
}

#### Change RL-ID in CWC-JSON file to the valid Zone-ID
resource "local_file" "CreateValidCWCOnAVM" {
  depends_on = [ terraform_data.ZoneID2 ]
  content    = <<-EOT
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nCWC-Script started."
    #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain User's context
    $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
    $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
    $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
    $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
    Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
      $path = "${var.CC_Install_LogPath}"
      # Correct the Resource Location ID in the cwc.json file
      $requestUri = "https://api-eu.cloud.com/resourcelocations"
      $CCBearerToken = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetBT.txt -Force
      $CCSiteID = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetSiteID.txt -Force
      $CCZoneID = Get-Content -Path ${var.CC_Install_LogPath}/DATA/GetZoneID.txt -Force
      $headers = @{
        "Accept"            = "application/json";
        "Authorization"     = $CCBearerToken;
        "Citrix-CustomerId" = "${var.CC_CustomerID}"
      }
      $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | ConvertTo-Json
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nCWC-Response: $response"
      $RLs = ConvertFrom-Json $response
      $RLFiltered = $RLs.items | Where-Object name -in "${var.CC_RestRLName}"
      Add-Content ${var.CC_Install_LogPath}/log.txt $RLFiltered
      $RLID = $RLFiltered.id
      $OrigContent = Get-Content ${var.CC_Install_LogPath}/DATA/cwc.json
      Add-Content ${var.CC_Install_LogPath}/log.txt $RLID
      Add-Content ${var.CC_Install_LogPath}/log.txt $OrigContent
      $CorrContent = $OrigContent.Replace('XXXXXXXXXX', $RLID) | Out-File -FilePath ${var.CC_Install_LogPath}/DATA/cwc.json -NoNewline -Encoding Ascii
      $PathCompl = "${var.CC_Install_LogPath}/DATA/GetRLID.txt"
      Set-Content -Path $PathCompl -Value $RLID -NoNewline -Encoding Ascii
      Add-Content ${var.CC_Install_LogPath}/log.txt "`ncwc.json corrected."
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nCWC-Script completed."
    }
  EOT
  filename   = "${var.CC_Install_LogPath}/DATA/CreateValidCWCOnAVM.ps1"
}

#### Running the CWC-Script to patch cwc.json with the valid Resource Location ID
resource "terraform_data" "ExecuteCreateValidCWCOnAVM" {
  depends_on = [ local_file.CreateValidCWCOnAVM ]
  provisioner "local-exec" {
    command     = "${var.CC_Install_LogPath}/DATA/CreateValidCWCOnAVM.ps1"
    interpreter = ["PowerShell", "-File"]
  }
}

#### Create PowerShell file for the CWC-Installer script for the CCs
##### Check %LOCALAPPDATA%\Temp\CitrixLogs\CloudServicesSetup and %ProgramData%\Citrix\WorkspaceCloud\InstallLogs for logs!
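The CWC script above looks up the Resource Location's real ID by name in the `/resourcelocations` response and substitutes it for the `XXXXXXXXXX` placeholder written into `cwc.json` earlier. A minimal Python sketch of that substitution logic, using a made-up sample response and Resource Location name purely for illustration:

```python
import json

# Sketch of the placeholder substitution performed by CreateValidCWCOnAVM.ps1:
# find the Resource Location ID by name in the REST response, then patch the
# cwc.json text. The payload, name, and ID below are hypothetical samples.
def patch_cwc(cwc_text: str, api_response: dict, rl_name: str) -> str:
    rl_id = next(item["id"] for item in api_response["items"]
                 if item["name"] == rl_name)
    return cwc_text.replace("XXXXXXXXXX", rl_id)

response = {"items": [{"name": "TACG-GCP-TF-RL",
                       "id": "1234abcd-0000-0000-0000-000000000000"}]}
cwc = json.dumps({"customerName": "customer", "resourceLocationId": "XXXXXXXXXX"})
patched = json.loads(patch_cwc(cwc, response, "TACG-GCP-TF-RL"))
print(patched["resourceLocationId"])
```

The same lookup could also be done with a `Where-Object`/`.Replace()` pipeline as the PowerShell script does; the point is only that the placeholder is replaced after the Resource Location exists and its ID is known.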
resource "local_file" "InstallCWCOnCC" { #depends_on = [ terraform_data.ExecuteCreateValidCWCOnAVM ] content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { $path = "${var.CC_Install_LogPath}" If(!(test-path -PathType container $path)) { New-Item -ItemType Directory -Path $path } Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript started." # Download the Citrix Cloud Connector software to the CC Invoke-WebRequest ${var.CC_Install_CWCURI} -OutFile '${var.CC_Install_LogPath}/DATA/CWCConnector.exe' # Install the Citrix Cloud Connector based on the cwc.json configuration file # Check %LOCALAPPDATA%\Temp\CitrixLogs\CloudServicesSetup and %ProgramData%\Citrix\WorkspaceCloud\InstallLogs for logs Add-Content ${var.CC_Install_LogPath}/log.txt "`nInstalling Cloud Connector." Start-Process -Wait -Filepath "${var.CC_Install_LogPath}/DATA/CWCConnector.exe" -ArgumentList "/q /ParametersFilePath:${var.CC_Install_LogPath}/DATA/cwc.json" Add-Content ${var.CC_Install_LogPath}/log.txt "`nInstalled Cloud Connector." 
#Restart-Computer -Force } EOT filename = "${var.CC_Install_LogPath}/DATA/InstallCWCOnCC.ps1" } #### Create restart script resource "local_file" "RestartCC" { #depends_on = [ terraform_data.ExecuteCreateValidCWCOnAVM ] content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { Restart-Computer -Force } EOT filename = "${var.CC_Install_LogPath}/DATA/RestartCC.ps1" } ####################################################################################################################################################################### ### Upload required components to CC1 #### Set the Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToCC1" { depends_on = [ local_file.InstallCWCOnCC ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = data.google_compute_address.IPOfCC1.address timeout = var.Provisioner_Timeout } ###### Upload Cloud Connector configuration file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/cwc.json" destination = "${var.CC_Install_LogPath}/DATA/cwc.json" } ###### Upload SiteID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ###### Upload ZoneID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } ###### Upload RLID file to CC1 provisioner "file" { source = 
"${var.CC_Install_LogPath}/DATA/GetRLID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" } ###### Upload PreReqs script to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/InstallCWCOnCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallCWCOnCC.ps1" } ###### Upload Restart script to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/RestartCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/RestartCC.ps1" } } ###### Execute the PreReqs script on CC1 resource "null_resource" "CallRequiredScriptsOnCC1" { depends_on = [ null_resource.UploadRequiredComponentsToCC1 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = data.google_compute_address.IPOfCC1.address timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallCWCOnCC.ps1" ] } } ### Upload required components to CC2 #### Set the Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToCC2" { depends_on = [ local_file.InstallCWCOnCC ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = data.google_compute_address.IPOfCC2.address timeout = var.Provisioner_Timeout } ###### Upload Cloud Connector configuration file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/cwc.json" destination = "${var.CC_Install_LogPath}/DATA/cwc.json" } ###### Upload SiteID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ###### Upload ZoneID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } ###### Upload RLID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" destination = 
"${var.CC_Install_LogPath}/DATA/GetRLID.txt" } ###### Upload PreReqs script to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/InstallCWCOnCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallCWCOnCC.ps1" } ###### Upload Restart script to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/RestartCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/RestartCC.ps1" } } ###### Execute the PreReqs script on CC2 resource "null_resource" "CallRequiredScriptsOnCC2" { depends_on = [ null_resource.UploadRequiredComponentsToCC2 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = data.google_compute_address.IPOfCC2.address timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallCWCOnCC.ps1" ] } } #### Wait 30 mins after CWC creation before restart resource "time_sleep" "wait_1800_seconds_CC1" { depends_on = [ null_resource.CallRequiredScriptsOnCC1 ] create_duration = var.Provisioner_Reboot } #### Wait 30 mins after CWC creation before restart resource "time_sleep" "wait_1800_seconds_CC2" { depends_on = [ null_resource.CallRequiredScriptsOnCC2 ] create_duration = var.Provisioner_Reboot } ###### Execute the Restart script on CC1 resource "null_resource" "CallRebootScriptOnCC1" { depends_on = [ time_sleep.wait_1800_seconds_CC1 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = data.google_compute_address.IPOfCC1.address timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/RestartCC.ps1" ] } } ###### Execute the Restart script on CC2 resource "null_resource" "CallRebootScriptOnCC2" { depends_on = [ time_sleep.wait_1800_seconds_CC2 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = 
var.Provisioner_Admin-Password host = data.google_compute_address.IPOfCC2.address timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/RestartCC.ps1" ] } } Module 3: CConGCP-CitrixCloudStuff These are the Terraform configuration files for Module 3 (excerpts): provider.tf # Terraform deployment of Citrix DaaS on Google Cloud Platform (GCP) ## Definition of all required Terraform providers terraform { required_version = ">= 1.7.5" required_providers { restapi = { source = "Mastercard/restapi" version = ">=1.18.2" } citrix = { source = "citrix/citrix" version = ">=0.5.4" } google = { source = "hashicorp/google" version = ">=5.21.0" } } } # Configure the Google GCP Provider provider "google" { credentials = file(var.CCOnGCP-Creation-Provider-GCPAuthFileJSON) project = var.CCOnGCP-Creation-Provider-GCPProject region = var.CCOnGCP-Creation-Provider-GCPRegion zone = var.CCOnGCP-Creation-Provider-GCPZone } # Configure the Citrix Provider provider "citrix" { customer_id = "${var.CC_CustomerID}" client_id = "${var.CC_APIKey-ClientID}" client_secret = "${var.CC_APIKey-ClientSecret}" } CreateCCEntities.tf # Terraform deployment of Citrix DaaS on Google Cloud Platform ## Creating all Citrix Cloud-related entities ### Creating a Hypervisor Connection #### Retrieving the ZoneID data "local_file" "LoadZoneID" { filename = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } ### Retrieving the GCP credentials data "local_file" "GCPCredentials" { filename = "${var.CC_Install_LogPath}/DATA/tacg-gcp-406812-a8e71b537c99.json" } #### Creating the Hypervisor Connection resource "citrix_gcp_hypervisor" "CreateHypervisorConnection" { depends_on = [ data.local_file.LoadZoneID ] name = "${var.CC_GCP-HypConn-Name}" zone = data.local_file.LoadZoneID.content #service_account_id = "${var.CC_GCP-ServiceAccountID}" #service_account_credentials = data.local_file.GCPCredentials.content service_account_id = "${var.tv_client_email}" service_account_credentials = "${var.tv_private_key}" } #### Creating the Hypervisor Resource Pool resource "citrix_gcp_hypervisor_resource_pool" "CreateHypervisorPool" { depends_on = [ citrix_gcp_hypervisor.CreateHypervisorConnection ] name = "${var.CC_GCP-HypConnPool-Name}" hypervisor = citrix_gcp_hypervisor.CreateHypervisorConnection.id project_name = "${var.CCOnGCP-CCStuff-Provider-GCPProjectName}" region = "${var.CCOnGCP-CCStuff-Provider-GCPRegion}" vpc = "${var.CC_GCP-HypConnPool-VPC}" subnets = [ "${var.CC_GCP-HypConnPool-Subnet}", ] } ### Creating a Machine Catalog #### Retrieving the VPC ID data "google_compute_network" "GCPVPC" { name = "${var.CC_GCP-HypConnPool-VPC}" } #### Retrieving the Subnet based on the Subnet ID data "google_compute_subnetwork" "GCPSubnet" { name = "${var.CC_GCP-HypConnPool-Subnet}" } #### Sleep 60s to
let GCP Background processes settle resource "time_sleep" "wait_60_seconds" { depends_on = [ citrix_gcp_hypervisor_resource_pool.CreateHypervisorPool ] create_duration = "60s" } #### Create the Machine Catalog resource "citrix_machine_catalog" "CreateMCSCatalog" { depends_on = [ time_sleep.wait_60_seconds ] name = "${var.CC_GCP-MC-Name}" description = "${var.CC_GCP-MC-Description}" allocation_type = "${var.CC_GCP-MC-AllocationType}" session_support = "${var.CC_GCP-MC-SessionType}" is_power_managed = true is_remote_pc = false provisioning_type = "MCS" zone = data.local_file.LoadZoneID.content provisioning_scheme = { hypervisor = citrix_gcp_hypervisor.CreateHypervisorConnection.id hypervisor_resource_pool = citrix_gcp_hypervisor_resource_pool.CreateHypervisorPool.id identity_type = "${var.CC_GCP-MC-IDPType}" machine_domain_identity = { domain = "${var.CC_GCP-MC-Domain}" domain_ou = "${var.CC_GCP-MC-DomainOU}" service_account = "${var.CC_GCP-MC-DomainAdmin-Username-UPN}" service_account_password = "${var.CC_GCP-MC-DomainAdmin-Password}" } gcp_machine_config = { master_image = "${var.CC_GCP-MC-MasterImage}" storage_type = "${var.CC_GCP-MC-StorageType}" #machine_profile = "${var.CC_GCP-MC-MasterImage}" } number_of_total_machines = "${var.CC_GCP-MC-Machine_Count}" machine_account_creation_rules = { naming_scheme = "${var.CC_GCP-MC-Naming_Scheme_Name}" naming_scheme_type = "${var.CC_GCP-MC-Naming_Scheme_Type}" } } } #### Sleep 60s to let Citrix Cloud Background processes settle resource "time_sleep" "wait_60_seconds_1" { depends_on = [ citrix_machine_catalog.CreateMCSCatalog ] create_duration = "60s" } #### Create an Example-Policy Set resource "citrix_policy_set" "SetPolicies" { count = var.CC_GCP-Policy-IsNotDaaS ? 
1 : 0 #depends_on = [ time_sleep.wait_60_seconds_1 ] name = "${var.CC_GCP-Policy-Name}" description = "${var.CC_GCP-Policy-Description}" type = "DeliveryGroupPolicies" scopes = [ "All" ] policies = [ { name = "TACG-GCP-TF-Pol1" description = "Policy to enable use of Universal Printer" is_enabled = true policy_settings = [ { name = "UniversalPrintDriverUsage" value = "Use universal printing only" use_default = false }, ] policy_filters = [ { type = "DesktopGroup" is_enabled = true is_allowed = true }, ] }, { name = "TACG-GCP-TF-Pol2" description = "Policy to enable Client Drive Redirection" is_enabled = true policy_settings = [ { name = "ClientDriveRedirection" value = "Allowed" use_default = false }, ] policy_filters = [ { type = "DesktopGroup" is_enabled = true is_allowed = true }, ] } ] } #Sleep 60s to let Citrix Cloud Background processes settle resource "time_sleep" "Wait_60_Seconds_2" { depends_on = [ citrix_policy_set.SetPolicies ] create_duration = "60s" } #### Create the Delivery Group based on the Machine Catalog resource "citrix_delivery_group" "CreateDG" { depends_on = [ time_sleep.wait_60_seconds_1] name = "${var.CC_GCP-DG-Name}" associated_machine_catalogs = [ { machine_catalog = citrix_machine_catalog.CreateMCSCatalog.id machine_count = "${var.CC_GCP-MC-Machine_Count}" } ] desktops = [ { published_name = "${var.CC_GCP-DG-PublishedDesktopName}" description = "${var.CC_GCP-DG-Description}" restricted_access_users = { allow_list = [ "TACG-GCP\\vdaallowed" ] } enabled = true enable_session_roaming = var.CC_GCP-DG-SessionRoaming } ] autoscale_settings = { autoscale_enabled = true disconnect_peak_idle_session_after_seconds = 300 log_off_peak_disconnected_session_after_seconds = 300 peak_log_off_action = "Nothing" power_time_schemes = [ { days_of_week = [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" ] name = "${var.CC_GCP-DG-AS-Name}" display_name = "${var.CC_GCP-DG-AS-Name}" peak_time_ranges = [ "09:00-17:00" ] pool_size_schedules = [ { 
time_range = "09:00-17:00", pool_size = 1 } ] pool_using_percentage = false }, ] } restricted_access_users = { allow_list = [ "TACG-GCP\\vdaallowed" ] } reboot_schedules = [ { name = "TACG-GCP-Reboot Schedule" reboot_schedule_enabled = true frequency = "Weekly" frequency_factor = 1 days_in_week = [ "Sunday", ] start_time = "02:00" start_date = "2024-01-01" reboot_duration_minutes = 0 ignore_maintenance_mode = true natural_reboot_schedule = false } ] policy_set_id = var.CC_GCP-Policy-IsNotDaaS ? citrix_policy_set.SetPolicies[0].id : null }
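The Module 3 excerpts above consume a number of input variables (for example var.CC_CustomerID, var.CC_APIKey-ClientSecret, and var.CC_Install_LogPath) whose declarations are not shown. A minimal variables.tf sketch could look like the following; the types and descriptions below are illustrative assumptions, not the module's actual definitions:

```hcl
# Illustrative sketch only - the module's real variables.tf defines these
# (and many more) variables; types and descriptions here are assumptions.
variable "CC_CustomerID" {
  description = "Citrix Cloud customer ID"
  type        = string
}

variable "CC_APIKey-ClientID" {
  description = "Citrix Cloud API client ID"
  type        = string
}

variable "CC_APIKey-ClientSecret" {
  description = "Citrix Cloud API client secret"
  type        = string
  sensitive   = true
}

variable "CC_Install_LogPath" {
  description = "Working directory for installer payloads and log files"
  type        = string
}
```

Marking the API secret as sensitive = true keeps it out of plan and apply output; values can then be supplied through a terraform.tfvars file or TF_VAR_ environment variables instead of being hardcoded in the configuration.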
  6. NetScaler BLX is a software form factor of NetScaler ADC that runs natively on the Linux kernel, regardless of the underlying environment. It is designed to run on bare-metal Linux on commercial off-the-shelf (COTS) servers. The benefits of using a NetScaler BLX appliance include: Cloud-ready. Easy management. Seamless third-party tool integration. Coexistence with other applications. DPDK support. Why is there a need for a bare-metal version of NetScaler? NetScaler BLX appliances provide simplicity with no virtual machine overhead for better performance, and you can run a NetScaler BLX appliance on your preferred server hardware. Use cases - High traffic load, mission-critical applications, latency-sensitive workloads, North-South traffic. Characteristics - Lightweight software package and no VM overhead. BLX Deployment using Terraform Guide HashiCorp Terraform is an infrastructure-as-code tool used to orchestrate and manage IT infrastructure, including networking. Terraform codifies infrastructure into declarative configuration files for easier provisioning, compliance, and management. The CitrixBLX Terraform provider lets users bring up any number of NetScaler BLX instances in shared and DPDK modes (supporting both Intel and Mellanox interfaces). Together with the Citrix ADC Terraform provider, it lets users configure BLX appliances for various use cases such as global server load balancing, web application firewall policies, and more. With Terraform, you can share and reuse your NetScaler configurations across your environments, a key time saver when migrating applications from your data center to any public cloud. A. 
Setting up Requirements - Setting up the Terraform Client & Installing Go [Terraform](https://www.terraform.io/downloads.html) 0.10.x or later [Go](https://golang.org/doc/install) 1.11 (to build the provider plugin) After installing Go, set PATH and GOPATH accordingly: export PATH=$PATH:/usr/local/go/bin export GOPATH=/root/go/ B. Terraform plugin to Deploy BLX The Terraform provider for NetScaler BLX is not yet available through registry.terraform.io, so users have to install the provider manually. 1. Clone the repository into $GOPATH/src/github.com/citrix/terraform-provider-citrixblx: $ git clone git@github.com:citrix/terraform-provider-citrixblx $GOPATH/src/github.com/citrix/terraform-provider-citrixblx Then enter the provider directory and build the provider: $ cd $GOPATH/src/github.com/citrix/terraform-provider-citrixblx $ make build Navigating the repository: citrixblx folder - Contains the citrixblx resource file and modules leveraged by Terraform. examples folder - Contains the examples for users to deploy BLX. 2. Create the following directory on your local machine (for example, on an Ubuntu machine) and save the provider binary there. Note that the directory structure has to match the one below; you can change the version 0.0.1 to the provider version you built: mkdir -p /home/user/.terraform.d/plugins/registry.terraform.io/citrix/citrixblx/0.0.1/linux_amd64 Copy the terraform-provider-citrixblx binary to the newly created folder as shown below: cp $GOPATH/bin/terraform-provider-citrixblx /home/user/.terraform.d/plugins/registry.terraform.io/citrix/citrixblx/0.0.1/linux_amd64 C. Get Started on using Terraform to deploy NetScaler BLX To familiarize yourself with NetScaler BLX deployment through Terraform, let's start with the basic configuration of a shared mode BLX. The network mode of a NetScaler BLX appliance defines whether the NIC ports of the Linux host are shared or not shared with other Linux applications running on the host.
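As an alternative to the manual plugin-directory copy in section B, newer Terraform CLI versions (0.14 and later) support development overrides in the CLI configuration file. A sketch, assuming the built provider binary was installed to /root/go/bin (matching the GOPATH set in section A):

```hcl
# ~/.terraformrc - use a locally built provider binary instead of the registry.
# The path below is an assumption based on GOPATH=/root/go/ from section A.
provider_installation {
  dev_overrides {
    "citrix/citrixblx" = "/root/go/bin"
  }
  # All other providers are still installed from the public registry.
  direct {}
}
```

With this override active, terraform plan and terraform apply use the local build directly (Terraform prints a warning to that effect), and terraform init is not required for the overridden provider.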
A NetScaler BLX appliance can be configured to run in one of the following network modes: Shared mode - A NetScaler BLX appliance configured to run in shared mode shares the Linux host NIC ports with other Linux applications. Dedicated mode - A NetScaler BLX appliance configured in dedicated mode has dedicated Linux host NIC ports and does not share them with other Linux applications. In the deployment below, we bring up BLX in simple shared mode. Similar provider.tf and resources.tf files are available to bring up BLX in: DPDK mode (see the blx-dedicated directory in the examples folder) DPDK mode for Mellanox interfaces (see the blx-mlx directory in the examples folder) A secured way that does not disclose the BLX password (see the blx-sensitive-pass directory in the examples folder). 1. Navigate to the examples folder as below; here you can find many ready-to-use examples to get started: cd $GOPATH/src/github.com/citrix/terraform-provider-citrixblx/examples Let's deploy a simple shared mode NetScaler BLX: cd simple-blx-shared/ 2. provider.tf declares the citrixblx provider. For Terraform version 0.13 or later, edit simple-blx-shared/provider.tf as follows: terraform { required_providers { citrixblx = { source = "citrix/citrixblx" } } } provider "citrixblx" { } For Terraform versions earlier than 0.13, edit provider.tf as follows: provider "citrixblx" { } 3. resources.tf contains the desired state of the resources that you want to manage through Terraform. Here we want to create a shared mode BLX. Edit simple-blx-shared/resources.tf with your configuration values (the source path of the BLX package to be installed, host IP address, host username, host password, and BLX password) as below.
resource "citrixblx_adc" "blx_1" { source = "/root/blx-rpm-13.1-27.59.tar.gz" host = { ipaddress = "10.102.174.76" username = "user" password = "DummyHostPass" } config = { worker_processes = "3" } password = "DummyPassword" } resource "citrixblx_adc" "blx_2" { source = "/root/blx-rpm-13.1-27.59.tar.gz" host = { ipaddress = "10.102.56.25" username = "user" password = "DummyHostPass" } config = { worker_processes = "1" } password = var.blx_password } 4. Once provider.tf and resources.tf are edited and saved with the desired values in the simple-blx-shared folder, you are ready to run Terraform and configure NetScaler. Initialize Terraform by running terraform init inside the simple-blx-shared folder as follows: terraform-provider-citrixblx/examples/simple-blx-shared$ terraform init You should see the following output if Terraform was able to successfully find the citrixblx provider and initialize it: Initializing the backend... Initializing provider plugins... - Reusing previous version of hashicorp/citrixblx from the dependency lock file - Installing hashicorp/citrixblx v0.0.1... - Installed hashicorp/citrixblx v0.0.1 (unauthenticated) Terraform has been successfully initialized! You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary. 5. 
To view the changes that will be made to your NetScaler configurations, run terraform plan: # citrixblx_adc.blx_1 will be created + resource "citrixblx_adc" "blx_1" { + config = { + "worker_processes" = "3" } + host = { + "ipaddress" = "10.102.174.76" + "password" = "freebsd" + "username" = "root" } + id = (known after apply) + password = (sensitive) + source = "/root/blx-rpm-13.1-27.59.tar.gz" } # citrixblx_adc.blx_2 will be created + resource "citrixblx_adc" "blx_2" { + config = { + "worker_processes" = "1" } + host = { + "ipaddress" = "10.102.56.25" + "password" = "freebsd" + "username" = "root" } + id = (known after apply) + password = (sensitive) + source = "/root/blx-rpm-13.1-27.59.tar.gz" } 6. terraform apply - To apply the infrastructure end to end (install and bring up BLX): terraform apply citrixblx_adc.blx_2: Creating... citrixblx_adc.blx_1: Creating... citrixblx_adc.blx_1: Still creating... [10s elapsed] citrixblx_adc.blx_2: Still creating... [10s elapsed] citrixblx_adc.blx_2: Still creating... [20s elapsed] citrixblx_adc.blx_1: Still creating... [20s elapsed] citrixblx_adc.blx_1: Still creating... [30s elapsed] citrixblx_adc.blx_2: Still creating... [30s elapsed] . . citrixblx_adc.blx_1: Creation complete after 2m52s [id=10.102.174.76] citrixadc_nsip.nsip: Creating... citrixadc_service.tf_service: Creating... citrixadc_nsfeature.nsfeature: Creating... citrixadc_lbvserver.tf_lbvserver: Creating... citrixadc_nsfeature.nsfeature: Creation complete after 0s [id=tf-nsfeature-20220810125911768300000001] citrixadc_nsip.nsip: Creation complete after 0s [id=192.168.2.55] citrixadc_service.tf_service: Creation complete after 0s [id=tf_service] citrixadc_lbvserver.tf_lbvserver: Creation complete after 0s [id=tf_lbvserver] citrixadc_lbvserver_service_binding.tf_binding: Creating... citrixadc_lbvserver_service_binding.tf_binding: Creation complete after 0s [id=tf_lbvserver,tf_service] citrixblx_adc.blx_2: Still creating... 
[3m0s elapsed] citrixblx_adc.blx_2: Still creating... [3m10s elapsed] citrixblx_adc.blx_2: Still creating... [3m20s elapsed] citrixblx_adc.blx_2: Still creating... [3m30s elapsed] citrixblx_adc.blx_2: Still creating... [3m40s elapsed] citrixblx_adc.blx_2: Still creating... [3m50s elapsed] citrixblx_adc.blx_2: Still creating... [4m0s elapsed] citrixblx_adc.blx_2: Creation complete after 4m7s [id=10.102.56.25] D. Configuring BLX for a Load Balancing Use Case The Citrix ADC Terraform provider allows users to configure ADCs for various use cases such as global server load balancing, web application firewall policies, and more. Here is how to integrate both plugins to configure BLX: 1. Edit simple-blx-shared/provider.tf as follows and add the details of your target ADC: provider "citrixadc" { endpoint = "http://10.102.174.76:9080" username = "user" password = "DummyPassword" } 2. Add a config.tf that specifies the configuration to be applied on NetScaler BLX. Notice the depends_on argument used to apply the configuration on a particular BLX instance. In the config.tf example below, LB vserver configurations are applied on BLX instance blx_1. resource "citrixadc_nsip" "nsip" { ipaddress = "192.168.2.55" type = "VIP" netmask = "255.255.255.0" icmp = "ENABLED" depends_on = [ citrixblx_adc.blx_1 ] state = "ENABLED" } resource "citrixadc_nsfeature" "nsfeature" { lb = true depends_on = [ citrixblx_adc.blx_1 ] } resource "citrixadc_lbvserver" "tf_lbvserver" { ipv46 = "10.10.10.33" name = "tf_lbvserver" port = 80 depends_on = [ citrixblx_adc.blx_1 ] servicetype = "HTTP" } resource "citrixadc_service" "tf_service" { name = "tf_service" ip = "192.168.43.33" depends_on = [ citrixblx_adc.blx_1 ] servicetype = "HTTP" port = 80 } resource "citrixadc_lbvserver_service_binding" "tf_binding" { name = citrixadc_lbvserver.tf_lbvserver.name servicename = citrixadc_service.tf_service.name weight = 1 } 3. After adding the configuration above, run terraform plan and terraform apply. 4. 
Terraform destroy – To destroy the infrastructure. [root@localhost simple-blx-shared]# terraform destroy citrixblx_adc.blx_2: Refreshing state... [id=10.102.56.25] citrixblx_adc.blx_1: Refreshing state... [id=10.102.174.76] citrixadc_nsip.nsip: Refreshing state... [id=192.168.2.55] citrixadc_nsfeature.nsfeature: Refreshing state... [id=tf-nsfeature-20220810125911768300000001] citrixadc_service.tf_service: Refreshing state... [id=tf_service] citrixadc_lbvserver.tf_lbvserver: Refreshing state... [id=tf_lbvserver] citrixadc_lbvserver_service_binding.tf_binding: Refreshing state... [id=tf_lbvserver,tf_service] Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: - destroy citrixblx_adc.blx_2: Destroying... [id=10.102.56.25] citrixadc_lbvserver_service_binding.tf_binding: Destroying... [id=tf_lbvserver,tf_service] citrixadc_nsfeature.nsfeature: Destroying... [id=tf-nsfeature-20220810125911768300000001] citrixadc_nsip.nsip: Destroying... [id=192.168.2.55] citrixadc_nsfeature.nsfeature: Destruction complete after 0s citrixadc_lbvserver_service_binding.tf_binding: Destruction complete after 0s citrixadc_service.tf_service: Destroying... [id=tf_service] citrixadc_lbvserver.tf_lbvserver: Destroying... [id=tf_lbvserver] citrixadc_service.tf_service: Destruction complete after 0s citrixadc_nsip.nsip: Destruction complete after 0s citrixadc_lbvserver.tf_lbvserver: Destruction complete after 1s citrixblx_adc.blx_1: Destroying... [id=10.102.174.76] citrixblx_adc.blx_2: Still destroying... [id=10.102.56.25, 10s elapsed] citrixblx_adc.blx_2: Destruction complete after 10s citrixblx_adc.blx_1: Still destroying... [id=10.102.174.76, 10s elapsed] citrixblx_adc.blx_1: Destruction complete after 10s Destroy complete! Resources: 7 destroyed. Conclusion As we see above, Terraform abstracts the ADC technicalities and makes it easy to codify and integrate ADC with other applications. 
You can use the NetScaler BLX and Citrix ADC Terraform providers together as an integrated solution for end-to-end NetScaler BLX deployments as is, or customize it to your requirements. Citrix ADC Terraform modules enable an infrastructure-as-code approach and integrate seamlessly with your automation environment to provide self-service infrastructure. References BLX Documentation - https://docs.citrix.com/en-us/citrix-adc-blx/current-release.html Terraform provider for BLX - https://github.com/citrix/terraform-provider-citrixblx Terraform provider to configure ADC - https://registry.terraform.io/providers/citrix/citrixadc/latest/docs
  7. Introduction NetScaler supports one-time passwords (OTPs) without using a third-party server. OTPs are a highly secure option for authenticating to secure servers, as the number or passcode generated is random. Previously, OTPs were offered by specialized firms, such as RSA, with specific devices that generate random numbers. In addition to reducing capital and operating expenses, this feature enhances the administrator’s control by keeping the entire configuration on the NetScaler appliance. To use the OTP solution, a user must register with a NetScaler virtual server. Registration is required only once per unique device and can be restricted to certain environments. Configuring and validating a registered user is similar to configuring an extra authentication policy. This POC guide shows how a single UI (logon form) can be leveraged for both the OTP registration and OTP validation flows, instead of sending users to different endpoints for each. NetScaler Configuration VPN vserver and AAA vserver creation add vpn vserver test.aaadomain.net SSL 10.106.1.1 443 add authentication vserver aaavserver1 SSL 0.0.0.0 Creating and binding an authnprofile to the VPN vserver (for advanced or nFactor OTP configuration) add authnprofile authnprof -authnVsName aaavserver1 set vpn vserver test.aaadomain.net -authnprofile authnprof Creating and binding a single unified login schema for OTP registration and validation add authentication loginSchema otpregistrationorvalidation -authenticationSchema "/nsconfig/loginschema/LoginSchema/DualAuthOrOTPRegisterDynamic.xml" add authentication loginSchemaPolicy otpregistrationorvalidation -rule true -action otpregistrationorvalidation bind authentication vserver aaavserver1 -policy otpregistrationorvalidation -priority 1 -gotoPriorityExpression END OTP Registration flow add authentication ldapAction ldap -serverIP 10.106.7.50 -serverPort 636 -ldapBase "dc=xyz,dc=com" -ldapBindDn test@xyz.com -ldapBindDnPassword test@123 -ldapLoginName samAccountName 
add authentication Policy ldap-registration -rule "aaa.login.VALUE(\"otpregister\").eq(\"true\")" -action ldap add authentication policylabel otp-registration -loginSchema LSCHEMA_INT add authentication ldapAction ldap-otp -serverIP 10.106.7.50 -serverPort 636 -ldapBase "dc=xyz,dc=com" -ldapBindDn test@xyz.com -ldapBindDnPassword test@123 -ldapLoginName sAMAccountName -secType SSL -authentication DISABLED -OTPSecret userParameters add authentication Policy ldap-otp -rule true -action ldap-otp bind authentication policylabel otp-registration -policyName ldap-otp -priority 1 -gotoPriorityExpression NEXT bind authentication vserver aaavserver1 -policy ldap-registration -priority 1 -nextFactor otp-registration -gotoPriorityExpression NEXT OTP Validation flow add authentication Policy ldap -rule true -action ldap >>> The same ldap action/profile created for OTP registration can be used for the OTP validation flow as well add authentication policylabel otp-validation -loginSchema LSCHEMA_INT bind authentication policylabel otp-validation -policyName ldap-otp -priority 1 -gotoPriorityExpression NEXT >>> The same ldap-otp policy created for OTP registration can be used for the OTP validation flow as well bind authentication vserver aaavserver1 -policy ldap -priority 2 -nextFactor otp-validation -gotoPriorityExpression NEXT CLI snippet for the nFactor configuration on the AAA vserver (here aaavserver1) > sh authentication vs aaavserver1 aaavserver1 (10.106.1.1:443) - SSL IPSet: ??? Type: CONTENT State: UP Client Idle Timeout: 180 sec Down state flush: DISABLED Disable Primary Vserver On Down: DISABLED HTTP profile name: nshttp_default_strict_validation Network profile name: ??? Appflow logging: ENABLED Authentication: ON Device Certificate Check: ??? CGInfra Homepage Redirect: ??? Current AAA Sessions: 0 Current Users: 0 Dtls: ??? L2Conn: ??? RDP Server Profile Name: ??? Max Login Attempts: 0 Failed Login Timeout: 0 Fully qualified domain name: ??? PCoIP VServer Profile Name: ??? 
Listen Policy: NONE
Listen Priority: 0
IcmpResponse: ???
RHIstate: ???
Traffic Domain: 0
Probe Protocol: ???

1) LoginSchema Policy Name: otpregistrationorvalidation  Priority: 1  GotoPriority Expression: END
1) Advanced Authentication Policy Name: ldap-registration  Priority: 1  GotoPriority Expression: NEXT  NextFactor name: otp-registration
2) Advanced Authentication Policy Name: ldap  Priority: 2  GotoPriority Expression: NEXT  NextFactor name: otp-validation

User Endpoint

Now we test the above configuration.

OTP Registration flow with the Citrix SSO app

1. Open a browser and navigate to the domain FQDN managed by the NetScaler Gateway. We use https://test.aaadomain.net.
2. The login screen appears after your browser is redirected. To register a new device, select the “Click to register” checkbox.
3. On the next screen, enter the Username, Password, and Device Name to be registered.
4. On your mobile device, open the Citrix SSO app and scan the QR code.
5. Select Done, and you will see confirmation that the device was added successfully. You can also test whether the device was added successfully by clicking the “Test” button and entering the OTP from your Citrix SSO app.

OTP Validation flow

1. Open a browser and navigate to the domain FQDN managed by the NetScaler Gateway. We use https://test.aaadomain.net.
2. After your browser is redirected to a login screen, enter your Username, Password, and Passcode (the OTP from the Citrix SSO app for the Android1 device) if your device is already registered.
3. On successful authentication, you are logged in to Citrix Gateway.
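If registration appears to succeed but validation fails, one way to confirm that the device secret was actually written to the AD attribute configured as -OTPSecret (userParameters in the configuration above) is a standard ldapsearch query against the same directory. This is a sketch: the user name "user1" is hypothetical, and it assumes the OpenLDAP client tools are available on your workstation.

```shell
# Query the userParameters attribute for a hypothetical test user "user1".
# The bind DN, password, and search base match the ldapAction defined earlier in this guide.
ldapsearch -H ldaps://10.106.7.50 -D "test@xyz.com" -w 'test@123' \
  -b "dc=xyz,dc=com" "(sAMAccountName=user1)" userParameters
```

A registered device shows up as an additional device entry inside userParameters; the exact encoding of the entry is internal to NetScaler, but its presence or absence tells you whether the registration factor wrote to the directory.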
  8. Overview

This proof of concept guide is designed to provide a step-by-step method to deploy an instance of the NetScaler VPX on Nutanix AHV and prepare it for use. NetScaler VPX running on Nutanix AHV is supported through the Citrix Ready Program. This guide will assist in deploying a VPX appliance using Prism Element with some basic best practices. This guide will NOT cover the specific needs for every deployment. It is recommended that deployments and testing are conducted to define the best method for a particular need.

Nutanix Acropolis Hypervisor (AHV) is a modern and secure virtualization platform that powers VMs and containers for applications and cloud-native workloads on-premises and in public clouds that can run any application at any scale.

Prerequisites

This guide assumes the following prerequisites have been completed:

- Nutanix AHV is configured and ready for use
- Nutanix Prism Element will be used for the deployment (not Prism Central)
- Sufficient resources are available to support the recommended VM configuration:
  - The NetScaler VPX requires a minimum of 2 vCPUs and 2 GB of RAM (4 GB RAM or more is recommended)
  - At least one vNIC (2 or more vNICs recommended for Management and Production networks)
  - At least 20 GB of disk space
- A basic understanding of Nutanix AHV
- A basic understanding of Nutanix Prism Element
- Familiarity with the Acropolis Command Line Interface (ACLI)
- Familiarity with the initial setup of a NetScaler VPX appliance

Considerations for NetScaler VPX appliances

A proof of concept deployment is set up to try out different functions of the VPX appliance. With a POC deployment, customers can:

- Try different features
- Familiarize themselves with the environment
- Try different configurations to see how they impact performance, usability, etc.

A POC is not intended for production workloads and should only be utilized for learning and feasibility purposes.
Therefore, a virtual appliance running with 2 vCPUs, 4 GB RAM, and 20 GB of disk should be sufficient. In a production environment, it is recommended to provision the appliance with adequate resources for the expected workload. With a virtual appliance on Nutanix AHV, scaling resources up or down is very easy, making the virtual appliance very flexible. To determine the required resources for your workload, use the NetScaler Form Factors Datasheet.

Deploying the NetScaler VPX

1. Download the VPX virtual appliance (the example below shows the latest 14.1 version of the firmware; however, other versions are available for AHV should this meet your business requirements). Download the “Citrix ADC VPX for KVM” file. On the first extraction, it becomes a “tar” file. Extract that until you see the “.qcow2” and “.xml” files.
2. Log in to Prism Element (not Prism Central).
3. From Home, select Settings.
4. Choose Image Configuration.
5. Give the image a name, select the “DISK” image type, and pick a storage container.
6. Choose “Upload a file”, navigate to the NetScaler VPX “.qcow2” file, and choose “Save” to create the image.
7. Once the file uploads, you should see the image listed with a status of “ACTIVE”. This may take some time as Prism Element processes the image file.
8. Navigate to VM and then click Create VM.
9. On the Create VM screen, remove the CD-ROM drive.
10. Add a new disk:
    - Select “Clone from the image service” from the drop-down menu.
    - In the Bus Type, select “SCSI”. (Note: The NetScaler VPX has been deployed with PCI, SCSI, SATA, and IDE bus disks without issue.)
    - Choose the NetScaler image that was uploaded.
    - Choose “Add”.
11. The disk is then added.
12. Add VLANs as necessary. A minimum of two VLANs (Management and LAN) is recommended.
13. Do not set affinity now, as it will be set later in this guide.
14. Choose “Save”.

Once the VM is listed and shows as powered off, we must add a serial port.
The VM appliance will not boot without a serial port connection, and Nutanix AHV does not add a serial port by default.

To add the serial port:

1. SSH into the CVM using the username “nutanix” and the password you set for that account. (You can find a list of CVM IP addresses in the “Hardware” section of the Prism Element console.)
2. Enter the ACLI:
acli
3. Enter the following command to create the serial port, where <vmname> is the name you gave to the VPX appliance:
vm.serial_port_create <vmname> type=kServer index=0

At this point, you can snapshot the VM to be used as a template later should you wish to deploy more instances (an HA pair, for example).

Initial Configuration

1. Power on the VM.
2. Launch the VNC console.
3. Watch the VM boot.
4. Log in with the default credentials of nsroot/nsroot. You will be prompted to change the password. It is recommended that you change it at this time.
5. Manually run the “config ns” command from the CLI:
   - Assign the IP
   - Enter the netmask
   - Choose “Apply changes and exit”
6. Restart the VM.

When the appliance reboots, log back into the CLI and add the default route using the command below, replacing <default_route> with the default route assigned to the network that your NSIP resides on:

route add 0.0.0.0 0.0.0.0 <default_route>

Save the configuration using the command below to ensure the default route persists during a reboot:

save ns config

Now you can connect to the GUI. After this point, the configuration proceeds like any other NetScaler setup.

Additional Considerations

High CPU usage

CPU usage will show high by default on NetScaler VPX appliances. If you desire to enable CPU sharing, then you should enable CPU Yield.

1. From the GUI:
   - Navigate to Settings and click the “Change VPX Settings” link.
   - Change “CPU Yield” to Yes.
   - Save the configuration.
2. From the CLI:
set ns vpxparam -cpuyield YES

Running a pair of appliances for high availability (HA)

If you are going to run an HA pair of appliances, it is recommended that you set anti-affinity rules so the appliances always run on separate AHV hosts. To accomplish this:

1. Log in to the CVM via SSH.
2. Create the VM group, where <vmgroupname> is the name you give to the group of NetScalers you deployed on AHV:
vm_group.create <vmgroupname>
3. Add the existing NetScalers to the group, where <vmgroupname> is the name from the previous step, and <vm1name> and <vm2name> are the NetScaler VMs to be added to the group:
vm_group.add_vms <vmgroupname> vm_list=<vm1name>,<vm2name>
4. Set the anti-affinity rule, where <vmgroupname> is the name given in step 2 above:
vm_group.antiaffinity_set <vmgroupname>

Disaster Recovery and GSLB

If multiple sites are to be used and Global Server Load Balancing (GSLB) is utilized for access, it is recommended that an HA pair of NetScalers be deployed on AHV at both locations. You can then use Nutanix technologies such as DR replication to ensure the availability of your NetScaler pair should you experience a cluster outage. More information on Nutanix DR replication can be found here.

Resources

NetScaler Form Factors Datasheet
FAQ on Deploying a NetScaler VPX
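As a quick sanity check before relying on the HA pair, the serial port and the VM group created above can be inspected from the same ACLI session. This is a sketch only: <vmname> and <vmgroupname> are the names used earlier in this guide, the commands follow the same acli <entity>.<verb> pattern used above, and the exact output fields vary by AOS version.

```shell
acli vm.get <vmname>        # the serial port created earlier appears in the VM's configuration
acli vm_group.list          # confirms the VM group created for the anti-affinity rule exists
```

If the serial port is missing, the VPX console will appear to hang at boot, which is the most common symptom of skipping the vm.serial_port_create step.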
  9. Overview

NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments.

As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.

NetScaler VPX

The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms:

- XenServer
- VMware ESX
- Microsoft Hyper-V
- Linux KVM
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform

This deployment guide focuses on NetScaler ADC VPX on Microsoft Azure.

Microsoft Azure

Microsoft Azure is an ever-expanding set of cloud computing services built to help organizations meet their business challenges. Azure gives users the freedom to build, manage, and deploy applications on a massive, global network using their preferred tools and frameworks. With Azure, users can:

- Be future-ready with continuous innovation from Microsoft to support their development today and their product visions for tomorrow.
- Operate hybrid cloud seamlessly on-premises, in the cloud, and at the edge—Azure meets users where they are.
- Build on their terms with Azure’s commitment to open source and support for all languages and frameworks, allowing users to be free to build how they want and deploy where they want.
- Trust their cloud with security from the ground up—backed by a team of experts and proactive, industry-leading compliance that is trusted by enterprises, governments, and startups.

Azure Terminology

Here is a brief description of the key terms used in this document that users must be familiar with:

Azure Load Balancer – The Azure load balancer is a resource that distributes incoming traffic among computers in a network. Traffic is distributed among virtual machines defined in a load-balancer set. A load balancer can be external or internet-facing, or it can be internal.

Azure Resource Manager (ARM) – ARM is the management framework for services in Azure. The Azure Load Balancer is managed using ARM-based APIs and tools.

Back-End Address Pool – The IP addresses associated with the virtual machine NICs to which load is distributed.

BLOB (Binary Large Object) – Any binary object, like a file or an image, that can be stored in Azure storage.

Front-End IP Configuration – An Azure Load Balancer can include one or more front-end IP addresses, also known as virtual IPs (VIPs). These IP addresses serve as ingress points for the traffic.

Instance Level Public IP (ILPIP) – An ILPIP is a public IP address that users can assign directly to a virtual machine or role instance, rather than to the cloud service that the virtual machine or role instance resides in. This does not take the place of the VIP (virtual IP) that is assigned to the cloud service. Rather, it is an extra IP address that can be used to connect directly to a virtual machine or role instance. Note: In the past, an ILPIP was referred to as a PIP, which stands for public IP.
Inbound NAT Rules – These contain rules mapping a public port on the load balancer to a port for a specific virtual machine in the back-end address pool.

IP-Config – An IP address pair (public IP and private IP) associated with an individual NIC. In an IP-Config, the public IP address can be NULL. Each NIC can have multiple IP-Configs associated with it, up to 255.

Load Balancing Rules – A rule property that maps a given front-end IP and port combination to a set of back-end IP addresses and port combinations. With a single definition of a load balancer resource, users can define multiple load balancing rules, each rule reflecting a combination of a front-end IP and port and a back-end IP and port associated with virtual machines.

Network Security Group (NSG) – An NSG contains a list of Access Control List (ACL) rules that allow or deny network traffic to virtual machine instances in a virtual network. NSGs can be associated with either subnets or individual virtual machine instances within that subnet. When an NSG is associated with a subnet, the ACL rules apply to all the virtual machine instances in that subnet. In addition, traffic to an individual virtual machine can be restricted further by associating an NSG directly with that virtual machine.

Private IP Addresses – Used for communication within an Azure virtual network, and with a user's on-premises network when a VPN gateway is used to extend the network to Azure. Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an internet-reachable IP address. In the Azure Resource Manager deployment model, a private IP address is associated with the following types of Azure resources: virtual machines, internal load balancers (ILBs), and application gateways.
Probes – These contain the health probes used to check the availability of virtual machine instances in the back-end address pool. If a particular virtual machine does not respond to health probes for some time, it is taken out of traffic serving. Probes enable users to track the health of virtual instances. If a health probe fails, the virtual instance is taken out of rotation automatically.

Public IP Addresses (PIP) – A PIP is used for communication with the Internet, including Azure public-facing services, and is associated with virtual machines, internet-facing load balancers, VPN gateways, and application gateways.

Region – An area within a geography that does not cross national borders and that contains one or more data centers. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away, to form a regional pair. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to generally as a location.

Resource Group – A container in Resource Manager that holds related resources for an application. The resource group can include all resources for an application, or only those resources that are logically grouped.

Storage Account – An Azure storage account gives users access to the Azure Blob, Queue, Table, and File services in Azure Storage. A storage account provides the unique namespace for a user's Azure storage data objects.

Virtual Machine – The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in various sizes.

Virtual Network – An Azure virtual network is a representation of a user's network in the cloud. It is a logical isolation of the Azure cloud dedicated to a user's subscription.
Users can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. Users can also further segment their VNet into subnets and launch Azure IaaS virtual machines and cloud services (PaaS role instances). Also, users can connect the virtual network to their on-premises network using one of the connectivity options available in Azure. In essence, users can expand their network to Azure, with complete control over IP address blocks, and with the benefit of the enterprise scale Azure provides.

Use Cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on Azure enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.

Global Server Load Balancing (GSLB)

Global Server Load Balancing (GSLB) is a key requirement for many customers. Those businesses have an on-prem data center presence serving regional customers, but with increasing demand for their business, they now want to scale and deploy their presence globally across AWS and Azure while maintaining their on-prem presence for regional customers. Customers want to do all of this with automated configurations as well. Thus, they are looking for a solution that can rapidly adapt to either evolving business needs or changes in the global market.
With NetScaler ADC on the network administrator’s side, customers can use the Global Load Balancing (GLB) StyleBook to configure applications both on-prem and in the cloud, and that same configuration can be transferred to the cloud with NetScaler ADM. With GSLB, users can reach either on-prem or cloud resources depending on proximity. This allows for a seamless experience no matter where the users are located in the world.

Deployment Types

Multi-NIC Multi-IP Deployment (Three-NIC Deployment)

Use Cases

Multi-NIC Multi-IP (Three-NIC) deployments are used to:

- Achieve real isolation of data and management traffic.
- Improve the scale and performance of the ADC.
- Serve network applications where throughput is typically 1 Gbps or higher (a Three-NIC deployment is recommended for these).
- Serve network applications that include a WAF deployment.

Multi-NIC Multi-IP (Three-NIC) Deployment for GSLB

Customers would potentially deploy using a three-NIC deployment if they are deploying into a production environment where security, redundancy, availability, capacity, and scalability are critical. With this deployment method, complexity and ease of management are not critical concerns to the users.

Azure Resource Manager (ARM) Template Deployment

Customers would deploy using Azure Resource Manager (ARM) templates if they are customizing their deployments or automating them.

Deployment Steps

When users deploy a NetScaler ADC VPX instance on Microsoft Azure Resource Manager (ARM), they can use the Azure cloud computing capabilities and use NetScaler ADC load balancing and traffic management features for their business needs. Users can deploy NetScaler ADC VPX instances on Azure Resource Manager either as standalone instances or as high availability pairs in active-standby modes.
Users can deploy a NetScaler ADC VPX instance on Microsoft Azure in either of two ways:

- Through the Azure Marketplace. The NetScaler ADC VPX virtual appliance is available as an image in the Microsoft Azure Marketplace, and NetScaler ADC ARM templates are available there for standalone and HA deployment types.
- Using the NetScaler ADC Azure Resource Manager (ARM) JSON template available on GitHub. For more information, see the GitHub repository for NetScaler ADC Azure Templates.

How a NetScaler ADC VPX Instance Works on Azure

In an on-premises deployment, a NetScaler ADC VPX instance requires at least three IP addresses:

- Management IP address, called the NSIP address
- Subnet IP (SNIP) address for communicating with the server farm
- Virtual server IP (VIP) address for accepting client requests

For more information, see: Network Architecture for NetScaler ADC VPX Instances on Microsoft Azure.

Note: VPX virtual appliances can be deployed on any instance type that has two or more cores and more than 2 GB of memory.

In an Azure deployment, users can provision a NetScaler ADC VPX instance on Azure in three ways:

- Multi-NIC multi-IP architecture
- Single-NIC multi-IP architecture
- ARM (Azure Resource Manager) templates

Depending on requirements, users can deploy any of these supported architecture types.

Multi-NIC Multi-IP Architecture (Three-NIC)

In this deployment type, users can have more than one network interface (NIC) attached to a VPX instance. Any NIC can have one or more IP configurations, with static or dynamic public and private IP addresses assigned to it.
Refer to the following use cases:

- Configure a High-Availability Setup with Multiple IP Addresses and NICs
- Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands

Configure a High-Availability Setup with Multiple IP Addresses and NICs

In a Microsoft Azure deployment, a high-availability configuration of two NetScaler ADC VPX instances is achieved by using the Azure Load Balancer (ALB). This is done by configuring a health probe on the ALB, which monitors each VPX instance by sending a health probe every 5 seconds to both the primary and secondary instances.

In this setup, only the primary node responds to health probes; the secondary does not. Once the primary sends a response to the health probe, the ALB starts sending data traffic to that instance. If the primary instance misses two consecutive health probes, the ALB stops directing traffic to that instance. On failover, the new primary starts responding to health probes and the ALB redirects traffic to it. The standard VPX high availability failover time is three seconds, so the total failover time for traffic switching can be a maximum of 13 seconds (two missed 5-second probes plus the 3-second VPX failover).

Users can deploy a pair of NetScaler ADC VPX instances with multiple NICs in an active-passive high availability (HA) setup on Azure. Each NIC can contain multiple IP addresses.

The following options are available for a multi-NIC high availability deployment:

- High availability using an Azure availability set
- High availability using Azure availability zones

For more information about Azure Availability Sets and Availability Zones, see the Azure documentation: Manage the Availability of Linux Virtual Machines.

High Availability using Availability Set

A high availability setup using an availability set must meet the following requirements:

- An HA Independent Network Configuration (INC)
- The Azure Load Balancer (ALB) in Direct Server Return (DSR) mode

All traffic goes through the primary node.
The secondary node remains in standby mode until the primary node fails.

Note: For a NetScaler VPX high availability deployment on the Azure cloud to work, users need a floating public IP (PIP) that can be moved between the two VPX nodes. The Azure Load Balancer (ALB) provides that floating PIP, which is moved to the second node automatically in the event of a failover.

In an active-passive deployment, the ALB front-end public IP (PIP) addresses are added as the VIP addresses in each VPX node. In an HA-INC configuration, the VIP addresses are floating and the SNIP addresses are instance specific.

Users can deploy a VPX pair in active-passive high availability mode in two ways:

- NetScaler ADC VPX standard high availability template: use this option to configure an HA pair with the default option of three subnets and six NICs.
- Windows PowerShell commands: use this option to configure an HA pair according to your subnet and NIC requirements.

This section describes how to deploy a VPX pair in an active-passive HA setup by using the NetScaler template. If you want to deploy with PowerShell commands, see Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands.

Configure HA-INC Nodes by using the NetScaler High Availability Template

Users can quickly and efficiently deploy a pair of VPX instances in HA-INC mode by using the standard template. The template creates two nodes, with three subnets and six NICs. The subnets are for management, client, and server-side traffic, and each subnet has two NICs, one for each of the VPX instances.

Complete the following steps to launch the template and deploy a high availability VPX pair by using Azure Availability Sets:

- From Azure Marketplace, select and initiate the NetScaler solution template. The template appears.
- Ensure the deployment type is Resource Manager and select Create. The Basics page appears.
- Create a Resource Group and select OK. The General Settings page appears.
- Type the details and select OK. The Network Setting page appears.
- Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears.
- Review the configuration and edit accordingly. Select OK to confirm. The Buy page appears.
- Select Purchase to complete the deployment.

It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, and health probes. The high availability pair appears as ns-vpx0 and ns-vpx1.

If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal.

Next, users need to configure the load-balancing virtual server with the ALB’s front-end public IP (PIP) address on the primary node. To find the ALB PIP, select ALB > Frontend IP configuration. See the Resources section for more information about how to configure the load-balancing virtual server.

Resources: The following links provide additional information related to HA deployment and virtual server configuration:

- Configuring High Availability Nodes in Different Subnets
- Set up Basic Load Balancing

Related resources:

- Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands
- Configure GSLB on an Active-Standby High-Availability Setup

High Availability using Availability Zones

Azure Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking, and increasing resiliency. Only specific Azure regions support Availability Zones. For more information, see: Regions and Availability Zones in Azure.

Users can deploy a VPX pair in high availability mode by using the template called “NetScaler 13.0 HA using Availability Zones,” available in Azure Marketplace.
Complete the following steps to launch the template and deploy a high availability VPX pair by using Azure Availability Zones:

- From Azure Marketplace, select and initiate the NetScaler solution template.
- Ensure the deployment type is Resource Manager and select Create. The Basics page appears.
- Enter the details and click OK. Note: Ensure that an Azure region that supports Availability Zones is selected. For more information about regions that support Availability Zones, see: Regions and Availability Zones in Azure.
- The General Settings page appears. Type the details and select OK.
- The Network Setting page appears. Check the VNet and subnet configurations, edit the required settings, and select OK.
- The Summary page appears. Review the configuration and edit accordingly. Select OK to confirm.
- The Buy page appears. Select Purchase to complete the deployment.

It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, and health probes. The high availability pair appears as ns-vpx0 and ns-vpx1. Also, users can see the location under the Location column.

If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal.

ARM (Azure Resource Manager) Templates

The GitHub repository for NetScaler ADC ARM (Azure Resource Manager) templates hosts NetScaler ADC Azure Templates for deploying NetScaler ADC in Microsoft Azure Cloud Services. All templates in the repository are developed and maintained by the NetScaler ADC engineering team. Each template in this repository has co-located documentation describing its usage and architecture.
The templates attempt to codify the recommended deployment architecture of the NetScaler ADC VPX, to introduce the user to the NetScaler ADC, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require an Azure subscription with sufficient permissions to create resources and deploy templates.

NetScaler ADC VPX Azure Resource Manager (ARM) templates are designed to ensure an easy and consistent way of deploying standalone NetScaler ADC VPX instances. These templates increase reliability and system availability with built-in redundancy. The ARM templates support Bring Your Own License (BYOL) or hourly-based licensing selections. The choice of selection is either mentioned in the template description or offered during template deployment. For more information on how to provision a NetScaler ADC VPX instance on Microsoft Azure using ARM (Azure Resource Manager) templates, visit NetScaler ADC Azure Templates.

NetScaler ADC GSLB and Domain-Based Services Back-end Autoscale with Cloud Load Balancer

GSLB and DBS Overview

NetScaler ADC GSLB supports using DBS (Domain-Based Services) for cloud load balancers. This allows for the auto-discovery of dynamic cloud services using a cloud load balancer solution. This configuration allows the NetScaler ADC to implement Global Server Load Balancing Domain-Name Based Services (GSLB DBS) in an Active-Active environment. DBS allows the scaling of back-end resources in Microsoft Azure environments through DNS discovery. This section covers integrations between NetScaler ADC and Azure Auto Scaling environments. The final section of the document details the ability to set up an HA pair of NetScaler ADCs that span two different Availability Zones (AZs) within an Azure region.
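The DBS auto-discovery behavior described above reduces to a small amount of NetScaler CLI. The sketch below uses hypothetical object names and an example ALB DNS label; the resolver IP shown is Azure's well-known virtual DNS address, but any nameserver that can resolve the ALB FQDN works:

```
add dns nameServer 168.63.129.16                        # resolver able to resolve the ALB FQDN (Azure virtual DNS)
add server alb-east alb-east.eastus.cloudapp.azure.com  # domain-based server pointing at the ALB front end
add gslb site site-azure-east 10.10.1.10                # GSLB site on this ADC
add gslb serviceGroup sg-alb-east HTTP -autoScale DNS -siteName site-azure-east
bind gslb serviceGroup sg-alb-east alb-east 80          # members are discovered via DNS and kept in sync
```

With -autoScale DNS set, the ADC re-resolves the bound FQDN and adds or removes service group members as instances behind the ALB come and go, which is the manual-update problem DBS removes.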
Domain-Name Based Services – Azure ALB

GSLB DBS utilizes the FQDN of the user’s Azure Load Balancer to dynamically update the GSLB service groups to include the back-end servers that are being created and deleted within Azure. To configure this feature, users point the NetScaler ADC to their Azure Load Balancer to dynamically route to different servers in Azure. They can do this without having to manually update the NetScaler ADC every time an instance is created or deleted within Azure. The NetScaler ADC DBS feature for GSLB service groups uses DNS-aware service discovery to determine the member service resources of the DBS namespace identified in the Autoscale group.

Diagram: NetScaler ADC GSLB DBS Autoscale Components with Cloud Load Balancers

Configuring Azure Components

1. Log in to the Azure Portal and create a new virtual machine from a NetScaler ADC template.
2. Create an Azure Load Balancer.
3. Add the created NetScaler ADC to the back-end pool.
4. Create a health probe for port 80.
5. Create a load balancing rule utilizing the front-end IP created from the load balancer:
   - Protocol: TCP
   - Backend Port: 80
   - Backend pool: NetScaler ADC created in step 1
   - Health Probe: Created in step 4
   - Session Persistence: None

Configure NetScaler ADC GSLB Domain Based Service

The following configurations summarize what is required to enable domain-based services for autoscaling ADCs in a GSLB-enabled environment.

Traffic Management Configurations

Note: It is required to configure the NetScaler ADC with either a nameserver or a DNS virtual server through which the ELB/ALB domains are resolved for the DNS service groups.

1. Navigate to Traffic Management > Load Balancing > Servers.
2. Click Add to create a server; provide a name and the FQDN corresponding to the A record (domain name) in Azure for the Azure Load Balancer (ALB).
3. Repeat step 2 to add the second ALB from the second resource in Azure.

GSLB Configurations

1. Click the Add button to configure a GSLB site.
2. Name the site.
3. Type is configured as Remote or Local based on which NetScaler ADC users are configuring the site on. The Site IP Address is the IP address for the GSLB site. The GSLB site uses this IP address to communicate with the other GSLB sites. The Public IP address is required when using a cloud service where a particular IP is hosted on an external firewall or NAT device. The site should be configured as a Parent Site. 4. Ensure the Trigger Monitors are set to ALWAYS. Also, be sure to check the three boxes at the bottom for Metric Exchange, Network Metric Exchange, and Persistence Session Entry Exchange. Note: NetScaler recommends that you set the Trigger monitor setting to MEPDOWN; refer to: Configure a GSLB Service Group. Click Create, and repeat steps 3 & 4 to configure the GSLB site for the other resource location in Azure (this can be configured on the same NetScaler ADC). 5. Navigate to Traffic Management > GSLB > Service Groups. Click Add to add a service group. Name the Service Group, use the HTTP protocol, and then under Site Name choose the respective site that was created in the previous steps. Be sure to configure AutoScale Mode as DNS and check the boxes for State and Health Monitoring. Click OK to create the Service Group. 6. Click Service Group Members and select Server Based. Select the respective Elastic Load Balancing Server that was configured at the start of the guide. Configure the traffic to go over port 80. Click Create. The Service Group Member Binding should populate with the 2 instances that it is receiving from the Elastic Load Balancer. Repeat steps 5 & 6 to configure the Service Group for the second resource location in Azure. (This can be done from the same NetScaler ADC GUI.) 7. The final step is to set up a GSLB Virtual Server. Navigate to Traffic Management > GSLB > Virtual Servers. Click Add to create the virtual server. 
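The GSLB site and DNS-autoscale service group steps above map onto CLI commands along these lines; the site name, IP addresses, and the server name alb1_server are hypothetical placeholders:

```
> add gslb site site_azure_east 10.0.1.10 -publicIP 52.0.0.10     # hypothetical GSLB site IP and NAT'ed public IP
> add gslb serviceGroup east_sg HTTP -autoScale DNS -siteName site_azure_east
> bind gslb serviceGroup east_sg alb1_server 80                   # alb1_server: a server entity whose FQDN points at the ALB
```

With AutoScale mode set to DNS, the service group membership tracks the A records behind the ALB FQDN instead of statically bound IPs.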
Name the server, set DNS Record Type to A and Service Type to HTTP, and check the boxes for Enable after Creating and AppFlow Logging. Click OK to create the GSLB Virtual Server. Once the GSLB Virtual Server is created, click No GSLB Virtual Server ServiceGroup Binding. Under ServiceGroup Binding, use Select Service Group Name to select and add the Service Groups that were created in the previous steps. Next, configure the GSLB Virtual Server Domain Binding by clicking No GSLB Virtual Server Domain Binding. Configure the FQDN and click Bind; the rest of the settings can be left as the defaults. Configure the ADNS Service by clicking No Service. Add a Service Name, click New Server, and enter the IP Address of the ADNS server. Alternatively, if the user's ADNS is already configured, users can select Existing Server and then choose the user ADNS from the drop-down menu. Make sure the Protocol is ADNS and the traffic is configured to flow over Port 53. Configure the Method as LEASTCONNECTION and the Backup Method as ROUNDROBIN. Click Done and verify that the user GSLB Virtual Server is shown as Up. NetScaler ADC Global Load Balancing for Hybrid and Multi-Cloud Deployments The NetScaler ADC hybrid and multi-cloud global load balancing (GLB) solution enables users to distribute application traffic across multiple data centers in hybrid clouds, multiple clouds, and on-premises deployments. The NetScaler ADC hybrid and multi-cloud GLB solution helps users to manage their load balancing setup in hybrid or multi-cloud environments without altering the existing setup. Also, if users have an on-premises setup, they can test some of their services in the cloud by using the NetScaler ADC hybrid and multi-cloud GLB solution before completely migrating to the cloud. For example, users can route only a small percentage of their traffic to the cloud, and handle most of the traffic on-premises. 
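A CLI sketch of the GSLB virtual server, domain binding, and ADNS service described above; the virtual server name, service group name, domain, and IP addresses are hypothetical:

```
> add gslb vserver app_gslb_vs HTTP -lbMethod LEASTCONNECTION -backupLBMethod ROUNDROBIN
> bind gslb vserver app_gslb_vs -serviceGroupName east_sg      # east_sg: a previously created GSLB service group (hypothetical name)
> bind gslb vserver app_gslb_vs -domainName app.example.com -TTL 5
> add service adns_svc 10.0.1.11 ADNS 53                       # ADNS service that answers DNS queries on port 53
```

The ADNS service makes this NetScaler the authoritative DNS endpoint for the bound domain, so client resolvers reach the GSLB decision logic directly.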
The NetScaler ADC hybrid and multi-cloud GLB solution also enables users to manage and monitor NetScaler ADC instances across geographic locations from a single, unified console. A hybrid and multi-cloud architecture can also improve overall enterprise performance by avoiding “vendor lock-in” and using different infrastructure to meet the needs of user partners and customers. With a multi-cloud architecture, users can manage their infrastructure costs better, as they now have to pay only for what they use. Users can also scale their applications better, as they now use the infrastructure on demand. It also lets users quickly switch from one cloud to another to take advantage of the best offerings of each provider. Architecture of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution The following diagram illustrates the architecture of the NetScaler ADC hybrid and multi-cloud GLB feature. The NetScaler ADC GLB nodes handle the DNS name resolution. Any of these GLB nodes can receive DNS requests from any client location. The GLB node that receives the DNS request returns the load balancer virtual server IP address as selected by the configured load balancing method. Metrics (site, network, and persistence metrics) are exchanged between the GLB nodes using the metrics exchange protocol (MEP), which is a proprietary NetScaler protocol. For more information on the MEP protocol, see: Configure Metrics Exchange Protocol. The monitor configured in the GLB node monitors the health status of the load balancing virtual server in the same data center. In a parent-child topology, metrics between the GLB and NetScaler ADC nodes are exchanged by using MEP. However, configuring monitor probes between a GLB and NetScaler ADC LB node is optional in a parent-child topology. The NetScaler Application Delivery Management (ADM) service agent enables communication between the NetScaler ADM and the managed instances in your data center. 
For more information on NetScaler ADM service agents and how to install them, see: Getting Started. Note: This document makes the following assumptions: If users have an existing load balancing setup, it is up and running. A SNIP address or a GLB site IP address is configured on each of the NetScaler ADC GLB nodes. This IP address is used as the data center source IP address when exchanging metrics with other data centers. An ADNS or ADNS-TCP service is configured on each of the NetScaler ADC GLB instances to receive the DNS traffic. The required firewall and security groups are configured in the cloud service providers. Security Groups Configuration Users must set up the required firewall/security groups configuration in the cloud service providers. For more information about AWS security features, see: AWS/Documentation/Amazon VPC/User Guide/Security. For more information about Microsoft Azure Network Security Groups, see: Azure/Networking/Virtual Network/Plan Virtual Networks/Security. In addition, on the GLB node, users must open port 53 for the ADNS service/DNS server IP address and port 3009 for the GSLB site IP address for MEP traffic exchange. On the load balancing node, users must open the appropriate ports to receive the application traffic. For example, users must open port 80 for receiving HTTP traffic and port 443 for receiving HTTPS traffic. Open port 443 for NITRO communication between the NetScaler ADM service agent and NetScaler ADM. For the dynamic round trip time GLB method, users must open port 53 to allow UDP and TCP probes, depending on the configured LDNS probe type. The UDP or TCP probes are initiated using one of the SNIPs, and therefore this setting must be done for security groups bound to the server-side subnet. 
Capabilities of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution Some of the capabilities of the NetScaler ADC hybrid and multi-cloud GLB solution are described in this section: Compatibility with other Load Balancing Solutions The NetScaler ADC hybrid and multi-cloud GLB solution supports various load balancing solutions, such as the NetScaler ADC load balancer, NGINX, HAProxy, and other third-party load balancers. Note: Load balancing solutions other than NetScaler ADC are supported only if proximity-based and non-metric based GLB methods are used and if parent-child topology is not configured. GLB Methods The NetScaler ADC hybrid and multi-cloud GLB solution supports the following GLB methods. Metric-based GLB methods. Metric-based GLB methods collect metrics from the other NetScaler ADC nodes through the metrics exchange protocol. Least Connection: The client request is routed to the load balancer that has the fewest active connections. Least Bandwidth: The client request is routed to the load balancer that is currently serving the least amount of traffic. Least Packets: The client request is routed to the load balancer that has received the fewest packets in the last 14 seconds. Non-metric based GLB methods Round Robin: The client request is routed to the IP address of the load balancer that is at the top of the list of load balancers. That load balancer then moves to the bottom of the list. Source IP Hash: This method uses the hashed value of the client IP address to select a load balancer. Proximity-based GLB methods Static Proximity: The client request is routed to the load balancer that is closest to the client IP address. Round-Trip Time (RTT): This method uses the RTT value (the time delay in the connection between the client’s local DNS server and the data center) to select the IP address of the best performing load balancer. For more information on the load balancing methods, see: Load Balancing Algorithms. 
GLB Topologies The NetScaler ADC hybrid and multi-cloud GLB solution supports the active-passive topology and parent-child topology. Active-passive topology - Provides disaster recovery and ensures continuous availability of applications by protecting against points of failure. If the primary data center goes down, the passive data center becomes operational. For more information about GSLB active-passive topology, see: Configure GSLB for Disaster Recovery. Parent-child topology – Can be used if customers are using the metric-based GLB methods to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. In a parent-child topology, the LB node (child site) must be a NetScaler ADC appliance because the exchange of metrics between the parent and child site is through the metrics exchange protocol (MEP). For more information about parent-child topology, see: Parent-Child Topology Deployment using the MEP Protocol. IPv6 Support The NetScaler ADC hybrid and multi-cloud GLB solution also supports IPv6. Monitoring The NetScaler ADC hybrid and multi-cloud GLB solution supports built-in monitors with an option to enable the secure connection. However, if LB and GLB configurations are on the same NetScaler ADC instance or if parent-child topology is used, configuring monitors is optional. Persistence The NetScaler ADC hybrid and multi-cloud GLB solution supports the following: Source IP based persistence sessions, so that multiple requests from the same client are directed to the same service if they arrive within the configured time-out window. If the time-out value expires before the client sends another request, the session is discarded, and the configured load balancing algorithm is used to select a new server for the client’s next request. Spillover persistence so that the backup virtual server continues to process the requests it receives, even after the load on the primary falls below the threshold. 
For more information, see: Configure Spillover. Site persistence so that the GLB node selects a data center to process a client request and forwards the IP address of the selected data center for all subsequent DNS requests. If the configured persistence applies to a site that is DOWN, the GLB node uses a GLB method to select a new site, and the new site becomes persistent for subsequent requests from the client. Configuration by using the NetScaler ADM StyleBooks Customers can use the default Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with the hybrid and multi-cloud GLB configuration. Customers can use the default Multi-cloud GLB StyleBook for LB Node to configure the NetScaler ADC load balancing nodes, which are the child sites in a parent-child topology that handle the application traffic. Use this StyleBook only if users want to configure LB nodes in a parent-child topology. However, each LB node must be configured separately using this StyleBook. Workflow of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution Configuration Customers can use the shipped Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with the hybrid and multi-cloud GLB configuration. The following diagram shows the workflow for configuring the NetScaler ADC hybrid and multi-cloud GLB solution. The steps in the workflow diagram are explained in more detail after the diagram. Perform the following tasks as a cloud administrator: Sign up for a Citrix Cloud account. To start using NetScaler ADM, create a Citrix Cloud company account or join an existing one that has been created by someone in your company. After users log on to Citrix Cloud, click Manage on the NetScaler Application Delivery Management tile to set up the ADM service for the first time. Download and install multiple NetScaler ADM service agents. 
Users must install and configure the NetScaler ADM service agent in their network environment to enable communication between the NetScaler ADM and the managed instances in their data center or cloud. Install an agent in each region, so that they can configure LB and GLB configurations on the managed instances. The LB and GLB configurations can share a single agent. For more information on the above three tasks, see: Getting Started. Deploy load balancers on Microsoft Azure/AWS cloud/on-premises data centers. Depending on the type of load balancers that users are deploying on cloud and on-premises, provision them accordingly. For example, users can provision NetScaler ADC VPX instances in a Microsoft Azure Resource Manager (ARM) portal, in an Amazon Web Services (AWS) virtual private cloud and in on-premises data centers. Configure NetScaler ADC instances to function as LB or GLB nodes in standalone mode, by creating the virtual machines and configuring other resources. For more information on how to deploy NetScaler ADC VPX instances, see the following documents: NetScaler ADC VPX on AWS. Configure a NetScaler VPX Standalone Instance. Perform security configurations. Configure network security groups and network ACLs in ARM and AWS to control inbound and outbound traffic for user instances and subnets. Add NetScaler ADC instances in NetScaler ADM. NetScaler ADC instances are network appliances or virtual appliances that users want to discover, manage, and monitor from NetScaler ADM. To manage and monitor these instances, users must add the instances to the service and register both LB (if users are using NetScaler ADC for LB) and GLB instances. For more information on how to add NetScaler ADC instances in the NetScaler ADM, see: Getting Started. Implement the GLB and LB configurations using default NetScaler ADM StyleBooks. Use Multi-cloud GLB StyleBook to execute the GLB configuration on the selected GLB NetScaler ADC instances. 
Implement the load balancing configuration. (Users can skip this step if they already have LB configurations on the managed instances.) Users can configure load balancers on NetScaler ADC instances in one of two ways: Manually configure the instances for load balancing the applications. For more information on how to manually configure the instances, see: Set up Basic Load Balancing. Use StyleBooks. Users can use one of the NetScaler ADM StyleBooks (HTTP/SSL Load Balancing StyleBook or HTTP/SSL Load Balancing (with Monitors) StyleBook) to create the load balancer configuration on the selected NetScaler ADC instance. Users can also create their own StyleBooks. For more information on StyleBooks, see: StyleBooks. Use Multi-cloud GLB StyleBook for LB Node to configure GLB parent-child topology in any of the following cases: If users are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance If site persistence is required Using StyleBooks to Configure GLB on NetScaler ADC LB Nodes Customers can use the Multi-cloud GLB StyleBook for LB Node if they are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. Users can also use this StyleBook to configure more child sites for an existing parent site. This StyleBook configures one child site at a time. So, create as many configurations (config packs) from this StyleBook as there are child sites. The StyleBook applies the GLB configuration on the child sites. Users can configure a maximum of 1024 child sites. Note: Use Multi-cloud GLB StyleBook found here: Using StyleBooks to Configure GLB to configure the parent sites. This StyleBook makes the following assumptions: A SNIP address or a GLB site IP address is configured. 
The required firewall and security groups are configured in the cloud service providers. Configuring a Child Site in a Parent-Child Topology by using Multi-cloud GLB StyleBook for LB Node Navigate to Applications > Configuration, and click Create New. The Choose StyleBook page displays all the StyleBooks available for customer use in NetScaler Application Delivery Management (ADM). Scroll down and select Multi-cloud GLB StyleBook for LB Node. The StyleBook appears as a user interface page on which users can enter the values for all the parameters defined in this StyleBook. Note: The terms data center and sites are used interchangeably in this document. Set the following parameters: Application Name. Enter the name of the GLB application deployed on the GLB sites for which you want to create child sites. Protocol. Select the application protocol of the deployed application from the drop-down list box. LB Health Check (Optional) Health Check Type. From the drop-down list box, select the type of probe used for checking the health of the load balancer VIP address that represents the application on a site. Secure Mode. (Optional) Select Yes to enable this parameter if SSL based health checks are required. HTTP Request. (Optional) If users selected HTTP as the health-check type, enter the full HTTP request used to probe the VIP address. List of HTTP Status Response Codes. (Optional) If users selected HTTP as the health check type, enter the list of HTTP status codes expected in responses to HTTP requests when the VIP is healthy. Configuring parent site. Provide the details of the parent site (GLB node) under which you want to create the child site (LB node). Site Name. Enter the name of the parent site. Site IP Address. Enter the IP address that the parent site uses as its source IP address when exchanging metrics with other sites. This IP address is assumed to be already configured on the GLB node in each site. Site Public IP Address. 
(Optional) Enter the Public IP address of the parent site that is used to exchange metrics, if that site’s IP address is NAT’ed. Configuring child site. Provide the details of the child site. Site name. Enter the name of the site. Site IP Address. Enter the IP address of the child site. Here, use the private IP address or SNIP of the NetScaler ADC node that is being configured as a child site. Site Public IP Address. (Optional) Enter the Public IP address of the child site that is used to exchange metrics, if that site’s IP address is NAT’ed. Configuring active GLB services (optional) Configure active GLB services only if the LB virtual server IP address is not a public IP address. This section allows users to configure the list of local GLB services on the sites where the application is deployed. Service IP. Enter the IP address of the load balancing virtual server on this site. Service Public IP Address. If the virtual IP address is private and has a public IP address NAT’ed to it, specify the public IP address. Service Port. Enter the port of the GLB service on this site. Site Name. Enter the name of the site on which the GLB service is located. Click Target Instances and select the NetScaler ADC instances configured as GLB instances on each site on which to deploy the GLB configuration. Click Create to create the LB configuration on the selected NetScaler ADC instance (LB node). Users can also click Dry Run to check the objects that would be created in the target instances. The StyleBook configuration that users have created appears in the list of configurations on the Configurations page. Users can examine, update, or remove this configuration by using the NetScaler ADM GUI. For more information on how to deploy a NetScaler ADC VPX instance on Microsoft Azure, see Deploy a NetScaler ADC VPX Instance on Microsoft Azure. For more information on how a NetScaler ADC VPX instance works on Azure, visit How a NetScaler ADC VPX Instance Works on Azure. 
For more information on how to configure GSLB on NetScaler ADC VPX instances, see Configure GSLB on NetScaler ADC VPX Instances. For more information on how to configure GSLB on an active-standby high-availability setup on Azure, visit Configure GSLB on an Active-Standby High-Availability Setup. Prerequisites Users need some prerequisite knowledge before deploying a NetScaler VPX instance on Azure: Familiarity with Azure terminology and network details. For information, see the Azure terminology in the previous section. Knowledge of a NetScaler ADC appliance. For detailed information about the NetScaler ADC appliance, see: NetScaler ADC 13.0. For knowledge of NetScaler ADC networking, see the Networking topic: Networking. Azure GSLB Prerequisites The prerequisites for the NetScaler ADC GSLB Service Groups include a functioning Amazon Web Services / Microsoft Azure environment with the knowledge and ability to configure Security Groups, Linux Web Servers, NetScaler ADCs within AWS, Elastic IPs, and Elastic Load Balancers. GSLB DBS Service integration requires NetScaler ADC version 12.0.57 for AWS ELB and Microsoft Azure ALB load balancer instances. NetScaler ADC GSLB Service Group Feature Enhancements GSLB Service Group entity: In NetScaler ADC version 12.0.57, the GSLB Service Group is introduced, which supports autoscale using DBS dynamic discovery. DBS Feature Components (domain-based services) must be bound to the GSLB service group. Example:
> add server sydney_server LB-Sydney-xxxxxxxxxx.ap-southeast-2.elb.amazonaws.com
> add gslb serviceGroup sydney_sg HTTP -autoScale DNS -siteName sydney
> bind gslb serviceGroup sydney_sg sydney_server 80
Limitations Running the NetScaler ADC VPX load balancing solution on ARM imposes the following limitations: The Azure architecture does not accommodate support for the following NetScaler ADC features: Clustering IPv6 Gratuitous ARP (GARP) L2 Mode (bridging). 
Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP. Tagged VLAN Dynamic Routing Virtual MAC USIP Jumbo Frames If you think you might need to shut down and temporarily deallocate the NetScaler ADC VPX virtual machine at any time, assign a static Internal IP address while creating the virtual machine. If you do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible. In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet. If a NetScaler ADC VPX instance with a model number higher than VPX 3000 is used, the network throughput might not be the same as specified by the instance’s license. However, other features, such as SSL throughput and SSL transactions per second, might improve. The “deployment ID” that Azure generates during virtual machine provisioning is not visible to the user in ARM. Users cannot use the deployment ID to deploy NetScaler ADC VPX appliance on ARM. The NetScaler ADC VPX instance supports 20 Mb/s throughput and standard edition features when it is initialized. For a XenApp and XenDesktop deployment, a VPN virtual server on a VPX instance can be configured in the following modes: Basic mode, where the ICAOnly VPN virtual server parameter is set to ON. The Basic mode works fully on an unlicensed NetScaler ADC VPX instance. SmartAccess mode, where the ICAOnly VPN virtual server parameter is set to OFF. The SmartAccess mode works for only 5 NetScaler ADC AAA session users on an unlicensed NetScaler ADC VPX instance. Note: To configure the Smart Control feature, users must apply a Premium license to the NetScaler ADC VPX instance. 
Azure-VPX Supported Models and Licensing In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet. A NetScaler ADC VPX instance on Azure requires a license. The following licensing options are available for NetScaler ADC VPX instances running on Azure. Subscription-based licensing: NetScaler ADC VPX appliances are available as paid instances on Azure Marketplace. Subscription-based licensing is a pay-as-you-go option. Users are charged hourly. The following VPX models are available on Azure Marketplace, each with a choice of Standard, Advanced, or Premium license type: VPX10, VPX200, VPX1000, and VPX3000. Bring your own license (BYOL): If users bring their own license (BYOL), they should see the VPX Licensing Guide at: CTX122426/NetScaler VPX and CloudBridge VPX Licensing Guide. Users have to: Use the licensing portal within MyCitrix to generate a valid license. Upload the license to the instance. NetScaler ADC VPX Check-In/Check-Out licensing: For more information, see: NetScaler ADC VPX Check-in and Check-out Licensing. Starting with NetScaler release 12.0 56.20, VPX Express for on-premises and cloud deployments does not require a license file. For more information on NetScaler ADC VPX Express, see the “NetScaler ADC VPX Express license” section in the NetScaler ADC licensing overview, which can be found here: Licensing Overview. Note: Regardless of the subscription-based hourly license bought from Azure Marketplace, in rare cases, the NetScaler ADC VPX instance deployed on Azure might come up with a default NetScaler license. This happens due to issues with the Azure Instance Metadata Service (IMDS). Perform a warm restart before making any configuration change on the NetScaler ADC VPX instance, to enable the correct NetScaler ADC VPX license. 
Port Usage Guidelines Users can configure more inbound and outbound rules in the NSG while creating the NetScaler VPX instance or after the virtual machine is provisioned. Each inbound and outbound rule is associated with a public port and a private port. Before configuring NSG rules, note the following guidelines regarding the port numbers users can use: The NetScaler VPX instance reserves the following ports. Users cannot define these as private ports when using the Public IP address for requests from the internet. Ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000. However, if users want internet-facing services such as the VIP to use a standard port (for example, port 443), users have to create a port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the NetScaler ADC VPX for this VIP service. For example, a VIP service might be running on port 8443 on the VPX instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443. The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG. In a NetScaler Gateway deployment, users need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured. Users must configure the VIP address by using the NSIP address and some nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url:port). Note: In Azure Resource Manager, a NetScaler ADC VPX instance is associated with two IP addresses - a public IP address (PIP) and an internal IP address. While the external traffic connects to the PIP, the internal IP address or the NSIP is non-routable. To configure a VIP in VPX, use the internal IP address (NSIP) and any of the free ports available. 
Do not use the PIP to configure a VIP. For example, if NSIP of a NetScaler ADC VPX instance is 10.1.0.3 and an available free port is 10022, then users can configure a VIP by providing the 10.1.0.3:10022 (NSIP address + port) combination.
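The example above can be expressed as a CLI sketch; the virtual server and back-end service names are hypothetical:

```
> add lb vserver web_vip HTTP 10.1.0.3 10022    # VIP bound to the NSIP address plus a free nonstandard port
> bind lb vserver web_vip web_svc               # web_svc: a hypothetical back-end service
```

An NSG rule would then map public port 443 on the PIP to private port 10022 so that external clients reach the VIP on a standard port.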
Introduction This document provides an overview of the steps, tools, architecture, and considerations for migrating Citrix ADC traffic management and security solutions to the new Citrix App Delivery and Security (CADS) service. This guide is intended for technical engineering and architectural teams who want to migrate applications to AWS. The scope of this guide is limited to Citrix ADC hardware or software-based appliances on product version 13 and later. What is CADS Service - Citrix Managed? CADS service – Citrix Managed is a new SaaS offering for application delivery and security. Citrix App Delivery and Security service removes the complexity from every step of app delivery, including provisioning, securing, on-boarding, and management, empowering IT to deliver a superior experience that keeps users engaged and productive. Getting Started There are four key steps for migrating to the new CADS service: Deployment models - Evaluate the current deployment, assess how your applications fit together, and design the architecture for the AWS environment. Use cases and feature mapping - Develop a high-level plan for your migration and make key decisions about what to migrate. Licensing - Identify the right CADS service – Citrix Managed entitlement by converting the current ADC capacity. Traffic flow - Migrate your application users’ traffic to the new site. Follow the Getting Started Guide. Deployment Models Customers have designed their application architecture based on requirements such as specific feature needs, performance, high availability, compliance, and so on. When you migrate applications and their associated dependencies to AWS, there is no standard approach. The following table provides an overview of the common use cases for different applications and ADC workloads that are migrated to CADS service – Citrix Managed. 
Application Type Use Case Suggested Action Development/Testing/PoC web app with temporary capacity needs Web application utilizing the SSL-offload, load balancing, and content switching capabilities of Citrix ADC Depending on the required location of the datacenter, create an environment as described here. Use the CADS service Modern App delivery workflow to deploy your application as documented here. A Trial License can be used; for more details see the Licensing section. Custom/Commercial, external facing application to be deployed across multiple Availability Zones, high availability (HA) You either plan to expand a datacenter or run a mix of self-managed and Citrix managed CADS services. You might have integrated Citrix Application Delivery Controller (ADC) as part of the application’s logic, and require porting the same logic to CADS. You can leverage the Cloud Recommendation engine to determine the optimal site location for your application. For details click here. Depending on the required location of the new datacenter, choose multiple availability zones for the region while you create an environment as described here. Review current Citrix ADC configurations (ns.conf) and break them down into the application components that need to be migrated. You can use the app migration workflow as described here. You can refer to the feature mapping in Figure 2 to decide on the modern app workflow or migration. External application across multiple Regions, high availability (HA) with DNS / GSLB Expand application presence globally with the help of the global server load-balancing capability of CADS Based on the feature usage, you can either choose the Modern App or Migration (Classic App) workflow for application deployment. Once the applications are deployed in the desired region and availability zones, you can use Multi-Site application delivery to create a GSLBaaS solution with CADS as described here. 
Application Type: Internal application across multiple Availability Zones, high availability (HA) but no DNS / GSLB
Use Case: Deploy the application for internal users only.
Suggested Action: In the Application creation workflow, while creating endpoints, ensure you select Internal for Access type. This ensures no public IP association is configured for your application.

Application Type: Applications with high compliance or security-related requirements; WAF or IDS/IPS applications
Use Case: These applications require advanced security features such as signatures, bot protections, deep and complex WAF rule sets, and protection from the OWASP top 10.
Suggested Action: You need a CADS Premium license to use these features. Ensure you enable the desired security protection features for your application deployment as described here.

Application Type: Cloud-native applications
Use Case: Use CADS to deploy an application as an Ingress controller to manage and route traffic into your Kubernetes cluster
Suggested Action: Not supported with CADS. However, you can use CADS as the first (relatively static) tier of load balancing to an existing second tier of Citrix ADC CPX.

Use Cases and Feature Mapping

There are many aspects of migration to consider. Before beginning your Citrix ADC workload migration, the following assessments help clarify the migration process.

Application and the associated feature dependencies to migrate: Assess whether the entire application is moving or only the web (UI) tier. You should also consider additional dependencies around features such as caching, compression, authentication, security, and more. Your evaluation needs to determine what would be required from the network topology.

Reasons for application migration: You might be migrating your application because you are decommissioning your on-prem datacenter, because you want more elasticity, or because you are creating a disaster recovery site. Assess whether the application is migrating to a per-application architecture, compared to the shared monolithic patterns common in many datacenters.
Destination of the migration: Assess whether the application needs to move to a single VPC with one Availability Zone or two Availability Zones. Determine the peer or transit VPC topology, along with the need for multi-Region deployments. These decisions impact the migration pattern design.

You can refer to Deployment types and the Datasheet for the full set of features supported with CADS service – Citrix Managed. The flow chart in Figure 2 shows the feature list for Modern and Classic App. You can start with the Modern App decision flow and check whether all the required functionality is addressed. If not, validate the Classic App flow.

Licensing

The Citrix App Delivery and Security Service license is based on flexible consumption-based metering, where your applications automatically consume capacity from available entitlements. You get full architectural flexibility to deploy what you need when you need it. Details of the licensing entitlements are available here.

The following calculation can be used to determine the consumption. If your application serves an average throughput of 250 Mbps, then the annual data usage can be calculated:

Average application throughput (T) = 250 Mbps
Data usage per sec (d) = T x 0.125, i.e. 250 x 0.125 = 31.25 MB per sec
Total data usage in TB per year = (d x 365 x 24 x 3600)/1048576, i.e. (31.25 x 365 x 24 x 3600)/1048576 = 939.85 TB

For a data usage of ~1000 TB, the preferred license entitlement is Advanced or Premium 1200 TB bandwidth + 100 million DNS queries.

Traffic Flow

With applications deployed with CADS service – Citrix Managed, the final step is to migrate the application traffic from an existing datacenter. For this, use Multi-site application delivery and define the existing site and the new Citrix Managed site. For traffic migration, use weighted Round-Robin as the algorithm. Configure weights in a 90 (existing site) : 10 (new Citrix Managed site) ratio. Weights are proportional, i.e.
90% of the traffic is received by the existing site and 10% by the Citrix Managed site. You can alter this ratio to control the traffic proportions to your datacenters. Finally, perform application tests and complete the migration process with 100% of traffic going to the Citrix Managed site.

Summary

Following the above pattern enables admins to migrate applications delivered and secured by an ADC to CADS service - Citrix Managed.
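The licensing arithmetic and the weighted traffic split described above can be checked with a few lines of Python. This is an illustrative sketch: the 250 Mbps input, the MB/TB conversions, and the 90:10 ratio come from the guide, while the function names and the 1000-request sample are invented for the example.

```python
def annual_data_usage_tb(avg_throughput_mbps: float) -> float:
    """Convert an average application throughput (Mbps) into annual data usage (TB)."""
    mb_per_sec = avg_throughput_mbps * 0.125      # 1 Mbps = 0.125 MB per second
    mb_per_year = mb_per_sec * 365 * 24 * 3600    # seconds in a year
    return mb_per_year / 1048576                  # 1 TB = 1,048,576 MB

def split_traffic(total_requests: int, weights: dict) -> dict:
    """Proportionally split traffic across sites by weight (e.g. 90:10)."""
    total_weight = sum(weights.values())
    return {site: total_requests * w // total_weight for site, w in weights.items()}

usage = annual_data_usage_tb(250)
print(f"{usage:.2f} TB per year")   # ~939.85 TB, matching the guide's calculation
print(split_traffic(1000, {"existing": 90, "citrix_managed": 10}))
```

Running this reproduces the guide's ~939.85 TB figure, which maps to the 1200 TB entitlement tier, and shows 900 of every 1000 requests staying on the existing site during the initial migration phase.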
Deployment Guide NetScaler ADC VPX on AWS - GSLB

Overview

NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments.

As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.

NetScaler VPX

The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms:

XenServer Hypervisor
VMware ESX
Microsoft Hyper-V
Linux KVM
Amazon Web Services
Microsoft Azure
Google Cloud Platform

This deployment guide focuses on NetScaler ADC VPX on Amazon Web Services.

Amazon Web Services

Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS) offerings. AWS services can offer tools such as compute power, database storage, and content delivery services.
AWS offers the following essential services:

AWS Compute Services
Migration Services
Storage
Database Services
Management Tools
Security Services
Analytics
Networking
Messaging
Developer Tools
Mobile Services

AWS Terminology

Here is a brief description of essential terms used in this document that users must be familiar with:

Elastic Network Interface (ENI) - A virtual network interface that users can attach to an instance in a Virtual Private Cloud (VPC).
Elastic IP (EIP) address - A static, public IPv4 address that users have allocated in Amazon EC2 or Amazon VPC and then attached to an instance. Elastic IP addresses are associated with user accounts, not a specific instance. They are elastic because users can easily allocate, attach, detach, and free them as their needs change.
Subnet - A segment of the IP address range of a VPC to which EC2 instances can be attached. Users can create subnets to group instances according to security and operational needs.
Virtual Private Cloud (VPC) - A web service for provisioning a logically isolated section of the AWS cloud where users can launch AWS resources in a virtual network that they define.

Here is a brief description of other terms used in this document that users should be familiar with:

Amazon Machine Image (AMI) - A machine image, which provides the information required to launch an instance, which is a virtual server in the cloud.
Elastic Block Store - Provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud.
Simple Storage Service (S3) - Storage for the Internet. It is designed to make web-scale computing easier for developers.
Elastic Compute Cloud (EC2) - A web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Elastic Load Balancing (ELB) - Distributes incoming application traffic across multiple EC2 instances, in multiple Availability Zones.
This increases the fault tolerance of user applications.
Instance type - Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give users the flexibility to choose the appropriate mix of resources for their applications.
Identity and Access Management (IAM) - An AWS identity with permission policies that determine what the identity can and cannot do in AWS. Users can use an IAM role to enable applications running on an EC2 instance to securely access their AWS resources. An IAM role is required for deploying VPX instances in a high-availability setup.
Internet Gateway - Connects a network to the Internet. Users can route traffic for IP addresses outside their VPC to the Internet gateway.
Key pair - A set of security credentials with which users prove their identity electronically. A key pair consists of a private key and a public key.
Route table - A set of routing rules that controls the traffic leaving any subnet that is associated with the route table. Users can associate multiple subnets with a single route table, but a subnet can be associated with only one route table at a time.
Auto Scaling - A web service to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks.
CloudFormation - A service for writing or changing templates that create and delete related AWS resources together as a unit.

Use Cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on AWS combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the AWS Marketplace.
Furthermore, everything is governed by a single policy framework and managed with the same powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on AWS enables several compelling use cases that not only support the immediate needs of today's enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.

Global Server Load Balancing (GSLB)

Global Server Load Balancing (GSLB) is important for many of our customers. Those businesses have an on-prem data center presence serving regional customers, but with increasing demand for their business, they now want to scale and deploy their presence globally across AWS and Azure while maintaining their on-prem presence for regional customers. Customers want to do all of this with automated configurations as well. Thus, they are looking for a solution that can rapidly adapt to either evolving business needs or changes in the global market.

On the network administrator's side, customers can use the NetScaler ADC Global Load Balancing (GLB) StyleBook to configure applications both on-prem and in the cloud, and that same config can be transferred to the cloud with NetScaler ADM. With GSLB, users can reach either on-prem or cloud resources depending on proximity. This allows for a seamless experience no matter where the users are located in the world.

Deployment Types

Three-NIC Deployment

Typical Deployments:
GLB StyleBook
With ADM
With GSLB (Route53 w/domain registration)

Licensing - Pooled/Marketplace

Use Cases:
Three-NIC deployments are used to achieve true isolation of data and management traffic. They also improve the scale and performance of the ADC. A three-NIC deployment is recommended for network applications where throughput is typically 1 Gbps or higher.
CFT Deployment

Customers deploy using CloudFormation Templates when they are customizing or automating their deployments.

Deployment Steps

Three-NIC Deployment for GSLB

The NetScaler ADC VPX instance is available as an Amazon Machine Image (AMI) in the AWS Marketplace, and it can be launched as an Elastic Compute Cloud (EC2) instance within an AWS VPC. The minimum EC2 instance type allowed as a supported AMI on NetScaler VPX is m4.large. The NetScaler ADC VPX AMI instance requires a minimum of 2 virtual CPUs and 2 GB of memory. An EC2 instance launched within an AWS VPC can also provide the multiple interfaces, multiple IP addresses per interface, and public and private IP addresses needed for VPX configuration.

Each VPX instance requires at least three IP subnets:
A management subnet
A client-facing subnet (VIP)
A back-end facing subnet (SNIP)

Citrix recommends three network interfaces for a standard VPX instance on AWS installation. AWS currently makes multi-IP functionality available only to instances running within an AWS VPC. A VPX instance in a VPC can be used to load balance servers running in EC2 instances. An Amazon VPC allows users to create and control a virtual networking environment, including their own IP address range, subnets, route tables, and network gateways.

Note: By default, users can create up to 5 VPCs per AWS Region for each AWS account. Users can request higher VPC limits by submitting Amazon's request form here: Amazon VPC Request.

Licensing

A NetScaler ADC VPX instance on AWS requires a license. The following licensing options are available for NetScaler ADC VPX instances running on AWS:
Free (unlimited)
Hourly
Annual
Bring your own license
Free Trial (all NetScaler ADC VPX-AWS subscription offerings for 21 days free in the AWS Marketplace)
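The three-subnet layout described above (management/NSIP, client-facing/VIP, back-end/SNIP) can be sketched with Python's standard ipaddress module. The 10.0.0.0/16 VPC CIDR and the /24 subnet size are illustrative assumptions, not values from the guide; all three subnets must sit in the same Availability Zone.

```python
import ipaddress

# Hypothetical VPC CIDR; AWS VPCs accept blocks from /16 to /28.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the VPC into /24 subnets and assign the first three to the roles a VPX needs.
subnets = list(vpc.subnets(new_prefix=24))
roles = ["management (NSIP)", "client-facing (VIP)", "back-end (SNIP)"]
plan = dict(zip(roles, subnets[:3]))

for role, net in plan.items():
    print(f"{role}: {net} ({net.num_addresses} addresses)")
```

The same carve-up would then be entered in the AWS VPC wizard when creating the additional subnets for the VPX's three elastic network interfaces.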
Deployment Options

Users can deploy a NetScaler ADC VPX standalone instance on AWS by using the following options:
AWS web console
Citrix-authored CloudFormation template
AWS CLI

Three-NIC Deployment Steps

Users can deploy a NetScaler ADC VPX instance on AWS through the AWS web console. The deployment process includes the following steps:
Create a key pair
Create a Virtual Private Cloud (VPC)
Add more subnets
Create security groups and security rules
Add route tables
Create an internet gateway
Create a NetScaler ADC VPX instance
Create and attach more network interfaces
Attach elastic IPs to the management NIC
Connect to the VPX instance

Create a Key Pair

Amazon EC2 uses a key pair to encrypt and decrypt logon information. To log on to an instance, users must create a key pair, specify the name of the key pair when they launch the instance, and provide the private key when they connect to the instance. When users review and launch an instance by using the AWS Launch Instance wizard, they are prompted to use an existing key pair or create a new key pair. For more information about how to create a key pair, see: Amazon EC2 Key Pairs and Linux Instances.

Create a VPC

A NetScaler ADC VPX instance is deployed inside an AWS VPC. A VPC allows users to define virtual networks dedicated to their AWS account. For more information about AWS VPC, see: Getting Started With IPv4 for Amazon VPC. While creating a VPC for a NetScaler ADC VPX instance, keep the following points in mind:

Use the VPC with a Single Public Subnet Only option to create an AWS VPC in an AWS availability zone. Citrix recommends that users create at least three subnets, of the following types:

One subnet for management traffic. Place the management IP (NSIP) on this subnet. By default, elastic network interface (ENI) eth0 is used for the management IP.
One or more subnets for client-access (user-to-NetScaler ADC VPX) traffic, through which clients connect to one or more virtual IP (VIP) addresses assigned to NetScaler ADC load balancing virtual servers.
One or more subnets for server-access (VPX-to-server) traffic, through which user servers connect to VPX-owned subnet IP (SNIP) addresses. For more information about NetScaler ADC load balancing and virtual servers, virtual IP addresses (VIPs), and subnet IP addresses (SNIPs), see the NetScaler ADC product documentation.
All subnets must be in the same availability zone.

Add Subnets

When the VPC wizard is used for deployment, only one subnet is created. Depending on user requirements, users may want to create more subnets. For more information about how to create more subnets, see: VPCs and Subnets.

Create Security Groups and Security Rules

To control inbound and outbound traffic, create security groups and add rules to the groups. For more information about how to create groups and add rules, see: Security Groups for Your VPC. For NetScaler ADC VPX instances, the EC2 wizard gives default security groups, which are generated by AWS Marketplace and are based on settings recommended by Citrix. However, users can create more security groups based on their requirements.

Note: Ports 22, 80, and 443 must be opened on the security group for SSH, HTTP, and HTTPS access respectively.

Add Route Tables

Route tables contain a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in a VPC must be associated with a route table. For more information about how to create a route table, see: Route Tables.

Create an Internet Gateway

An internet gateway serves two purposes: to provide a target in the VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. Create an internet gateway for internet traffic.
For more information about how to create an Internet Gateway, see the section: Creating and Attaching an Internet Gateway.

Create a NetScaler ADC VPX Instance by using the AWS EC2 Service

To create a NetScaler ADC VPX instance by using the AWS EC2 service, complete the following steps:

From the AWS dashboard, go to Compute > EC2 > Launch Instance > AWS Marketplace. Before clicking Launch Instance, users should ensure their region is correct by checking the note that appears under Launch Instance. In the Search AWS Marketplace bar, search with the keyword NetScaler ADC VPX. Select the version the user wants to deploy and then click Select. For the NetScaler ADC VPX version, users have the following options:
A licensed version
NetScaler ADC VPX Express appliance (a free virtual appliance, available from NetScaler ADC version 12.0 56.20)
Bring your own license

The Launch Instance wizard starts. Follow the wizard to create an instance. The wizard prompts users to:
Choose Instance Type
Configure Instance
Add Storage
Add Tags
Configure Security Group
Review

Create and Attach more Network Interfaces

Create two more network interfaces for the VIP and SNIP. For more information about how to create more network interfaces, see: Creating a Network Interface. After users have created the network interfaces, they must attach them to the VPX instance: shut down the VPX instance, attach the interfaces, and power on the instance. For more information about how to attach network interfaces, see the section: Attaching a Network Interface When Launching an Instance.

Allocate and Associate Elastic IPs

If users assign a public IP address to an EC2 instance, it remains assigned only until the instance is stopped. After that, the address is released back to the pool. When users restart the instance, a new public IP address is assigned.
In contrast, an elastic IP (EIP) address remains assigned until the address is disassociated from an instance. Allocate and associate an elastic IP for the management NIC. For more information about how to allocate and associate elastic IP addresses, see these topics:
Allocating an Elastic IP Address
Associating an Elastic IP Address with a Running Instance

These steps complete the procedure to create a NetScaler ADC VPX instance on AWS. It can take a few minutes for the instance to be ready. Check that the instance has passed its status checks. Users can view this information in the Status Checks column on the Instances page.

Connect to the VPX Instance

After users have created the VPX instance, they can connect to the instance by using the GUI and an SSH client.

GUI
The following are the default administrator credentials to access a NetScaler ADC VPX instance:
User name: nsroot
Password: The default password for the nsroot account is set to the AWS instance-ID of the NetScaler ADC VPX instance.

SSH client
From the AWS management console, select the NetScaler ADC VPX instance and click Connect. Follow the instructions given on the Connect to Your Instance page.

For more information about how to deploy a NetScaler ADC VPX standalone instance on AWS by using the AWS web console, see: Scenario: Standalone Instance.

Configure GSLB in two AWS Locations

Setting up GSLB for the NetScaler ADC on AWS consists of configuring the NetScaler ADC to load balance traffic to servers located outside the VPC that the NetScaler ADC belongs to, such as within another VPC in a different AWS Region or an on-premises data center.

Domain-Name Based Services (GSLB DBS) with Cloud Load Balancers

GSLB and DBS Overview

NetScaler ADC GSLB support using DBS (Domain Based Services) for cloud load balancers allows for the automatic discovery of dynamic cloud services using a cloud load balancer solution.
This configuration allows the NetScaler ADC to implement Global Server Load Balancing Domain-Name Based Services (GSLB DBS) in an Active-Active environment. DBS allows the scaling of back-end resources in AWS environments through DNS discovery. This section covers integrations between NetScaler ADC and AWS AutoScaling environments. The final section of the document details the ability to set up an HA pair of NetScaler ADCs that span two different Availability Zones (AZs) within an AWS region.

NetScaler ADC GSLB Service Group Feature Enhancements

GSLB Service Group entity: In NetScaler ADC version 12.0.57, the GSLB Service Group was introduced, which supports autoscale using DBS dynamic discovery. Domain-based service (DBS) feature components are bound to the GSLB service group. Example:

`add server sydney_server LB-Sydney-xxxxxxxxxx.ap-southeast-2.elb.amazonaws.com`
`add gslb serviceGroup sydney_sg HTTP -autoScale DNS -siteName sydney`
`bind gslb serviceGroup sydney_sg sydney_server 80`

Domain-Name based Services – AWS ELB

GSLB DBS utilizes the FQDN of the user Elastic Load Balancer to dynamically update the GSLB Service Groups to include the back-end servers that are being created and deleted within AWS. The back-end servers or instances in AWS can be configured to scale based on network demand or CPU utilization. To configure this feature, point the NetScaler ADC to the Elastic Load Balancer to dynamically route to different servers in AWS without having to manually update the NetScaler ADC every time an instance is created or deleted within AWS. The NetScaler ADC DBS feature for GSLB Service Groups uses DNS-aware service discovery to determine the member service resources of the DBS namespace identified in the AutoScale group.
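The DNS-aware discovery that DBS performs against the ELB FQDN can be illustrated with a plain resolver loop. This is a simplified sketch of the idea (resolve the load balancer name, then diff the answer against the currently bound members), not the ADC's actual implementation, and the IPs shown are invented sample data.

```python
import socket

def resolve_members(fqdn: str) -> set:
    """Resolve a load balancer FQDN to its current set of member IPs."""
    try:
        infos = socket.getaddrinfo(fqdn, 80, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return set()  # name not resolvable right now
    return {info[4][0] for info in infos}

def reconcile(current: set, discovered: set) -> tuple:
    """Diff the discovered members against the bound set: (to_bind, to_unbind)."""
    return discovered - current, current - discovered

# Simulated poll cycle: the ELB's DNS answer gained one instance and lost another.
current = {"10.0.1.10", "10.0.1.11"}
discovered = {"10.0.1.11", "10.0.1.12"}
to_bind, to_unbind = reconcile(current, discovered)
print(to_bind, to_unbind)
```

In the real deployment, `resolve_members` would be pointed at the ELB's DNS name (the guide's example uses the form LB-Sydney-xxxxxxxxxx.ap-southeast-2.elb.amazonaws.com), and the bind/unbind actions would be the ADC updating the GSLB service group members.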
Diagram: NetScaler ADC GSLB DBS AutoScale components with Cloud Load Balancers

Configure AWS Components

Security Groups

Note: The recommendation is to create different security groups for the ELB, the NetScaler ADC GSLB instance, and the Linux instance, as the set of rules required for each of these entities is different. This example has a consolidated Security Group configuration for brevity. To ensure the proper configuration of the virtual firewall, see: Security Groups for Your VPC.

Step 1: Log in to the user AWS resource group and navigate to EC2 > NETWORK & SECURITY > Security Groups.
Step 2: Click Create Security Group and provide a name and description. This security group encompasses the NetScaler ADC and Linux back-end web servers.
Step 3: Add the inbound port rules from the following screenshot.

Note: Limiting source IP access is recommended for granular hardening. For more information, see: Web Server Rules.

Amazon Linux Back-end Web Services

Step 4: Log in to the user AWS resource group and navigate to EC2 > Instances.
Step 5: Click Launch Instance using the details that follow to configure the Amazon Linux instance. Fill in details about setting up a web server or back-end service on this instance.

NetScaler ADC Configuration

Step 6: Log in to the user AWS resource group and navigate to EC2 > Instances.
Step 7: Click Launch Instance and use the following details to configure the Amazon AMI instance.

Elastic IP Configuration

Note: NetScaler ADC can also be made to run with a single elastic IP, if necessary to reduce cost, by not having a public IP for the NSIP. Instead, attach an elastic IP to the SNIP, which can cover management access to the box in addition to the GSLB site IP and ADNS IP.

Step 8: Log in to the user AWS resource group and navigate to EC2 > NETWORK & SECURITY > Elastic IPs. Click Allocate new address to create an Elastic IP address. Configure the Elastic IP to point to the running NetScaler ADC instance within AWS.
Configure a second Elastic IP and again point it to the running NetScaler ADC instance.

Elastic Load Balancer

Step 9: Log in to the user AWS resource group and navigate to EC2 > LOAD BALANCING > Load Balancers.
Step 10: Click Create Load Balancer to configure a classic load balancer. Elastic Load Balancers allow users to load balance their back-end Amazon Linux instances while also being able to load balance other instances that are spun up based on demand.

Configuring Global Server Load Balancing Domain-Name Based Services

Traffic Management Configurations

Note: It is required to configure the NetScaler ADC with either a nameserver or a DNS virtual server through which the ELB/ALB domains will be resolved for the DBS Service Groups.

Step 1: Navigate to Traffic Management > Load Balancing > Servers.
Step 2: Click Add to create a server; provide a name and the FQDN corresponding to the A record (domain name) in AWS for the Elastic Load Balancer (ELB). Repeat step 2 to add the second ELB from the second resource location in AWS.

GSLB Configuration

Step 1: Navigate to Traffic Management > GSLB > Sites.
Step 2: Click the Add button to configure a GSLB Site. Name the site. The Type is configured as Remote or Local based on which NetScaler ADC users are configuring the site on. The Site IP Address is the IP address for the GSLB site. The GSLB site uses this IP address to communicate with the other GSLB sites. The Public IP address is required when using a cloud service where a particular IP is hosted on an external firewall or NAT device. The site should be configured as a Parent Site. Ensure the Trigger Monitors are set to ALWAYS, and be sure to check the three boxes at the bottom for Metric Exchange, Network Metric Exchange, and Persistence Session Entry Exchange. Citrix recommends setting the Trigger monitor setting to MEPDOWN; for more information, see: Configure a GSLB Service Group.
Step 3: The following screenshot from the AWS configurations shows where users can find the Site IP Address and Public IP Address. The IPs are found under Network & Security > Elastic IPs. Click Create, then repeat steps 2 and 3 to configure the GSLB site for the other resource location in AWS (this can be configured on the same NetScaler ADC).
Step 4: Navigate to Traffic Management > GSLB > Service Groups.
Step 5: Click Add to add a service group. Name the Service Group, use the HTTP protocol, and then under Site Name choose the respective site that was created in the previous steps. Be sure to configure AutoScale Mode as DNS and check the boxes for State and Health Monitoring. Click OK to create the Service Group.
Step 6: Click Service Group Members and select Server Based. Select the respective Elastic Load Balancing server that was configured at the start of this guide. Configure the traffic to go over port 80. Click Create.
Step 7: The Service Group Member Binding should populate with the two instances that it is receiving from the Elastic Load Balancer. Repeat the steps to configure the Service Group for the second resource location in AWS. (This can be done from the same location.)
Step 8: Navigate to Traffic Management > GSLB > Virtual Servers. Click Add to create the virtual server. Name the server; DNS Record Type is set as A, Service Type is set as HTTP; check the boxes for Enable after Creating and AppFlow Logging. Click OK to create the GSLB Virtual Server. (NetScaler ADC GUI)
Step 9: When the GSLB Virtual Server is created, click No GSLB Virtual Server ServiceGroup Binding.
Step 10: Under ServiceGroup Binding, use Select Service Group Name to select and add the Service Groups that were created in the previous steps.
Step 11: Next, configure the GSLB Virtual Server Domain Binding by clicking No GSLB Virtual Server Domain Binding. Configure the FQDN and Bind; the rest of the settings can be left as the defaults.
Step 12: Configure the ADNS Service by clicking No Service. Add a Service Name, click New Server, and enter the IP address of the ADNS server. Alternatively, if the ADNS is already configured, users can select Existing Server and then choose their ADNS from the menu. Make sure the Protocol is ADNS and the traffic is over Port 53. Configure the Method as LEASTCONNECTION and the Backup Method as ROUNDROBIN.

NetScaler ADC Global Load Balancing for Hybrid and Multi-Cloud Deployments

The NetScaler ADC hybrid and multi-cloud global load balancing (GLB) solution enables users to distribute application traffic across multiple data centers in hybrid clouds, multiple clouds, and on-premises deployments. The NetScaler ADC hybrid and multi-cloud GLB solution helps users to manage their load balancing setup in hybrid or multi-cloud environments without altering the existing setup. Also, if users have an on-premises setup, they can test some of their services in the cloud by using the NetScaler ADC hybrid and multi-cloud GLB solution before completely migrating to the cloud. For example, users can route only a small percentage of their traffic to the cloud and handle most of the traffic on-premises. The NetScaler ADC hybrid and multi-cloud GLB solution also enables users to manage and monitor NetScaler ADC instances across geographic locations from a single, unified console. A hybrid and multi-cloud architecture can also improve overall enterprise performance by avoiding "vendor lock-in" and using different infrastructure to meet the needs of partners and customers.
With a multi-cloud architecture, users can manage their infrastructure costs better, as they now have to pay only for what they use. Users can also scale their applications better, as they use the infrastructure on demand. It also provides the ability to quickly switch from one cloud to another to take advantage of the best offerings of each provider.

Architecture of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution

The following diagram illustrates the architecture of the NetScaler ADC hybrid and multi-cloud GLB feature. The NetScaler ADC GLB nodes handle the DNS name resolution. Any of these GLB nodes can receive DNS requests from any client location. The GLB node that receives the DNS request returns the load balancer virtual server IP address as selected by the configured load balancing method. Metrics (site, network, and persistence metrics) are exchanged between the GLB nodes using the metrics exchange protocol (MEP), which is a proprietary Citrix protocol. For more information on the MEP protocol, see: Configure Metrics Exchange Protocol.

The monitor configured in the GLB node monitors the health status of the load balancing virtual server in the same data center. In a parent-child topology, metrics between the GLB and NetScaler ADC nodes are exchanged by using MEP. However, configuring monitor probes between a GLB and NetScaler ADC LB node is optional in a parent-child topology.

The NetScaler Application Delivery Management (ADM) service agent enables communication between NetScaler ADM and the managed instances in the user data center. For more information on NetScaler ADM service agents and how to install them, see: Getting Started.

Note: This document makes the following assumptions:
If users have an existing load balancing setup, it is up and running.
A SNIP address or a GLB site IP address is configured on each of the NetScaler ADC GLB nodes.
This IP address is used as the data center source IP address when exchanging metrics with other data centers. An ADNS or ADNS-TCP service is configured on each of the NetScaler ADC GLB instances to receive the DNS traffic. The required firewall and security groups are configured in the cloud service providers. Security Groups Configuration Users must set up the required firewall/security groups configuration in the cloud service providers. For more information about AWS security features, see: AWS/Documentation/Amazon VPC/User Guide/Security. Also, on the GLB node, users must open port 53 for ADNS service/DNS server IP address and port 3009 for GSLB site IP address for MEP traffic exchange. On the load balancing node, users must open the appropriate ports to receive the application traffic. For example, users must open port 80 for receiving HTTP traffic and open port 443 for receiving HTTPS traffic. Open port 443 for NITRO communication between the NetScaler ADM service agent and NetScaler ADM. For the dynamic round trip time GLB method, users must open port 53 to allow UDP and TCP probes depending on the configured LDNS probe type. The UDP or the TCP probes are initiated using one of the SNIPs and therefore this setting must be done for security groups bound to the server-side subnet. Capabilities of the NetScaler ADC Hybrid and Multi-Cloud GLB Solution Some of the capabilities of the NetScaler ADC hybrid and multi-cloud GLB solution are described in this section. Compatibility with other Load Balancing Solutions The NetScaler ADC hybrid and multi-cloud GLB solution supports various load balancing solutions such as the NetScaler ADC load balancer, NGINX, HAProxy, and other third-party load balancers. Note: Load balancing solutions other than NetScaler ADC are supported only if proximity-based and non-metric based GLB methods are used and if parent-child topology is not configured. 
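The security-group port requirements listed above can be summarized as a small checklist. This is only an illustrative sketch: the role names and the data structure are invented for this example and are not a NetScaler or AWS API.

```python
# Inbound port requirements summarized from the text above.
REQUIRED_INBOUND = {
    "glb_node": [("udp", 53), ("tcp", 53), ("tcp", 3009)],  # ADNS/DNS + MEP
    "lb_node": [("tcp", 80), ("tcp", 443)],                 # HTTP and HTTPS
    "adm_agent": [("tcp", 443)],                            # NITRO to NetScaler ADM
}

def missing_rules(node_role, open_rules):
    """Return the (protocol, port) pairs that still must be opened for a role."""
    return [rule for rule in REQUIRED_INBOUND[node_role] if rule not in open_rules]

# A GLB node with only DNS open still needs TCP 3009 for MEP:
print(missing_rules("glb_node", [("udp", 53), ("tcp", 53)]))  # [('tcp', 3009)]
```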
GLB Methods The NetScaler ADC hybrid and multi-cloud GLB solution supports the following GLB methods. Metric-based GLB methods. Metric-based GLB methods collect metrics from the other NetScaler ADC nodes through the metrics exchange protocol. Least Connection: The client request is routed to the load balancer that has the fewest active connections. Least Bandwidth: The client request is routed to the load balancer that is currently serving the least amount of traffic. Least Packets: The client request is routed to the load balancer that has received the fewest packets in the last 14 seconds. Non-metric based GLB methods Round Robin: The client request is routed to the IP address of the load balancer that is at the top of the list of load balancers. That load balancer then moves to the bottom of the list. Source IP Hash: This method uses the hashed value of the client IP address to select a load balancer. Proximity-based GLB methods Static Proximity: The client request is routed to the load balancer that is closest to the client IP address. Round-Trip Time (RTT): This method uses the RTT value (the time delay in the connection between the client’s local DNS server and the data center) to select the IP address of the best performing load balancer. For more information on the load balancing methods, see: Load Balancing Algorithms . GLB Topologies The NetScaler ADC hybrid and multi-cloud GLB solution supports the active-passive topology and parent-child topology. Active-passive topology - Provides disaster recovery and ensures continuous availability of applications by protecting against points of failure. If the primary data center goes down, the passive data center becomes operational. For more information about GSLB active-passive topology, see: Configure GSLB for Disaster Recovery . 
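To make the GLB method descriptions above concrete, here is a minimal Python sketch of three of the selection strategies (Least Connection, Round Robin, and Source IP Hash). The load-balancer records, field names, and hash choice are invented for illustration; this is not NetScaler code.

```python
from hashlib import md5

# Hypothetical load-balancer records; names and fields are illustrative only.
lbs = [
    {"name": "dc-east", "ip": "192.0.2.10", "active_conns": 40},
    {"name": "dc-west", "ip": "198.51.100.10", "active_conns": 12},
]

def least_connection(pool):
    # Metric-based: route to the LB with the fewest active connections.
    return min(pool, key=lambda lb: lb["active_conns"])

def round_robin(pool):
    # Non-metric: take the LB at the top of the list, then move it to the bottom.
    lb = pool.pop(0)
    pool.append(lb)
    return lb

def source_ip_hash(pool, client_ip):
    # Non-metric: hash the client IP so the same client maps to the same LB.
    digest = int(md5(client_ip.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

print(least_connection(lbs)["name"])  # dc-west (12 active connections < 40)
print(round_robin(lbs)["name"])       # dc-east (head of the list)
print(round_robin(lbs)["name"])       # dc-west (list was rotated)
```

A real GLB node would feed live MEP metrics into the metric-based methods; here the counters are static so the selection is easy to follow.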
Parent-child topology – Can be used if customers are using the metric-based GLB methods to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. In a parent-child topology, the LB node (child site) must be a NetScaler ADC appliance because the exchange of metrics between the parent and child site is through the metrics exchange protocol (MEP). For more information about parent-child topology, see: Parent-Child Topology Deployment using the MEP Protocol . IPv6 Support The NetScaler ADC hybrid and multi-cloud GLB solution also supports IPv6. Monitoring The NetScaler ADC hybrid and multi-cloud GLB solution supports built-in monitors with an option to enable the secure connection. However, if LB and GLB configurations are on the same NetScaler ADC instance or if parent-child topology is used, configuring monitors is optional. Persistence The NetScaler ADC hybrid and multi-cloud GLB solution supports the following: Source IP based persistence sessions, so that multiple requests from the same client are directed to the same service if they arrive within the configured time-out window. If the time-out value expires before the client sends another request, the session is discarded, and the configured load balancing algorithm is used to select a new server for the client’s next request. Spillover persistence so that the backup virtual server continues to process the requests it receives, even after the load on the primary falls below the threshold. For more information, see: Configure Spillover. Site persistence so that the GLB node selects a data center to process a client request and forwards the IP address of the selected data center for all subsequent DNS requests. If the configured persistence applies to a site that is DOWN, the GLB node uses a GLB method to select a new site, and the new site becomes persistent for subsequent requests from the client. 
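The source IP persistence behavior described above (reuse the same service inside the time-out window, fall back to the configured load balancing method once the window expires) can be sketched as follows. The class name, field layout, and injectable clock are invented for this sketch; it is not the NetScaler implementation.

```python
import time

class SourceIPPersistence:
    # Sketch of source-IP persistence with a timeout window; illustrative
    # only, not the NetScaler implementation.
    def __init__(self, timeout_s, select):
        self.timeout_s = timeout_s
        self.select = select      # fallback load-balancing method
        self.sessions = {}        # client_ip -> (service, last_seen)

    def route(self, client_ip, now=None):
        now = time.monotonic() if now is None else now
        entry = self.sessions.get(client_ip)
        if entry is not None and now - entry[1] < self.timeout_s:
            service = entry[0]       # within the window: reuse the same service
        else:
            service = self.select()  # expired or new: pick a fresh server
        self.sessions[client_ip] = (service, now)
        return service

# The fallback alternates services so the persistence effect is visible.
services = iter(["svc-a", "svc-b", "svc-c"])
p = SourceIPPersistence(timeout_s=30, select=lambda: next(services))
print(p.route("10.0.0.1", now=0))    # svc-a (new session)
print(p.route("10.0.0.1", now=10))   # svc-a (within the 30 s window)
print(p.route("10.0.0.1", now=100))  # svc-b (window expired, re-selected)
```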
Configuration by using NetScaler ADM StyleBooks Customers can use the default Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with hybrid and multi-cloud GLB configurations. Customers can use the default Multi-cloud GLB StyleBook for LB Node to configure the NetScaler ADC load balancing nodes, which are the child sites in a parent-child topology that handle the application traffic. Use this StyleBook only if users want to configure LB nodes in a parent-child topology. However, each LB node must be configured separately using this StyleBook. Workflow of the NetScaler ADC Hybrid and Multi-Cloud GSLB Solution Configuration Customers can use the shipped Multi-cloud GLB StyleBook on NetScaler ADM to configure the NetScaler ADC instances with hybrid and multi-cloud GLB configurations. The following diagram shows the workflow for configuring a NetScaler ADC hybrid and multi-cloud GLB solution. The steps in the workflow diagram are explained in more detail after the diagram. Perform the following tasks as a cloud administrator: Sign up for a Citrix Cloud account. To start using NetScaler ADM, create a Citrix Cloud company account or join an existing one that has been created by someone in your company. After users log on to Citrix Cloud, click Manage on the Citrix Application Delivery Management tile to set up the ADM service for the first time. Download and install multiple NetScaler ADM service agents. Users must install and configure the NetScaler ADM service agent in their network environment to enable communication between the NetScaler ADM and the managed instances in their data center or cloud. Install an agent in each region, so that they can configure LB and GLB configurations on the managed instances. The LB and GLB configurations can share a single agent. For more information on the above three tasks, see: Getting Started. Deploy load balancers on Microsoft Azure/AWS cloud/on-premises data centers.
Depending on the type of load balancers that users are deploying on cloud and on-premises, provision them accordingly. For example, users can provision NetScaler ADC VPX instances in a Microsoft Azure Resource Manager (ARM) portal, in an Amazon Web Services (AWS) virtual private cloud, and in on-premises data centers. Configure NetScaler ADC instances to function as LB or GLB nodes in standalone mode, by creating the virtual machines and configuring other resources. For more information on how to deploy NetScaler ADC VPX instances, see the following documents: NetScaler ADC VPX on AWS and Configure a NetScaler VPX Standalone Instance. Perform security configurations. Configure network security groups and network ACLs in ARM and in AWS to control inbound and outbound traffic for user instances and subnets. Add NetScaler ADC instances in NetScaler ADM. NetScaler ADC instances are network appliances or virtual appliances that users want to discover, manage, and monitor from NetScaler ADM. To manage and monitor these instances, users must add the instances to the service and register both LB (if users are using NetScaler ADC for LB) and GLB instances. For more information on how to add NetScaler ADC instances in the NetScaler ADM, see: Getting Started. Implement the GLB and LB configurations using default NetScaler ADM StyleBooks. Use Multi-cloud GLB StyleBook to execute the GLB configuration on the selected GLB NetScaler ADC instances. Implement the load balancing configuration. (Users can skip this step if they already have LB configurations on the managed instances.) Users can configure load balancers on NetScaler ADC instances in one of two ways: Manually configure the instances for load balancing the applications. For more information on how to manually configure the instances, see: Set up Basic Load Balancing. Use StyleBooks.
Users can use one of the NetScaler ADM StyleBooks (HTTP/SSL Load Balancing StyleBook or HTTP/SSL Load Balancing (with Monitors) StyleBook) to create the load balancer configuration on the selected NetScaler ADC instance. Users can also create their own StyleBooks. For more information on StyleBooks, see: StyleBooks. Use Multi-cloud GLB StyleBook for LB Node to configure GLB parent-child topology in any of the following cases: If users are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. If site persistence is required. Using StyleBooks to Configure GLB on NetScaler ADC LB Nodes Customers can use the Multi-cloud GLB StyleBook for LB Node if they are using the metric-based GLB algorithms (Least Packets, Least Connections, Least Bandwidth) to configure GLB and LB nodes and if the LB nodes are deployed on a different NetScaler ADC instance. Users can also use this StyleBook to configure more child sites for an existing parent site. This StyleBook configures one child site at a time. So, create as many configurations (config packs) from this StyleBook as there are child sites. The StyleBook applies the GLB configuration on the child sites. Users can configure a maximum of 1024 child sites. Note: Use Multi-cloud GLB StyleBook to configure the parent sites. This StyleBook makes the following assumptions: A SNIP address or a GLB site IP address is configured. The required firewall and security groups are configured in the cloud service providers. Configuring a Child Site in a Parent-Child Topology by using Multi-Cloud GLB StyleBook for LB Node Navigate to Applications > Configuration, and click Create New. The StyleBook appears as a user interface page on which users can enter the values for all the parameters defined in this StyleBook.
Note: The terms data center and sites are used interchangeably in this document. Set the following parameters: Application Name. Enter the name of the GLB application deployed on the GLB sites for which you want to create child sites. Protocol. Select the application protocol of the deployed application from the drop-down list box. LB Health Check (Optional) Health Check Type. From the drop-down list box, select the type of probe used for checking the health of the load balancer VIP address that represents the application on a site. Secure Mode. (Optional) Select Yes to enable this parameter if SSL based health checks are required. HTTP Request. (Optional) If users selected HTTP as the health-check type, enter the full HTTP request used to probe the VIP address. List of HTTP Status Response Codes. (Optional) If users selected HTTP as the health check type, enter the list of HTTP status codes expected in responses to HTTP requests when the VIP is healthy. Configuring parent site. Provide the details of the parent site (GLB node) under which you want to create the child site (LB node). Site Name. Enter the name of the parent site. Site IP Address. Enter the IP address that the parent site uses as its source IP address when exchanging metrics with other sites. This IP address is assumed to be already configured on the GLB node in each site. Site Public IP Address. (Optional) Enter the Public IP address of the parent site that is used to exchange metrics, if that site’s IP address is NAT’ed. Configuring child site. Provide the details of the child site. Site name. Enter the name of the site. Site IP Address. Enter the IP address of the child site. Here, use the private IP address or SNIP of the NetScaler ADC node that is being configured as a child site. Site Public IP Address. (Optional) Enter the Public IP address of the child site that is used to exchange metrics, if that site’s IP address is NAT’ed. 
Configuring active GLB services (optional) Configure active GLB services only if the LB virtual server IP address is not a public IP address. This section allows users to configure the list of local GLB services on the sites where the application is deployed. Service IP. Enter the IP address of the load balancing virtual server on this site. Service Public IP Address. If the virtual IP address is private and has a public IP address NAT’ed to it, specify the public IP address. Service Port. Enter the port of the GLB service on this site. Site Name. Enter the name of the site on which the GLB service is located. Click Target Instances and select the NetScaler ADC instances configured as GLB instances on each site on which to deploy the GLB configuration. Click Create to create the LB configuration on the selected NetScaler ADC instance (LB node). Users can also click Dry Run to check the objects that would be created in the target instances. The StyleBook configuration that users have created appears in the list of configurations on the Configurations page. Users can examine, update, or remove this configuration by using the NetScaler ADM GUI. CloudFormation Template Deployment NetScaler ADC VPX is available as Amazon Machine Images (AMI) in the AWS Marketplace. Before using a CloudFormation template to provision a NetScaler ADC VPX in AWS, the AWS user has to accept the terms and subscribe to the AWS Marketplace product. Each edition of the NetScaler ADC VPX in the Marketplace requires this step. Each template in the CloudFormation repository has collocated documentation describing the usage and architecture of the template. The templates attempt to codify recommended deployment architecture of the NetScaler ADC VPX, or to introduce the user to the NetScaler ADC or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. 
Most templates require full EC2 permissions in addition to permissions to create IAM roles. The CloudFormation templates contain AMI IDs that are specific to a particular release of the NetScaler ADC VPX (for example, release 12.0-56.20) and edition (for example, NetScaler ADC VPX Platinum Edition - 10 Mbps) or NetScaler ADC BYOL. To use a different version or edition of the NetScaler ADC VPX with a CloudFormation template, the user must edit the template and replace the AMI IDs. The latest NetScaler ADC AWS-AMI-IDs are located here: NetScaler ADC AWS CloudFormation Master. CFT Three-NIC Deployment This template deploys a VPC with 3 subnets (management, client, server) for 2 Availability Zones. It deploys an Internet Gateway, with a default route on the public subnets. This template also creates an HA pair across Availability Zones with two instances of NetScaler ADC: 3 ENIs associated with 3 VPC subnets (management, client, server) on the primary and 3 ENIs associated with 3 VPC subnets (management, client, server) on the secondary. All the resource names created by this CFT are prefixed with the stack name (tagName).
The output of the CloudFormation template includes:
PrimaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the Primary VPX (uses self-signed cert)
PrimaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the Primary VPX
PrimaryCitrixADCInstanceID - Instance ID of the newly created Primary VPX instance
PrimaryCitrixADCPublicVIP - Elastic IP address of the Primary VPX instance associated with the VIP
PrimaryCitrixADCPrivateNSIP - Private IP (NS IP) used for management of the Primary VPX
PrimaryCitrixADCPublicNSIP - Public IP (NS IP) used for management of the Primary VPX
PrimaryCitrixADCPrivateVIP - Private IP address of the Primary VPX instance associated with the VIP
PrimaryCitrixADCSNIP - Private IP address of the Primary VPX instance associated with the SNIP
SecondaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the Secondary VPX (uses self-signed cert)
SecondaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the Secondary VPX
SecondaryCitrixADCInstanceID - Instance ID of the newly created Secondary VPX instance
SecondaryCitrixADCPrivateNSIP - Private IP (NS IP) used for management of the Secondary VPX
SecondaryCitrixADCPublicNSIP - Public IP (NS IP) used for management of the Secondary VPX
SecondaryCitrixADCPrivateVIP - Private IP address of the Secondary VPX instance associated with the VIP
SecondaryCitrixADCSNIP - Private IP address of the Secondary VPX instance associated with the SNIP
SecurityGroup - Security group ID that the VPX belongs to
When providing input to the CFT, the * against any parameter in the CFT implies that it is a mandatory field. For example, VPC ID* is a mandatory field. The following prerequisites must be met. The CloudFormation template requires sufficient permissions to create IAM roles, beyond normal EC2 full privileges. The user of this template also needs to accept the terms and subscribe to the AWS Marketplace product before using this CloudFormation template.
The following should also be present:
Key Pair
3 unallocated EIPs (Primary Management, Client VIP, Secondary Management)
For more information on provisioning NetScaler ADC VPX instances on AWS, users can visit: Provisioning NetScaler ADC VPX Instances on AWS. For information on how to configure GLB using StyleBooks, visit: Using StyleBooks to Configure GLB. Prerequisites Before attempting to create a VPX instance in AWS, users should ensure they have the following: An AWS account to launch a NetScaler ADC VPX AMI in an Amazon Web Services (AWS) Virtual Private Cloud (VPC). Users can create an AWS account for free at www.aws.amazon.com. An AWS Identity and Access Management (IAM) user account to securely control access to AWS services and resources for users. For more information about how to create an IAM user account, see the topic: Creating IAM Users (Console). An IAM role is mandatory for both standalone and high availability deployments. The IAM role must have the following privileges:
ec2:DescribeInstances
ec2:DescribeNetworkInterfaces
ec2:DetachNetworkInterface
ec2:AttachNetworkInterface
ec2:StartInstances
ec2:StopInstances
ec2:RebootInstances
ec2:DescribeAddresses
ec2:AssociateAddress
ec2:DisassociateAddress
autoscaling:*
sns:*
sqs:*
iam:SimulatePrincipalPolicy
iam:GetRole
If the Citrix CloudFormation template is used, the IAM role is automatically created. The template does not allow selecting an already created IAM role. Note: When users log on to the VPX instance through the GUI, a prompt to configure the required privileges for the IAM role appears. Ignore the prompt if the privileges have already been configured. The AWS CLI is required to use all the functionality provided by the AWS Management Console from the terminal program. For more information, see: What Is the AWS Command Line Interface?. Users also need the AWS CLI to change the network interface type to SR-IOV.
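The IAM privileges listed above can be expressed as a standard IAM policy document attached to the instance role. This is a sketch only: the `Resource: "*"` scoping is illustrative and should be tightened to specific ARNs where the deployment allows.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface",
        "ec2:AttachNetworkInterface",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:DescribeAddresses",
        "ec2:AssociateAddress",
        "ec2:DisassociateAddress",
        "autoscaling:*",
        "sns:*",
        "sqs:*",
        "iam:SimulatePrincipalPolicy",
        "iam:GetRole"
      ],
      "Resource": "*"
    }
  ]
}
```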
GSLB Prerequisites The prerequisites for the NetScaler ADC GSLB Service Groups include a functioning AWS / Microsoft Azure environment with the knowledge and ability to configure Security Groups, Linux Web Servers, NetScaler ADCs within AWS, Elastic IPs, and Elastic Load Balancers. GSLB DBS Service integration requires NetScaler ADC version 12.0.57 for AWS ELB and Microsoft Azure ALB load balancer instances. Limitations and Usage Guidelines The following limitations and usage guidelines apply when deploying a NetScaler ADC VPX instance on AWS: Users should be familiar with the AWS terminology listed previously before starting a new deployment. The clustering feature is supported only when provisioned with NetScaler ADM Auto Scale Groups. For the high availability setup to work effectively, associate a dedicated NAT device to the management interface or associate an Elastic IP (EIP) to the NSIP. For more information on NAT, in the AWS documentation, see: NAT Instances. Data traffic and management traffic must be segregated with ENIs belonging to different subnets. Only the NSIP address must be present on the management ENI. If a NAT instance is used for security instead of assigning an EIP to the NSIP, appropriate VPC-level routing changes are required. For instructions on making VPC-level routing changes, in the AWS documentation, see: Scenario 2: VPC with Public and Private Subnets. A VPX instance can be moved from one EC2 instance type to another (for example, from m3.large to an m3.xlarge). For more information, visit: Limitations and Usage Guidelines. For storage media for VPX on AWS, NetScaler recommends EBS, because it is durable and the data is available even after it is detached from the instance. Dynamic addition of ENIs to VPX is not supported. Restart the VPX instance to apply the update. NetScaler recommends that users stop the standalone or HA instance, attach the new ENI, and then restart the instance.
The primary ENI cannot be changed or attached to a different subnet once it is deployed. Secondary ENIs can be detached and changed as needed while the VPX is stopped. Users can assign multiple IP addresses to an ENI. The maximum number of IP addresses per ENI is determined by the EC2 instance type; see the section “IP Addresses Per Network Interface Per Instance Type” in: Elastic Network Interfaces. Users must allocate the IP addresses in AWS before they assign them to ENIs. For more information, see: Elastic Network Interfaces. NetScaler recommends that users avoid using the enable and disable interface commands on NetScaler ADC VPX interfaces. The NetScaler ADC set ha node <NODE_ID> -haStatus STAYPRIMARY and set ha node <NODE_ID> -haStatus STAYSECONDARY commands are disabled by default. IPv6 is not supported for VPX. Due to AWS limitations, these features are not supported:
Gratuitous ARP (GARP)
L2 mode (bridging); transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP
Tagged VLAN
Dynamic routing
Virtual MAC
For RNAT, routing, and transparent virtual servers to work, ensure Source/Destination Check is disabled for all ENIs in the data path. For more information, see “Changing the Source/Destination Checking” in: Elastic Network Interfaces. In a NetScaler ADC VPX deployment on AWS, in some AWS regions, the AWS infrastructure might not be able to resolve AWS API calls. This happens if the API calls are issued through a non-management interface on the NetScaler ADC VPX instance. As a workaround, restrict the API calls to the management interface only. To do that, create an NSVLAN on the VPX instance and bind the management interface to the NSVLAN by using the appropriate command. For example:
set ns config -nsvlan <vlan id> -ifnum 1/1 -tagged NO
save config
Restart the VPX instance at the prompt. For more information about configuring NSVLAN, see: Configuring NSVLAN.
In the AWS console, the vCPU usage shown for a VPX instance under the Monitoring tab might be high (up to 100 percent), even when the actual usage is much lower. To see the actual vCPU usage, navigate to View all CloudWatch metrics. For more information, see: Monitor your Instances using Amazon CloudWatch. Alternatively, if low latency and performance are not a concern, users can enable the CPU Yield feature, allowing the packet engines to idle when there is no traffic. For more details about the CPU Yield feature and how to enable it, visit: Citrix Support Knowledge Center. AWS-VPX Support Matrix The following tables list the supported VPX models and AWS regions, instance types, and services. Supported VPX Models on AWS:
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 200 Mbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 1000 Mbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 3 Gbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 5 Gbps
NetScaler ADC VPX Standard/Advanced/Premium - 10 Mbps
NetScaler ADC VPX Express - 20 Mbps
NetScaler ADC VPX - Customer Licensed
Supported AWS Regions: US West (Oregon), US West (N. California), US East (Ohio), US East (N. Virginia), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Canada (Central), China (Beijing), China (Ningxia), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and AWS GovCloud (US-East).
Supported AWS Instance Types:
m3.large, m3.xlarge, m3.2xlarge
c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge
m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge
m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.12xlarge, m5.24xlarge
c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.18xlarge, c5.24xlarge
c5n.large, c5n.xlarge, c5n.2xlarge, c5n.4xlarge, c5n.9xlarge, c5n.18xlarge
Supported AWS Services: EC2, Lambda, S3, VPC, Route 53, ELB, CloudWatch, AWS Auto Scaling, CloudFormation, Simple Queue Service (SQS), Simple Notification Service (SNS), and Identity & Access Management (IAM).
For higher bandwidth, NetScaler recommends the following instance types:
Instance Type | Bandwidth | Enhanced Networking
M4.10xlarge | 3 Gbps and 5 Gbps | Yes (SR-IOV)
C4.8xlarge | 3 Gbps and 5 Gbps | Yes (SR-IOV)
C5.18xlarge/M5.18xlarge | 25 Gbps | ENA
C5n.18xlarge | 30 Gbps | ENA
To remain updated about the currently supported VPX models and AWS regions, instance types, and services, visit: VPX-AWS support matrix.
Overview NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required. NetScaler VPX The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms: XenServer, VMware ESX, Microsoft Hyper-V, Linux KVM, Amazon Web Services, Microsoft Azure, and Google Cloud Platform. This deployment guide focuses on NetScaler ADC VPX on Microsoft Azure. Microsoft Azure Microsoft Azure is an ever-expanding set of cloud computing services built to help organizations meet their business challenges. Azure gives users the freedom to build, manage, and deploy applications on a massive, global network using their preferred tools and frameworks. With Azure, users can: Be future-ready with continuous innovation from Microsoft to support their development today—and their product visions for tomorrow.
Operate hybrid cloud seamlessly on-premises, in the cloud, and at the edge—Azure meets users where they are. Build on their terms with Azure’s commitment to open source and support for all languages and frameworks, allowing users to be free to build how they want and deploy where they want. Trust their cloud with security from the ground up—backed by a team of experts and proactive, industry-leading compliance that is trusted by enterprises, governments, and startups. Azure Terminology Here is a brief description of the key terms used in this document that users must be familiar with: Azure Load Balancer – Azure Load Balancer is a resource that distributes incoming traffic among computers in a network. Traffic is distributed among virtual machines defined in a load-balancer set. A load balancer can be external or internet-facing, or it can be internal. Azure Resource Manager (ARM) – ARM is the new management framework for services in Azure. Azure Load Balancer is managed using ARM-based APIs and tools. Back-End Address Pool – These are IP addresses associated with the virtual machine NIC to which load is distributed. BLOB - Binary Large Object – Any binary object like a file or an image that can be stored in Azure storage. Front-End IP Configuration – An Azure Load Balancer can include one or more front-end IP addresses, also known as virtual IPs (VIPs). These IP addresses serve as ingress for the traffic. Instance Level Public IP (ILPIP) – An ILPIP is a public IP address that users can assign directly to a virtual machine or role instance, rather than to the cloud service that the virtual machine or role instance resides in. This does not take the place of the VIP (virtual IP) that is assigned to their cloud service. Rather, it is an extra IP address that can be used to connect directly to a virtual machine or role instance. Note: In the past, an ILPIP was referred to as a PIP, which stands for public IP.
Inbound NAT Rules – These contain rules mapping a public port on the load balancer to a port for a specific virtual machine in the back-end address pool. IP-Config - It can be defined as an IP address pair (public IP and private IP) associated with an individual NIC. In an IP-Config, the public IP address can be NULL. Each NIC can have up to 255 IP-Configs associated with it. Load Balancing Rules – A rule property that maps a given front-end IP and port combination to a set of back-end IP addresses and port combinations. With a single definition of a load balancer resource, users can define multiple load balancing rules, each rule reflecting a combination of a front-end IP and port and a back-end IP and port associated with virtual machines. Network Security Group (NSG) – An NSG contains a list of Access Control List (ACL) rules that allow or deny network traffic to virtual machine instances in a virtual network. NSGs can be associated with either subnets or individual virtual machine instances within that subnet. When an NSG is associated with a subnet, the ACL rules apply to all the virtual machine instances in that subnet. In addition, traffic to an individual virtual machine can be restricted further by associating an NSG directly to that virtual machine. Private IP addresses – Used for communication within an Azure virtual network, and a user's on-premises network when a VPN gateway is used to extend a user network to Azure. Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an Internet-reachable IP address. In the Azure Resource Manager deployment model, a private IP address is associated with the following types of Azure resources – virtual machines, internal load balancers (ILBs), and application gateways.
Probes – health probes used to check the availability of virtual machine instances in the back-end address pool. If a particular virtual machine does not respond to health probes for some time, it is taken out of traffic serving. Probes enable users to track the health of virtual instances; if a health probe fails, the virtual instance is taken out of rotation automatically.

Public IP Addresses (PIP) – used for communication with the Internet, including Azure public-facing services, and associated with virtual machines, Internet-facing load balancers, VPN gateways, and application gateways.

Region – an area within a geography that does not cross national borders and that contains one or more data centers. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away, to form a regional pair. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to generally as a location.

Resource Group – a container in Resource Manager that holds related resources for an application. The resource group can include all resources for an application, or only those resources that are logically grouped.

Storage Account – an Azure storage account gives users access to the Azure blob, queue, table, and file services in Azure Storage. A storage account provides the unique namespace for a user's Azure storage data objects.

Virtual Machine – the software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in various sizes.

Virtual Network – an Azure virtual network is a representation of a user network in the cloud. It is a logical isolation of the Azure cloud dedicated to a user subscription.
Users can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. Users can also further segment their VNet into subnets and launch Azure IaaS virtual machines and cloud services (PaaS role instances). In addition, users can connect the virtual network to their on-premises network using one of the connectivity options available in Azure. In essence, users can expand their network to Azure, with complete control over IP address blocks, together with the benefit of the enterprise scale Azure provides.

Use Cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on Azure enables several compelling use cases that not only support the immediate needs of today's enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.

Datacenter Expansion with Autoscale

In an application economy where applications are synonymous with business productivity, growth, and customer experience, it is indispensable for organizations to stay competitive, innovate rapidly, and scale to meet customer demand while minimizing downtime and preventing revenue losses. When an organization outgrows its on-premises data center capacity, instead of procuring more hardware and spending capex budget, it can expand its presence in the public cloud.
With the move to the public cloud, scale and performance are important factors when selecting the right ADC for public cloud deployments. There is always a need to scale applications in response to fluctuating demand. Under-provisioning can lead to lost customers, reduced employee productivity, and lower revenue. Right-sizing the infrastructure on demand is even more important in the public cloud, where over-provisioning is costly. In response to the need for greater performance and scalability in the public cloud, NetScaler ADC remains the best option. The best-in-class solution lets users automatically scale up to 100 Gbps per region and, because of its superior software architecture, delivers a latency advantage of 100 ms on a typical eCommerce webpage compared to other ADC vendors and cloud provider options.

Benefits of Autoscaling

High availability of applications. Autoscaling ensures that your application always has the right number of NetScaler ADC VPX instances to handle the traffic demands, so that your application is up and running all the time irrespective of traffic demands.

Smart scaling decisions and zero-touch configuration. Autoscaling continuously monitors your application and adds or removes NetScaler ADC instances dynamically depending on the demand. When demand spikes upward, instances are automatically added; when demand drops, instances are automatically removed. The addition and removal of NetScaler ADC instances happens automatically, with no manual configuration.

Automatic DNS management. The NetScaler ADM Autoscale feature offers automatic DNS management. Whenever new NetScaler ADC instances are added, the domain names are updated automatically.

Graceful connection termination. During a scale-in, the NetScaler ADC instances are gracefully removed, avoiding the loss of client connections.

Better cost management.
Autoscaling dynamically increases or decreases NetScaler ADC instances as needed, enabling users to optimize costs. Users save money by launching instances only when they are needed and terminating them when they are not. Thus, users pay only for the resources they use.

Observability. Observability is essential for application DevOps or IT personnel to monitor the health of the application. The NetScaler ADM Autoscale dashboard enables users to visualize the threshold parameter values, Autoscale trigger time stamps, events, and the instances participating in Autoscale.

Autoscaling of NetScaler ADC VPX in Microsoft Azure using NetScaler ADM

Autoscaling Architecture

NetScaler ADM handles client traffic distribution using Azure DNS or Azure Load Balancer (ALB).

Traffic Distribution using Azure DNS

The following diagram illustrates how DNS based autoscaling occurs using the Azure traffic manager as the traffic distributor. In DNS based autoscaling, DNS acts as the distribution layer. The Azure traffic manager is the DNS based load balancer in Microsoft Azure. Traffic manager directs client traffic to the appropriate NetScaler ADC instance that is available in the NetScaler ADM autoscaling group, and resolves the FQDN to the VIP address of the NetScaler ADC instance.

Note: In DNS based autoscaling, each NetScaler ADC instance in the NetScaler ADM autoscale group requires a public IP address.

NetScaler ADM triggers the scale-out or scale-in action at the cluster level. When a scale-out is triggered, the registered virtual machines are provisioned and added to the cluster. Similarly, when a scale-in is triggered, nodes are removed and de-provisioned from the NetScaler ADC VPX clusters.

Traffic Distribution using Azure Load Balancer

The following diagram illustrates how autoscaling occurs using the Azure Load Balancer as the traffic distributor. Azure Load Balancer is the distribution tier to the cluster nodes.
ALB manages the client traffic and distributes it to NetScaler ADC VPX clusters. ALB sends the client traffic to the NetScaler ADC VPX cluster nodes that are available in the NetScaler ADM autoscaling group across availability zones.

Note: The public IP address is allocated to the Azure Load Balancer. The NetScaler ADC VPX instances do not require a public IP address.

NetScaler ADM triggers the scale-out or scale-in action at the cluster level. When a scale-out is triggered, the registered virtual machines are provisioned and added to the cluster. Similarly, when a scale-in is triggered, nodes are removed and de-provisioned from the NetScaler ADC VPX clusters.

NetScaler ADM Autoscale Group

An Autoscale group is a group of NetScaler ADC instances that load balance applications as a single entity and trigger autoscaling based on the configured threshold parameter values.

Resource Group

A resource group contains the resources related to NetScaler ADC autoscaling and helps users manage the resources required for autoscaling. For more information, see: Manage Azure Resources by using the Azure Portal.

Azure Back-end Virtual Machine Scale Set

An Azure virtual machine scale set is a collection of identical VM instances. The number of VM instances can increase or decrease depending on the client traffic. This set provides high availability to your applications. For more information, see: What are Virtual Machine Scale Sets?.

Availability Zones

Availability Zones are isolated locations within an Azure region. Each region is made up of several availability zones, and each availability zone belongs to a single region. Each availability zone has one NetScaler ADC VPX cluster. For more information, see: Regions and Availability Zones in Azure.

Availability Sets

An availability set is a logical grouping of a NetScaler ADC VPX cluster and application servers. Availability sets help deploy ADC instances across multiple isolated hardware nodes in a cluster.
With an availability set, users can ensure reliable ADM autoscaling if there is a hardware or software failure within Azure. For more information, see: Tutorial: Create and Deploy Highly Available Virtual Machines with Azure PowerShell.

The following diagram illustrates autoscaling in an availability set: the Azure infrastructure (ALB or Azure traffic manager) sends the client traffic to a NetScaler ADM autoscaling group in the availability set, and NetScaler ADM triggers the scale-out or scale-in action at the cluster level.

How Autoscaling Works

The following flowchart illustrates the autoscaling workflow:

NetScaler ADM collects the statistics (CPU, memory, and throughput) from the autoscale-provisioned clusters every minute. The statistics are evaluated against the configured thresholds, and depending on the statistics, scale-out or scale-in is triggered. Scale-out is triggered when the statistics exceed the maximum threshold. Scale-in is triggered when the statistics are operating below the minimum threshold.

If a scale-out is triggered:
1. A new node is provisioned.
2. The node is attached to the cluster and the configuration is synchronized from the cluster to the new node.
3. The node is registered with NetScaler ADM.
4. The new node IP addresses are updated in the Azure traffic manager.

If a scale-in is triggered:
1. A node is identified for removal.
2. New connections to the selected node are stopped.
3. NetScaler ADM waits for the specified period for the connections to drain. With DNS traffic, it also waits for the specified TTL period.
4. The node is detached from the cluster, deregistered from NetScaler ADM, and then de-provisioned from Microsoft Azure.

Note: When the application is deployed, an IP set is created on clusters in every availability zone. Then, the domain and instance IP addresses are registered with the Azure traffic manager or ALB. When the application is removed, the domain and instance IP addresses are deregistered from the Azure traffic manager or ALB.
Then, the IP set is deleted.

Example Autoscaling Scenario

Consider an autoscale group named asg_arn created in a single availability zone with the following configuration:
Selected threshold parameter – memory usage.
Threshold limits set for memory – minimum limit 40, maximum limit 85.
Watch time – 2 minutes.
Cooldown period – 10 minutes.
Time to wait during de-provision – 10 minutes.
DNS time to live – 10 seconds.

After the autoscale group is created, statistics are collected from the autoscale group. The autoscale policy also evaluates whether any autoscale event is in progress; if autoscaling is in progress, it waits for that event to complete before collecting the statistics.

The Sequence of Events

Memory usage exceeds the threshold limit at T2. However, scale-out is not triggered because the threshold was not breached for the specified watch time.
Scale-out is triggered at T5, after the maximum threshold has been breached for 2 minutes (the watch time) continuously.
No action is taken for the breach between T5 and T10 because node provisioning is in progress.
The node is provisioned at T10 and added to the cluster. The cooldown period starts.
No action is taken for the breach between T10 and T20 because of the cooldown period. This period ensures the organic growth of instances in an autoscale group: before triggering the next scaling decision, ADM waits for the current traffic to stabilize and average out on the current set of instances.
Memory usage drops below the minimum threshold limit at T23. However, scale-in is not triggered because the threshold was not breached for the specified watch time.
Scale-in is triggered at T26, after the minimum threshold has been breached for 2 minutes (the watch time) continuously. A node in the cluster is identified for de-provisioning.
No action is taken between T26 and T36 because NetScaler ADM is waiting to drain existing connections. For DNS based autoscaling, the TTL is also in effect.
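The watch-time and cooldown behavior in the scenario above can be sketched as a small evaluation loop. This is an illustrative model only, not NetScaler ADM's actual implementation: class and method names are invented, and the limits mirror the example configuration (memory 40/85, 2-minute watch time, 10-minute cooldown).

```python
from collections import deque

class AutoscaleEvaluator:
    """Illustrative sketch of per-minute threshold evaluation.

    A scaling action fires only when every sample within the watch
    window breaches the limit; after a scale-out, evaluation pauses
    for the cooldown period. All names here are hypothetical.
    """

    def __init__(self, min_limit=40, max_limit=85, watch_time=2, cooldown=10):
        self.min_limit = min_limit
        self.max_limit = max_limit
        self.watch_time = watch_time          # minutes a breach must persist
        self.cooldown = cooldown              # minutes to pause after scale-out
        self.samples = deque(maxlen=watch_time)
        self.cooldown_left = 0

    def evaluate(self, memory_usage):
        """Feed one per-minute sample; return 'scale-out', 'scale-in', or None."""
        if self.cooldown_left > 0:            # no decisions during cooldown
            self.cooldown_left -= 1
            self.samples.clear()
            return None
        self.samples.append(memory_usage)
        if len(self.samples) < self.watch_time:
            return None                       # breach not yet sustained
        if all(s > self.max_limit for s in self.samples):
            self.cooldown_left = self.cooldown
            self.samples.clear()
            return "scale-out"
        if all(s < self.min_limit for s in self.samples):
            self.samples.clear()
            return "scale-in"
        return None
```

Feeding two consecutive samples above 85 triggers a scale-out, after which the cooldown suppresses further decisions, matching the T5 and T10-T20 behavior in the timeline.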
Note: For DNS based autoscaling, NetScaler ADM waits for the specified Time-To-Live (TTL) period. Then, it waits for existing connections to drain before initiating node de-provisioning.

No action is taken between T37 and T39 because node de-provisioning is in progress.
The node is removed and de-provisioned at T40 from the cluster. Because all connections to the selected node were drained before de-provisioning was initiated, the cooldown period is skipped after the node de-provision.

Autoscale Configuration

NetScaler ADM manages all the NetScaler ADC VPX clusters in Microsoft Azure and accesses the Azure resources using the Cloud Access Profile. The following flow diagram explains the steps involved in creating and configuring an Autoscale group.

Set up Microsoft Azure Components

Perform the following tasks in Azure before autoscaling NetScaler ADC VPX instances in NetScaler ADM:
Create a virtual network.
Create security groups.
Create subnets.
Subscribe to the NetScaler ADC VPX license in Microsoft Azure.
Create and register an application.

Create a Virtual Network

1. Log on to the Microsoft Azure portal.
2. Select Create a resource.
3. Select Networking and click Virtual Network.
4. Specify the required parameters. In Resource group, specify the resource group where you want to deploy the NetScaler ADC VPX product. In Location, specify a location that supports availability zones, such as: Central US, East US 2, France Central, North Europe, Southeast Asia, West Europe, West US 2. Note: The application servers are present in this resource group.
5. Click Create.

For more information, see Azure Virtual Network here: What is Azure Virtual Network?.

Create Security Groups

Create three security groups in the user virtual network (VNet), one each for the management, client, and server connections. Create a security group to control inbound and outbound traffic in the NetScaler ADC VPX instance.
Create rules for the incoming traffic that users want to control in the NetScaler Autoscale groups. Users can add as many rules as they want.

Management: A security group in the user account dedicated to management of NetScaler ADC VPX. NetScaler ADC has to contact Azure services and requires Internet access. Inbound rules are allowed on the following TCP and UDP ports:
TCP: 80, 22, 443, 3008–3011, 4001
UDP: 67, 123, 161, 500, 3003, 4500, 7000
Note: Ensure that the security group allows the NetScaler ADM agent to access the VPX.

Client: A security group in the user account dedicated to client-side communication of NetScaler ADC VPX instances. Typically, inbound rules are allowed on TCP ports 80, 22, and 443.

Server: A security group in the user account dedicated to server-side communication of NetScaler ADC VPX.

For more information on how to create a security group in Microsoft Azure, see: Create, Change, or Delete a Network Security Group.

Create Subnets

Create three subnets in the user virtual network (VNet), one each for the management, client, and server connections. Specify an address range that is defined in the user VNet for each of the subnets.
Specify the availability zone in which users want the subnet to reside.

Management: A subnet in the user virtual network (VNet) dedicated to management. NetScaler ADC has to contact Azure services and requires Internet access.
Client: A subnet in the user virtual network (VNet) dedicated to the client side. Typically, NetScaler ADC receives client traffic for the application via a public subnet from the Internet.
Server: A subnet where the application servers are provisioned. All the user application servers are present in this subnet and receive application traffic from the NetScaler ADC through this subnet.

Note: Assign an appropriate security group to the subnet while creating it. For more information on how to create a subnet in Microsoft Azure, see: Add, Change, or Delete a Virtual Network Subnet.

Subscribe to the NetScaler ADC VPX License in Microsoft Azure

1. Log on to the Microsoft Azure portal.
2. Select Create a resource.
3. In the Search the marketplace bar, search for NetScaler ADC and select the required product version.
4. In the Select a software plan list, select one of the following license types: Bring your own license, Enterprise, or Platinum. Note: If users choose the Bring your own license option, the Autoscale group checks out licenses from NetScaler ADM while provisioning NetScaler ADC instances. In NetScaler ADM, Advanced and Premium are the equivalent license types for Enterprise and Platinum, respectively.
5. Ensure that programmatic deployment is enabled for the selected NetScaler ADC product. Beside Want to deploy programmatically?, click Get Started. In Choose the subscriptions, select Enable to deploy the selected NetScaler ADC VPX edition programmatically. Important: Enabling programmatic deployment is required to autoscale NetScaler ADC VPX instances in Azure. Click Save and close Configure Programmatic Deployment.
6. Click Create.
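The management security group's inbound rules listed under Create Security Groups can be sanity-checked before provisioning. The helper below is a hypothetical sketch (not an Azure or NetScaler API): it takes a flat list of (protocol, port) rules and reports which required management ports are not yet covered.

```python
# Required inbound ports for the management security group, as listed
# under Create Security Groups above.
MGMT_TCP = {80, 22, 443, 4001} | set(range(3008, 3012))   # 3008-3011 inclusive
MGMT_UDP = {67, 123, 161, 500, 3003, 4500, 7000}

def missing_ports(rules):
    """Return (tcp, udp) lists of required ports not covered by the rules.

    `rules` is a hypothetical flat representation of NSG inbound rules,
    for example [("tcp", 443), ("udp", 123)].
    """
    tcp_open = {port for proto, port in rules if proto == "tcp"}
    udp_open = {port for proto, port in rules if proto == "udp"}
    return sorted(MGMT_TCP - tcp_open), sorted(MGMT_UDP - udp_open)
```

Calling `missing_ports([("tcp", 443)])` would flag, among others, TCP 3008-3011 and every required UDP port as still missing.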
Create and Register an Application

NetScaler ADM uses this application to autoscale NetScaler ADC VPX instances in Azure. To create and register an application in Azure:
1. In the Azure portal, select Azure Active Directory. This option displays the user organization's directory.
2. Select App registrations.
3. In Name, specify the name of the application.
4. Select the Application type from the list.
5. In Sign-on URL, specify the application URL used to access the application.
6. Click Create.

For more information on app registrations, see: How to: Use the Portal to Create an Azure AD Application and Service Principal that can Access Resources. Azure assigns an application ID to the application. The following is an example application registered in Microsoft Azure.

Copy the following IDs and provide them when configuring the Cloud Access Profile in NetScaler ADM. For steps to retrieve these IDs, see: Get Values for Signing in.
Application ID
Directory ID
Key
Subscription ID (copy the subscription ID from the user storage account)

Assign the Role Permission to an Application

NetScaler ADM uses the application as a service principal to autoscale NetScaler ADC instances in Microsoft Azure. This permission is applicable only to the selected resource group. To assign a role permission to the registered application, users must be the owner of the Microsoft Azure subscription.
1. In the Azure portal, select Resource groups.
2. Select the resource group to which users want to assign a role permission.
3. Select Access control (IAM).
4. In Role assignments, click Add.
5. Select Owner from the Role list.
6. Select the application that is registered for autoscaling NetScaler ADC instances.
7. Click Save.
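The four values copied from the app registration can be kept together while filling in the Cloud Access Profile form. This is a minimal bookkeeping sketch, assuming nothing beyond the mapping described in this document; the class name and placeholder GUIDs are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CloudAccessProfile:
    """The four Azure values, keyed roughly by the NetScaler ADM form fields.

    tenant_id       -> ADM "Tenant Active Directory ID / Tenant ID" (Azure Directory ID)
    subscription_id -> same name on both sides
    client_id       -> ADM "Application ID / Client ID" (Azure Application ID)
    secret          -> ADM "Application Key Password / Secret" (Azure key/client secret)
    """
    tenant_id: str
    subscription_id: str
    client_id: str
    secret: str

# Placeholder values only; real GUIDs come from the Azure portal.
profile = CloudAccessProfile(
    tenant_id="00000000-0000-0000-0000-000000000000",
    subscription_id="00000000-0000-0000-0000-000000000000",
    client_id="00000000-0000-0000-0000-000000000000",
    secret="<application-key>",
)
```

Keeping the mapping explicit avoids the most common mix-up: entering the Azure Directory ID where ADM expects the Application ID.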
Set up NetScaler ADM Components

Perform the following tasks before autoscaling NetScaler ADC VPX instances in NetScaler ADM:
Provision the NetScaler ADM agent on Azure.
Create a site.
Attach the site to a NetScaler ADM service agent.

Provision NetScaler ADM Agent on Azure

The NetScaler ADM service agent works as an intermediary between NetScaler ADM and the discovered instances in the data center or on the cloud.
1. Navigate to Networks > Agents.
2. Click Provision.
3. Select Microsoft Azure and click Next.
4. In the Cloud Parameters tab, specify the following:
Name – specify the NetScaler ADM agent name.
Site – select the site users have created to provision an agent and ADC VPX instances.
Cloud Access Profile – select the cloud access profile from the list.
Availability Zone – select the zones in which users want to create the Autoscale groups. Depending on the cloud access profile that users have selected, availability zones specific to that profile are populated.
Security Group – security groups control the inbound and outbound traffic in the NetScaler ADM agent. Users create rules for both the incoming and outgoing traffic that they want to control.
Subnet – select the management subnet where users want to provision the agent.
Tags – type the key-value pair for the Autoscale group tags. A tag consists of a case-sensitive key-value pair. These tags enable users to organize and identify the Autoscale groups easily. The tags are applied to both Azure and NetScaler ADM.
5. Click Finish.

Alternatively, users can install the NetScaler ADM agent from the Azure Marketplace. For more information, see: Install NetScaler ADM Agent on Microsoft Azure Cloud.

Create a Site

Create a site in NetScaler ADM and add the VNet details associated with the user's Microsoft Azure resource group.
1. In NetScaler ADM, navigate to Networks > Sites.
2. Click Add.
3. In the Select Cloud pane, select Data Center as the site type and choose Azure from the Type list.
4. Check the Fetch VNet from Azure check box.
This option helps users retrieve the existing VNet information from their Microsoft Azure account.
5. Click Next.
6. In the Choose Region pane, in Cloud Access Profile, select the profile created for the user's Microsoft Azure account. If there are no profiles, create one:
To create a cloud access profile, click Add.
In Name, specify a name to identify the user's Azure account in NetScaler ADM.
In Tenant Active Directory ID / Tenant ID, specify the Active Directory ID of the tenant or the account in Microsoft Azure.
Specify the Subscription ID.
Specify the Application ID / Client ID.
Specify the Application Key Password / Secret.
Click Create.
For more information, see: Install NetScaler ADM Agent on Microsoft Azure Cloud and Mapping Cloud Access Profile to the Azure Application.
7. In VNet, select the virtual network containing the NetScaler ADC VPX instances that users want to manage.
8. Specify a Site Name.
9. Click Finish.

Mapping Cloud Access Profile to the Azure Application

NetScaler ADM Term – Microsoft Azure Term
Tenant Active Directory ID / Tenant ID – Directory ID
Subscription ID – Subscription ID
Application ID / Client ID – Application ID
Application Key Password / Secret – Keys, Certificates, or Client Secrets

Attach the Site to a NetScaler ADM Service Agent

1. In NetScaler ADM, navigate to Networks > Agents.
2. Select the agent to which users want to attach a site.
3. Click Attach Site.
4. Select the site from the list.
5. Click Save.

Step 1: Initialize Autoscale Configuration in NetScaler ADM

1. In NetScaler ADM, navigate to Networks > AutoScale Groups.
2. Click Add to create Autoscale groups. The Create AutoScale Group page appears.
3. Select Microsoft Azure and click Next.
4. In Basic Parameters, enter the following details:
Name: type a name for the Autoscale group.
Site: select the site that users have created to autoscale the NetScaler ADC VPX instances on Microsoft Azure. If users have not created a site, click Add to create a site.
Agent: select the NetScaler ADM agent that manages the provisioned instances.
Cloud Access Profile: select the cloud access profile. Users can also add or edit a cloud access profile.
Device Profile: select the device profile from the list. NetScaler ADM uses the device profile when it needs to log on to the NetScaler ADC VPX instance. Note: Ensure the selected device profile conforms to Microsoft Azure password rules, which can be found here: Password Policies that only Apply to Cloud User Accounts.
Traffic Distribution Mode: the Load Balancing using Azure LB option is selected as the default traffic distribution mode. Users can also choose the DNS using Azure DNS mode for traffic distribution.
Enable AutoScale Group: enable or disable the status of the Autoscale groups. This option is enabled by default; if it is disabled, autoscaling is not triggered.
Availability Set or Availability Zone: select the availability set or availability zones in which users want to create the Autoscale groups. Depending on the cloud access profile that users have selected, availability zones appear in the list.
Tags: type the key-value pair for the Autoscale group tags. A tag consists of a case-sensitive key-value pair. These tags enable users to organize and identify the Autoscale groups easily. The tags are applied to both Microsoft Azure and NetScaler ADM.
5. Click Next.

Step 2: Configure Autoscale Parameters

In the AutoScale Parameters tab, enter the following details. Select one or more of the following threshold parameters whose values must be monitored to trigger a scale-out or a scale-in:
Enable CPU Usage Threshold: monitor the metrics based on CPU usage.
Enable Memory Usage Threshold: monitor the metrics based on memory usage.
Enable Throughput Threshold: monitor the metrics based on throughput.
Note: The default minimum threshold limit is 30 and the default maximum threshold limit is 70. However, users can modify the limits.
The minimum threshold limit must be equal to or less than half of the maximum threshold limit. Users can select more than one threshold parameter for monitoring. Scale-out is triggered if at least one of the threshold parameters is above the maximum threshold. However, scale-in is triggered only if all the threshold parameters are operating below their normal thresholds.

Minimum Instances: select the minimum number of instances that must be provisioned for this Autoscale group. The default minimum number of instances is equal to the number of zones selected, and users can only increment the minimum instances in multiples of the number of zones. For example, if the number of availability zones is 4, the minimum instances are 4 by default, and users can increase the minimum instances to 8, 12, or 16.

Maximum Instances: select the maximum number of instances that must be provisioned for this Autoscale group. The maximum number of instances must be greater than or equal to the value of the minimum instances and cannot exceed the number of availability zones multiplied by 32 (maximum number of instances = number of availability zones x 32).

Watch Time (minutes): select the watch-time duration, that is, the time for which the scale parameter's threshold has to stay breached for scaling to happen. If the threshold is breached on all samples collected in this specified time, scaling happens.

Cooldown Period (minutes): select the cooldown period. During scale-out, the cooldown period is the time for which evaluation of the statistics is stopped after a scale-out occurs. This period ensures the organic growth of instances in an Autoscale group: before triggering the next scaling decision, ADM waits for the current traffic to stabilize and average out on the current set of instances.

Time to Wait During De-provision (minutes): select the drain connection timeout period. During a scale-in action, an instance is identified to de-provision.
NetScaler ADM restricts the identified instance from processing new connections until the specified time expires before de-provisioning. In this period, it allows existing connections to this instance to drain out before it gets de-provisioned.

DNS Time To Live (seconds): select the time (in seconds) for which a packet is set to exist inside a network before a router discards it. This parameter is applicable only when the traffic distribution mode is DNS using the Microsoft Azure traffic manager.

Click Next.

Step 3: Configure Licenses for Provisioning NetScaler ADC Instances

Select one of the following modes to license the NetScaler ADC instances that are part of the Autoscale group:
Using NetScaler ADM: while provisioning NetScaler ADC instances, the Autoscale group checks out the licenses from NetScaler ADM.
Using Microsoft Azure: the Allocate from Cloud option uses the NetScaler product licenses available in the Azure Marketplace. While provisioning NetScaler ADC instances, the Autoscale group uses the licenses from the Marketplace. If users choose to use licenses from the Azure Marketplace, specify the product or license in the Cloud Parameters tab. For more information, see: Licensing Requirements.

Use Licenses from NetScaler ADM

To use this option, ensure that users have subscribed to NetScaler ADC with the Bring your own license software plan in Azure. See: Subscribe to the NetScaler ADC VPX License in Microsoft Azure.
In the License tab, select Allocate from ADM.
In License Type, select one of the following options from the list:
Bandwidth Licenses: users can select one of the following options from the Bandwidth License Types list:
Pooled Capacity: specify the capacity to allocate for every new instance in the Autoscale group. From the common pool, each ADC instance in the Autoscale group checks out one instance license and only as much bandwidth as is specified.
VPX Licenses: when a NetScaler ADC VPX instance is provisioned, the instance checks out the license from NetScaler ADM.
Virtual CPU Licenses: the provisioned NetScaler ADC VPX instance checks out licenses depending on the number of CPUs running in the Autoscale group.
Note: When the provisioned instances are removed or destroyed, the applied licenses return to the NetScaler ADM license pool. These licenses can be reused to provision new instances during the next autoscale.
In License Edition, select the license edition. The Autoscale group uses the specified edition to provision instances.
Click Next.

Step 4: Configure Cloud Parameters

In the Cloud Parameters tab, enter the following details:
Resource Group: select the resource group in which the NetScaler ADC instances are deployed.
Product / License: select the NetScaler ADC product version that users want to provision. Ensure that programmatic access is enabled for the selected type. For more information, see: Subscribe to the NetScaler ADC VPX License in Microsoft Azure.
Azure VM Size: select the required VM size from the list. Note: Ensure that the selected Azure VM size has a minimum of three NICs. For more information, see: Autoscaling of NetScaler ADC VPX in Microsoft Azure using NetScaler ADM.
Cloud Access Profile for ADC: NetScaler ADM logs in to the user's Azure account using this profile to provision or de-provision ADC instances. It also configures Azure LB or Azure DNS.
Image: select the required NetScaler ADC version image. Click Add New to add a NetScaler ADC image.
Security Groups: security groups control the inbound and outbound traffic in a NetScaler ADC VPX instance. Select a security group for management, client, and server traffic. For more information on management, client, and server security groups, see: Create Security Groups.
Subnets: users must have three separate subnets (management, client, and server) to autoscale NetScaler ADC instances.
Subnets contain the required entities for autoscaling. Select the management, client, and server subnets. For more information, see: Create Subnets. Click Finish. Step 5: Configure an application for the Autoscale group In NetScaler ADM, navigate to Networks > Autoscale Groups. Select the Autoscale group that users created and click Configure. In Configure Application, specify the following details: Application Name - Specify the name of an application. Domain Name - Specify the domain name of an application. Zone Name - Specify the zone name of an application. This domain and zone name redirect to the virtual servers in Azure. For example, if users host an application at app.example.com, app is the domain name and example.com is the zone name. Access Type - Users can use ADM autoscaling for both external and internal applications. Select the required application access type. Choose the StyleBook that users want to use to deploy configurations for the selected Autoscale group. If users want to import StyleBooks, click Import New StyleBook. Specify the values for all the parameters. The configuration parameters are pre-defined in the selected StyleBook. Check the Application Server Group Type CLOUD check box to specify the application servers available in the virtual machine scale set. In Application Server Fleet Name, specify the Autoscale setting name of your virtual machine scale set. Select the Application Server Protocol from the list. In Member Port, specify the port value of the application server. Note: Ensure AutoDisable Graceful Shutdown is set to No and the AutoDisable Delay field is blank. If users want to specify the advanced settings for the user application servers, check the Advanced Application Server Settings check box. Then, specify the required values listed under Advanced Application Server Settings. If users have standalone application servers in the virtual network, check the Application Server Group Type STATIC check box: Select the Application Server Protocol from the list. 
In Server IPs and Ports, click + to add an application server IP address, port, and weight, then click Create. Click Create. Modify the Autoscale Groups Configuration Users can modify an Autoscale group configuration or delete an Autoscale group. Users can modify only the following Autoscale group parameters: maximum and minimum limits of the threshold parameters, minimum and maximum instance values, drain connection period value, cooldown period value, and watch duration value. Users can also delete Autoscale groups after they are created. When an Autoscale group is deleted, all the domains and IP addresses are deregistered from DNS and the cluster nodes are de-provisioned. For more detailed information on provisioning NetScaler ADC VPX instances on Microsoft Azure, see Provisioning NetScaler ADC VPX Instances on Microsoft Azure. ARM (Azure Resource Manager) Templates The GitHub repository for NetScaler ADC ARM (Azure Resource Manager) templates hosts NetScaler ADC custom templates for deploying NetScaler ADC in Microsoft Azure Cloud Services. The templates in this repository are developed and maintained by the NetScaler ADC engineering team. Each template in this repository has co-located documentation describing its usage and architecture. The templates attempt to codify the recommended deployment architecture of the NetScaler ADC VPX, to introduce the user to the NetScaler ADC, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require a sufficient Azure subscription (portal.azure.com) to create resources and deploy templates. NetScaler ADC VPX Azure Resource Manager (ARM) templates are designed to ensure an easy and consistent way of deploying standalone NetScaler ADC VPX instances. These templates increase reliability and system availability with built-in redundancy. 
These ARM templates support Bring Your Own License (BYOL) or Hourly-based selections. The choice is either stated in the template description or offered during template deployment. For more information on how to provision a NetScaler ADC VPX instance on Microsoft Azure using ARM (Azure Resource Manager) templates, visit NetScaler ADC Azure Templates. For more information on how to add Azure autoscale settings, visit: Add Azure Autoscale Settings. For more information on how to deploy a NetScaler ADC VPX instance on Microsoft Azure, refer to Deploy a NetScaler ADC VPX Instance on Microsoft Azure. For more information on how a NetScaler ADC VPX instance works on Azure, visit How a NetScaler ADC VPX Instance Works on Azure. Prerequisites Users need some prerequisite knowledge before deploying a NetScaler VPX instance on Azure: Familiarity with Azure terminology and network details. For information, see the Azure terminology in the previous section. Knowledge of a NetScaler ADC appliance. For detailed information about the NetScaler ADC appliance, see: NetScaler ADC 13.0. Knowledge of NetScaler ADC networking. See: Networking. Azure Autoscale Prerequisites This section describes the prerequisites that users must complete in Microsoft Azure and NetScaler ADM before they provision NetScaler ADC VPX instances. This document assumes the following: Users possess a Microsoft Azure account that supports the Azure Resource Manager deployment model. Users have a resource group in Microsoft Azure. For more information on how to create an account and other tasks, see Microsoft Azure Documentation. Limitations Running the NetScaler ADC VPX load balancing solution on ARM imposes the following limitations: The Azure architecture does not support the following NetScaler ADC features: Clustering IPv6 Gratuitous ARP (GARP) L2 Mode (bridging). Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP. 
Tagged VLAN Dynamic Routing Virtual MAC USIP Jumbo Frames If users think that they might have to shut down and temporarily deallocate the NetScaler ADC VPX virtual machine at any time, they should assign a static Internal IP address while creating the virtual machine. If they do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible. In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet. If a NetScaler ADC VPX instance with a model number higher than VPX 3000 is used, the network throughput might not be the same as specified by the instance’s license. However, other features, such as SSL throughput and SSL transactions per second, might improve. The “deployment ID” that is generated by Azure during virtual machine provisioning is not visible to the user in ARM. Users cannot use the deployment ID to deploy a NetScaler ADC VPX appliance on ARM. The NetScaler ADC VPX instance supports 20 Mb/s throughput and standard edition features when it is initialized. For a XenApp and XenDesktop deployment, a VPN virtual server on a VPX instance can be configured in the following modes: Basic mode, where the ICAOnly VPN virtual server parameter is set to ON. The Basic mode works fully on an unlicensed NetScaler ADC VPX instance. SmartAccess mode, where the ICAOnly VPN virtual server parameter is set to OFF. The SmartAccess mode works for only 5 NetScaler ADC AAA session users on an unlicensed NetScaler ADC VPX instance. Note: To configure the SmartControl feature, users must apply a Premium license to the NetScaler ADC VPX instance. Azure-VPX Supported Models and Licensing In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. 
For more information, see the NetScaler ADC VPX Data Sheet. A NetScaler ADC VPX instance on Azure requires a license. The following licensing options are available for NetScaler ADC VPX instances running on Azure. Users can choose one of these methods to license NetScaler ADCs provisioned by NetScaler ADM: Using ADC licenses present in NetScaler ADM: Configure pooled capacity, VPX licenses, or virtual CPU licenses while creating the autoscale group. So, when a new instance is provisioned for an autoscale group, the already configured license type is automatically applied to the provisioned instance. Pooled Capacity: Allocates bandwidth to every provisioned instance in the autoscale group. Ensure users have the necessary bandwidth available in NetScaler ADM to provision new instances. For more information, see: Configure Pooled Capacity. Each ADC instance in the autoscale group checks out one instance license and the specified bandwidth from the pool. VPX licenses: Applies the VPX licenses to newly provisioned instances. Ensure users have the necessary number of VPX licenses available in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the license from the NetScaler ADM. For more information, see: NetScaler ADC VPX Check-in and Check-out Licensing. Virtual CPU licenses: Applies virtual CPU licenses to newly provisioned instances. This license specifies the number of CPUs entitled to a NetScaler ADC VPX instance. Ensure users have the necessary number of Virtual CPUs in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the virtual CPU license from the NetScaler ADM. For more information, see: NetScaler ADC Virtual CPU Licensing. When the provisioned instances are destroyed or de-provisioned, the applied licenses are automatically returned to NetScaler ADM. To monitor the consumed licenses, navigate to the Networks > Licenses page. 
Using Microsoft Azure subscription licenses: Configure NetScaler ADC licenses available in the Azure Marketplace while creating the autoscale group. So, when a new instance is provisioned for the autoscale group, the license is obtained from the Azure Marketplace. Supported NetScaler ADC Azure Virtual Machine Images for Provisioning Use an Azure virtual machine image that supports a minimum of three NICs. Provisioning a NetScaler ADC VPX instance is supported only on the Premium and Advanced editions. For more information on Azure virtual machine image types, see: General Purpose Virtual Machine Sizes. The following are the recommended VM sizes for provisioning: Standard_DS3_v2 Standard_B2ms Standard_DS4_v2 Port Usage Guidelines Users can configure more inbound and outbound rules in the network security group (NSG) while creating the NetScaler ADC VPX instance or after the virtual machine is provisioned. Each inbound and outbound rule is associated with a public port and a private port. Before configuring NSG rules, note the following guidelines regarding the port numbers users can use: The NetScaler ADC VPX instance reserves the following ports. Users cannot define these as private ports when using the Public IP address for requests from the internet. Ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000. However, if users want internet-facing services such as the VIP to use a standard port (for example, port 443), users have to create a port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the NetScaler ADC VPX for this VIP service. For example, a VIP service might be running on port 8443 on the VPX instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443. 
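The port-mapping guideline above can be illustrated with a small sketch. This is not Citrix or Azure code; the helper name and the rule dictionary format are hypothetical, and only the reserved-port list comes from the text:

```python
# Hypothetical sketch of the port-usage guideline: an internet-facing
# VIP service on a standard public port (e.g. 443) must map, via an
# NSG rule, to a private port that the VPX has not reserved.

# Ports the NetScaler ADC VPX instance reserves (from the list above).
RESERVED_PORTS = {21, 22, 80, 443, 8080, 67, 161, 179, 500, 520,
                  3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000}

def make_port_mapping(public_port: int, private_port: int) -> dict:
    """Validate an NSG-style rule mapping a public port to a VPX private port."""
    if private_port in RESERVED_PORTS:
        raise ValueError(f"private port {private_port} is reserved on the VPX")
    return {"public_port": public_port, "private_port": private_port}

# Example from the text: a VIP service on private port 8443 exposed on 443.
rule = make_port_mapping(public_port=443, private_port=8443)
print(rule)  # -> {'public_port': 443, 'private_port': 8443}
```

Trying to use a reserved port such as 443 as the private port raises an error, mirroring the restriction described above.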
The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG. High availability does not work for traffic that uses a public IP address (PIP) associated with a VPX instance, instead of a PIP configured on the Azure load balancer. In a NetScaler Gateway deployment, users need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured. Users must configure the VIP address by using the NSIP address and some nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url: port). Note: In the Azure Resource Manager, a NetScaler ADC VPX instance is associated with two IP addresses - a public IP address (PIP) and an internal IP address. While the external traffic connects to the PIP, the internal IP address or the NSIP is non-routable. To configure a VIP in VPX, use the internal IP address (NSIP) and any of the free ports available. Do not use the PIP to configure a VIP. For example, if the NSIP of a NetScaler ADC VPX instance is 10.1.0.3 and an available free port is 10022, then users can configure a VIP by providing the 10.1.0.3:10022 (NSIP address + port) combination.
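The NSIP-plus-port rule in the example above can be sketched the same way. This is a hypothetical helper, not a NetScaler API; the reserved-port list is the one given earlier in the guidelines:

```python
# Hypothetical sketch: on Azure, a VIP is configured with the internal
# NSIP address and a free (non-reserved) port -- never with the PIP.
import ipaddress

RESERVED_PORTS = {21, 22, 80, 443, 8080, 67, 161, 179, 500, 520,
                  3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000}

def vip_endpoint(nsip: str, port: int) -> str:
    """Return the NSIP:port combination used to configure a VIP."""
    ipaddress.ip_address(nsip)          # must be a valid internal address
    if port in RESERVED_PORTS:
        raise ValueError(f"port {port} is reserved on the VPX instance")
    return f"{nsip}:{port}"

# Example from the text: NSIP 10.1.0.3 with free port 10022.
print(vip_endpoint("10.1.0.3", 10022))  # -> 10.1.0.3:10022
```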
Overview NetScaler is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required. NetScaler VPX The NetScaler VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms: Citrix Hypervisor VMware ESX Microsoft Hyper-V Linux KVM Amazon Web Services Microsoft Azure Google Cloud Platform This deployment guide focuses on NetScaler VPX on Microsoft Azure. Microsoft Azure Microsoft Azure is an ever-expanding set of cloud computing services built to help organizations meet their business challenges. Azure gives users the freedom to build, manage, and deploy applications on a massive, global network using their preferred tools and frameworks. With Azure, users can: Be future-ready with continuous innovation from Microsoft to support their development today and their product visions for tomorrow. 
Operate hybrid cloud seamlessly on-premises, in the cloud, and at the edge—Azure meets users where they are. Build on their terms with Azure’s commitment to open source and support for all languages and frameworks, allowing users to be free to build how they want and deploy where they want. Trust their cloud with security from the ground up—backed by a team of experts and proactive, industry-leading compliance that is trusted by enterprises, governments, and startups. Azure Terminology Here is a brief description of the essential terms used in this document that users must be familiar with: Azure Load Balancer – Azure load balancer is a resource that distributes incoming traffic among computers in a network. Traffic is distributed among virtual machines defined in a load-balancer set. A load balancer can be external or internet-facing, or it can be internal. Azure Resource Manager (ARM) – ARM is the new management framework for services in Azure. Azure Load Balancer is managed using ARM-based APIs and tools. Back-End Address Pool – The back-end address pool is the set of IP addresses associated with the virtual machine NICs to which the load is distributed. BLOB - Binary Large Object – Any binary object like a file or an image that can be stored in Azure storage. Front-End IP Configuration – An Azure Load balancer can include one or more front-end IP addresses, also known as virtual IPs (VIPs). These IP addresses serve as ingress for the traffic. Instance Level Public IP (ILPIP) – An ILPIP is a public IP address that users can assign directly to a virtual machine or role instance, rather than to the cloud service that the virtual machine or role instance resides in. The ILPIP does not take the place of the VIP (virtual IP) that is assigned to their cloud service. Rather, it is an extra IP address that can be used to connect directly to a virtual machine or role instance. Note: In the past, an ILPIP was referred to as a PIP, which stands for public IP. 
Inbound NAT Rules – Rules that map a public port on the load balancer to a port for a specific virtual machine in the back-end address pool. IP-Config – An IP address pair (public IP and private IP) associated with an individual NIC. In an IP-Config, the public IP address can be NULL. Each NIC can have up to 255 IP-Configs associated with it. Load Balancing Rules – A rule property that maps a given front-end IP and port combination to a set of back-end IP addresses and port combinations. With a single definition of a load balancer resource, users can define multiple load balancing rules, each rule reflecting a combination of a front-end IP and port and a back-end IP and port associated with virtual machines. Network Security Group (NSG) – An NSG contains a list of Access Control List (ACL) rules that allow or deny network traffic to virtual machine instances in a virtual network. NSGs can be associated with either subnets or individual virtual machine instances within that subnet. When an NSG is associated with a subnet, the ACL rules apply to all the virtual machine instances in that subnet. In addition, traffic to an individual virtual machine can be restricted further by associating an NSG directly to that virtual machine. Private IP addresses – Used for communication within an Azure virtual network, and a user's on-premises network when a VPN gateway is used to extend a user network to Azure. Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an internet-reachable IP address. In the Azure Resource Manager deployment model, a private IP address is associated with the following types of Azure resources – virtual machines, internal load balancers (ILBs), and application gateways. 
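A few of the terms above (IP-Config, load balancing rules) lend themselves to a small data-model sketch. This is purely illustrative Python, not the Azure SDK; every class and field name here is invented:

```python
# Illustrative (hypothetical) model of the Azure terms above: an
# IP-Config is a (public IP, private IP) pair on a NIC, with up to 255
# IP-Configs per NIC, and a load balancing rule maps a front-end
# IP/port combination to a set of back-end IP/port combinations.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

MAX_IPCONFIGS_PER_NIC = 255

@dataclass
class IPConfig:
    private_ip: str
    public_ip: Optional[str] = None   # the public IP address can be NULL

@dataclass
class NIC:
    name: str
    ip_configs: List[IPConfig] = field(default_factory=list)

    def add_ip_config(self, cfg: IPConfig) -> None:
        if len(self.ip_configs) >= MAX_IPCONFIGS_PER_NIC:
            raise ValueError("a NIC supports at most 255 IP-Configs")
        self.ip_configs.append(cfg)

@dataclass
class LoadBalancingRule:
    frontend: Tuple[str, int]          # front-end IP and port
    backend: List[Tuple[str, int]]     # back-end IP/port combinations

# All addresses below are made-up examples.
nic = NIC("nic0")
nic.add_ip_config(IPConfig(private_ip="10.1.0.4", public_ip=None))
rule = LoadBalancingRule(("52.0.0.1", 443), [("10.1.0.4", 8443)])
print(len(nic.ip_configs), rule.frontend)
```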
Probes – Health probes used to check the availability of virtual machine instances in the back-end address pool. If a particular virtual machine does not respond to health probes for some time, then it is taken out of traffic serving. Probes enable users to track the health of virtual instances. If a health probe fails, the virtual instance is taken out of rotation automatically. Public IP Addresses (PIP) – PIP is used for communication with the Internet, including Azure public-facing services, and is associated with virtual machines, internet-facing load balancers, VPN gateways, and application gateways. Region - An area within a geography that does not cross national borders and that contains one or more data centers. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away, to form a regional pair. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to generally as location. Resource Group - A container in Resource Manager that holds related resources for an application. The resource group can include all of the resources for an application, or only those resources that are logically grouped. Storage Account – An Azure storage account gives users access to the Azure blob, queue, table, and file services in Azure Storage. A user storage account provides the unique namespace for user Azure storage data objects. Virtual Machine – The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in various sizes. Virtual Network - An Azure virtual network is a representation of a user network in the cloud. It is a logical isolation of the Azure cloud dedicated to a user subscription. 
Users can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. Users can also further segment their VNet into subnets and launch Azure IaaS virtual machines and cloud services (PaaS role instances). Also, users can connect the virtual network to their on-premises network using one of the connectivity options available in Azure. In essence, users can expand their network to Azure, with complete control over IP address blocks, with the benefit of the enterprise scale Azure provides. Use Cases Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler deployments. The net result is that NetScaler on Azure enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers. Disaster Recovery (DR) A disaster is a sudden disruption of business functions caused by natural calamities or human-caused events. Disasters affect data center operations, after which resources and the data lost at the disaster site must be fully rebuilt and restored. The loss of data or downtime in the data center is critical and undermines business continuity. One of the challenges that customers face today is deciding where to put their DR site. Businesses are looking for consistency and performance regardless of any underlying infrastructure or network faults. 
Possible reasons many organizations are deciding to migrate to the cloud are: Usage economics — The capital expense of having a data center on-prem is well documented, and by using the cloud, these businesses can free up time and resources from expanding their own systems. Faster recovery times — Much of the automated orchestration enables recovery in mere minutes. Also, there are technologies that help replicate data by providing continuous data protection or continuous snapshots to guard against any outage or attack. Finally, there are use cases where customers need many different types of compliance and security controls, which are already present on the public clouds. These make it easier to achieve the compliance they need rather than building their own. A NetScaler configured for GSLB forwards traffic to the least-loaded or best-performing data center. This configuration, referred to as an active-active setup, not only improves performance, but also provides immediate disaster recovery by routing traffic to other data centers if a data center that is part of the setup goes down. NetScaler thereby saves customers valuable time and money. Deployment Types Multi-NIC Multi-IP Deployment (Three-NIC Deployment) Typical Deployments High Availability (HA) Standalone Use Cases Multi-NIC Multi-IP Deployments are used to achieve real isolation of data and management traffic. Multi-NIC Multi-IP Deployments also improve the scale and performance of the ADC. Multi-NIC Multi-IP Deployments are used in network applications where throughput is typically 1 Gbps or higher, and a Three-NIC Deployment is recommended. Single-NIC Multi-IP Deployment (One-NIC Deployment) Typical Deployments High Availability (HA) Standalone Use Cases Internal Load Balancing A typical use case of the Single-NIC Multi-IP Deployment is intranet applications requiring lower throughput (less than 1 Gbps). 
NetScaler Azure Resource Manager Templates Azure Resource Manager (ARM) Templates provide a method of deploying ADC infrastructure-as-code to Azure simply and consistently. Azure is managed using an Azure Resource Manager (ARM) API. The resources the ARM API manages are objects in Azure such as network cards, virtual machines, and hosted databases. ARM Templates declare the objects users want, along with their types, names, and properties, in a JSON file that the ARM API can understand and that can be checked into source control and managed like any other code file. ARM Templates are what really give users the ability to roll out Azure infrastructure as code. Use Cases Customizing deployment Automating deployment Multi-NIC Multi-IP (Three-NIC) Deployment for DR Customers would potentially deploy using three-NIC deployment if they are deploying into a production environment where security, redundancy, availability, capacity, and scalability are critical. With this deployment method, complexity and ease of management are not critical concerns to the users. Single-NIC Multi-IP (One-NIC) Deployment for DR Customers would potentially deploy using one-NIC deployment if they are deploying into a non-production environment, they are setting up the environment for testing, or they are staging a new environment before production deployment. Another potential reason for using one-NIC deployment is that customers want to deploy direct to the cloud quickly and efficiently. Finally, one-NIC deployment would be used when customers seek the simplicity of a single subnet configuration. Azure Resource Manager Template Deployment Customers would deploy using Azure Resource Manager (ARM) Templates if they are customizing their deployments or they are automating their deployments. 
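The declare-objects-in-JSON idea can be made concrete with a minimal template skeleton. This is a hand-written sketch, not one of the Citrix templates; the parameter name, resource, and apiVersion shown are only examples:

```python
# Minimal, illustrative ARM template skeleton built as a Python dict
# and serialized to JSON -- the same "declare types, names, and
# properties in a JSON file" approach the templates use.
import json

template = {
    "$schema": "https://schema.management.azure.com/schemas/"
               "2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        # Hypothetical parameter with a default VM name.
        "vmName": {"type": "string", "defaultValue": "ns-vpx0"}
    },
    "resources": [
        {
            # Each resource declares its type, name, and properties.
            "type": "Microsoft.Network/networkInterfaces",
            "apiVersion": "2021-05-01",
            "name": "[concat(parameters('vmName'), '-nic')]",
            "location": "[resourceGroup().location]",
            "properties": {"ipConfigurations": []},
        }
    ],
}

# The serialized form is what would be checked into source control.
print(json.dumps(template, indent=2)[:80])
```

Because the template is plain JSON, it can be diffed, reviewed, and versioned like any other code file, which is the point the section makes.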
Network Architecture In ARM, a NetScaler VPX virtual machine (VM) resides in a virtual network. A virtual network interface (NIC) is created on each NetScaler VM. The network security group (NSG) configured in the virtual network is bound to the NIC, and together they control the traffic flowing into the VM and out of the VM. The NSG forwards the requests to the NetScaler VPX instance, and the VPX instance sends them to the servers. The response from a server follows the same path in reverse. The NSG can be configured to control a single VPX VM, or, with subnets and virtual networks, can control traffic in multiple VPX VM deployments. The NIC contains network configuration details such as the virtual network, subnets, internal IP address, and Public IP address. In ARM, the following IP addresses are used to access VMs deployed with a single NIC and a single IP address: Public IP (PIP) address is the internet-facing IP address configured directly on the virtual NIC of the NetScaler VM. This allows users to directly access a VM from the external network. NetScaler IP (NSIP) address is an internal IP address configured on the VM. It is non-routable. Virtual IP address (VIP) is configured by using the NSIP and a port number. Clients access NetScaler services through the PIP address, and when the request reaches the NIC of the NetScaler VPX VM or the Azure load balancer, the VIP gets translated to the internal IP (NSIP) and internal port number. Internal IP address is the private internal IP address of the VM from the virtual network’s address space pool. This IP address cannot be reached from the external network. This IP address is dynamic by default unless users set it to static. Traffic from the internet is routed to this address according to the rules created on the NSG. The NSG works with the NIC to selectively send the right type of traffic to the right port on the NIC, which depends on the services configured on the VM. 
The following figure shows how traffic flows from a client to a server through a NetScaler VPX instance provisioned in ARM. Deployment Steps When users deploy a NetScaler VPX instance on Microsoft Azure Resource Manager (ARM), they can use the Azure cloud computing capabilities and use NetScaler load balancing and traffic management features for their business needs. Users can deploy NetScaler VPX instances on Azure Resource Manager either as standalone instances or as high availability pairs in active-standby mode. Users can deploy a NetScaler VPX instance on Microsoft Azure in either of two ways: Through the Azure Marketplace. The NetScaler VPX virtual appliance is available as an image in the Microsoft Azure Marketplace. Using the NetScaler Azure Resource Manager (ARM) JSON template available on GitHub. For more information, see: NetScaler Azure Templates. How a NetScaler VPX Instance Works on Azure In an on-premises deployment, a NetScaler VPX instance requires at least three IP addresses: Management IP address, called the NSIP address Subnet IP (SNIP) address for communicating with the server farm Virtual server IP (VIP) address for accepting client requests For more information, see: Network Architecture for NetScaler VPX Instances on Microsoft Azure. Note: VPX virtual appliances can be deployed on any instance type that has two or more cores and more than 2 GB of memory. In an Azure deployment, users can provision a NetScaler VPX instance on Azure in three ways: Multi-NIC multi-IP architecture Single-NIC multi-IP architecture ARM (Azure Resource Manager) templates Depending on requirements, users can deploy any of these supported architecture types. Multi-NIC Multi-IP Architecture (Three-NIC) In this deployment type, users can have more than one network interface (NIC) attached to a VPX instance. Any NIC can have one or more IP configurations - static or dynamic public and private IP addresses assigned to it. 
Refer to the following use cases: Configure a High-Availability Setup with Multiple IP Addresses and NICs Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands Configure a High-Availability Setup with Multiple IP Addresses and NICs In a Microsoft Azure deployment, a high-availability configuration of two NetScaler VPX instances is achieved by using the Azure Load Balancer (ALB). This is achieved by configuring a health probe on the ALB, which monitors each VPX instance by sending health probes every 5 seconds to both primary and secondary instances. In this setup, only the primary node responds to health probes; the secondary does not. Once the primary sends the response to the health probe, the ALB starts sending the data traffic to the instance. If the primary instance misses two consecutive health probes, the ALB stops sending traffic to that instance. On failover, the new primary starts responding to health probes and the ALB redirects traffic to it. The standard VPX high availability failover time is three seconds. The total failover time that might occur for traffic switching can be a maximum of 13 seconds. Users can deploy a pair of NetScaler VPX instances with multiple NICs in an active-passive high availability (HA) setup on Azure. Each NIC can contain multiple IP addresses. The following options are available for a multi-NIC high availability deployment: High availability using Azure availability set High availability using Azure availability zones For more information about Azure Availability Set and Availability Zones, see the Azure documentation: Manage the Availability of Linux Virtual Machines. High Availability using Availability Set A high availability setup using availability set must meet the following requirements: An HA Independent Network Configuration (INC) configuration The Azure Load Balancer (ALB) in Direct Server Return (DSR) mode All traffic goes through the primary node. 
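The probe arithmetic described above (probes every 5 seconds, removal after two consecutive misses) can be checked with a toy simulation. This is neither Citrix nor Azure code, only a sketch of the described behavior:

```python
# Toy simulation of the ALB probe behavior described above: probes are
# sent every 5 seconds, only the current primary answers, and two
# consecutive missed probes make the ALB stop sending traffic.
PROBE_INTERVAL_S = 5
MISSES_BEFORE_REMOVAL = 2

def probe_target(responses):
    """Given a sequence of probe responses (True = answered), return the
    1-based probe number at which the ALB removes the node, or None."""
    misses = 0
    for i, answered in enumerate(responses, start=1):
        misses = 0 if answered else misses + 1
        if misses >= MISSES_BEFORE_REMOVAL:
            return i
    return None

# The primary answers twice, then fails: it is removed at the 4th probe,
# i.e. at most 2 * 5 = 10 seconds after the failure. Adding the ~3 s VPX
# failover gives the ~13 s worst-case traffic-switching time in the text.
print(probe_target([True, True, False, False]))  # -> 4
```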
The secondary node remains in standby mode until the primary node fails.

Note: For a NetScaler VPX high availability deployment on Azure cloud to work, users need a floating public IP (PIP) that can be moved between the two VPX nodes. The Azure Load Balancer (ALB) provides that floating PIP, which is moved to the second node automatically in the event of a failover.

In an active-passive deployment, the ALB front-end public IP (PIP) addresses are added as the VIP addresses in each VPX node. In an HA-INC configuration, the VIP addresses are floating and the SNIP addresses are instance specific. Users can deploy a VPX pair in active-passive high availability mode in two ways by using:

NetScaler VPX standard high availability template: use this option to configure an HA pair with the default option of three subnets and six NICs.
Windows PowerShell commands: use this option to configure an HA pair according to your subnet and NIC requirements.

This section describes how to deploy a VPX pair in an active-passive HA setup by using the Citrix template. If users want to deploy with PowerShell commands, see: Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands.

Configure HA-INC Nodes by using the Citrix High Availability Template

Users can quickly and efficiently deploy a pair of VPX instances in HA-INC mode by using the standard template. The template creates two nodes, with three subnets and six NICs. The subnets are for management, client, and server-side traffic, and each subnet has two NICs for the two VPX instances.
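The ALB probe-and-failover behavior described earlier (a 5-second probe interval and a two-probe failure threshold, with a floating PIP handled by a Direct Server Return rule) can be sketched with Azure CLI. The resource group, load balancer, and pool names below are hypothetical, and exact flag names can vary between Azure CLI versions:

```shell
# Hypothetical resource names - the standard template creates equivalents for you
az network lb probe create --resource-group rg-vpx --lb-name alb-vpx \
  --name vpx-probe --protocol Tcp --port 9000 --interval 5 --threshold 2

az network lb rule create --resource-group rg-vpx --lb-name alb-vpx \
  --name vpx-https --protocol Tcp --frontend-port 443 --backend-port 443 \
  --frontend-ip-name fe-pip --backend-pool-name vpx-pool \
  --probe-name vpx-probe --floating-ip true
```

With `--floating-ip true` (DSR mode), the ALB does not rewrite the destination IP, which is what lets the same frontend PIP be configured as the VIP on both VPX nodes.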
Users can get the NetScaler 12.1 HA Pair template at the Azure Marketplace by visiting: Azure Marketplace/NetScaler 12.1 (High Availability).

Complete the following steps to launch the template and deploy a high availability VPX pair by using Azure Availability Sets.

1. From Azure Marketplace, select and initiate the Citrix solution template. The template appears.
2. Ensure the deployment type is Resource Manager and select Create. The Basics page appears.
3. Create a Resource Group and select OK. The General Settings page appears.
4. Type the details and select OK. The Network Setting page appears.
5. Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears.
6. Review the configuration and edit accordingly. Select OK to confirm. The Buy page appears.
7. Select Purchase to complete the deployment.

It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, health probes, and so on. The high availability pair appears as ns-vpx0 and ns-vpx1.

If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal.

Next, users need to configure the load-balancing virtual server with the ALB's Frontend public IP (PIP) address, on the primary node. To find the ALB PIP, select ALB > Frontend IP configuration. See the Resources section for more information about how to configure the load-balancing virtual server.
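Configuring the load-balancing virtual server with the ALB frontend PIP can look like the following NetScaler CLI sketch. The IP addresses and names are hypothetical stand-ins; substitute the actual ALB frontend PIP and back-end server address:

```
# On the primary node; 52.0.0.10 stands in for the ALB frontend PIP
add service svc_web1 10.2.0.10 HTTP 80        # back-end server in the server subnet
add lb vserver vs_alb HTTP 52.0.0.10 80       # VIP bound to the ALB frontend PIP
bind lb vserver vs_alb svc_web1
```

Because the VIP addresses are floating in an HA-INC configuration, this configuration synchronizes to the secondary node and becomes active there after a failover.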
Resources:

The following links provide additional information related to HA deployment and virtual server configuration:

Configuring High Availability Nodes in Different Subnets
Set up Basic Load Balancing

Related resources:

Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands
Configure GSLB on an Active-Standby High-Availability Setup

High Availability using Availability Zones

Azure Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking and increasing resiliency. Only specific Azure regions support Availability Zones. For more information, see the Azure documentation: Regions and Availability Zones in Azure.

Users can deploy a VPX pair in high availability mode by using the template called "NetScaler 13.0 HA using Availability Zones," available in Azure Marketplace.

Complete the following steps to launch the template and deploy a high availability VPX pair by using Azure Availability Zones.

1. From Azure Marketplace, select and initiate the Citrix solution template. Ensure the deployment type is Resource Manager and select Create. The Basics page appears.
2. Enter the details and click OK. Note: Ensure that an Azure region that supports Availability Zones is selected. For more information about regions that support Availability Zones, see the Azure documentation: Regions and Availability Zones in Azure.
3. The General Settings page appears. Type the details and select OK.
4. The Network Setting page appears. Check the VNet and subnet configurations, edit the required settings, and select OK.
5. The Summary page appears. Review the configuration and edit accordingly. Select OK to confirm. The Buy page appears.
Select Purchase to complete the deployment. It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, health probes, and so on. The high availability pair appears as ns-vpx0 and ns-vpx1. Also, users can see the location under the Location column. If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal.

Single-NIC Multi-IP Architecture (One-NIC)

In this deployment type, one network interface (NIC) is associated with multiple IP configurations - static or dynamic public and private IP addresses assigned to it. For more information, refer to the following use cases:

Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance
Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance by using PowerShell Commands

Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance

This section explains how to configure a standalone NetScaler VPX instance with multiple IP addresses, in Azure Resource Manager (ARM). The VPX instance can have one or more NICs attached to it, and each NIC can have one or more static or dynamic public and private IP addresses assigned to it. Users can assign multiple IP addresses as NSIP, VIP, SNIP, and so on. For more information, refer to the Azure documentation: Assign Multiple IP Addresses to Virtual Machines using the Azure Portal. If users want to deploy using PowerShell commands, see: Configure Multiple IP Addresses for a NetScaler VPX Standalone Instance by using PowerShell Commands.

Standalone NetScaler VPX with Single NIC Use Case

In this use case, a standalone NetScaler VPX appliance is configured with a single NIC that is connected to a virtual network (VNET).
The NIC is associated with three IP configurations (ipconfig), each serving a different purpose, as shown in the table:

| IPConfig  | Associated with                                     | Purpose                            |
|-----------|-----------------------------------------------------|------------------------------------|
| ipconfig1 | Static public IP address; static private IP address | Serves management traffic          |
| ipconfig2 | Static public IP address; static private IP address | Serves client-side traffic         |
| ipconfig3 | Static private IP address                           | Communicates with back-end servers |

Note: IPConfig-3 is not associated with any public IP address.

In a multi-NIC, multi-IP Azure NetScaler VPX deployment, the private IP associated with the primary (first) IPConfig of the primary (first) NIC is automatically added as the management NSIP of the appliance. The remaining private IP addresses associated with the IPConfigs need to be added on the VPX instance as a VIP or SNIP by using the add ns ip command, according to user requirements.

Before Starting Deployment

Before users begin deployment, they must create a VPX instance using the steps that follow. For this use case, the NSDoc0330VM VPX instance is created.

Configure Multiple IP Addresses for a NetScaler VPX Instance in Standalone Mode

1. Add IP addresses to the VM
2. Configure NetScaler-owned IP addresses

Step 1: Add IP Addresses to the VM

In the portal, click More services, type virtual machines in the filter box, and then click Virtual machines. In the Virtual machines blade, click the VM you want to add IP addresses to. Click Network interfaces in the virtual machine blade that appears, and then select the network interface. In the blade that appears for the selected NIC, click IP configurations. The existing IP configuration that was assigned when the VM was created, ipconfig1, is displayed. For this use case, make sure the IP addresses associated with ipconfig1 are static. Next, create two more IP configurations: ipconfig2 (VIP) and ipconfig3 (SNIP). To create more IP configurations, click Add.
In the Add IP configuration window, enter a Name, specify the allocation method as Static, enter an IP address (192.0.0.5 for this use case), and enable Public IP address.

Note: Before you add a static private IP address, check for IP address availability and make sure the IP address belongs to the same subnet to which the NIC is attached.

Next, click Configure required settings to create a static public IP address for ipconfig2. By default, public IPs are dynamic. To make sure that the VM always uses the same public IP address, create a static Public IP. In the Create public IP address blade, add a Name, and under Assignment click Static. Then click OK.

Note: Even when users set the allocation method to static, they cannot specify the actual IP address assigned to the public IP resource. Instead, it is allocated from a pool of available IP addresses in the Azure location where the resource is created.

Follow the same steps to add one more IP configuration, ipconfig3. A public IP address is not mandatory for ipconfig3.

Step 2: Configure NetScaler-owned IP Addresses

Configure the NetScaler-owned IP addresses by using the GUI or the add ns ip command. For more information, refer to: Configuring NetScaler-owned IP Addresses. For more information about how to deploy a NetScaler VPX instance on Microsoft Azure, see: Deploy a NetScaler VPX Instance on Microsoft Azure. For more information about how a NetScaler VPX instance works on Azure, see: How a NetScaler VPX Instance Works on Azure.

ARM (Azure Resource Manager) Templates

The GitHub repository for NetScaler ARM (Azure Resource Manager) templates hosts NetScaler custom templates for deploying NetScaler in Microsoft Azure Cloud Services: NetScaler Azure Templates. All templates in this repository were developed and are maintained by the NetScaler engineering team. Each template in this repository has co-located documentation describing the usage and architecture of the template.
The templates attempt to codify the recommended deployment architecture of the NetScaler VPX, to introduce the user to the NetScaler, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require a sufficient Azure subscription (portal.azure.com) to create resources and deploy templates.

NetScaler VPX Azure Resource Manager (ARM) templates are designed to provide an easy and consistent way of deploying standalone NetScaler VPX. These templates increase reliability and system availability with built-in redundancy. The ARM templates support Bring Your Own License (BYOL) or hourly-based selections. The choice is either mentioned in the template description or offered during template deployment. For more information about how to provision a NetScaler VPX instance on Microsoft Azure using ARM (Azure Resource Manager) templates, visit: Citrix Azure ADC Templates.

Prerequisites

Users need some prerequisite knowledge before deploying a NetScaler VPX instance on Azure:

Familiarity with Azure terminology and network details. For information, see the Azure terminology in the preceding section.
Knowledge of a NetScaler appliance. For detailed information about the NetScaler appliance, see: NetScaler 13.0.
Knowledge of NetScaler networking. See the Networking topic here: Networking.

Limitations

Running the NetScaler VPX load balancing solution on ARM imposes the following limitations. The Azure architecture does not accommodate support for the following NetScaler features:

Clustering
IPv6
Gratuitous ARP (GARP)
L2 Mode (bridging). Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP.
Tagged VLAN
Dynamic Routing
Virtual MAC
USIP
Jumbo Frames

If you think you might have to shut down and temporarily deallocate the NetScaler VPX virtual machine at any time, assign a static internal IP address while creating the virtual machine. If you do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible.

In an Azure deployment, only the following NetScaler VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler VPX Data Sheet. If a NetScaler VPX instance with a model number higher than VPX 3000 is used, the network throughput might not be the same as specified by the instance's license. However, other features, such as SSL throughput and SSL transactions per second, might improve.

The "deployment ID" that is generated by Azure during virtual machine provisioning is not visible to the user in ARM. Users cannot use the deployment ID to deploy a NetScaler VPX appliance on ARM.

The NetScaler VPX instance supports 20 Mb/s throughput and Standard edition features when it is initialized.

For a XenApp and XenDesktop deployment, a VPN virtual server on a VPX instance can be configured in the following modes:

Basic mode, where the ICAOnly VPN virtual server parameter is set to ON. The Basic mode works fully on an unlicensed NetScaler VPX instance.
Smart-Access mode, where the ICAOnly VPN virtual server parameter is set to OFF. The Smart-Access mode works for only 5 NetScaler AAA session users on an unlicensed NetScaler VPX instance.

Note: To configure the Smart Control feature, users must apply a Premium license to the NetScaler VPX instance.

Azure-VPX Supported Models and Licensing

In an Azure deployment, only the following NetScaler VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler VPX Data Sheet.
A NetScaler VPX instance on Azure requires a license. The following licensing options are available for NetScaler VPX instances running on Azure.

Subscription-based licensing: NetScaler VPX appliances are available as paid instances on Azure Marketplace. Subscription-based licensing is a pay-as-you-go option; users are charged hourly. The following VPX models and license types are available on Azure Marketplace:

| VPX Model | License Type                |
|-----------|-----------------------------|
| VPX10     | Standard, Advanced, Premium |
| VPX200    | Standard, Advanced, Premium |
| VPX1000   | Standard, Advanced, Premium |
| VPX3000   | Standard, Advanced, Premium |

Bring your own license (BYOL): If you bring your own license, see the VPX Licensing Guide at: CTX122426/NetScaler VPX and CloudBridge VPX Licensing Guide. Users have to:

1. Use the licensing portal within MyCitrix to generate a valid license.
2. Upload the license to the instance.

NetScaler VPX Check-In/Check-Out licensing: For more information, see: NetScaler VPX Check-in and Check-out Licensing.

Starting with NetScaler release 12.0 56.20, VPX Express for on-premises and cloud deployments does not require a license file. For more information on NetScaler VPX Express, see the "NetScaler VPX Express license" section in Licensing Overview.

Note: Regardless of the subscription-based hourly license bought from Azure Marketplace, in rare cases, the NetScaler VPX instance deployed on Azure might come up with a default NetScaler license. This happens due to issues with the Azure Instance Metadata Service (IMDS). Do a warm restart before making any configuration change on the NetScaler VPX instance, to enable the correct NetScaler VPX license.

Port Usage Guidelines

Users can configure more inbound and outbound rules in the NSG while creating the NetScaler VPX instance or after the virtual machine is provisioned. Each inbound and outbound rule is associated with a public port and a private port.
Before you configure NSG rules, note the following guidelines regarding the port numbers you can use:

The NetScaler VPX instance reserves the following ports. Users cannot define these as private ports when using the Public IP address for requests from the internet: 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000. However, if users want internet-facing services such as the VIP to use a standard port (for example, port 443), they have to create a port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the NetScaler VPX for this VIP service. For example, a VIP service might be running on port 8443 on the VPX instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443.

The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG.

High availability does not work for traffic that uses a public IP address (PIP) associated with a VPX instance, instead of a PIP configured on the Azure load balancer. For more information, see: Configure a High-Availability Setup with a Single IP Address and a Single NIC.

In a NetScaler Gateway deployment, users need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured. Users must configure the VIP address by using the NSIP address and some nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url:port).

Note: In Azure Resource Manager, a NetScaler VPX instance is associated with two IP addresses - a public IP address (PIP) and an internal IP address. While the external traffic connects to the PIP, the internal IP address or the NSIP is non-routable. To configure a VIP in VPX, use the internal IP address (NSIP) and any of the free ports available.
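The NSIP-plus-nonstandard-port pattern described above looks like the following in NetScaler CLI. The virtual server name is hypothetical; the address and port match the example in this guide:

```
# VIP on the NSIP (10.1.0.3) using free port 10022, as in the example below
add lb vserver vs_internal HTTP 10.1.0.3 10022
```

An NSG or load-balancer mapping can then expose a standard public port (such as 443) that forwards to this internal port.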
Do not use the PIP to configure a VIP. For example, if the NSIP of a NetScaler VPX instance is 10.1.0.3 and an available free port is 10022, then users can configure a VIP by providing the 10.1.0.3:10022 (NSIP address + port) combination.
Overview NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required. NetScaler ADC VPX The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms. This deployment guide focuses on NetScaler ADC VPX on Amazon Web Services. Amazon Web Services Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS services offer tools such as compute power, database storage, and content delivery services.
AWS offers the following essential services:

AWS Compute Services
Migration Services
Storage
Database Services
Management Tools
Security Services
Analytics
Networking
Messaging
Developer Tools
Mobile Services

AWS Terminology

Here is a brief description of key terms used in this document that users must be familiar with:

Elastic Network Interface (ENI) – A virtual network interface that users can attach to an instance in a Virtual Private Cloud (VPC).

Elastic IP (EIP) address – A static, public IPv4 address that users have allocated in Amazon EC2 or Amazon VPC and then attached to an instance. Elastic IP addresses are associated with user accounts, not a specific instance. They are elastic because users can easily allocate, attach, detach, and free them as their needs change.

Subnet – A segment of the IP address range of a VPC to which EC2 instances can be attached. Users can create subnets to group instances according to security and operational needs.

Virtual Private Cloud (VPC) – A web service for provisioning a logically isolated section of the AWS cloud where users can launch AWS resources in a virtual network that they define.

Here is a brief description of other terms used in this document that users should be familiar with:

Amazon Machine Image (AMI) – A machine image, which provides the information required to launch an instance, which is a virtual server in the cloud.

Elastic Block Store (EBS) – Provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud.

Simple Storage Service (S3) – Storage for the Internet. It is designed to make web-scale computing easier for developers.

Elastic Compute Cloud (EC2) – A web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Elastic Kubernetes Service (EKS) – A managed service that makes it easy for users to run Kubernetes on AWS without needing to stand up or maintain their own Kubernetes control plane. Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability.

Application Load Balancing (ALB) – Amazon ALB operates at layer 7 of the OSI stack, so it is employed when users want to route or select traffic based on elements of the HTTP or HTTPS connection, whether host-based or path-based. The ALB connection is context-aware and can direct requests based on any single variable. Applications are load balanced based on their particular behavior, not solely on server (operating system or virtualization layer) information.

Elastic Load Balancing (ALB/ELB/NLB) – Amazon ELB distributes incoming application traffic across multiple EC2 instances, in multiple Availability Zones. This increases the fault tolerance of user applications.

Network Load Balancing (NLB) – Amazon NLB operates at layer 4 of the OSI stack and below and is not designed to consider anything at the application layer, such as content type, cookie data, custom headers, user location, or application behavior. It is context-less, caring only about the network-layer information contained within the packets it is directing. It distributes traffic based on network variables such as IP addresses and destination ports.

Instance type – Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give users the flexibility to choose the appropriate mix of resources for their applications.
Identity and Access Management (IAM) – An AWS identity with permission policies that determine what the identity can and cannot do in AWS. Users can use an IAM role to enable applications running on an EC2 instance to securely access their AWS resources. An IAM role is required for deploying VPX instances in a high-availability setup.

Internet Gateway – Connects a network to the Internet. Users can route traffic for IP addresses outside their VPC to the Internet gateway.

Key pair – A set of security credentials with which users prove their identity electronically. A key pair consists of a private key and a public key.

Route table – A set of routing rules that controls the traffic leaving any subnet that is associated with the route table. Users can associate multiple subnets with a single route table, but a subnet can be associated with only one route table at a time.

Auto Scaling Groups – A web service to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks.

CloudFormation – A service for writing or changing templates that create and delete related AWS resources together as a unit.

Web Application Firewall (WAF) – A security solution protecting the web application layer in the OSI network model. A WAF does not depend on the application it is protecting. This document focuses on the exposition and evaluation of the security methods and functions provided specifically by NetScaler WAF.

Bot – An autonomous device, program, or piece of software on a network (especially the internet) that can interact with computer systems or users to run commands, reply to messages, or perform routine tasks. Some bots can be good, while others can have a huge negative impact on a website or application.
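A few of the terms above (VPC, subnet, EIP) map directly onto AWS CLI calls. A hedged sketch with hypothetical CIDR blocks and a placeholder VPC ID (the CloudFormation template used later creates these resources for you):

```shell
# Hypothetical CIDRs; vpc-0abc123 is a placeholder for the ID returned by create-vpc
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24
aws ec2 allocate-address --domain vpc
```

In practice the Quick Start template provisions the VPC, subnets, and addresses as a unit, so these calls are illustrative rather than a required manual step.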
Sample NetScaler WAF on AWS Architecture

The preceding image shows a virtual private cloud (VPC) with default parameters that builds a NetScaler WAF environment in the AWS Cloud. This architecture assumes the use of an AWS CloudFormation Template and an AWS Quick Start Guide, which can be found here: GitHub/AWS-Quickstart/Quickstart-NetScaler-ADC-VPX. In a production deployment, the following parameters are set up for the NetScaler WAF environment:

A VPC that spans two Availability Zones, configured with two public and four private subnets, according to AWS best practices, to provide users with their own virtual network on AWS with a /16 Classless Inter-Domain Routing (CIDR) block (a network with 65,536 private IP addresses).

* Two instances of NetScaler WAF (primary and secondary), one in each Availability Zone.

Three security groups, one for each network interface (management, client, server), that act as virtual firewalls to control the traffic for their associated instances.

Three subnets for each instance - one for management, one for client, and one for the back-end server.

An internet gateway attached to the VPC, and a public subnets route table associated with the public subnets to allow access to the internet. This gateway is used by the WAF host to send and receive traffic. For more information on internet gateways, see: Internet Gateways.

* Five route tables - one public route table associated with the client subnets of both the primary and secondary WAF. The remaining four route tables link to each of the four private subnets (the management and server-side subnets of the primary and secondary WAF).

* AWS Lambda in WAF takes care of the following: configuring the two WAF instances in HA mode, one in each availability zone, and creating a sample WAF profile and pushing this configuration to the WAF.

AWS Identity and Access Management (IAM) to securely control access to AWS services and resources for users.
By default, the CloudFormation Template (CFT) creates the required IAM role. However, users can provide their own IAM role for NetScaler ADC instances.

In the public subnets, two managed Network Address Translation (NAT) gateways to allow outbound internet access for resources in the private subnets.

Note: The CFT WAF template that deploys the NetScaler WAF into an existing VPC skips the components marked by asterisks and prompts users for their existing VPC configuration. Backend servers are not deployed by the CFT.

Logical Flow of NetScaler WAF on AWS

The Web Application Firewall can be installed as either a Layer 3 network device or a Layer 2 network bridge between customer servers and customer users, usually behind the customer company's router or firewall. It must be installed in a location where it can intercept traffic between the web servers that users want to protect and the hub or switch through which users access those web servers. Users then configure the network to send requests to the Web Application Firewall instead of directly to their web servers, and responses to the Web Application Firewall instead of directly to their users. The Web Application Firewall filters that traffic before forwarding it to its final destination, using both its internal rule set and the user's additions and modifications. It blocks or renders harmless any activity that it detects as harmful, and then forwards the remaining traffic to the web server. The preceding image provides an overview of the filtering process.

Note: The diagram omits the application of a policy to incoming traffic. It illustrates a security configuration in which the policy is to process all requests. Also, in this configuration, a signatures object has been configured and associated with the profile, and security checks have been configured in the profile.
As the diagram shows, when a user requests a URL on a protected website, the Web Application Firewall first examines the request to ensure that it does not match a signature. If the request matches a signature, the Web Application Firewall either displays the error object (a webpage that is located on the Web Application Firewall appliance and which users can configure by using the imports feature) or forwards the request to the designated error URL (the error page).

If a request passes signature inspection, the Web Application Firewall applies the request security checks that have been enabled. The request security checks verify that the request is appropriate for the user website or web service and does not contain material that might pose a threat. For example, security checks examine the request for signs indicating that it might be of an unexpected type, request unexpected content, or contain unexpected and possibly malicious web form data, SQL commands, or scripts. If the request fails a security check, the Web Application Firewall either sanitizes the request and then sends it back to the NetScaler ADC appliance (or NetScaler ADC virtual appliance), or displays the error object. If the request passes the security checks, it is sent back to the NetScaler ADC appliance, which completes any other processing and forwards the request to the protected web server.

When the website or web service sends a response to the user, the Web Application Firewall applies the response security checks that have been enabled. The response security checks examine the response for leaks of sensitive private information, signs of website defacement, or other content that should not be present. If the response fails a security check, the Web Application Firewall either removes the content that should not be present or blocks the response. If the response passes the security checks, it is sent back to the NetScaler ADC appliance, which forwards it to the user.
Cost and Licensing

Users are responsible for the cost of the AWS services used while running AWS deployments. The AWS CloudFormation templates that can be used for this deployment include configuration parameters that users can customize as necessary. Some of those settings, such as instance type, affect the cost of deployment. For cost estimates, users should refer to the pricing pages for each AWS service they are using. Prices are subject to change.

A NetScaler ADC WAF on AWS requires a license. To license NetScaler WAF, users must place the license key in an S3 bucket and specify its location when they launch the deployment.

Note: When users select the Bring Your Own License (BYOL) licensing model, they should ensure that the AppFlow feature is enabled. For more information on BYOL licensing, see: AWS Marketplace/NetScaler ADC VPX - Customer Licensed.

The following licensing options are available for NetScaler ADC WAF running on AWS. Users can choose an AMI (Amazon Machine Image) based on a single factor such as throughput.

License model: Pay as You Go (PAYG, for the production licenses) or Bring Your Own License (BYOL, for the Customer Licensed AMI - NetScaler ADC Pooled Capacity). For more information on NetScaler ADC Pooled Capacity, see: NetScaler ADC Pooled Capacity. For BYOL, there are 3 licensing modes:

- Configure NetScaler ADC Pooled Capacity: Configure NetScaler ADC Pooled Capacity
- NetScaler ADC VPX Check-in and Check-out Licensing (CICO): NetScaler ADC VPX Check-in and Check-out Licensing

Tip: If users select CICO licensing with the VPX-200, VPX-1000, VPX-3000, VPX-5000, or VPX-8000 application platform type, they should ensure that they have the same throughput license present in their ADM licensing server.
- NetScaler ADC virtual CPU Licensing: NetScaler ADC virtual CPU Licensing

Note: If users want to dynamically modify the bandwidth of a VPX instance, they should select a BYOL option, for example NetScaler ADC Pooled Capacity, where they can allocate the licenses from NetScaler ADM, or they can check out the licenses from NetScaler ADC instances according to the minimum and maximum capacity of the instance on demand and without a restart. A restart is required only if users want to change the license edition.

Throughput: 200 Mbps or 1 Gbps

Bundle: Premium

Deployment Options

This deployment guide provides two deployment options. The first option is to deploy using a Quick Start Guide format with the following choices:

- Deploy NetScaler WAF into a new VPC (end-to-end deployment). This option builds a new AWS environment consisting of the VPC, subnets, security groups, and other infrastructure components, and then deploys NetScaler WAF into this new VPC.
- Deploy NetScaler WAF into an existing VPC. This option provisions NetScaler WAF in the user's existing AWS infrastructure.

The second option is to deploy using WAF StyleBooks with NetScaler ADM.

Deployment Steps using a Quick Start Guide

Step 1: Sign in to the User AWS Account

Sign in to the user AWS account with an IAM (Identity and Access Management) user role that has the necessary permissions, creating an Amazon account first if necessary. Use the region selector in the navigation bar to choose the AWS Region where users want to deploy High Availability across AWS Availability Zones. Ensure that the user AWS account is configured correctly; refer to the Technical Requirements section of this document for more information.

Step 2: Subscribe to the NetScaler WAF AMI

This deployment requires a subscription to the AMI for NetScaler WAF in the AWS Marketplace. Sign in to the user AWS account. Open the page for the NetScaler WAF offering by choosing one of the links in the following table.
When users launch the Quick Start Guide to deploy NetScaler WAF in Step 3 below, they use the NetScaler WAF Image parameter to select the bundle and throughput option that matches their AMI subscription. The following list shows the AMI options and corresponding parameter settings. The VPX AMI instance requires a minimum of 2 virtual CPUs and 2 GB of memory.

Note: To retrieve the AMI ID, refer to the NetScaler Products on AWS Marketplace page on GitHub: NetScaler Products on AWS Marketplace.

AWS Marketplace AMI:

- NetScaler Web Application Firewall (WAF) - 200 Mbps: NetScaler Web App Firewall (WAF) - 200 Mbps
- NetScaler Web Application Firewall (WAF) - 1000 Mbps: NetScaler Web App Firewall (WAF) - 1000 Mbps

On the AMI page, choose Continue to Subscribe. Review the terms and conditions for software usage, and then choose Accept Terms.

Note: Users receive a confirmation page, and an email confirmation is sent to the account owner. For detailed subscription instructions, see Getting Started in the AWS Marketplace Documentation: Getting Started.

When the subscription process is complete, exit AWS Marketplace without further action. Do not provision the software from AWS Marketplace; users will deploy the AMI with the Quick Start Guide.

Step 3: Launch the Quick Start Guide to Deploy the AMI

Sign in to the user AWS account, and choose one of the following options to launch the AWS CloudFormation template. For help with choosing an option, see Deployment Options earlier in this guide.
- Deploy NetScaler VPX into a new VPC on AWS using one of the AWS CloudFormation Templates located here: Citrix/Citrix-ADC-AWS-CloudFormation/Templates/High-Availability/Across-Availability-Zone or Citrix/Citrix-ADC-AWS-CloudFormation/Templates/High-Availability/Same-Availability-Zone
- Deploy NetScaler WAF into a new or existing VPC on AWS using the AWS Quickstart template located here: AWS-Quickstart/Quickstart-Citrix-ADC-WAF

Important: If users are deploying NetScaler WAF into an existing VPC, they must ensure that their VPC spans two Availability Zones, with one public and two private subnets in each Availability Zone for the workload instances, and that the subnets are not shared. This deployment guide does not support shared subnets; see Working with Shared VPCs: Working with Shared VPCs. These subnets require NAT gateways in their route tables to allow the instances to download packages and software without exposing them to the internet. For more information about NAT gateways, see: NAT Gateways. Configure the subnets so that they do not overlap. Also, users should ensure that the domain name option in the DHCP options is configured as explained in the Amazon VPC documentation found here: DHCP Options Sets. Users are prompted for their VPC settings when they launch the Quick Start Guide.

Each deployment takes about 15 minutes to complete.

Check the AWS Region that is displayed in the upper-right corner of the navigation bar, and change it if necessary. This is where the network infrastructure for NetScaler WAF will be built. The template is launched in the US East (Ohio) Region by default.

Note: This deployment includes NetScaler WAF, which isn't currently supported in all AWS Regions. For a current list of supported Regions, see the AWS Service Endpoints: AWS Service Endpoints.

On the Select Template page, keep the default setting for the template URL, and then choose Next.
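For users who script their deployments, the console launch can be sketched with the AWS CLI as well. Everything below is illustrative: the stack name, template URL, AMI ID, key pair, and Availability Zones are placeholder values to be replaced with the user's own subscription and VPC plan (the parameter names come from the tables in the next section). The command is echoed rather than executed so it can be reviewed first.

```shell
#!/bin/sh
# Placeholder values -- substitute your own before running.
STACK_NAME="netscaler-waf-demo"
TEMPLATE_URL="https://example-bucket.s3.amazonaws.com/quickstart-citrix-adc-waf/templates/master.template.yaml"
AMI_ID="ami-0123456789abcdef0"   # the AMI subscribed to in Step 2
KEY_PAIR="my-keypair"

# Echoed for review; remove the leading 'echo' to actually create the stack.
echo aws cloudformation create-stack \
  --stack-name "$STACK_NAME" \
  --template-url "$TEMPLATE_URL" \
  --capabilities CAPABILITY_IAM CAPABILITY_AUTO_EXPAND \
  --parameters \
      ParameterKey=PrimaryAvailabilityZone,ParameterValue=us-east-2a \
      ParameterKey=SecondaryAvailabilityZone,ParameterValue=us-east-2b \
      ParameterKey=KeyPairName,ParameterValue="$KEY_PAIR" \
      ParameterKey=CitrixADCImageID,ParameterValue="$AMI_ID"
```

The two `--capabilities` flags correspond to the two acknowledgment check boxes on the Review page described later in this section.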
On the Specify Details page, specify the stack name as per user convenience. Review the parameters for the template. Provide values for the parameters that require input. For all other parameters, review the default settings and customize them as necessary. In the following tables, parameters are listed by category for the deployment option: Parameters for deploying NetScaler WAF into a new or existing VPC (Deployment Option 1). When users finish reviewing and customizing the parameters, they should choose Next.

Parameters for Deploying NetScaler WAF into a new VPC

VPC Network Configuration

For reference information on this deployment, refer to the CFT template here: AWS-Quickstart/Quickstart-Citrix-ADC-WAF/Templates.

|Parameter label (name)|Default|Description|
|---|---|---|
|Primary Availability Zone (PrimaryAvailabilityZone)|Requires input|The Availability Zone for the Primary NetScaler WAF deployment.|
|Secondary Availability Zone (SecondaryAvailabilityZone)|Requires input|The Availability Zone for the Secondary NetScaler WAF deployment.|
|VPC CIDR (VPCCIDR)|10.0.0.0/16|The CIDR block for the VPC. Must be a valid IP CIDR range of the form x.x.x.x/x.|
|Remote SSH CIDR IP (Management) (RestrictedSSHCIDR)|Requires input|The IP address range that can SSH to the EC2 instance (port: 22). For example, using 0.0.0.0/0 enables all IP addresses to access the user instance using SSH or RDP. Note: Authorize only a specific IP address or range of addresses, because allowing all addresses is unsafe in production.|
|Remote HTTP CIDR IP (Client) (RestrictedWebAppCIDR)|0.0.0.0/0|The IP address range that can HTTP to the EC2 instance (port: 80).|
|Primary Management Private Subnet CIDR (PrimaryManagementPrivateSubnetCIDR)|10.0.1.0/24|The CIDR block for the Primary Management Subnet located in Availability Zone 1.|
|Primary Management Private IP (PrimaryManagementPrivateIP)|—|Private IP assigned to the Primary Management ENI (the last octet must be between 5 and 254), from the Primary Management Subnet CIDR.|
|Primary Client Public Subnet CIDR (PrimaryClientPublicSubnetCIDR)|10.0.2.0/24|The CIDR block for the Primary Client Subnet located in Availability Zone 1.|
|Primary Client Private IP (PrimaryClientPrivateIP)|—|Private IP assigned to the Primary Client ENI (the last octet must be between 5 and 254), from the Primary Client Subnet CIDR.|
|Primary Server Private Subnet CIDR (PrimaryServerPrivateSubnetCIDR)|10.0.3.0/24|The CIDR block for the Primary Server Subnet located in Availability Zone 1.|
|Primary Server Private IP (PrimaryServerPrivateIP)|—|Private IP assigned to the Primary Server ENI (the last octet must be between 5 and 254), from the Primary Server Subnet CIDR.|
|Secondary Management Private Subnet CIDR (SecondaryManagementPrivateSubnetCIDR)|10.0.4.0/24|The CIDR block for the Secondary Management Subnet located in Availability Zone 2.|
|Secondary Management Private IP (SecondaryManagementPrivateIP)|—|Private IP assigned to the Secondary Management ENI (the last octet must be between 5 and 254). It allocates the Secondary Management IP from the Secondary Management Subnet CIDR.|
|Secondary Client Public Subnet CIDR (SecondaryClientPublicSubnetCIDR)|10.0.5.0/24|The CIDR block for the Secondary Client Subnet located in Availability Zone 2.|
|Secondary Client Private IP (SecondaryClientPrivateIP)|—|Private IP assigned to the Secondary Client ENI (the last octet must be between 5 and 254). It allocates the Secondary Client IP from the Secondary Client Subnet CIDR.|
|Secondary Server Private Subnet CIDR (SecondaryServerPrivateSubnetCIDR)|10.0.6.0/24|The CIDR block for the Secondary Server Subnet located in Availability Zone 2.|
|Secondary Server Private IP (SecondaryServerPrivateIP)|—|Private IP assigned to the Secondary Server ENI (the last octet must be between 5 and 254). It allocates the Secondary Server IP from the Secondary Server Subnet CIDR.|
|VPC Tenancy attribute (VPCTenancy)|default|The allowed tenancy of instances launched into the VPC. Choose Dedicated tenancy to launch EC2 instances dedicated to a single customer.|

Bastion Host Configuration

|Parameter label (name)|Default|Description|
|---|---|---|
|Bastion Host required (LinuxBastionHostEIP)|No|By default, no bastion host is configured. Users who want a sandbox deployment can select Yes from the menu, which deploys a Linux bastion host in the public subnet with an EIP that gives users access to the components in the private and public subnets.|

NetScaler WAF Configuration

|Parameter label (name)|Default|Description|
|---|---|---|
|Key pair name (KeyPairName)|Requires input|A public/private key pair, which allows users to connect securely to the user instance after it launches. This is the key pair users created in their preferred AWS Region; see the Technical Requirements section.|
|NetScaler ADC Instance Type (CitrixADCInstanceType)|m4.xlarge|The EC2 instance type to use for the ADC instances. Ensure that the chosen instance type aligns with the instance types available in the AWS Marketplace, or else the CFT might fail.|
|NetScaler ADC AMI ID (CitrixADCImageID)|—|The AWS Marketplace AMI to be used for the NetScaler WAF deployment. This must match the AMI users subscribed to in Step 2.|
|NetScaler ADC VPX IAM role (iam:GetRole)|—|This template: AWS-Quickstart/Quickstart-Citrix-ADC-VPX/Templates creates the IAM role and the instance profile required for NetScaler ADC VPX. If left empty, the CFT creates the required IAM role.|
|Client Public IP (EIP) (ClientPublicEIP)|No|Select Yes to assign a public EIP to the user Client network interface. Otherwise, users still have the option of assigning one later after the deployment if necessary.|
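As a quick recap of the defaults above, the six private-IP subnets carve consecutive /24 blocks out of the 10.0.0.0/16 VPC CIDR. The short script below simply prints that plan; it is a local summary of the table defaults, not an AWS call, and runs anywhere a POSIX shell is available.

```shell
#!/bin/sh
# Prints the template's default subnet layout (from the tables above).
VPC_CIDR="10.0.0.0/16"
echo "VPC CIDR: $VPC_CIDR"
i=1
for name in "Primary Management" "Primary Client" "Primary Server" \
            "Secondary Management" "Secondary Client" "Secondary Server"; do
  echo "$name subnet: 10.0.$i.0/24"
  i=$((i + 1))
done
```

If the VPC CIDR is customized at launch, the six subnet CIDR parameters must be adjusted to fall inside the new range.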
Pooled Licensing Configuration

|Parameter label (name)|Default|Description|
|---|---|---|
|ADM Pooled Licensing|No|If choosing the BYOL option for licensing, select Yes from the list. This allows users to upload their already purchased licenses. Before users begin, they should Configure NetScaler ADC Pooled Capacity to ensure ADM pooled licensing is available; see: Configure NetScaler ADC Pooled Capacity.|
|Reachable ADM / ADM Agent IP|Requires input|For the Customer Licensed option, whether users deploy NetScaler ADM on-premises or an agent in the cloud, make sure to have a reachable ADM IP, which is then used as an input parameter.|
|Licensing Mode|Optional|Users can choose from the 3 licensing modes: Configure NetScaler ADC Pooled Capacity: Configure NetScaler ADC Pooled Capacity; NetScaler ADC VPX Check-in and Check-out Licensing (CICO): NetScaler ADC VPX Check-in and Check-out Licensing; NetScaler ADC virtual CPU Licensing: NetScaler ADC virtual CPU Licensing|
|License Bandwidth in Mbps|0 Mbps|Applies only if the licensing mode is Pooled Licensing. It allocates an initial license bandwidth in Mbps after the BYOL ADCs are created. It must be a multiple of 10 Mbps.|
|License Edition|Premium|The license edition for the Pooled Capacity licensing mode is Premium.|
|Appliance Platform Type|Optional|Choose the required appliance platform type, only if users opt for the CICO licensing mode. The options listed are: VPX-200, VPX-1000, VPX-3000, VPX-5000, VPX-8000.|
|License Edition|Premium|The license edition for vCPU-based licensing is Premium.|

AWS Quick Start Guide Configuration

Note: We recommend that users keep the default settings for the following two parameters, unless they are customizing the Quick Start Guide templates for their own deployment projects. Changing the settings of these parameters will automatically update code references to point to a new Quick Start Guide location.
For more details, see the AWS Quick Start Guide Contributor's Guide located here: AWS Quick Starts/Option 1 - Adopt a Quick Start.

|Parameter label (name)|Default|Description|
|---|---|---|
|Quick Start Guide S3 bucket name (QSS3BucketName)|aws-quickstart|The S3 bucket users created for their copy of the Quick Start Guide assets, if users decide to customize or extend the Quick Start Guide for their own use. The bucket name can include numbers, lowercase letters, uppercase letters, and hyphens, but should not start or end with a hyphen.|
|Quick Start Guide S3 key prefix (QSS3KeyPrefix)|quickstart-citrix-adc-vpx/|The S3 key name prefix, from the Object Key and Metadata: Object Key and Metadata, used to simulate a folder for the user copy of the Quick Start Guide assets, if users decide to customize or extend the Quick Start Guide for their own use. This prefix can include numbers, lowercase letters, uppercase letters, hyphens, and forward slashes.|

On the Options page, users can specify a resource tag (key-value pair) for resources in their stack and set advanced options. For more information on resource tags, see: Resource Tag. For more information on setting AWS CloudFormation stack options, see: Setting AWS CloudFormation Stack Options. When users are done, they should choose Next.

On the Review page, review and confirm the template settings. Under Capabilities, select the two check boxes to acknowledge that the template creates IAM resources and that it might require the capability to auto-expand macros. Choose Create to deploy the stack.

Monitor the status of the stack. When the status is CREATE_COMPLETE, the NetScaler WAF instance is ready. Use the URLs displayed in the Outputs tab for the stack to view the resources that were created.

Step 4: Test the Deployment

We refer to the instances in this deployment as primary and secondary. Each instance has different IP addresses associated with it.
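The stack monitoring and Outputs retrieval described above can also be scripted. The sketch below waits for CREATE_COMPLETE and then prints the stack outputs (such as the bastion host EIP used for SSH access). The stack name is a placeholder, and the commands are echoed for review; remove the leading echo to run them against a real account.

```shell
#!/bin/sh
# Placeholder stack name -- must match the name used at launch.
STACK_NAME="netscaler-waf-demo"

# Echoed for review; remove 'echo' to run against your account.
echo aws cloudformation wait stack-create-complete --stack-name "$STACK_NAME"
echo aws cloudformation describe-stacks --stack-name "$STACK_NAME" \
  --query "Stacks[0].Outputs" --output table
```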
When the Quick Start has been deployed successfully, traffic goes through the primary NetScaler WAF instance configured in Availability Zone 1. During failover conditions, when the primary instance does not respond to client requests, the secondary WAF instance takes over. The Elastic IP address of the virtual IP address of the primary instance migrates to the secondary instance, which takes over as the new primary instance.

In the failover process, NetScaler WAF does the following:

1. NetScaler WAF checks the virtual servers that have IP sets attached to them.
2. NetScaler WAF finds the IP address that has an associated public IP address from the two IP addresses that the virtual server is listening on: one that is directly attached to the virtual server, and one that is attached through the IP set.
3. NetScaler WAF reassociates the public Elastic IP address to the private IP address that belongs to the new primary virtual IP address.

To validate the deployment, perform the following:

1. Connect to the primary instance, for example with a proxy server, a jump host (a Linux/Windows/FW instance running in AWS, or the bastion host), or another device reachable from that VPC, or over Direct Connect if dealing with on-premises connectivity.
2. Perform a trigger action to force failover and check whether the secondary instance takes over.

Tip: To further validate the configuration with respect to NetScaler WAF, run the following command after connecting to the primary NetScaler WAF instance: sh appfw profile QS-Profile

Connect to the NetScaler WAF HA Pair using the Bastion Host

If users opted for the sandbox deployment (for example, as part of the CFT, users opted to configure a bastion host), a Linux bastion host deployed in a public subnet is configured to access the WAF interfaces.

In the AWS CloudFormation console, which is accessed by signing in here: Sign in, choose the master stack, and on the Outputs tab, find the value of LinuxBastionHostEIP1.
Note the values of the PrimaryManagementPrivateNSIP and PrimaryADCInstanceID keys, to be used in the later steps to SSH into the ADC.

1. Choose Services. On the Compute tab, select EC2. Under Resources, choose Running Instances.
2. On the Description tab of the primary WAF instance, note the IPv4 public IP address. Users need that IP address to construct the SSH command.
3. To store the key in the user keychain, run the command ssh-add -K [your-key-pair].pem. On Linux, users might need to omit the -K flag.
4. Log in to the bastion host using the following command, with the value for LinuxBastionHostEIP1 that users noted in step 1: ssh -A ubuntu@[LinuxBastionHostEIP1]
5. From the bastion host, users can connect to the primary WAF instance by using SSH: ssh nsroot@[Primary Management Private NSIP] Password: [Primary ADC Instance ID]

Now users are connected to the primary NetScaler WAF instance. To see the available commands, users can run the help command. To view the current HA configuration, users can run the show ha node command.

NetScaler Application Delivery Management

NetScaler Application Delivery Management Service (NetScaler ADM) provides an easy and scalable solution to manage NetScaler ADC deployments that include NetScaler ADC MPX, NetScaler ADC VPX, NetScaler Gateway, NetScaler Secure Web Gateway, NetScaler ADC SDX, NetScaler ADC CPX, and NetScaler SD-WAN appliances that are deployed on-premises or in the cloud. Users can use this cloud solution to manage, monitor, and troubleshoot the entire global application delivery infrastructure from a single, unified, and centralized cloud-based console. NetScaler ADM Service provides all the capabilities required to quickly set up, deploy, and manage application delivery in NetScaler ADC deployments, along with rich analytics of application health, performance, and security.

NetScaler ADM Service provides the following benefits:

Agile – Easy to operate, update, and consume.
The service model of NetScaler ADM Service is available over the cloud, making it easy to operate, update, and use the features provided by NetScaler ADM Service. The frequency of updates, combined with the automated update feature, quickly enhances the user's NetScaler ADC deployment.

Faster time to value – Quicker achievement of business goals. Unlike with the traditional on-premises deployment, users can use their NetScaler ADM Service with a few clicks. Users not only save the installation and configuration time, but also avoid wasting time and resources on potential errors.

Multi-Site Management – Single pane of glass for instances across multi-site data centers. With the NetScaler ADM Service, users can manage and monitor NetScaler ADCs that are in various types of deployments. Users have one-stop management for NetScaler ADCs deployed on-premises and in the cloud.

Operational Efficiency – Optimized and automated way to achieve higher operational productivity. With the NetScaler ADM Service, user operational costs are reduced by saving user time, money, and resources on maintaining and upgrading the traditional hardware deployments.

How NetScaler ADM Service Works

NetScaler ADM Service is available as a service on the NetScaler Cloud. After users sign up for NetScaler Cloud and start using the service, they install agents in the user network environment or initiate the built-in agent in the instances. Then, they add the instances they want to manage to the service.

An agent enables communication between the NetScaler ADM Service and the managed instances in the user data center. The agent collects data from the managed instances in the user network and sends it to the NetScaler ADM Service. When users add an instance to the NetScaler ADM Service, it implicitly adds itself as a trap destination and collects an inventory of the instance.
The service collects instance details such as:

- Host name
- Software version
- Running and saved configuration
- Certificates
- Entities configured on the instance, and so on

NetScaler ADM Service periodically polls managed instances to collect information. The following image illustrates the communication between the service, the agents, and the instances:

Documentation Guide

The NetScaler ADM Service documentation includes information about how to get started with the service, a list of features supported on the service, and configuration specific to this service solution.

Deploying NetScaler ADC VPX Instances on AWS using NetScaler ADM

When customers move their applications to the cloud, the components that are part of their application increase, become more distributed, and need to be dynamically managed. With NetScaler ADC VPX instances on AWS, users can seamlessly extend their L4-L7 network stack to AWS. With NetScaler ADC VPX, AWS becomes a natural extension of their on-premises IT infrastructure. Customers can use NetScaler ADC VPX on AWS to combine the elasticity and flexibility of the cloud with the same optimization, security, and control features that support the most demanding websites and applications in the world.

With NetScaler Application Delivery Management (ADM) monitoring their NetScaler ADC instances, users gain visibility into the health, performance, and security of their applications. They can automate the setup, deployment, and management of their application delivery infrastructure across hybrid multi-cloud environments.

Architecture Diagram

The following image provides an overview of how NetScaler ADM connects with AWS to provision NetScaler ADC VPX instances in AWS.
Configuration Tasks

Perform the following tasks on AWS before provisioning NetScaler ADC VPX instances in NetScaler ADM:

1. Create subnets
2. Create security groups
3. Create an IAM role and define a policy

Perform the following tasks on NetScaler ADM to provision the instances on AWS:

1. Create a site
2. Provision the NetScaler ADC VPX instance on AWS

To Create Subnets

1. Create three subnets in a VPC. The three subnets that are required to provision NetScaler ADC VPX instances in a VPC are management, client, and server.
2. Specify an IPv4 CIDR block from the range that is defined in the VPC for each of the subnets.
3. Specify the Availability Zone in which each subnet is to reside. Create all three subnets in the same Availability Zone.

The following image illustrates the three subnets created in the customer region and their connectivity to the client system. For more information on VPCs and subnets, see VPCs and Subnets.

To Create Security Groups

Create a security group to control inbound and outbound traffic in the NetScaler ADC VPX instance. A security group acts as a virtual firewall for a user instance. Create security groups at the instance level, and not at the subnet level. It is possible to assign each instance in a subnet in the user VPC to a different set of security groups. Add rules for each security group to control the inbound traffic that passes through the client subnet to the instances. Users can also add a separate set of rules that control the outbound traffic that passes through the server subnet to the application servers. Although users can use the default security group for their instances, they might want to create their own groups.

1. Create three security groups, one for each subnet.
2. Create rules for both incoming and outgoing traffic that users want to control. Users can add as many rules as they want.

For more information on security groups, see: Security Groups for your VPC.
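The subnet and security group preparation above can be sketched with the AWS CLI, assuming a VPC already exists. The VPC ID, Availability Zone, and CIDR blocks are placeholders; the three roles mirror the management, client, and server subnets and security groups just described. The commands are echoed for review rather than executed; remove the leading echo to create the resources.

```shell
#!/bin/sh
# Placeholder values -- substitute your own VPC ID, zone, and CIDRs.
VPC_ID="vpc-0123456789abcdef0"
AZ="us-east-2a"

for role_cidr in management:10.0.1.0/24 client:10.0.2.0/24 server:10.0.3.0/24; do
  role=${role_cidr%%:*}
  cidr=${role_cidr#*:}
  # Echoed for review; remove 'echo' to actually create the resources.
  echo aws ec2 create-subnet --vpc-id "$VPC_ID" \
    --cidr-block "$cidr" --availability-zone "$AZ"
  echo aws ec2 create-security-group --vpc-id "$VPC_ID" \
    --group-name "adc-$role-sg" --description "NetScaler ADC $role security group"
done
```

All three subnets use the same Availability Zone, matching the guidance above; inbound and outbound rules would still be added to each group with `aws ec2 authorize-security-group-ingress` and `authorize-security-group-egress`.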
To Create an IAM Role and Define a Policy

Create an IAM role so that customers can establish a trust relationship between their users and the NetScaler trusted AWS account, and create a policy with NetScaler permissions.

1. In AWS, click Services. In the left side navigation pane, select IAM > Roles, and click Create role.
2. Users are connecting their AWS account with the AWS account in NetScaler ADM, so select Another AWS account to allow NetScaler ADM to perform actions in the AWS account.
3. Type in the 12-digit NetScaler ADM AWS account ID. The NetScaler ID is 835822366011. Users can also find the NetScaler ID in NetScaler ADM when they create the cloud access profile.
4. Enable Require external ID to connect to a third-party account. Users can increase the security of their roles by requiring an optional external identifier. Type an ID that can be a combination of any characters.
5. Click Permissions.
6. On the Attach permissions policies page, click Create policy. Users can create and edit a policy in the visual editor or by using JSON.
The list of permissions from NetScaler is provided in the following box:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances", "ec2:DescribeImageAttribute", "ec2:DescribeInstanceAttribute",
        "ec2:DescribeRegions", "ec2:DescribeDhcpOptions", "ec2:DescribeSecurityGroups",
        "ec2:DescribeHosts", "ec2:DescribeImages", "ec2:DescribeVpcs",
        "ec2:DescribeSubnets", "ec2:DescribeNetworkInterfaces", "ec2:DescribeAvailabilityZones",
        "ec2:DescribeNetworkInterfaceAttribute", "ec2:DescribeInstanceStatus", "ec2:DescribeAddresses",
        "ec2:DescribeKeyPairs", "ec2:DescribeTags", "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute", "ec2:CreateTags",
        "ec2:DeleteTags", "ec2:CreateKeyPair", "ec2:DeleteKeyPair",
        "ec2:ResetInstanceAttribute", "ec2:RunScheduledInstances", "ec2:ReportInstanceStatus",
        "ec2:StartInstances", "ec2:RunInstances", "ec2:StopInstances",
        "ec2:UnmonitorInstances", "ec2:MonitorInstances", "ec2:RebootInstances",
        "ec2:TerminateInstances", "ec2:ModifyInstanceAttribute", "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses", "ec2:CreateNetworkInterface", "ec2:AttachNetworkInterface",
        "ec2:DetachNetworkInterface", "ec2:DeleteNetworkInterface", "ec2:ResetNetworkInterfaceAttribute",
        "ec2:ModifyNetworkInterfaceAttribute", "ec2:AssociateAddress", "ec2:AllocateAddress",
        "ec2:ReleaseAddress", "ec2:DisassociateAddress", "ec2:GetConsoleOutput"
      ],
      "Resource": "*"
    }
  ]
}
```

7. Copy and paste the list of permissions in the JSON tab and click Review policy.
8. On the Review policy page, type a name for the policy, enter a description, and click Create policy.

To Create a Site in NetScaler ADM

Create a site in NetScaler ADM and add the details of the VPC associated with the AWS role.

1. In NetScaler ADM, navigate to Networks > Sites.
2. Click Add.
3. Select the service type as AWS and enable Use existing VPC as a site.
4. Select the cloud access profile.
5. If the cloud access profile does not exist in the field, click Add to create a profile. On the Create Cloud Access Profile page, type the name of the profile with which users want to access AWS. Type the ARN associated with the role that users have created in AWS. Type the external ID that users provided while creating an Identity and Access Management (IAM) role in AWS (see step 4 in the "To Create an IAM Role and Define a Policy" task). Ensure that the IAM role name specified in AWS starts with NetScaler-ADM- and that it correctly appears in the Role ARN. The details of the VPC, such as the region, VPC ID, name, and CIDR block, associated with the IAM role in AWS are imported into NetScaler ADM.
6. Type a name for the site.
7. Click Create.

To Provision NetScaler ADC VPX on AWS

Use the site created earlier to provision the NetScaler ADC VPX instances on AWS. Provide NetScaler ADM service agent details to provision those instances that are bound to that agent.

1. In NetScaler ADM, navigate to Networks > Instances > NetScaler ADC.
2. In the VPX tab, click Provision. This option displays the Provision NetScaler ADC VPX on Cloud page.
3. Select Amazon Web Services (AWS) and click Next.
4. In Basic Parameters, select the Type of Instance from the list: Standalone (provisions a standalone NetScaler ADC VPX instance on AWS) or HA (provisions high availability NetScaler ADC VPX instances on AWS).
5. To provision the NetScaler ADC VPX instances in the same zone, select the Single Zone option under Zone Type. To provision the NetScaler ADC VPX instances across multiple zones, select the Multi Zone option under Zone Type. In the Cloud Parameters tab, make sure to specify the network details for each zone that is created on AWS.
6. Specify the name of the NetScaler ADC VPX instance.
7. In Site, select the site created earlier.
8. In Agent, select the agent that is created to manage the NetScaler ADC VPX instance.
In Cloud Access Profile, select the cloud access profile created during site creation.
In Device Profile, select the profile to provide authentication. NetScaler ADM uses the device profile when it needs to log on to the NetScaler ADC VPX instance.
Click Next.
In Cloud Parameters, select the NetScaler IAM Role created in AWS. An IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS.
In the Product field, select the NetScaler ADC product version that users want to provision.
Select the EC2 instance type from the Instance Type list.
Select the Version of NetScaler ADC that users want to provision: select both the Major and Minor version of NetScaler ADC.
In Security Groups, select the Management, Client, and Server security groups that users created in their virtual network.
In IPs in Server Subnet per Node, select the number of IP addresses in the server subnet per node for the security group.
In Subnets, select the Management, Client, and Server subnets for each zone that are created in AWS. Users can also select the region from the Availability Zone list.
Click Finish.

The NetScaler ADC VPX instance is now provisioned on AWS.

Note: NetScaler ADM doesn’t support deprovisioning of NetScaler ADC instances from AWS.

To View the NetScaler ADC VPX Provisioned in AWS

From the AWS home page, navigate to Services and click EC2.
On the Resources page, click Running Instances.

Users can view the NetScaler ADC VPX provisioned in AWS. The name of the NetScaler ADC VPX instance is the same name users provided while provisioning the instance in NetScaler ADM.

To View the NetScaler ADC VPX Provisioned in NetScaler ADM

In NetScaler ADM, navigate to Networks > Instances > NetScaler ADC.
Select the NetScaler ADC VPX tab.

The NetScaler ADC VPX instance provisioned in AWS is listed here.
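The EC2 console view above filters instances by the Name tag. The same lookup can be scripted; the sketch below only builds the `describe-instances` filter document that the EC2 API accepts (no AWS call is made, and the instance name `my-adc-vpx` is an invented example):

```python
# Build an EC2 describe-instances filter matching an instance by its Name tag.
# "tag:Name" is the standard EC2 filter key for tag-based lookups.
def name_filter(instance_name):
    return [{"Name": "tag:Name", "Values": [instance_name]}]

# The name is whatever was typed while provisioning the VPX in NetScaler ADM;
# "my-adc-vpx" here is an assumed example.
filters = name_filter("my-adc-vpx")
print(filters)
```

With credentials configured, this filter list could be passed to the AWS CLI or to boto3's `describe_instances(Filters=...)` to locate the provisioned VPX.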
NetScaler ADC WAF and OWASP Top 10 – 2017

The Open Web Application Security Project (OWASP) released the OWASP Top 10 for 2017 for web application security. This list documents the most common web application vulnerabilities and is a great starting point to evaluate web security. Here we detail how to configure the NetScaler ADC Web Application Firewall (WAF) to mitigate these flaws. WAF is available as an integrated module in the NetScaler ADC (Premium Edition) as well as a complete range of appliances. The full OWASP Top 10 document is available at OWASP Top Ten.

OWASP Top 10 – 2017 | NetScaler ADC WAF Features
A1:2017 – Injection | Injection attack prevention (SQL or any other custom injections such as OS Command injection, XPath injection, and LDAP injection), auto update signature feature
A2:2017 – Broken Authentication | NetScaler ADC AAA, Cookie Tampering protection, Cookie Proxying, Cookie Encryption, CSRF tagging, Use SSL
A3:2017 – Sensitive Data Exposure | Credit Card protection, Safe Commerce, Cookie Proxying, and Cookie Encryption
A4:2017 – XML External Entities (XXE) | XML protection including WS-I checks, XML message validation, and XML SOAP fault filtering check
A5:2017 – Broken Access Control | NetScaler ADC AAA, Authorization security feature within the NetScaler ADC AAA module, Form protections, Cookie tampering protections, StartURL, and ClosureURL
A6:2017 – Security Misconfiguration | PCI reports, SSL features, signature generation from vulnerability scan reports such as Cenzic, Qualys, AppScan, WebInspect, and WhiteHat; also specific protections such as Cookie encryption, proxying, and tampering
A7:2017 – Cross-Site Scripting (XSS) | XSS attack prevention, blocks all OWASP XSS cheat sheet attacks
A8:2017 – Insecure Deserialization | XML security checks, GWT content type, custom signatures, XPath for JSON and XML
A9:2017 – Using Components with Known Vulnerabilities | Vulnerability scan reports, Application Firewall templates, and custom signatures
A10:2017 – Insufficient Logging & Monitoring | User-configurable custom logging, NetScaler ADC Management and Analytics System

A1:2017 – Injection

Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into running unintended commands or accessing data without proper authorization.

ADC WAF Protections

The SQL Injection prevention feature protects against common injection attacks. Custom injection patterns can be uploaded to protect against any type of injection attack, including XPath and LDAP. This is applicable for both HTML and XML payloads. The auto update signature feature keeps the injection signatures up to date. The field format protection feature allows the administrator to restrict any user parameter to a regular expression. For instance, you can enforce that a zip-code field contains integers only, or even 5-digit integers. Form field consistency: validate each submitted user form against the user session form signature to ensure the validity of all form elements. Buffer overflow checks ensure that the URL, headers, and cookies are within the right limits, blocking any attempts to inject large scripts or code.
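The field format idea above (restricting a parameter to a regular expression) can be illustrated with an ordinary regex. This is a hedged sketch of the concept, not the ADC engine:

```python
import re

# Restrict a zip-code field to exactly five digits, as in the example above.
ZIP_RE = re.compile(r"^\d{5}$")

def field_ok(value):
    """Return True when the submitted field matches the allowed format."""
    return bool(ZIP_RE.match(value))

print(field_ok("30329"))                     # a well-formed 5-digit zip code
print(field_ok("30329; DROP TABLE users"))   # injected payload fails the format check
```

Because the format check rejects everything outside the allowed pattern, an injection payload never reaches the application, regardless of which keywords it contains.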
A2:2017 – Broken Authentication

Application functions related to authentication and session management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities temporarily or permanently.

ADC WAF Protections

The NetScaler ADC AAA module performs user authentication and provides Single Sign-On functionality to back-end applications. It is integrated into the NetScaler ADC AppExpert policy engine to allow custom policies based on user and group information. Using SSL offloading and URL transformation capabilities, the firewall can also help sites use secure transport-layer protocols to prevent the theft of session tokens by network sniffing. Cookie Proxying and Cookie Encryption can be employed to completely mitigate cookie stealing.

A3:2017 – Sensitive Data Exposure

Many web applications and APIs do not properly protect sensitive data, such as financial, healthcare, and PII data. Attackers may steal or modify such poorly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra protection, such as encryption at rest or in transit, and requires special precautions when exchanged with the browser.

ADC WAF Protections

The Application Firewall protects applications from leaking sensitive data such as credit card details. Sensitive data can be configured as Safe Objects in Safe Commerce protection to avoid exposure. Any sensitive data in cookies can be protected by Cookie Proxying and Cookie Encryption.

A4:2017 – XML External Entities (XXE)

Many older or poorly configured XML processors evaluate external entity references within XML documents. External entities can be used to disclose internal files using the file URI handler, internal file shares, internal port scanning, remote code execution, and denial of service attacks.
ADC WAF Protections

In addition to detecting and blocking common application threats that can be adapted for attacking XML-based applications (that is, cross-site scripting, command injection, and so forth), ADC Application Firewall includes a rich set of XML-specific security protections. These include schema validation to thoroughly verify SOAP messages and XML payloads, and a powerful XML attachment check to block attachments containing malicious executables or viruses. Automatic traffic inspection methods block XPath injection attacks on URLs and forms aimed at gaining access. ADC Application Firewall also thwarts various DoS attacks, including external entity references, recursive expansion, excessive nesting, and malicious messages containing either long or a large number of attributes and elements.

A5:2017 – Broken Access Control

Restrictions on what authenticated users are allowed to do are often not properly enforced. Attackers can exploit these flaws to access unauthorized functionality and data, such as accessing other users' accounts, viewing sensitive files, modifying other users’ data, and changing access rights.

ADC WAF Protections

The NetScaler ADC AAA feature, which supports authentication, authorization, and auditing for all application traffic, allows a site administrator to manage access controls with the ADC appliance. The Authorization security feature within the NetScaler ADC AAA module of the ADC appliance enables the appliance to verify which content on a protected server it should allow each user to access. Form field consistency: if object references are stored as hidden fields in forms, then using form field consistency users can validate that these fields are not tampered with on subsequent requests. Cookie Proxying and Cookie Consistency: object references that are stored in cookie values can be validated with these protections. Start URL check with URL closure: allows user access to a predefined allow list of URLs.
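As a toy illustration of the external-entity threat (and emphatically not how the ADC Application Firewall is implemented), a hostile XML payload can be recognized by the SYSTEM entity declaration it must carry to reference an external resource:

```python
import re

# Flag XML payloads that declare external entities (file://, http://, and so on).
# This simple pattern is illustrative only; a real inspector parses the DTD.
XXE_RE = re.compile(r"<!ENTITY\s+\S+\s+SYSTEM", re.IGNORECASE)

def looks_like_xxe(xml_payload):
    return bool(XXE_RE.search(xml_payload))

benign = "<order><id>42</id></order>"
hostile = '<!DOCTYPE foo [<!ENTITY xxe SYSTEM "file:///etc/passwd">]><foo>&xxe;</foo>'
print(looks_like_xxe(benign), looks_like_xxe(hostile))
```

The same idea generalizes to the DoS variants listed above (recursive expansion, excessive nesting): each leaves a structural fingerprint in the payload that inspection can detect before the XML ever reaches a vulnerable parser.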
URL closure builds a list of all URLs seen in valid responses during the user session and automatically allows access to them during that session.

A6:2017 – Security Misconfiguration

Security misconfiguration is the most commonly seen issue. It is commonly a result of insecure default configurations, incomplete or improvised configurations, open cloud storage, misconfigured HTTP headers, and verbose error messages containing sensitive information. Not only must all operating systems, frameworks, libraries, and applications be securely configured, but they must also be patched and upgraded in a timely fashion.

ADC WAF Protections

The PCI-DSS report generated by the Application Firewall documents the security settings on the firewall device. Reports from scanning tools are converted to ADC WAF signatures to handle security misconfigurations. ADC WAF supports Cenzic, IBM AppScan (Enterprise and Standard), Qualys, TrendMicro, WhiteHat, and custom vulnerability scan reports.

A7:2017 – Cross-Site Scripting (XSS)

XSS flaws occur whenever an application includes untrusted data in a new web page without proper validation or escaping, or updates an existing web page with user-supplied data using a browser API that can create HTML or JavaScript. Cross-site scripting allows attackers to run scripts in the victim’s browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.

ADC WAF Protections

Cross-site scripting protection protects against common XSS attacks. Custom XSS patterns can be uploaded to modify the default list of allowed tags and attributes. The ADC WAF uses an allow list of permitted HTML attributes and tags to detect XSS attacks. This is applicable for both HTML and XML payloads. ADC WAF blocks all the attacks listed in the OWASP XSS Filter Evaluation Cheat Sheet. The field format check prevents an attacker from sending inappropriate web form data, which can be a potential XSS attack. Form field consistency also applies.
A8:2017 – Insecure Deserialization

Insecure deserialization often leads to remote code execution. Even if deserialization flaws do not result in remote code execution, they can be used to perform attacks, including replay attacks, injection attacks, and privilege escalation attacks.

ADC WAF Protections

JSON payload inspection with custom signatures. XML security: protects against XML denial of service (xDoS), XML SQL and XPath injection, cross-site scripting, format checks, WS-I basic profile compliance, and XML attachments check. Field Format checks, in addition to Cookie Consistency and Field Consistency, can be used.

A9:2017 – Using Components with Known Vulnerabilities

Components, such as libraries, frameworks, and other software modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities may undermine application defenses and enable various attacks and impacts.

ADC WAF Protections

NetScaler recommends keeping third-party components up to date. Vulnerability scan reports that are converted to ADC signatures can be used to virtually patch these components. Application Firewall templates that are available for these vulnerable components can be used. Custom signatures can be bound with the firewall to protect these components.

A10:2017 – Insufficient Logging & Monitoring

Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems, and tamper with, extract, or destroy data. Most breach studies show that the time to detect a breach is over 200 days, typically detected by external parties rather than internal processes or monitoring.
ADC WAF Protections

When the log action is enabled for security checks or signatures, the resulting log messages provide information about the requests and responses that the application firewall has observed while protecting your websites and applications. The application firewall offers the convenience of using the built-in ADC database for identifying the locations corresponding to the IP addresses from which malicious requests are originating. Default format (PI) expressions give users the flexibility to customize the information included in the logs, with the option to add the specific data to capture in the application firewall generated log messages. The application firewall supports CEF logs.

Application Security Protection with NetScaler ADM

NetScaler Application Delivery Management Service (NetScaler ADM) provides a scalable solution to manage NetScaler ADC deployments that include NetScaler ADC MPX, NetScaler ADC VPX, NetScaler Gateway, NetScaler Secure Web Gateway, NetScaler ADC SDX, NetScaler ADC CPX, and NetScaler SD-WAN appliances that are deployed on-premises or on the cloud.

NetScaler ADM Application Analytics and Management Features

The following features are key to the ADM role in App Security.

Application Analytics and Management

The Application Analytics and Management feature of NetScaler ADM strengthens the application-centric approach to help users address various application delivery challenges. This approach gives users visibility into the health scores of applications, helps users determine the security risks, and helps users detect anomalies in the application traffic flows and take corrective actions. The most important among these roles for App Security is Application Security Analytics: the App Security Dashboard provides a holistic view of the security status of user applications.
For example, it shows key security metrics such as security violations, signature violations, and threat indexes. The App Security dashboard also displays attack-related information, such as SYN attacks, small window attacks, and DNS flood attacks, for the discovered NetScaler ADC instances.

StyleBooks

StyleBooks simplify the task of managing complex NetScaler ADC configurations for user applications. A StyleBook is a template that users can use to create and manage NetScaler ADC configurations. Here users are primarily concerned with the StyleBook used to deploy the Web Application Firewall. For more information on StyleBooks, see StyleBooks.

Analytics

Provides an easy and scalable way to look into the various insights of the NetScaler ADC instances’ data to describe, predict, and improve application performance. Users can use one or more analytics features simultaneously. The most important among these roles for App Security are:

Security Insight: provides a single-pane solution to help users assess user application security status and take corrective actions to secure user applications.
Bot Insight

For more information on analytics, see Analytics.

Other features that are important to ADM functionality are:

Event Management

Events represent occurrences or errors on a managed NetScaler ADC instance. For example, when there is a system failure or change in configuration, an event is generated and recorded on NetScaler ADM. Following are the related features that users can configure or view by using NetScaler ADM: creating event rules (Create Event Rules), and viewing and exporting syslog messages (View and Export Syslog Messages). For more information on event management, see Events.

Instance Management

Enables users to manage the NetScaler ADC, NetScaler Gateway, NetScaler Secure Web Gateway, and NetScaler SD-WAN instances. For more information on instance management, see Adding Instances.
License Management

Allows users to manage NetScaler ADC licenses by configuring NetScaler ADM as a license manager.

NetScaler ADC pooled capacity (Pooled Capacity): a common license pool from which a user NetScaler ADC instance can check out one instance license and only as much bandwidth as it needs. When the instance no longer requires these resources, it checks them back in to the common pool, making the resources available to other instances that need them.

NetScaler ADC VPX check-in and check-out licensing (NetScaler ADC VPX Check-in and Check-out Licensing): NetScaler ADM allocates licenses to NetScaler ADC VPX instances on demand. A NetScaler ADC VPX instance can check out the license from NetScaler ADM when it is provisioned, or check its license back in to NetScaler ADM when the instance is removed or destroyed.

For more information on license management, see Pooled Capacity.

Configuration Management

NetScaler ADM allows users to create configuration jobs that help them perform configuration tasks, such as creating entities, configuring features, replicating configuration changes, upgrading systems, and other maintenance activities, with ease on multiple instances. Configuration jobs and templates simplify the most repetitive administrative tasks to a single task on NetScaler ADM. For more information on configuration management, see Configuration Jobs.

Configuration Audit

Enables users to monitor and identify anomalies in the configurations across user instances.

Configuration advice (Get Configuration Advice on Network Configuration): allows users to identify any configuration anomaly.
Audit template (Create Audit Templates): allows users to monitor the changes across a specific configuration.

For more information on configuration audit, see Configuration Audit.
Signatures provide the following deployment options to help users optimize the protection of user applications:

Negative Security Model: With the negative security model, users employ a rich set of preconfigured signature rules to apply the power of pattern matching to detect attacks and protect against application vulnerabilities. Users block only what they don’t want and allow the rest. Users can add their own signature rules, based on the specific security needs of user applications, to design their own customized security solutions.

Hybrid Security Model: In addition to using signatures, users can use positive security checks to create a configuration ideally suited for user applications. Use signatures to block what users don’t want, and use positive security checks to enforce what is allowed.

To protect user applications by using signatures, users must configure one or more profiles to use their signatures object. In a hybrid security configuration, the SQL injection and cross-site scripting patterns, and the SQL transformation rules, in the user signatures object are used not only by the signature rules, but also by the positive security checks configured in the Web Application Firewall profile that is using the signatures object.

The Web Application Firewall examines the traffic to user protected websites and web services to detect traffic that matches a signature. A match is triggered only when every pattern in the rule matches the traffic. When a match occurs, the specified actions for the rule are invoked. Users can display an error page or error object when a request is blocked. Log messages can help users identify attacks being launched against user applications. If users enable statistics, the Web Application Firewall maintains data about requests that match a Web Application Firewall signature or security check. If the traffic matches both a signature and a positive security check, the more restrictive of the two actions is enforced.
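The "more restrictive action wins" rule can be modeled in a few lines. This is a hedged sketch of the documented behavior, not ADC code:

```python
# Model of the rule above: when traffic matches both a signature and a positive
# security check, the more restrictive of the two configured actions wins.
def final_action(matching_checks):
    """Each match is a (check_name, block_enabled) pair; any enabled block wins."""
    return "block" if any(block for _, block in matching_checks) else "allow"

matches = [
    ("signature rule", False),       # block disabled: logged as [not blocked]
    ("SQL injection check", True),   # positive security check set to block
]
print(final_action(matches))
```

Running this prints "block": even though the signature rule alone would only log, the positive check's block action prevails.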
For example, if a request matches a signature rule for which the block action is disabled, but the request also matches an SQL Injection positive security check for which the action is block, the request is blocked. In this case, the signature violation might be logged as [not blocked], although the request is blocked by the SQL injection check.

Customization: If necessary, users can add their own rules to a signatures object. Users can also customize the SQL/XSS patterns. The option to add their own signature rules, based on the specific security needs of user applications, gives users the flexibility to design their own customized security solutions. Users block only what they don’t want and allow the rest. A specific fast-match pattern in a specified location can significantly reduce processing overhead to optimize performance. Users can add, modify, or remove SQL injection and cross-site scripting patterns. Built-in RegEx and expression editors help users configure user patterns and verify their accuracy.

Use Cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on AWS combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, flexible licensing, and other essential application delivery capabilities in a single VPX instance, conveniently available via the AWS Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on AWS enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.
NetScaler Web Application Firewall (WAF)

NetScaler Web Application Firewall (WAF) is an enterprise-grade solution offering state-of-the-art protections for modern applications. NetScaler WAF mitigates threats against public-facing assets, including websites, web applications, and APIs. NetScaler WAF includes IP reputation-based filtering, bot mitigation, OWASP Top 10 application threat protections, Layer 7 DDoS protection, and more. Also included are options to enforce authentication, strong SSL/TLS ciphers, TLS 1.3, rate limiting, and rewrite policies. Using both basic and advanced WAF protections, NetScaler WAF provides comprehensive protection for your applications with unparalleled ease of use. Getting up and running is a matter of minutes. Further, using an automated learning model, called dynamic profiling, NetScaler WAF saves users precious time. By automatically learning how a protected application works, NetScaler WAF adapts to the application even as developers deploy and alter the applications. NetScaler WAF helps with compliance for all major regulatory standards and bodies, including PCI-DSS, HIPAA, and more. With our CloudFormation templates, it has never been easier to get up and running quickly. With auto scaling, users can rest assured that their applications remain protected even as their traffic scales up.

Web Application Firewall Deployment Strategy

The first step in deploying the web application firewall is to evaluate which applications or specific data need maximum security protection, which ones are less vulnerable, and the ones for which security inspection can safely be bypassed. This helps users come up with an optimal configuration, and design appropriate policies and bind points to segregate the traffic.
For example, users might want to configure a policy to bypass security inspection of requests for static web content, such as images, MP3 files, and movies, and configure another policy to apply advanced security checks to requests for dynamic content. Users can use multiple policies and profiles to protect different contents of the same application.

The next step is to baseline the deployment. Start by creating a virtual server and run test traffic through it to get an idea of the rate and amount of traffic flowing through the user system. Then, deploy the Web Application Firewall. Use NetScaler ADM and the Web Application Firewall StyleBook to configure the Web Application Firewall. See the StyleBook section below in this guide for details.

After the Web Application Firewall is deployed and configured with the Web Application Firewall StyleBook, a useful next step is to implement the OWASP Top 10 protections described earlier in this guide.

Finally, three of the Web Application Firewall protections are especially effective against common types of web attacks, and are therefore more commonly used than any of the others. Thus, they should be implemented in the initial deployment. They are:

HTML Cross-Site Scripting: examines requests and responses for scripts that attempt to access or modify content on a different website than the one on which the script is located. When this check finds such a script, it either renders the script harmless before forwarding the request or response to its destination, or it blocks the connection.

HTML SQL Injection: examines requests that contain form field data for attempts to inject SQL commands into a SQL database. When this check detects injected SQL code, it either blocks the request or renders the injected SQL code harmless before forwarding the request to the Web server.
Note: If both of the following conditions apply to the user configuration, users should make certain that the Web Application Firewall is correctly configured: users enable the HTML Cross-Site Scripting check or the HTML SQL Injection check (or both), and user protected websites accept file uploads or contain Web forms that can contain large POST body data. For more information about configuring the Web Application Firewall to handle this case, see Configuring the Application Firewall: Configuring the Web App Firewall.

Buffer Overflow: examines requests to detect attempts to cause a buffer overflow on the Web server.

Configuring the Web Application Firewall (WAF)

The following steps assume that the WAF is already enabled and functioning correctly. NetScaler recommends that users configure WAF using the Web Application Firewall StyleBook. Most users find it the easiest method to configure the Web Application Firewall, and it is designed to prevent mistakes. Both the GUI and the command line interface are intended for experienced users, primarily to modify an existing configuration or use advanced options.

SQL Injection

The Application Firewall HTML SQL Injection check provides special defenses against the injection of unauthorized SQL code that might break user application security. NetScaler Web Application Firewall examines the request payload for injected SQL code in three locations: 1) POST body, 2) headers, and 3) cookies. A default set of keywords and special characters provides known keywords and special characters that are commonly used to launch SQL attacks. Users can also add new patterns, and they can edit the default set to customize the SQL check inspection. There are several parameters that can be configured for SQL injection processing. Users can check for SQL wildcard characters.
Users can change the SQL Injection type and select one of four options (SQLKeyword, SQLSplChar, SQLSplCharANDKeyword, SQLSplCharORKeyword) to indicate how to evaluate the SQL keywords and SQL special characters when processing the payload. The SQL Comments Handling parameter gives users an option to specify the type of comments that need to be inspected or exempted during SQL Injection detection. Users can deploy relaxations to avoid false positives. The learning engine can provide recommendations for configuring relaxation rules.

The following options are available for configuring an optimized SQL Injection protection for the user application:

Block — If users enable block, the block action is triggered only if the input matches the SQL injection type specification. For example, if SQLSplCharANDKeyword is configured as the SQL injection type, a request is not blocked if it contains no keywords, even if SQL special characters are detected in the input. Such a request is blocked if the SQL injection type is set to either SQLSplChar or SQLSplCharORKeyword.

Log — If users enable the log feature, the SQL Injection check generates log messages indicating the actions that it takes. If block is disabled, a separate log message is generated for each input field in which the SQL violation was detected. However, only one message is generated when the request is blocked. Similarly, one log message per request is generated for the transform operation, even when SQL special characters are transformed in multiple fields. Users can monitor the logs to determine whether responses to legitimate requests are getting blocked. A large increase in the number of log messages can indicate attempts to launch an attack.

Stats — If enabled, the stats feature gathers statistics about violations and logs. An unexpected surge in the stats counter might indicate that the user application is under attack.
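The interaction between the Block option and the SQL injection type can be made concrete with a toy evaluator. The special-character and keyword sets below are illustrative subsets, not ADC's actual lists:

```python
# Toy evaluation of the four SQL injection type settings (not ADC's parser).
SPECIAL = set("';\\")                   # illustrative SQL special characters
KEYWORDS = {"select", "union", "drop"}  # illustrative keyword subset

def violates(payload, injection_type):
    """Return True when the payload trips the given SQL injection type setting."""
    has_special = any(ch in SPECIAL for ch in payload)
    has_keyword = any(word in KEYWORDS for word in payload.lower().split())
    return {
        "SQLSplChar": has_special,
        "SQLKeyword": has_keyword,
        "SQLSplCharANDKeyword": has_special and has_keyword,
        "SQLSplCharORKeyword": has_special or has_keyword,
    }[injection_type]

payload = "name = 'a' ; drop table users"
print(violates(payload, "SQLSplCharANDKeyword"))
```

Note how a payload with special characters but no keywords (for example, a name like O'Brien) trips SQLSplChar but not SQLSplCharANDKeyword, which is why AND is the least restrictive, default setting.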
If legitimate requests are getting blocked, users might have to revisit the configuration to see if they need to configure new relaxation rules or modify the existing ones.

Learn — If users are not sure which SQL relaxation rules might be ideally suited for their applications, they can use the learn feature to generate recommendations based on the learned data. The Web Application Firewall learning engine monitors the traffic and provides SQL learning recommendations based on the observed values. To get optimal benefit without compromising performance, users might want to enable the learn option for a short time to get a representative sample of the rules, and then deploy the rules and disable learning.

Transform SQL special characters — The Web Application Firewall considers three characters, single straight quote ('), backslash (\), and semicolon (;), as special characters for SQL security check processing. The SQL Transformation feature modifies the SQL Injection code in an HTML request to ensure that the request is rendered harmless. The modified HTML request is then sent to the server. All default transformation rules are specified in the /netscaler/default_custom_settings.xml file. The transform operation renders the SQL code inactive by making the following changes to the request:

Single straight quote (') to double straight quote (").
Backslash (\) to double backslash (\\).
Semicolon (;) is dropped completely.

These three characters (special strings) are necessary to issue commands to a SQL server. Unless a SQL command is prefaced with a special string, most SQL servers ignore that command. Therefore, the changes that the Web Application Firewall performs when transformation is enabled prevent an attacker from injecting active SQL. After these changes are made, the request can safely be forwarded to the user protected website.
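The three transform rules can be sketched directly. This illustrates the documented behavior; it is not the ADC implementation:

```python
# Sketch of the three transform rules above: single quote -> double quote,
# backslash -> double backslash, semicolon dropped entirely.
def transform_sql(value):
    value = value.replace("\\", "\\\\")  # escape backslashes first
    value = value.replace("'", '"')      # neutralize string-delimiting quotes
    value = value.replace(";", "")       # remove statement separators
    return value

print(transform_sql("O'Brien; DROP TABLE x"))
```

After the transform, the payload no longer contains any of the three special strings a SQL server needs to interpret injected commands, so it can be forwarded as inert form data.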
When web forms on the user protected website can legitimately contain SQL special strings, but the web forms do not rely on the special strings to operate correctly, users can disable blocking and enable transformation to prevent blocking of legitimate web form data without reducing the protection that the Web Application Firewall provides to the user protected websites. The transform operation works independently of the SQL Injection Type setting. If transform is enabled and the SQL Injection type is specified as a SQL keyword, SQL special characters are transformed even if the request does not contain any keywords.

Tip: Users normally enable either transformation or blocking, but not both. If the block action is enabled, it takes precedence over the transform action. If users have blocking enabled, enabling transformation is redundant.

Check for SQL Wildcard Characters — Wildcard characters can be used to broaden the selections of a SQL (SQL-SELECT) statement. These wildcard operators can be used with the LIKE and NOT LIKE operators to compare a value to similar values. The percent (%) and underscore (_) characters are frequently used as wildcards. The percent sign is analogous to the asterisk (*) wildcard character used with MS-DOS to match zero, one, or multiple characters in a field. The underscore is similar to the MS-DOS question mark (?) wildcard character. It matches a single number or character in an expression.

For example, users can use the following query to do a string search to find all customers whose names contain the D character:

SELECT * from customer WHERE name like '%D%';

The following example combines the operators to find any salary values that have 0 in the second and third place:

SELECT * from customer WHERE salary like '_00%';

Different DBMS vendors have extended the wildcard characters by adding extra operators. The NetScaler Web Application Firewall can protect against attacks that are launched by injecting these wildcard characters.
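The % and _ wildcards described above behave like the regular-expression constructs .* and . respectively. The sketch below makes the analogy concrete (illustrative only; it assumes Python 3.7 or later, where re.escape leaves % and _ unescaped):

```python
import re

# Translate the SQL LIKE wildcards above into a regular expression:
# % matches any run of characters (like MS-DOS *), _ matches exactly one (like ?).
def like_match(pattern, value):
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, value, re.DOTALL) is not None

print(like_match("%D%", "Daniels"))   # name contains a D
print(like_match("_00%", "500.50"))   # 0 in the second and third place
```

Both calls print True, matching the two example queries; this is exactly why attacker-supplied wildcards broaden a query's reach and are worth inspecting.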
The 5 default wildcard characters are percent (%), underscore (_), caret (^), opening bracket ([), and closing bracket (]). This protection applies to both HTML and XML profiles. The default wildcard characters are a list of literals specified in the *Default Signatures:
<wildchar type="LITERAL">%
<wildchar type="LITERAL">_
<wildchar type="LITERAL">^
<wildchar type="LITERAL">[
<wildchar type="LITERAL">]
Wildcard characters in an attack can be PCRE, like [^A-F]. The Web Application Firewall also supports PCRE wildcards, but the literal wildcard characters shown here are sufficient to block most attacks. Note: The SQL wildcard character check is different from the SQL special character check. This option must be used with caution to avoid false positives. Check Request Containing SQL Injection Type — The Web Application Firewall provides 4 options to implement the desired level of strictness for SQL Injection inspection, based on the individual need of the application. The request is checked against the injection type specification for detecting SQL violations. The 4 SQL injection type options are: SQL Special Character and Keyword — Both a SQL keyword and a SQL special character must be present in the input to trigger a SQL violation. This least restrictive setting is also the default setting. SQL Special Character — At least one of the special characters must be present in the input to trigger a SQL violation. SQL Keyword — At least one of the specified SQL keywords must be present in the input to trigger a SQL violation. Do not select this option without due consideration. To avoid false positives, make sure that none of the keywords are expected in the inputs. SQL Special Character or Keyword — Either the keyword or the special character string must be present in the input to trigger the security check violation.
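The four strictness levels can be modeled as follows. This is a hypothetical sketch: the keyword and special-character sets are tiny samples, and the option names mirror, rather than guarantee, the CLI values in your release:

```python
# Sample sets only; the real signatures ship a full keyword and special-string list.
SQL_KEYWORDS = {"select", "drop", "insert", "union"}
SQL_SPECIAL = set("';\\")

def has_keyword(s: str) -> bool:
    return any(k in s.lower().split() for k in SQL_KEYWORDS)

def has_special(s: str) -> bool:
    return any(c in SQL_SPECIAL for c in s)

def violates(s: str, injection_type: str) -> bool:
    """Apply one of the four SQL injection type options to an input field."""
    if injection_type == "SQLSplCharANDKeyword":  # default, least restrictive
        return has_special(s) and has_keyword(s)
    if injection_type == "SQLSplChar":
        return has_special(s)
    if injection_type == "SQLKeyword":
        return has_keyword(s)
    if injection_type == "SQLSplCharORKeyword":
        return has_special(s) or has_keyword(s)
    raise ValueError(injection_type)

print(violates("1'; DROP TABLE t", "SQLSplCharANDKeyword"))  # both present -> True
```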
Tip: If users configure the Web Application Firewall to check for inputs that contain a SQL special character, the Web Application Firewall skips web form fields that do not contain any special characters. Since most SQL servers do not process SQL commands that are not preceded by a special character, enabling this option can significantly reduce the load on the Web Application Firewall and speed up processing without placing the user protected websites at risk. SQL comments handling — By default, the Web Application Firewall checks all SQL comments for injected SQL commands. Many SQL servers ignore anything in a comment, however, even if preceded by an SQL special character. For faster processing, if your SQL server ignores comments, you can configure the Web Application Firewall to skip comments when examining requests for injected SQL. The SQL comments handling options are: ANSI — Skip ANSI-format SQL comments, which are normally used by UNIX-based SQL databases. For example:
-- (Two hyphens): a comment that begins with two hyphens and ends with the end of the line.
{} (Braces): braces enclose the comment; the { precedes the comment and the } follows it. Braces can delimit single- or multiple-line comments, but comments cannot be nested.
/**/ (C-style comments): does not allow nested comments. Note that a comment that begins with a slash followed by an asterisk and an exclamation mark (/*! ... */) is not an ordinary comment: MySQL Server supports this variant of C-style comments so that users can write code that includes MySQL extensions, but is still portable, by using comments of the form /*! MySQL-specific code */.
# (MySQL comments): a comment that begins with the # character and ends with the end of the line.
Nested — Skip nested SQL comments, which are normally used by Microsoft SQL Server. For example: -- (Two hyphens), and /**/ (allows nested comments). ANSI/Nested — Skip comments that adhere to both the ANSI and nested SQL comment standards.
Comments that match only the ANSI standard, or only the nested standard, are still checked for injected SQL. Check all Comments — Check the entire request for injected SQL without skipping anything. This is the default setting. Tip: In most cases, users should not choose the Nested or the ANSI/Nested option unless their back-end database runs on Microsoft SQL Server. Most other types of SQL server software do not recognize nested comments. If nested comments appear in a request directed to another type of SQL server, they might indicate an attempt to breach security on that server. Check Request headers — Enable this option if, in addition to examining the input in the form fields, users want to examine the request headers for HTML SQL Injection attacks. If users use the GUI, they can enable this parameter in the Advanced Settings -> Profile Settings pane of the Web Application Firewall profile. Note: If users enable the Check Request header flag, they might have to configure a relaxation rule for the User-Agent header. The presence of the SQL keyword "like" and the SQL special character semicolon (;) in that header might trigger a false positive and block requests that contain it. Warning: If users enable both request header checking and transformation, any SQL special characters found in headers are also transformed. The Accept, Accept-Charset, Accept-Encoding, Accept-Language, Expect, and User-Agent headers normally contain semicolons (;). Enabling both request header checking and transformation simultaneously might cause errors. InspectQueryContentTypes — Configure this option if users want to examine the request query portion for SQL Injection attacks for the specific content-types. If users use the GUI, they can configure this parameter in the Advanced Settings -> Profile Settings pane of the Application Firewall profile.
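To make the comment-handling modes above concrete, here is a rough Python sketch. It is our own simplification: the real parser also handles brace comments, comment nesting, and string literals, which these regexes ignore:

```python
import re

def strip_comments(s: str, mode: str) -> str:
    """Remove the comment styles a given mode skips; 'checkall' removes nothing."""
    if mode in ("ansi", "ansinested"):
        s = re.sub(r"--[^\n]*", "", s)   # ANSI: -- to end of line
        s = re.sub(r"#[^\n]*", "", s)    # MySQL: # to end of line
    if mode in ("nested", "ansinested"):
        s = re.sub(r"/\*.*?\*/", "", s, flags=re.S)  # C-style /* ... */
    return s

print(strip_comments("SELECT 1 -- '; DROP TABLE t", "ansi"))
# -> SELECT 1
```

With "ansi" the injected SQL hidden behind the two hyphens is skipped before inspection; with "checkall" (the default) the entire request, comments included, is examined.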
Cross-Site Scripting The HTML Cross-Site Scripting (cross-site scripting) check examines both the headers and the POST bodies of user requests for possible cross-site scripting attacks. If it finds a cross-site script, it either modifies (transforms) the request to render the attack harmless, or blocks the request. Note: The HTML Cross-Site Scripting (cross-site scripting) check works only for content type, content length, and so forth. It does not work for cookies. Also, ensure that the 'checkRequestHeaders' option is enabled in the user Web Application Firewall profile. To prevent misuse of the scripts on user protected websites to breach security on user websites, the HTML Cross-Site Scripting check blocks scripts that violate the same origin rule, which states that scripts should not access or modify content on any server but the server on which they are located. Any script that violates the same origin rule is called a cross-site script, and the practice of using scripts to access or modify content on another server is called cross-site scripting. The reason cross-site scripting is a security issue is that a web server that allows cross-site scripting can be attacked with a script that is not on that web server, but on a different web server, such as one owned and controlled by the attacker. Unfortunately, many companies have a large installed base of JavaScript-enhanced web content that violates the same origin rule. If users enable the HTML Cross-Site Scripting check on such a site, they have to generate the appropriate exceptions so that the check does not block legitimate activity. The Web Application Firewall offers various action options for implementing HTML Cross-Site Scripting protection. In addition to the Block, Log, Stats and Learn actions, users also have the option to Transform cross-site scripts to render an attack harmless by entity encoding the script tags in the submitted request.
Users can configure Check complete URLs for the cross-site scripting parameter to specify if they want to inspect not just the query parameters but the entire URL to detect a cross-site scripting attack. Users can configure the InspectQueryContentTypes parameter to inspect the request query portion for a cross-site scripting attack for the specific content-types. Users can deploy relaxations to avoid false positives. The Web Application Firewall learning engine can provide recommendations for configuring relaxation rules. The following options are available for configuring an optimized HTML Cross-Site Scripting protection for the user application: Block — If users enable block, the block action is triggered if the cross-site scripting tags are detected in the request. Log — If users enable the log feature, the HTML Cross-Site Scripting check generates log messages indicating the actions that it takes. If block is disabled, a separate log message is generated for each header or form field in which the cross-site scripting violation was detected. However, only one message is generated when the request is blocked. Similarly, 1 log message per request is generated for the transform operation, even when cross-site scripting tags are transformed in multiple fields. Users can monitor the logs to determine whether responses to legitimate requests are getting blocked. A large increase in the number of log messages can indicate attempts to launch an attack. Stats — If enabled, the stats feature gathers statistics about violations and logs. An unexpected surge in the stats counter might indicate that the user application is under attack. If legitimate requests are getting blocked, users might have to revisit the configuration to see if they must configure new relaxation rules or modify the existing ones. 
Learn — If users are not sure which relaxation rules might be ideally suited for their application, they can use the learn feature to generate HTML Cross-Site Scripting rule recommendations based on the learned data. The Web Application Firewall learning engine monitors the traffic and provides learning recommendations based on the observed values. To get optimal benefit without compromising performance, users might want to enable the learn option for a short time to get a representative sample of the rules, and then deploy the rules and disable learning. Transform cross-site scripts — If enabled, the Web Application Firewall makes the following changes to requests that match the HTML Cross-Site Scripting check:
Left angle bracket (<) to HTML character entity equivalent (&lt;)
Right angle bracket (>) to HTML character entity equivalent (&gt;)
This ensures that browsers do not interpret unsafe HTML tags, such as <script>, and thereby run malicious code. If users enable both request-header checking and transformation, any special characters found in request headers are also modified as described above. If scripts on the user protected website contain cross-site scripting features, but the user website does not rely upon those scripts to operate correctly, users can safely disable blocking and enable transformation. This configuration ensures that no legitimate web traffic is blocked, while stopping any potential cross-site scripting attacks. Check complete URLs for cross-site scripting — If checking of complete URLs is enabled, the Web Application Firewall examines entire URLs for HTML cross-site scripting attacks instead of checking just the query portions of URLs. Check Request headers — If Request header checking is enabled, the Web Application Firewall examines the headers of requests for HTML cross-site scripting attacks, instead of just URLs. If users use the GUI, they can enable this parameter in the Settings tab of the Web Application Firewall profile.
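The two substitutions above amount to entity-encoding angle brackets. A minimal Python sketch (illustrative only; the function name is our own choice):

```python
def transform_xss(value: str) -> str:
    """Entity-encode angle brackets so browsers do not interpret tags."""
    return value.replace("<", "&lt;").replace(">", "&gt;")

print(transform_xss("<script>alert(1)</script>"))
# -> &lt;script&gt;alert(1)&lt;/script&gt;
```

The encoded text still reads the same to a human reviewing logs, but a browser renders it as literal characters rather than executing it as a script tag.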
InspectQueryContentTypes — If Request query inspection is configured, the Application Firewall examines the query of requests for cross-site scripting attacks for the specific content-types. If users use the GUI, they can configure this parameter in the Settings tab of the Application Firewall profile. Important: As part of the streaming changes, the Web Application Firewall processing of cross-site scripting tags has changed. In earlier releases, the presence of either an open bracket (<), or a close bracket (>), or both open and close brackets (<>) was flagged as a cross-site scripting violation. The behavior has changed in the builds that include support for request-side streaming: a close bracket character (>) by itself is no longer considered an attack, but requests are blocked and flagged as cross-site scripting attacks whenever an open bracket character (<) is present. Buffer Overflow Check The Buffer Overflow check detects attempts to cause a buffer overflow on the web server. If the Web Application Firewall detects that the URL, cookies, or header are longer than the configured length, it blocks the request because it can cause a buffer overflow. The Buffer Overflow check prevents attacks against insecure operating-system or web-server software that can crash or behave unpredictably when it receives a data string that is larger than it can handle. Proper programming techniques prevent buffer overflows by checking incoming data and either rejecting or truncating overlong strings. Many programs, however, do not check all incoming data and are therefore vulnerable to buffer overflows. This issue especially affects older versions of web-server software and operating systems, many of which are still in use. The Buffer Overflow security check allows users to configure the Block, Log, and Stats actions. In addition, users can also configure the following parameters: Maximum URL Length.
The maximum length the Web Application Firewall allows in a requested URL. Requests with longer URLs are blocked. Possible Values: 0–65535. Default: 1024 Maximum Cookie Length. The maximum length the Web Application Firewall allows for all cookies in a request. Requests with longer cookies trigger the violations. Possible Values: 0–65535. Default: 4096 Maximum Header Length. The maximum length the Web Application Firewall allows for HTTP headers. Requests with longer headers are blocked. Possible Values: 0–65535. Default: 4096 Query string length. Maximum length allowed for a query string in an incoming request. Requests with longer queries are blocked. Possible Values: 0–65535. Default: 1024 Total request length. Maximum request length allowed for an incoming request. Requests with a longer length are blocked. Possible Values: 0–65535. Default: 24820 Virtual Patching/Signatures The signatures provide specific, configurable rules to simplify the task of protecting user websites against known attacks. A signature represents a pattern that is a component of a known attack on an operating system, web server, website, XML-based web service, or other resource. A rich set of preconfigured built-in or native rules offers an easy to use security solution, applying the power of pattern matching to detect attacks and protect against application vulnerabilities. Users can create their own signatures or use signatures in the built-in templates. The Web Application Firewall has two built-in templates: Default Signatures: This template contains a preconfigured list of over 1,300 signatures, in addition to a complete list of SQL injection keywords, SQL special strings, SQL transform rules, and SQL wildcard characters. It also contains denied patterns for cross-site scripting, and allowed attributes and tags for cross-site scripting. This is a read-only template. Users can view the contents, but they cannot add, edit, or delete anything in this template. 
To use it, users must make a copy. In their own copy, users can enable the signature rules that they want to apply to their traffic, and specify the actions to be taken when the signature rules match the traffic. The signatures are derived from the rules published by SNORT, an open source intrusion prevention system capable of performing real-time traffic analysis to detect various attacks and probes. *Xpath Injection Patterns: This template contains a preconfigured set of literal and PCRE keywords and special strings that are used to detect XPath (XML Path Language) injection attacks. Blank Signatures: In addition to making a copy of the built-in Default Signatures template, users can use a blank signatures template to create a signature object. The signature object that users create with the blank signatures option does not have any native signature rules, but, just like the *Default template, it has all the SQL/XSS built-in entities. External-Format Signatures: The Web Application Firewall also supports external format signatures. Users can import the third-party scan report by using the XSLT files that are supported by the NetScaler Web Application Firewall. A set of built-in XSLT files is available for selected scan tools to translate external format files to native format (see the list of built-in XSLT files later in this section). While signatures help users reduce the risk of exposed vulnerabilities and protect mission-critical web servers, they do come at the cost of additional CPU processing. It is important to choose the right signatures for the application's needs. Enable only the signatures that are relevant to the customer application/environment. NetScaler offers signatures in more than 10 different categories across platforms/OS/technologies. The signature rules database is substantial, as attack information has built up over the years.
Most of the old rules may not be relevant for all networks, as software developers may have patched the underlying vulnerabilities already or customers may be running a more recent version of the OS. Signatures Updates NetScaler Web Application Firewall supports both automatic and manual signature updates. We suggest enabling auto-update for signatures to stay up to date. The signature files are hosted in an AWS environment, and it is important to allow outbound access to the NetScaler IPs through network firewalls so that the appliance can fetch the latest signature files. Updating signatures on the ADC has no effect on the processing of real-time traffic. Application Security Analytics The Application Security Dashboard provides a holistic view of the security status of user applications. For example, it shows key security metrics such as security violations, signature violations, and threat indexes. The Application Security dashboard also displays attack-related information such as SYN attacks, small window attacks, and DNS flood attacks for the discovered NetScaler ADC instances. Note: To view the metrics of the Application Security Dashboard, AppFlow for Security Insight should be enabled on the NetScaler ADC instances that users want to monitor. To view the security metrics of a NetScaler ADC instance on the application security dashboard: Log on to NetScaler ADM using the administrator credentials. Navigate to Applications > App Security Dashboard, and select the instance IP address from the Devices list. Users can further drill down on the discrepancies reported on the Application Security Investigator by clicking the bubbles plotted on the graph. Centralized Learning on ADM NetScaler Web Application Firewall (WAF) protects user web applications from malicious attacks such as SQL injection and cross-site scripting (XSS). To prevent data breaches and provide the right security protection, users must monitor their traffic for threats and real-time actionable data on attacks.
Sometimes, the attacks reported might be false positives, and those need to be provided as exceptions. The Centralized Learning on NetScaler ADM is a repetitive pattern filter that enables WAF to learn the behavior (the normal activities) of user web applications. Based on monitoring, the engine generates a list of suggested rules or exceptions for each security check applied on the HTTP traffic. It is much easier to deploy relaxation rules using the learning engine than to deploy the necessary relaxations manually. To deploy the learning feature, users must first configure a Web Application Firewall profile (set of security settings) on the user NetScaler ADC appliance. For more information, see Creating Web Application Firewall profiles: Creating Web App Firewall Profiles. NetScaler ADM generates a list of exceptions (relaxations) for each security check. As an administrator, users can review the list of exceptions in NetScaler ADM and decide to deploy or skip. Using the WAF learning feature in NetScaler ADM, users can: Configure a learning profile with the following security checks: Buffer Overflow HTML Cross-Site Scripting Note: For the cross-site scripting check, the only supported learning location is FormField. HTML SQL Injection Note: For the HTML SQL Injection check, users must configure set -sqlinjectionTransformSpecialChars to ON and set -sqlinjectiontype sqlspclcharorkeywords in the NetScaler ADC instance.
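On the CLI, the note above corresponds to profile settings along these lines. This is a sketch with an assumed profile name; verify the exact parameter names and values against your NetScaler release's documentation:

```shell
# Assumed profile name "my_appfw_profile"; parameter spelling per recent NetScaler releases.
set appfw profile my_appfw_profile -SQLInjectionTransformSpecialChars ON
set appfw profile my_appfw_profile -SQLInjectionType SQLSplCharORKeyword
```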
Check the relaxation rules in NetScaler ADM and decide to take the necessary action (deploy or skip) Get the notifications through email, Slack, and ServiceNow Use the dashboard to view relaxation details To use the WAF learning in NetScaler ADM: Configure the learning profile: Configure the Learning Profile See the relaxation rules: View Relaxation Rules and Idle Rules Use the WAF learning dashboard: View WAF Learning Dashboard StyleBook NetScaler Web Application Firewall is a Web Application Firewall (WAF) that protects web applications and sites from both known and unknown attacks, including all application-layer and zero-day threats. NetScaler ADM now provides a default StyleBook with which users can more conveniently create an application firewall configuration on NetScaler ADC instances. Deploying Application Firewall Configurations The following task assists you in deploying a load balancing configuration along with the application firewall and IP reputation policy on NetScaler ADC instances in your business network. To Create an LB Configuration with Application Firewall Settings In NetScaler ADM, navigate to Applications > Configurations > StyleBooks. The StyleBooks page displays all the StyleBooks available for customer use in NetScaler ADM. Scroll down and find the HTTP/SSL Load Balancing StyleBook with application firewall policy and IP reputation policy. Users can also search for the StyleBook by typing the name as lb-appfw. Click Create Configuration. The StyleBook opens as a user interface page on which users can enter the values for all the parameters defined in this StyleBook. Enter values for the following parameters: Load Balanced Application Name. Name of the load balanced configuration with an application firewall to deploy in the user network. Load balanced App Virtual IP address. Virtual IP address at which the NetScaler ADC instance receives client requests. Load Balanced App Virtual Port.
The TCP Port to be used by the users in accessing the load balanced application. Load Balanced App Protocol. Select the front-end protocol from the list. Application Server Protocol. Select the protocol of the application server. As an option, users can enable and configure the Advanced Load Balancer Settings. Optionally, users can also set up an authentication server for authenticating traffic for the load balancing virtual server. Click "+" in the server IPs and Ports section to create application servers and the ports that they can be accessed on. Users can also create FQDN names for application servers. Users can also specify the details of the SSL certificate. Users can also create monitors in the target NetScaler ADC instance. To configure the application firewall on the virtual server, enable WAF Settings. Ensure that the application firewall policy rule is true if users want to apply the application firewall settings to all traffic on that VIP. Otherwise, specify the NetScaler ADC policy rule to select a subset of requests to which to apply the application firewall settings. Next, select the type of profile that has to be applied: HTML or XML. Optionally, users can configure detailed application firewall profile settings by enabling the application firewall Profile Settings check box. Optionally, if users want to configure application firewall signatures, enter the name of the signature object that is created on the NetScaler ADC instance where the virtual server is to be deployed. Note: Users cannot create signature objects by using this StyleBook. Next, users can also configure any other application firewall profile settings, such as StartURL settings, DenyURL settings, and others. For more information on application firewall and configuration settings, see Application Firewall. In the Target Instances section, select the NetScaler ADC instance on which to deploy the load balancing virtual server with the application firewall.
Note: Users can also click the refresh icon to add recently discovered NetScaler ADC instances in NetScaler ADM to the available list of instances in this window. Users can also enable IP Reputation check to identify the IP address that is sending unwanted requests. Users can use the IP reputation list to preemptively reject requests that are coming from the IP with the bad reputation. Tip: NetScaler recommends that users select Dry Run to check the configuration objects that must be created on the target instance before they run the actual configuration on the instance. When the configuration is successfully created, the StyleBook creates the required load balancing virtual server, application server, services, service groups, application firewall labels, application firewall policies, and binds them to the load balancing virtual server. The following figure shows the objects created in each server: To see the ConfigPack created on NetScaler ADM, navigate to Applications > Configurations. Security Insight Analytics Web and web service applications that are exposed to the Internet have become increasingly vulnerable to attacks. To protect applications from attack, users need visibility into the nature and extent of past, present, and impending threats, real-time actionable data on attacks, and recommendations on countermeasures. Security Insight provides a single-pane solution to help users assess user application security status and take corrective actions to secure user applications. How Security Insight Works Security Insight is an intuitive dashboard-based security analytics solution that gives users full visibility into the threat environment associated with user applications. Security insight is included in NetScaler ADM, and it periodically generates reports based on the user Application Firewall and ADC system security configurations. The reports include the following information for each application: Threat index. 
A single-digit rating system that indicates the criticality of attacks on the application, regardless of whether the application is protected by an ADC appliance. The more critical the attacks on an application, the higher the threat index for that application. Values range from 1 through 7. The threat index is based on attack information. The attack-related information, such as violation type, attack category, location, and client details, gives users insight into the attacks on the application. Violation information is sent to NetScaler ADM only when a violation or attack occurs. Many breaches and vulnerabilities lead to a high threat index value. Safety index. A single-digit rating system that indicates how securely users have configured the ADC instances to protect applications from external threats and vulnerabilities. The lower the security risks for an application, the higher the safety index. Values range from 1 through 7. The safety index considers both the application firewall configuration and the ADC system security configuration. For a high safety index value, both configurations must be strong. For example, if rigorous application firewall checks are in place but ADC system security measures, such as a strong password for the nsroot user, have not been adopted, applications are assigned a low safety index value. Actionable Information. Information that users need for lowering the threat index and increasing the safety index, which significantly improves application security. For example, users can review information about violations, existing and missing security configurations for the application firewall and other security features, the rate at which the applications are being attacked, and so on. Configuring Security Insight Note: Security Insight is supported on ADC instances with Premium license or ADC Advanced with AppFirewall license only. 
To configure security insight on an ADC instance, first configure an application firewall profile and an application firewall policy, and then bind the application firewall policy globally. Then, enable the AppFlow feature, configure an AppFlow collector, action, and policy, and bind the policy globally. When users configure the collector, they must specify the IP address of the NetScaler ADM service agent on which they want to monitor the reports. Configure Security Insight on an ADC Instance Run the following commands to configure an application firewall profile and policy, and bind the application firewall policy globally or to the load balancing virtual server.
add appfw profile <name> [-defaults ( basic or advanced )]
set appfw profile <name> [-startURLAction <startURLAction> ...]
add appfw policy <name> <rule> <profileName>
bind appfw global <policyName> <priority>
or,
bind lb vserver <lb vserver> -policyName <policy> -priority <priority>
Sample:
add appfw profile pr_appfw -defaults advanced
set appfw profile pr_appfw -startURLaction log stats learn
add appfw policy pr_appfw_pol "HTTP.REQ.HEADER(\"Host\").EXISTS" pr_appfw
bind appfw global pr_appfw_pol 1
or,
bind lb vserver outlook -policyName pr_appfw_pol -priority "20"
Run the following commands to enable the AppFlow feature, configure an AppFlow collector, action, and policy, and bind the policy globally or to the load balancing virtual server:
add appflow collector <name> -IPAddress <ipaddress>
set appflow param [-SecurityInsightRecordInterval <secs>] [-SecurityInsightTraffic ( ENABLED or DISABLED )]
add appflow action <name> -collectors <string>
add appflow policy <name> <rule> <action>
bind appflow global <policyName> <priority> [<gotoPriorityExpression>] [-type <type>]
or,
bind lb vserver <vserver> -policyName <policy> -priority <priority>
Sample:
add appflow collector col -IPAddress 10.102.63.85
set appflow param -SecurityInsightRecordInterval 600 -SecurityInsightTraffic ENABLED
add appflow action act1 -collectors col
add appflow action af_action_Sap_10.102.63.85 -collectors col
add appflow policy pol1 true act1
add appflow policy af_policy_Sap_10.102.63.85 true af_action_Sap_10.102.63.85
bind appflow global pol1 1 END -type REQ_DEFAULT
or,
bind lb vserver Sap -policyName af_action_Sap_10.102.63.85 -priority "20"
Enable Security Insight from NetScaler ADM Navigate to Networks > Instances > NetScaler ADC and select the instance type. For example, VPX. Select the instance and, from the Select Action list, select Configure Analytics. On the Configure Analytics on virtual server window:
Select the virtual servers on which you want to enable security insight and click Enable Analytics. The Enable Analytics window is displayed.
Select Security Insight.
Under Advanced Options, select Logstream or IPFIX as the Transport Mode.
The Expression is true by default.
Click OK.
Note: If users select virtual servers that are not licensed, then NetScaler ADM first licenses those virtual servers and then enables analytics. For admin partitions, only Web Insight is supported. After users click OK, NetScaler ADM proceeds to enable analytics on the selected virtual servers. Note: When users create a group, they can assign roles to the group, provide application-level access to the group, and assign users to the group. NetScaler ADM analytics now supports virtual IP address-based authorization. Customer users can now see reports for all Insights for only the applications (virtual servers) for which they are authorized. For more information on groups and assigning users to the group, see Configure Groups on NetScaler ADM: Configure Groups on NetScaler ADM. Thresholds Users can set and view thresholds on the safety index and threat index of applications in Security Insight. To set a threshold Navigate to System > Analytics Settings > Thresholds, and select Add.
Select the traffic type as Security in the Traffic Type field, and enter the required information in the other fields, such as Name, Duration, and Entity.
In the Rule section, use the Metric, Comparator, and Value fields to set a threshold. For example, “Threat Index” “>” “5”.
Click Create.

To view the threshold breaches

Navigate to Analytics > Security Insight > Devices, and select the ADC instance.
In the Application section, users can view the number of threshold breaches that have occurred for each virtual server in the Threshold Breach column.

Security Insight Use Case

The following use cases describe how users can use security insight to assess the threat exposure of applications and improve security measures.

Obtain an Overview of the Threat Environment

In this use case, users have a set of applications that are exposed to attacks, and they have configured NetScaler ADM to monitor the threat environment. Users need to frequently review the threat index, safety index, and the type and severity of any attacks that the applications might have experienced, so that they can focus first on the applications that need the most attention. The security insight dashboard provides a summary of the threats experienced by the user applications over a time period of the user's choosing, and for a selected ADC device. It displays the list of applications, their threat and safety indexes, and the total number of attacks for the chosen time period. For example, users might be monitoring Microsoft Outlook, Microsoft Lync, SharePoint, and an SAP application, and they might want to review a summary of the threat environment for these applications. To obtain a summary of the threat environment, log on to NetScaler ADM, and then navigate to Analytics > Security Insight. Key information is displayed for each application. The default time period is 1 hour. To view information for a different time period, select a time period from the list at the top-left.
To view a summary for a different ADC instance, under Devices, click the IP address of the ADC instance. To sort the application list by a given column, click the column header.

Determine the Threat Exposure of an Application

After reviewing a summary of the threat environment on the Security Insight dashboard to identify the applications that have a high threat index and a low safety index, users want to determine their threat exposure before deciding how to secure them. That is, users want to determine the type and severity of the attacks that have degraded their index values. Users can determine the threat exposure of an application by reviewing the application summary. In this example, Microsoft Outlook has a threat index value of 6, and users want to know what factors are contributing to this high threat index.

To determine the threat exposure of Microsoft Outlook, on the Security Insight dashboard, click Outlook. The application summary includes a map that identifies the geographic location of the server.

Click Threat Index > Security Check Violations and review the violation information that appears.
Click Signature Violations and review the violation information that appears.

Determine Existing and Missing Security Configurations for an Application

After reviewing the threat exposure of an application, users want to determine what application security configurations are in place and what configurations are missing for that application. Users can obtain this information by drilling down into the application’s safety index summary. The safety index summary gives users information about the effectiveness of the following security configurations:

Application Firewall Configuration. Shows how many signature and security entities are not configured.
NetScaler ADM System Security. Shows how many system security settings are not configured.

In the previous use case, users reviewed the threat exposure of Microsoft Outlook, which has a threat index value of 6.
Now, users want to know what security configurations are in place for Outlook and what configurations can be added to improve its threat index.

On the Security Insight dashboard, click Outlook, and then click the Safety Index tab.
Review the information provided in the Safety Index Summary area.
On the Application Firewall Configuration node, click Outlook_Profile and review the security check and signature violation information in the pie charts.
Review the configuration status of each protection type in the application firewall summary table. To sort the table on a column, click the column header.
Click the NetScaler ADM System Security node and review the system security settings and NetScaler recommendations to improve the application safety index.

Identify Applications That Require Immediate Attention

The applications that need immediate attention are those having a high threat index and a low safety index. In this example, both Microsoft Outlook and Microsoft Lync have a high threat index value of 6, but Lync has the lower of the two safety indexes. Therefore, users might have to focus their attention on Lync before improving the threat environment for Outlook.

Determine the Number of Attacks in a Given Period of Time

Users might want to determine how many attacks occurred on a given application at a given point in time, or they might want to study the attack rate for a specific time period. On the Security Insight page, click any application and in the Application Summary, click the number of violations. The Total Violations page displays the attacks in a graphical manner for one hour, one day, one week, and one month.

The Application Summary table provides the details about the attacks. Some of them are as follows:

Attack time
IP address of the client from which the attack happened
Severity
Category of violation
URL from which the attack originated, and other details
While users can always view the time of attack in an hourly report, now they can view the attack time range for aggregated reports, even for daily or weekly reports. If users select “1 Day” from the time-period list, the Security Insight report displays all attacks that are aggregated and the attack time is displayed in a one-hour range. If users choose “1 Week” or “1 Month,” all attacks are aggregated and the attack time is displayed in a one-day range.

Obtain Detailed Information about Security Breaches

Users might want to view a list of the attacks on an application and gain insights into the type and severity of attacks, actions taken by the ADC instance, resources requested, and the source of the attacks. For example, users might want to determine how many attacks on Microsoft Lync were blocked, what resources were requested, and the IP addresses of the sources.

On the Security Insight dashboard, click Lync > Total Violations.
In the table, click the filter icon in the Action Taken column header, and then select Blocked.
For information about the resources that were requested, review the URL column. For information about the sources of the attacks, review the Client IP column.

View Log Expression Details

NetScaler ADC instances use log expressions configured with the Application Firewall profile to take action for the attacks on an application in the user enterprise. In Security Insight, users can view the values returned for the log expressions used by the ADC instance. These values include the request header, request body, and so on. In addition to the log expression values, users can also view the log expression name and the comment for the log expression defined in the Application Firewall profile that the ADC instance used to take action for the attack.

Prerequisites

Ensure that users:
Configure log expressions in the Application Firewall profile. For more information, see Application Firewall.
Enable log expression-based Security Insight settings in NetScaler ADM. Do the following:
Navigate to Analytics > Settings, and click Enable Features for Analytics.
In the Enable Features for Analytics page, select Enable Security Insight under the Log Expression Based Security Insight Setting section and click OK.

For example, users might want to view the values of the log expression returned by the ADC instance for the action it took for an attack on Microsoft Lync in the user enterprise. On the Security Insight dashboard, navigate to Lync > Total Violations. In the Application Summary table, click the URL to view the complete details of the violation on the Violation Information page, including the log expression name, comment, and the values returned by the ADC instance for the action.

Determine the Safety Index before Deploying the Configuration

Security breaches occur after users deploy the security configuration on an ADC instance, but users might want to assess the effectiveness of the security configuration before they deploy it. For example, users might want to assess the safety index of the configuration for the SAP application on the ADC instance with IP address 10.102.60.27.

On the Security Insight dashboard, under Devices, click the IP address of the ADC instance that users configured. Users can see that both the threat index and the total number of attacks are 0. The threat index is a direct reflection of the number and type of attacks on the application. Zero attacks indicate that the application is not under any threat.

Click Sap > Safety Index > SAP_Profile and assess the safety index information that appears. In the application firewall summary, users can view the configuration status of different protection settings. If a setting is set to log only, or if a setting is not configured, the application is assigned a lower safety index.
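The last point can be sketched from the ADC command line: moving a protection check from log-only to block (while keeping logging and statistics on) is the kind of change that raises the application firewall profile's contribution to the safety index. A minimal, hedged sketch, reusing the SAP_Profile name from the use case; the exact action flags available depend on the ADC release:

```
set appfw profile SAP_Profile -SQLInjectionAction block log stats
set appfw profile SAP_Profile -crossSiteScriptingAction block log stats
set appfw profile SAP_Profile -startURLAction block log stats
```

After saving the configuration and letting traffic flow, the safety index for the application on the dashboard should reflect the stronger enforcement.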
Security Violations

View Application Security Violation Details

Web applications that are exposed to the internet have become drastically more vulnerable to attacks. NetScaler ADM enables users to visualize actionable violation details to protect applications from attacks. Navigate to Security > Security Violations for a single-pane solution to:

Access the application security violations based on their categories, such as Network, Bot, and WAF
Take corrective actions to secure the applications

To view the security violations in NetScaler ADM, ensure:

Users have a premium license for the NetScaler ADC instance (for WAF and BOT violations).
Users have applied a license on the load balancing or content switching virtual servers (for WAF and BOT). For more information, see Manage Licensing on Virtual Servers.
Users enable more settings. For more information, see the procedure available in the Setting up section in the NetScaler product documentation.

Violation Categories

NetScaler ADM enables users to view the following violations:

NETWORK: HTTP Slow Loris, DNS Slow Loris, HTTP Slow Post, NXDomain Flood Attack, HTTP desync attack, Bleichenbacher Attack, Segment smack Attack, Syn Flood Attack

Bot: Excessive Client Connections, Account Takeover**, Unusually High Upload Volume, Unusually High Request Rate, Unusually High Download Volume

WAF: Unusually High Upload Transactions, Unusually High Download Transactions, Excessive Unique IPs, Excessive Unique IPs Per Geo

** - Users must configure the account takeover setting in NetScaler ADM. See the prerequisite mentioned in Account Takeover.
Apart from these violations, users can also view the following Security Insight and Bot Insight violations under the WAF and Bot categories, respectively:

WAF: Buffer Overflow, Content type, Cookie Consistency, CSRF Form Tagging, Deny URL, Form Field Consistency, Field Formats, Maximum Uploads, Referrer Header, Safe Commerce, Safe Object, HTML SQL Inject, Start URL, XSS, XML DoS, XML Format, XML WSI, XML SSL, XML Attachment, XML SOAP Fault, XML Validation, Others, IP Reputation, HTTP DOS, TCP Small Window, Signature Violation, File Upload Type, JSON XSS, JSON SQL, JSON DOS, Command Injection, Infer Content Type XML, Cookie Hijack

Bot: Crawler, Feed Fetcher, Link Checker, Marketing, Scraper, Screenshot Creator, Search Engine, Service Agent, Site Monitor, Speed Tester, Tool, Uncategorized, Virus Scanner, Vulnerability Scanner, DeviceFP Wait Exceeded, Invalid DeviceFP, Invalid Captcha Response, Captcha Attempts Exceeded, Valid Captcha Response, Captcha Client Muted, Captcha Wait Time Exceeded, Request Size Limit Exceeded, Rate Limit Exceeded, Block list (IP, subnet, policy expression), Allow list (IP, subnet, policy expression), Zero Pixel Request, Source IP, Host, Geo Location, URL

Setting up

Users must enable Advanced Security Analytics and set Web Transaction Settings to All to view the following violations in NetScaler ADM:

Unusually High Upload Transactions (WAF)
Unusually High Download Transactions (WAF)
Excessive Unique IPs (WAF)
Account takeover (BOT)

For other violations, ensure that Metrics Collector is enabled. By default, Metrics Collector is enabled on the NetScaler ADC instance. For more information, see Configure Intelligent App Analytics.

Enable Advanced Security Analytics

Navigate to Networks > Instances > NetScaler ADC, and select the instance type. For example, MPX.
Select the NetScaler ADC instance and, from the Select Action list, select Configure Analytics.
Select the virtual server and click Enable Analytics.
On the Enable Analytics window:
Select Web Insight.
After users select Web Insight, the read-only Advanced Security Analytics option is enabled automatically.

Note: The Advanced Security Analytics option is displayed only for premium licensed ADC instances.

Select Logstream as Transport Mode.
The Expression is true by default.
Click OK.

Enable Web Transaction settings

Navigate to Analytics > Settings. The Settings page is displayed.
Click Enable Features for Analytics.
Under Web Transaction Settings, select All.
Click OK.

Security violations dashboard

In the security violations dashboard, users can view:

Total violations occurred across all ADC instances and applications. The total violations are displayed based on the selected time duration.
Total violations under each category.
Total ADCs affected, total applications affected, and top violations based on the total occurrences and the affected applications.

Violation details

For each violation, NetScaler ADM monitors the behavior for a specific time duration and detects violations for unusual behaviors. Click each tab to view the violation details. Users can view details such as the total occurrences, last occurred, and total applications affected.

Under event details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected with violations.
The graph indicating violations. Drag and select on the graph that lists the violations to narrow down the violation search. Click Reset Zoom to reset the zoom result.
Recommended Actions that suggest how users can troubleshoot the issue.
Other violation details, such as the violation occurrence time and detection message.

Bot Insight

Using Bot Insight in NetScaler ADM

After users configure the bot management in NetScaler ADC, they must enable Bot Insight on virtual servers to view insights in NetScaler ADM. To enable Bot Insight:

Navigate to Networks > Instances > NetScaler ADC and select the instance type. For example, VPX.
Select the instance and, from the Select Action list, select Configure Analytics.
Select the virtual server and click Enable Analytics.
On the Enable Analytics window:
Select Bot Insight.
Under Advanced Option, select Logstream.
Click OK.

After enabling Bot Insight, navigate to Analytics > Security > Bot Insight. The page provides:

A time list to view bot details. Drag the slider to select a specific time range and click Go to display the customized results.
Total instances affected by bots.
The virtual server for the selected instance with total bot attacks:
Total Bots – Indicates the total bot attacks (inclusive of all bot categories) found for the virtual server.
Total Human Browsers – Indicates the total human users accessing the virtual server.
Bot Human Ratio – Indicates the ratio between human users and bots accessing the virtual server.
Signature Bots, Fingerprinted Bots, Rate Based Bots, IP Reputation Bots, allow list Bots, and block list Bots – Indicates the total bot attacks that occurred based on the configured bot category. For more information about bot categories, see Configure Bot Detection Techniques in NetScaler ADC.
Click > to view bot details in a graph format.

View events history

Users can view the bot signature updates in the Events History when:

New bot signatures are added in NetScaler ADC instances.
Existing bot signatures are updated in NetScaler ADC instances.

You can select the time duration on the bot insight page to view the events history. The following sequence shows how the bot signatures are retrieved from the AWS cloud, updated on NetScaler ADC, and how the signature update summary is viewed on NetScaler ADM:

The bot signature auto update scheduler retrieves the mapping file from the AWS URI.
Checks the latest signatures in the mapping file against the existing signatures in the ADC appliance.
Downloads the new signatures from AWS and verifies the signature integrity.
Updates the existing bot signatures with the new signatures in the bot signature file.
Generates an SNMP alert and sends the signature update summary to NetScaler ADM.

View Bots

Click the virtual server to view the Application Summary, which provides details such as:

Average RPS – Indicates the average bot transaction requests per second (RPS) received on virtual servers.
Bots by Severity – Indicates the highest bot transactions that occurred based on the severity. The severity is categorized as Critical, High, Medium, and Low. For example, if the virtual servers have 11770 high severity bots and 1550 critical severity bots, then NetScaler ADM displays Critical 1.55 K under Bots by Severity.
Largest Bot Category – Indicates the highest bot attacks that occurred based on the bot category. For example, if the virtual servers have 8000 block listed bots, 5000 allow listed bots, and 10000 Rate Limit Exceeded bots, then NetScaler ADM displays Rate Limit Exceeded 10 K under Largest Bot Category.
Largest Geo Source – Indicates the highest bot attacks that occurred based on a region. For example, if the virtual servers have 5000 bot attacks in Santa Clara, 7000 bot attacks in London, and 9000 bot attacks in Bangalore, then NetScaler ADM displays Bangalore 9 K under Largest Geo Source.
Average % Bot Traffic – Indicates the human bot ratio.

The page also:

Displays the severity of the bot attacks based on locations in map view.
Displays the types of bot attacks (Good, Bad, and All).
Displays the total bot attacks along with the corresponding configured actions.
For example, if you have configured:

IP address range (192.140.14.9 to 192.140.14.254) as block list bots and selected Drop as an action for these IP address ranges
IP range (192.140.15.4 to 192.140.15.254) as block list bots and selected to create a log message as an action for these IP ranges

In this scenario, NetScaler ADM displays:

Total block listed bots
Total bots under Dropped
Total bots under Log

View CAPTCHA bots

In webpages, CAPTCHAs are designed to identify whether the incoming traffic is from a human or an automated bot. To view the CAPTCHA activities in NetScaler ADM, users must configure CAPTCHA as a bot action for the IP reputation and device fingerprint detection techniques in a NetScaler ADC instance. For more information, see Configure Bot Management.

The following are the CAPTCHA activities that NetScaler ADM displays in Bot Insight:

Captcha attempts exceeded – Denotes the maximum number of CAPTCHA attempts made after login failures
Captcha client muted – Denotes the number of client requests that are dropped or redirected because these requests were detected as bad bots earlier with the CAPTCHA challenge
Human – Denotes the CAPTCHA entries performed by human users
Invalid captcha response – Denotes the number of incorrect CAPTCHA responses received from the bot or human, when NetScaler ADC sends a CAPTCHA challenge

View bot traps

To view bot traps in NetScaler ADM, you must configure the bot trap in the NetScaler ADC instance. For more information, see Configure Bot Management. To identify the bot trap, a script is enabled in the webpage, and this script is hidden from humans, but not from bots. NetScaler ADM identifies and reports the bot traps when this script is accessed by bots.

Click the virtual server and select Zero Pixel Request.

View bot details

For further details, click the bot attack type under Bot Category. The details, such as attack time and total number of bot attacks for the selected captcha category, are displayed.
Users can also drag the bar graph to select the specific time range to be displayed with bot attacks. To get additional information about the bot attack, click to expand.

Instance IP – Indicates the NetScaler ADC instance IP address
Total Bots – Indicates the total bot attacks that occurred during that particular time
HTTP Request URL – Indicates the URL that is configured for captcha reporting
Country Code – Indicates the country where the bot attack occurred
Region – Indicates the region where the bot attack occurred
Profile Name – Indicates the profile name that users provided during the configuration

Advanced search

Users can also use the search text box and time duration list to view bot details as per their requirement. When users click the search box, it offers the following list of search suggestions:

Instance IP – NetScaler ADC instance IP address
Client-IP – Client IP address
Bot-Type – Bot type, such as Good or Bad
Severity – Severity of the bot attack
Action-Taken – Action taken after the bot attack, such as Drop, No action, Redirect
Bot-Category – Category of the bot attack, such as block list, allow list, fingerprint, and so on. Based on a category, users can associate a bot action to it
Bot-Detection – Bot detection types (block list, allow list, and so on) that users have configured on the NetScaler ADC instance
Location – Region/country where the bot attack has occurred
Request-URL – URL that has the possible bot attacks

Users can also use operators in their search queries to narrow the focus of the search.
For example, if users want to view all bad bots:

Click the search box and select Bot-Type
Click the search box again and select the operator =
Click the search box again and select Bad
Click Search to display the results

Bot violation details

Excessive Client Connections

When a client tries to access the web application, the client request is processed in the NetScaler ADC appliance, instead of connecting to the server directly. Web traffic comprises bots, and bots can perform various actions at a faster rate than a human. Using the Excessive Client Connections indicator, users can analyze scenarios when an application receives unusually high client connections through bots.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected with violations.
The graph indicating all violations
The violation occurrence time
The detection message for the violation, indicating the total IP addresses transacting the application
The accepted IP address range that the application can receive

Account Takeover

Note: Ensure users enable the advanced security analytics and web transaction options. For more information, see Setting up.

Some malicious bots can steal user credentials and perform various kinds of cyberattacks. These malicious bots are known as bad bots. It is essential to identify bad bots and protect the user appliance from any form of advanced security attacks.

Prerequisite

Users must configure the Account Takeover settings in NetScaler ADM.

Navigate to Analytics > Settings > Security Violations.
Click Add.
On the Add Application page, specify the following parameters:
Application - Select the virtual server from the list.
Method - Select the HTTP method type from the list. The available options are GET, PUSH, POST, and UPDATE.
Login URL and Success response code - Specify the URL of the web application and specify the HTTP status code (for example, 200) for which users want NetScaler ADM to report the account takeover violation from bad bots.
Click Add.

After users configure the settings, using the Account Takeover indicator, users can analyze whether bad bots attempted to take over the user account by making multiple requests along with credentials.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected with violations.
The graph indicating all violations
The violation occurrence time
The detection message for the violation, indicating total unusual failed login activity, successful logins, and failed logins
The bad bot IP address. Click to view details such as time, IP address, total successful logins, total failed logins, and total requests made from that IP address.

Unusually High Upload Volume

Web traffic also comprises data that is processed for uploading. For example, if the user's average upload data per day is 500 MB and users upload 2 GB of data, then this can be considered an unusually high upload data volume. Bots are also capable of uploading data more quickly than humans. Using the Unusually High Upload Volume indicator, users can analyze abnormal scenarios of upload data to the application through bots.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected with violations.
The graph indicating all violations
The violation occurrence time
The detection message for the violation, indicating the total upload data volume processed
The accepted range of upload data to the application

Unusually High Download Volume

Similar to high upload volume, bots can also perform downloads more quickly than humans.
Using the Unusually High Download Volume indicator, users can analyze abnormal scenarios of download data from the application through bots.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected with violations.
The graph indicating all violations
The violation occurrence time
The detection message for the violation, indicating the total download data volume processed
The accepted range of download data from the application

Unusually High Request Rate

Users can control the incoming and outgoing traffic from or to an application. A bot attack can produce an unusually high request rate. For example, if users configure an application to allow 100 requests/minute and they observe 350 requests, then it might be a bot attack. Using the Unusually High Request Rate indicator, users can analyze the unusual request rate received by the application.

Under Event Details, users can view:

The affected application. Users can also select the application from the list if two or more applications are affected with violations.
The graph indicating all violations
The violation occurrence time
The detection message for the violation, indicating the total requests received and the percentage of excess requests received compared to the expected request rate
The accepted range of the expected request rate for the application

Use Cases

Bot

Sometimes the incoming web traffic comprises bots, and most organizations suffer from bot attacks. Web and mobile applications are significant revenue drivers for business, and most companies are under the threat of advanced cyberattacks, such as bots. A bot is a software program that automatically performs certain actions repeatedly at a much faster rate than a human. Bots can interact with webpages, submit forms, run actions, scan texts, or download content. They can access videos, post comments, and tweet on social media platforms.
Some bots, known as chatbots, can hold basic conversations with human users. A bot that performs a helpful service, such as customer service, automated chat, or search engine crawling, is a good bot. At the same time, a bot that scrapes or downloads content from a website, steals user credentials, spams content, or performs other kinds of cyberattacks is a bad bot. With a good number of bad bots performing malicious tasks, it is essential to manage bot traffic and protect the user web applications from bot attacks. By using NetScaler bot management, users can detect the incoming bot traffic and mitigate bot attacks to protect the user web applications.

NetScaler bot management helps identify bad bots and protect the user appliance from advanced security attacks. It detects good and bad bots and identifies whether incoming traffic is a bot attack. By using bot management, users can mitigate attacks and protect the user web applications.

NetScaler ADC bot management provides the following benefits:

Defends against bots, scripts, and toolkits. Provides real-time threat mitigation using static signature-based defense and device fingerprinting.
Neutralizes automated basic and advanced attacks. Prevents attacks such as App layer DDoS, password spraying, password stuffing, price scrapers, and content scrapers.
Protects user APIs and investments. Protects user APIs from unwarranted misuse and protects infrastructure investments from automated traffic.

Some use cases where users can benefit by using the NetScaler bot management system are:

Brute force login. A government web portal is constantly under attack by bots attempting brute force user logins. The organization discovers the attack by looking through web logs and seeing specific users being hit over and over again with rapid login attempts and passwords incrementing using a dictionary attack approach. By law, they must protect themselves and their users.
By deploying NetScaler bot management, they can stop brute force login using device fingerprinting and rate limiting techniques.

Block bad bots and device fingerprint unknown bots. A web entity gets 100,000 visitors each day. They have to keep upgrading the underlying footprint, and they are spending a fortune. In a recent audit, the team discovered that 40 percent of the traffic came from bots, scraping content, picking news, checking user profiles, and more. They want to block this traffic to protect their users and reduce their hosting costs. Using bot management, they can block known bad bots and fingerprint unknown bots that are hammering their site. By blocking these bots, they can reduce bot traffic by 90 percent.

Permit good bots. “Good” bots are designed to help businesses and consumers. They have been around since the early 1990s, when the first search engine bots were developed to crawl the Internet. Google, Yahoo, and Bing would not exist without them. Other examples of good bots, mostly consumer-focused, include:

Chatbots (also known as chatterbots, smart bots, talk bots, IM bots, social bots, conversation bots) interact with humans through text or sound. One of the first text uses was for online customer service and text messaging apps like Facebook Messenger and iPhone Messages. Siri, Cortana, and Alexa are chatbots; but so are mobile apps that let users order coffee and then tell them when it will be ready, let users watch movie trailers and find local theater showtimes, or send users a picture of the car model and license plate when they request a ride service.
Shopbots scour the Internet looking for the lowest prices on items users are searching for.
Monitoring bots check on the health (availability and responsiveness) of websites. Downdetector is an example of an independent site that provides real-time status information, including outages, of websites and other kinds of services. For more information about Downdetector, see Downdetector.
Bot Detection

Configuring Bot Management by using the NetScaler ADC GUI

Users can configure NetScaler ADC bot management by first enabling the feature on the appliance. Once users enable the feature, they can create a bot policy to evaluate the incoming traffic as bot and send the traffic to the bot profile. Then, users create a bot profile and bind the profile to a bot signature. As an alternative, users can also clone the default bot signature file and use the signature file to configure the detection techniques. After creating the signature file, users can import it into the bot profile. All these steps are performed in the following sequence:

Enable the bot management feature
Configure bot management settings
Clone the NetScaler bot default signature
Import the NetScaler bot signature
Configure bot signature settings
Create a bot profile
Create a bot policy

Enable the Bot Management Feature

On the navigation pane, expand System and then click Settings.
On the Configure Advanced Features page, select the Bot Management check box.
Click OK, and then click Close.

Clone the Bot Signature File

Navigate to Security > NetScaler Bot Management > Signatures.
On the NetScaler Bot Management Signatures page, select the default bot signatures record and click Clone.
On the Clone Bot Signature page, enter a name and edit the signature data.
Click Create.

Import the Bot Signature File

If users have their own signature file, they can import it as a file, text, or URL. Perform the following steps to import the bot signature file:

Navigate to Security > NetScaler Bot Management > Signatures.
On the NetScaler Bot Management Signatures page, import the file as URL, File, or text.
Click Continue.
On the Import NetScaler Bot Management Signature page, set the following parameters:
Name. Name of the bot signature file.
Comment. Brief description about the imported file.
Overwrite. Select the check box to allow overwriting of data during file update.
Signature Data. Modify signature parameters.
Click Done.
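The GUI sequence above (enable the feature, import a signature, create a profile and policy, and bind the policy) can also be sketched from the ADC command line. The names below (bot_sig, bot_prof, bot_pol) and the signature URL are placeholder examples, and the exact options vary by NetScaler release, so treat this as an outline rather than a definitive configuration:

```
enable ns feature Bot
import bot signature http://example.com/custom_signatures.json bot_sig
add bot profile bot_prof -signature bot_sig
add bot policy bot_pol -rule true -profileName bot_prof
bind bot global -policyName bot_pol -priority 100 -type REQ_DEFAULT
```

Binding with -rule true sends all traffic through the bot profile; in practice, users can narrow the rule expression or bind the policy to a specific virtual server instead of globally.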
IP Reputation

Configure IP Reputation by using the NetScaler ADC GUI

This configuration is a prerequisite for the bot IP reputation feature. The detection technique enables users to identify whether there is any malicious activity from an incoming IP address. As part of the configuration, different malicious bot categories are set, and a bot action is associated with each of them.

Navigate to Security > NetScaler Bot Management > Profiles. On the NetScaler Bot Management Profiles page, select a profile and click Edit. On the NetScaler Bot Management Profile page, go to the Signature Settings section and click IP Reputation. In the IP Reputation section, set the following parameters:

Enabled. Select the check box to validate incoming bot traffic as part of the detection process.
Configure Categories. Users can apply the IP reputation technique to incoming bot traffic under different categories. Based on the configured category, users can drop or redirect the bot traffic. Click Add to configure a malicious bot category.

In the Configure NetScaler Bot Management Profile IP Reputation Binding page, set the following parameters:

Category. Select a malicious bot category from the list, and associate a bot action with it.
Enabled. Select the check box to validate the IP reputation signature detection.
Bot action. Based on the configured category, users can assign no action, drop, redirect, or a CAPTCHA action.
Log. Select the check box to store log entries.
Log Message. Brief description of the log.
Comments. Brief description of the bot category.

Click OK. Click Update. Click Done.

Auto Update for Bot Signatures

The bot static signature technique uses a signature lookup table with a list of good bots and bad bots. The bots are categorized based on user-agent string and domain names. If the user-agent string and domain name in incoming bot traffic match a value in the lookup table, a configured bot action is applied.
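The equivalent CLI configuration might look like the following sketch. The profile name is a placeholder, and the category keyword and option names are assumptions that should be checked against the bot management CLI reference for your release:

```
> # Hedged sketch: enable IP reputation on a bot profile,
> # then bind a category to a DROP action with logging
> set bot profile my_bot_profile -ipReputation ON
> bind bot profile my_bot_profile -ipReputation -category BOTNETS
      -enabled ON -action DROP -log ON -logMessage "Dropped botnet source IP"
```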
The bot signature updates are hosted on the AWS cloud, and the signature lookup table communicates with the AWS database for signature updates. The auto signature update scheduler runs every hour to check the AWS database and updates the signature table in the ADC appliance. The bot signature mapping auto update URL to configure signatures is: Bot Signature Mapping.

Note: Users can also configure a proxy server and periodically update signatures from the AWS cloud to the ADC appliance through the proxy. For proxy configuration, users must set the proxy IP address and port address in the bot settings.

Configure Bot Signature Auto Update

To configure bot signature auto update, complete the following steps:

Enable Bot Signature Auto Update

Users must enable the auto update option in the bot settings on the ADC appliance. At the command prompt, type:

set bot settings -signatureAutoUpdate ON

Configure Bot Signature Auto Update using the NetScaler ADC GUI

Complete the following steps to configure bot signature auto update: Navigate to Security > NetScaler Bot Management. In the details pane, under Settings, click Change NetScaler Bot Management Settings. In the Configure NetScaler Bot Management Settings page, select the Auto Update Signature check box. Click OK and Close.

For more information on configuring IP Reputation using the CLI, see: Configure the IP Reputation Feature Using the CLI.

References

For information on using SQL Fine Grained Relaxations, see: SQL Fine Grained Relaxations. For information on how to configure the SQL Injection Check using the command line, see: HTML SQL Injection Check. For information on how to configure the SQL Injection Check using the GUI, see: Using the GUI to Configure the SQL Injection Security Check. For information on using the Learn Feature with the SQL Injection Check, see: Using the Learn Feature with the SQL Injection Check.
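If updates must flow through a proxy as the note above describes, the bot settings command can be extended along these lines. The proxy address and port are placeholders, and the parameter names are assumptions to verify against your release:

```
> # Hedged sketch: enable auto update and route signature
> # downloads through a proxy (placeholder address/port)
> set bot settings -signatureAutoUpdate ON -proxyServer 192.0.2.10 -proxyPort 3128
```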
For information on using the Log Feature with the SQL Injection Check, see: Using the Log Feature with the SQL Injection Check. For information on Statistics for the SQL Injection violations, see: Statistics for the SQL Injection Violations. For information on SQL Injection Check Highlights, see: Highlights. For information about XML SQL Injection Checks, see: XML SQL Injection Check. For information on using Cross-Site Scripting Fine Grained Relaxations, see: Cross-Site Scripting Fine Grained Relaxations. For information on configuring HTML Cross-Site Scripting using the command line, see: Using the Command Line to Configure the HTML Cross-Site Scripting Check. For information on configuring HTML Cross-Site Scripting using the GUI, see: Using the GUI to Configure the HTML Cross-Site Scripting Check. For information on using the Learn Feature with the HTML Cross-Site Scripting Check, see: Using the Learn Feature with the HTML Cross-Site Scripting Check. For information on using the Log Feature with the HTML Cross-Site Scripting Check, see: Using the Log Feature with the HTML Cross-Site Scripting Check. For information on statistics for the HTML Cross-Site Scripting violations, see: Statistics for the HTML Cross-Site Scripting Violations. For information on HTML Cross-Site Scripting highlights, see: Highlights. For information about XML Cross-Site Scripting, visit: XML Cross-Site Scripting Check. For information on using the command line to configure the Buffer Overflow Security Check, see: Using the Command Line to Configure the Buffer Overflow Security Check. For information on using the GUI to configure the Buffer Overflow Security Check, see: Configure Buffer Overflow Security Check by using the NetScaler ADC GUI. For information on using the Log Feature with the Buffer Overflow Security Check, see: Using the Log Feature with the Buffer Overflow Security Check. For information on Statistics for the Buffer Overflow violations, see: Statistics for the Buffer Overflow Violations.
For information on the Buffer Overflow Security Check Highlights, see: Highlights. For information on Adding or Removing a Signature Object, see: Adding or Removing a Signature Object. For information on creating a signatures object from a template, see: To Create a Signatures Object from a Template. For information on creating a signatures object by importing a file, see: To Create a Signatures Object by Importing a File. For information on creating a signatures object by importing a file using the command line, see: To Create a Signatures Object by Importing a File using the Command Line. For information on removing a signatures object by using the GUI, see: To Remove a Signatures Object by using the GUI. For information on removing a signatures object by using the command line, see: To Remove a Signatures Object by using the Command Line. For information on configuring or modifying a signatures object, see: Configuring or Modifying a Signatures Object. For more information on updating a signature object, see: Updating a Signature Object. For information on using the command line to update Web Application Firewall Signatures from the source, see: To Update the Web Application Firewall Signatures from the Source by using the Command Line. For information on updating a signatures object from a NetScaler format file, see: Updating a Signatures Object from a NetScaler Format File. For information on updating a signatures object from a supported vulnerability scanning tool, see: Updating a Signatures Object from a Supported Vulnerability Scanning Tool. For information on Snort Rule Integration, see: Snort Rule Integration. For information on configuring Snort Rules, see: Configure Snort Rules. For information about configuring Bot Management using the command line, see: Configure Bot Management. For information about configuring bot management settings for device fingerprint technique, see: Configure Bot Management Settings for Device Fingerprint Technique. 
For information on configuring bot allow lists by using the NetScaler ADC GUI, see: Configure Bot White List by using NetScaler ADC GUI. For information on configuring bot block lists by using the NetScaler ADC GUI, see: Configure Bot Black List by using NetScaler ADC GUI. For more information on configuring bot management, see: Configure Bot Management.

Prerequisites

Before attempting to create a VPX instance in AWS, users should ensure they have the following:

An AWS account to launch a NetScaler ADC VPX AMI in an Amazon Web Services (AWS) Virtual Private Cloud (VPC). Users can create an AWS account for free at Amazon Web Services: AWS.

An AWS Identity and Access Management (IAM) user account to securely control access to AWS services and resources. For more information about how to create an IAM user account, see the topic: Creating IAM Users (Console).

An IAM role, which is mandatory for both standalone and high availability deployments. The IAM role must have the following privileges:

ec2:DescribeInstances
ec2:DescribeNetworkInterfaces
ec2:DetachNetworkInterface
ec2:AttachNetworkInterface
ec2:StartInstances
ec2:StopInstances
ec2:RebootInstances
ec2:DescribeAddresses
ec2:AssociateAddress
ec2:DisassociateAddress
ec2:AssignPrivateIpAddresses
autoscaling:*
sns:*
sqs:*
cloudwatch:*
iam:SimulatePrincipalPolicy
iam:GetRole

For more information on IAM permissions, see: AWS Managed Policies for Job Functions. If the NetScaler CloudFormation template is used, the IAM role is created automatically. The template does not allow selecting an already created IAM role.

Note: When users log on to the VPX instance through the GUI, a prompt to configure the required privileges for the IAM role appears. Ignore the prompt if the privileges have already been configured.

Note: The AWS CLI is required to use all the functionality provided by the AWS Management Console from a terminal program. For more information, see the AWS CLI user guide: What Is the AWS Command Line Interface?
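As an illustration only, the privileges listed above could be expressed as an IAM policy document along the following lines. This is a sketch, not an official Citrix policy; in practice you would scope the wildcard actions and the Resource element down to what the deployment actually needs:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface",
        "ec2:AttachNetworkInterface",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:DescribeAddresses",
        "ec2:AssociateAddress",
        "ec2:DisassociateAddress",
        "ec2:AssignPrivateIpAddresses",
        "autoscaling:*",
        "sns:*",
        "sqs:*",
        "cloudwatch:*",
        "iam:SimulatePrincipalPolicy",
        "iam:GetRole"
      ],
      "Resource": "*"
    }
  ]
}
```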
Users also need the AWS CLI to change the network interface type to SR-IOV. For more information about NetScaler ADC and AWS, including support for the NetScaler VPX within AWS, see the NetScaler ADC and Amazon Web Services Validated Reference Design guide: NetScaler ADC and Amazon Web Services Validated Reference Design.

Limitations and Usage Guidelines

The following limitations and usage guidelines apply when deploying a NetScaler ADC VPX instance on AWS:

Users should read the AWS terminology described in this document before starting a new deployment.

The clustering feature is supported only when provisioned with NetScaler ADM Auto Scale Groups.

For the high availability setup to work effectively, associate a dedicated NAT device with the management interface, or associate an Elastic IP (EIP) with the NSIP. For more information on NAT, see the AWS documentation: NAT Instances.

Data traffic and management traffic must be segregated with ENIs belonging to different subnets. Only the NSIP address must be present on the management ENI.

If a NAT instance is used for security instead of assigning an EIP to the NSIP, appropriate VPC-level routing changes are required. For instructions on making VPC-level routing changes, see the AWS documentation: Scenario 2: VPC with Public and Private Subnets.

A VPX instance can be moved from one EC2 instance type to another (for example, from m3.large to m3.xlarge). For more information, visit: Limitations and Usage Guidelines.

For storage media for VPX on AWS, NetScaler recommends EBS because it is durable and the data remains available even after it is detached from the instance.

Dynamic addition of ENIs to VPX is not supported. Restart the VPX instance to apply the update. NetScaler recommends that users stop the standalone or HA instance, attach the new ENI, and then restart the instance.

The primary ENI cannot be changed or attached to a different subnet once it is deployed.
Secondary ENIs can be detached and changed as needed while the VPX is stopped.

Users can assign multiple IP addresses to an ENI. The maximum number of IP addresses per ENI is determined by the EC2 instance type; see the section "IP Addresses Per Network Interface Per Instance Type" in Elastic Network Interfaces: Elastic Network Interfaces. Users must allocate the IP addresses in AWS before they assign them to ENIs. For more information, see Elastic Network Interfaces: Elastic Network Interfaces.

NetScaler recommends that users avoid using the enable and disable interface commands on NetScaler ADC VPX interfaces.

The NetScaler ADC set ha node <NODE_ID> -haStatus STAYPRIMARY and set ha node <NODE_ID> -haStatus STAYSECONDARY commands are disabled by default.

IPv6 is not supported for VPX.

Due to AWS limitations, these features are not supported:

Gratuitous ARP (GARP)
L2 mode (bridging). Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP.
Tagged VLAN
Dynamic Routing
Virtual MAC

For RNAT, routing, and transparent virtual servers to work, ensure that Source/Destination Check is disabled for all ENIs in the data path. For more information, see "Changing the Source/Destination Checking" in Elastic Network Interfaces: Elastic Network Interfaces.

In a NetScaler ADC VPX deployment on AWS, in some AWS regions, the AWS infrastructure might not be able to resolve AWS API calls. This happens if the API calls are issued through a non-management interface on the NetScaler ADC VPX instance. As a workaround, restrict the API calls to the management interface only. To do that, create an NSVLAN on the VPX instance and bind the management interface to the NSVLAN by using the appropriate command. For example:

set ns config -nsvlan <vlan id> -ifnum 1/1 -tagged NO
save config

Restart the VPX instance at the prompt. For more information about configuring the NSVLAN, see Configuring NSVLAN: Configuring NSVLAN.
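The Source/Destination Check mentioned above can be disabled from the AWS CLI as well as the console. A hedged sketch, where the ENI ID is a placeholder for an interface in your own data path:

```
# Disable source/destination checking on a data-path ENI
# (placeholder interface ID; repeat for each ENI in the data path)
aws ec2 modify-network-interface-attribute \
    --network-interface-id eni-0123456789abcdef0 \
    --no-source-dest-check
```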
In the AWS console, the vCPU usage shown for a VPX instance under the Monitoring tab might be high (up to 100 percent) even when the actual usage is much lower. To see the actual vCPU usage, navigate to View all CloudWatch metrics. For more information, see: Monitor your Instances using Amazon CloudWatch. Alternatively, if low latency and performance are not a concern, users can enable the CPU Yield feature, which allows the packet engines to idle when there is no traffic. Visit the Citrix Support Knowledge Center for more details about the CPU Yield feature and how to enable it.

Technical Requirements

Before users launch the Quick Start Guide to begin a deployment, the user account must be configured as specified in the following table. Otherwise, the deployment might fail.

Resources

If necessary, sign in to the user Amazon account and request service limit increases for the following resources here: AWS/Sign in. You might need to do this if you already have an existing deployment that uses these resources, and you think you might exceed the default limits with this deployment. For default limits, see the AWS Service Quotas in the AWS documentation: AWS Service Quotas. The AWS Trusted Advisor, found here: AWS/Sign in, offers a service limits check that displays usage and limits for some aspects of some services.

Resource                  This deployment uses
VPCs                      1
Elastic IP addresses      0/1 (for bastion host)
IAM security groups       3
IAM roles                 1
Subnets                   6 (3 per Availability Zone)
Internet Gateway          1
Route Tables              5
WAF VPX instances         2
Bastion host              0/1
NAT gateway               2

Regions

NetScaler WAF on AWS isn't currently supported in all AWS Regions. For a current list of supported Regions, see AWS Service Endpoints in the AWS documentation: AWS Service Endpoints. For more information on AWS Regions and why cloud infrastructure matters, see: Global Infrastructure.
Key Pair

Make sure that at least one Amazon EC2 key pair exists in the user's AWS account in the Region where users plan to deploy using the Quick Start Guide. Make note of the key pair name; users are prompted for this information during deployment. To create a key pair, follow the instructions for Amazon EC2 Key Pairs and Linux Instances in the AWS documentation: Amazon EC2 Key Pairs and Linux Instances. If users are deploying the Quick Start Guide for testing or proof-of-concept purposes, we recommend that they create a new key pair instead of specifying a key pair that is already being used by a production instance.
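For example, a key pair can be created from the AWS CLI as follows. This is a sketch: the key name and output file are placeholders, and the command must run with credentials for the target account and Region:

```
# Create a key pair and save the private key locally (placeholder name)
aws ec2 create-key-pair --key-name vpx-quickstart-key \
    --query 'KeyMaterial' --output text > vpx-quickstart-key.pem

# Restrict permissions so SSH clients accept the key file
chmod 400 vpx-quickstart-key.pem
```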
  15. Introduction

NetScaler ADC is a world-class product in the application delivery controller (ADC) space. It has the proven ability to load balance, manage global traffic, compress, and secure applications. Azure DNS is a service on the Microsoft Azure infrastructure for hosting DNS domains and providing name resolution. Azure DNS Private Zones is a service that resolves domain names in a private network. With Private Zones, customers can use their own custom domain names rather than the Azure-provided names available today.

Overview of Azure DNS

The Domain Name System (DNS) is responsible for translating (or resolving) a service name to its IP address. Azure DNS, a hosting service for DNS domains, provides name resolution using the Microsoft Azure infrastructure. In addition to supporting internet-facing DNS domains, Azure DNS now also supports private DNS domains. Azure DNS provides a reliable, secure DNS service to manage and resolve domain names in a virtual network without needing a custom DNS solution. Private DNS zones allow you to use your own custom domain names rather than the Azure-provided names available today. Using custom domain names helps you tailor your virtual network architecture to best suit your organization's needs. Azure DNS provides name resolution for virtual machines (VMs) within a virtual network and between virtual networks. Also, customers can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share a name.

Why NetScaler GSLB for Azure DNS Private Zone?

Today's businesses want to transition their workloads from on-premises to the Azure cloud. The transition to the cloud allows them to benefit from faster time to market, lower capital expenses, ease of deployment, and security. The Azure DNS Private Zone service provides a unique proposition for businesses that are transitioning part of their workloads to the Azure cloud.
These businesses can keep the private DNS name that they have used for years in on-premises deployments when they use the Private Zone service. With this hybrid model, where intranet application servers are both on-premises and in the Azure cloud connected via secure VPN tunnels, one challenge is how a user can have seamless access to these intranet applications. NetScaler ADC solves this use case with its global server load balancing feature, which routes application traffic to the most optimal distributed workloads/servers, either on-premises or in the Azure cloud, and provides application server health status.

Use Case

Users in the on-premises network and in different Azure VNets should be able to connect to the most optimal servers in an internal network for accessing the required content. This ensures that the application is always available, cost is optimized, and the user experience is good. Azure private traffic management (PTM) is the primary requirement here. Azure PTM ensures that users' DNS queries resolve to an appropriate private IP address of the application server.

Use Case Solution

NetScaler ADC includes the global server load balancing (GSLB) feature, which can help meet the Azure PTM requirement. GSLB acts like a DNS server, which receives DNS requests and resolves each request to an appropriate IP address to provide:

Seamless DNS-based failover
Phased migration from on-premises to cloud
A/B testing of a new feature

Among the many load balancing methods supported, the following methods can be useful in this solution:

Round Robin
Static proximity (location-based server selection), which can be deployed in two ways:
EDNS Client Subnet (ECS) based GSLB on NetScaler ADC
Deploy a DNS forwarder for every virtual network

Topology

The NetScaler ADC GSLB deployment for an Azure private DNS zone logically looks as shown in Figure 1.
A user can access any application server, either on Azure or on-premises, based on the NetScaler ADC GSLB load balancing method in an Azure private DNS zone. All traffic between on-premises and the Azure Virtual Network flows through a secure VPN tunnel only. Application traffic, DNS traffic, and monitoring traffic are shown in the preceding topology. Depending on the required redundancy, NetScaler ADC and DNS forwarders can be deployed in the Virtual Networks and data centers. For simplicity, only one NetScaler ADC is shown here, but we recommend at least one set of NetScaler ADC and DNS forwarder per Azure region. All user DNS queries first go to the DNS forwarder, which has rules defined for forwarding the queries to the appropriate DNS server.

Configuring NetScaler ADC for Azure DNS Private Zone

Products and Versions Tested

Product             Version
Azure Cloud         Subscription
NetScaler ADC VPX   BYOL (Bring your own license)

Note: The deployment was tested with, and remains the same for, NetScaler ADC version 12.0 and above.

Prerequisites and Configuration Notes

The following are the general prerequisites and the configuration tested for this guide; cross-check them before configuring NetScaler ADC:

A Microsoft Azure portal account with a valid subscription.
Connectivity (a secure VPN tunnel) between on-premises and the Azure cloud. To set up a secure VPN tunnel in Azure, see Step-By-Step: Configuring a site-to-site VPN Gateway between Azure and on-premises.

Solution Description

Suppose a customer wants to host an application in an Azure DNS private zone (rr.ptm.mysite.net) that runs on HTTPS and is deployed across Azure and on-premises, with intranet access based on the round robin GSLB load balancing method. Enabling GSLB for an Azure private DNS zone with NetScaler ADC consists of two parts: configuring Azure and on-premises, and configuring the NetScaler ADC appliance.
Part 1: Configure Azure and the On-Premises Setup

As shown in the Topology, set up the Azure Virtual Networks (VNet A and VNet B in this case) and the on-premises setup.

Step 1: Create an Azure private DNS zone with a domain name (mysite.net).
Step 2: Create two Virtual Networks (VNet A, VNet B) in a hub and spoke model in an Azure region.
Step 3: Deploy the App Server, DNS forwarder, Windows 10 Pro client, and NetScaler ADC in VNet A.
Step 4: Deploy the App Server, and deploy a DNS forwarder if any clients are in VNet B.
Step 5: Deploy the App Server, DNS forwarder, and Windows 10 Pro client on-premises.

Azure Private DNS Zone

Log into the Azure Portal and select or create a dashboard. Now click Create a resource and search for DNS zone to create one (mysite.net in this case) as shown in the following image.

Azure Virtual Networks (VNet A, VNet B) in a Hub and Spoke Model

Select the same dashboard, click Create a resource, and search for virtual networks to create two virtual networks, namely VNet A and VNet B, in the same region, and peer them to form a hub and spoke model as shown in the following image. See Implement a hub-spoke network topology in Azure for information about how to set up a hub and spoke topology.

VNet A to VNet B Peering

To peer VNet A and VNet B, click Peerings from the Settings menu of VNet A and peer VNet B, enabling Allow forwarded traffic and Allow gateway transit as shown in the following image. After successful peering, you see the result shown in the following image.

VNet B to VNet A Peering

To peer VNet B and VNet A, click Peerings from the Settings menu of VNet B and peer VNet A, enabling Allow forwarded traffic and Use remote gateways as shown in the following image. After successful peering, you see the result shown in the following image.

Deploy the App Server, DNS Forwarder, Windows 10 Pro Client, and NetScaler ADC in VNet A

The following sections briefly describe the App server, DNS forwarder, Windows 10 Pro client, and NetScaler ADC in VNet A.
Select the same dashboard, click Create a resource, search for the respective instances, and assign an IP from the VNet A subnet.

App Server

The App server is simply a web server (HTTP server): an Ubuntu Server 16.04 instance is deployed on Azure or as an on-premises VM, and the CLI command sudo apt install apache2 is run to make it a web server.

Windows 10 Pro Client

Launch a Windows 10 Pro instance as the client machine in VNet A and on-premises too.

NetScaler ADC

NetScaler ADC complements the Azure DNS private zone with health checks and analytics from NetScaler ADM. Launch a NetScaler ADC from the Azure Marketplace based on your requirements; here we have used NetScaler ADC (BYOL) for this deployment. After deployment, use the NetScaler ADC IP to configure NetScaler ADC GSLB. For detailed deployment steps, see Deploy a NetScaler VPX Instance on Microsoft Azure.

DNS Forwarder

The DNS forwarder is used to forward client requests for hosted domains to the NetScaler ADC GSLB (ADNS IP). Launch an Ubuntu Server 16.04 Linux instance and set it up as a DNS forwarder.

Note: For the round robin GSLB load balancing method, one DNS forwarder per Azure region is sufficient, but for static proximity one DNS forwarder per virtual network is needed.

After deploying the forwarder, change the DNS server settings of Virtual Network A from default to custom with the VNet A DNS forwarder IP as shown in the following image, and then modify the named.conf.options file in the VNet A DNS forwarder to add forwarding rules for the domain (mysite.net) and subdomain (ptm.mysite.net) pointing to the ADNS IP of the NetScaler ADC GSLB. Now, restart the DNS forwarder so that the changes made in named.conf.options take effect.
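The guide does not name the DNS forwarder software, but the named.conf.options file edited here is a BIND configuration file, so a BIND9-based forwarder is assumed in this sketch:

```
# Assumption: the forwarder is BIND9 (implied by named.conf.options)
sudo apt update
sudo apt install bind9

# Add the forwarding zones to /etc/bind/named.conf.options,
# then restart the service so the rules take effect
sudo systemctl restart bind9
```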
VNet A DNS Forwarder Settings

zone "mysite.net" {
    type forward;
    forwarders { 168.63.129.16; };
};
zone "ptm.mysite.net" {
    type forward;
    forwarders { 10.8.0.5; };
};

Note: For the domain ("mysite.net") zone IP address, use the DNS IP of your Azure region. For the subdomain ("ptm.mysite.net") zone IP address, use all ADNS IP addresses of your GSLB instances.

Deploy the App Server, and Deploy a DNS Forwarder if Any Clients Are in VNet B

Now, for Virtual Network B, select the same dashboard, click Create a resource, search for the respective instances, and assign an IP from the VNet B subnet. Launch the App server, and launch a DNS forwarder if static proximity GSLB load balancing is used, similar to VNet A. Edit the VNet B DNS forwarder settings in named.conf.options as shown:

VNet B DNS Forwarder Settings

zone "ptm.mysite.net" {
    type forward;
    forwarders { 10.8.0.5; };
};

Deploy the App Server, DNS Forwarder, and Windows 10 Pro Client On-Premises

Now, for on-premises, launch the VMs on bare metal and bring up the App server, DNS forwarder, and Windows 10 Pro client similar to VNet A. Edit the on-premises DNS forwarder settings in named.conf.options as shown in the following example.

On-Premises DNS Forwarder Settings

zone "mysite.net" {
    type forward;
    forwarders { 10.8.0.6; };
};
zone "ptm.mysite.net" {
    type forward;
    forwarders { 10.8.0.5; };
};

Here, for mysite.net, we have given the DNS forwarder IP of VNet A instead of the Azure private DNS zone server IP, because the latter is a special IP that is not reachable from on-premises. Hence this change is required in the on-premises DNS forwarder settings.

Part 2: Configure the NetScaler ADC

As shown in the Topology, deploy the NetScaler ADC in the Azure Virtual Network (VNet A in this case) and access it through the NetScaler ADC GUI.

Configuring NetScaler ADC GSLB

Step 1: Create the ADNS service.
Step 2: Create sites - local and remote.
Step 3: Create services for the local virtual servers.
Step 4: Create virtual servers for the GSLB services.

Add ADNS Service

Log into the NetScaler ADC GUI.
On the Configuration tab, navigate to Traffic Management > Load Balancing > Services. Add a service. It is recommended to configure the ADNS service on both TCP and UDP, as shown here.

Add GSLB Sites

Add the local and remote sites between which GSLB is configured. On the Configuration tab, navigate to Traffic Management > GSLB > GSLB Sites. Add a site as shown here, and repeat the same procedure for the other sites.

Add GSLB Services

Add GSLB services for the local and remote virtual servers that load balance the App servers. On the Configuration tab, navigate to Traffic Management > GSLB > GSLB Services. Add the services as shown in the following examples, and bind an HTTP monitor to check server status. After creating a service, go to the advanced settings inside the GSLB service and add the Monitors section to bind the GSLB service to an HTTP monitor, which brings up the state of the service. Once bound to the HTTP monitor, the state of the services is UP, as shown here.

Add GSLB Virtual Server

Add the GSLB virtual server through which the App servers' alias GSLB services are accessible. On the Configuration tab, navigate to Traffic Management > GSLB > GSLB Virtual Servers. Add the virtual server as shown in the following example, and bind the GSLB services and domain name to it. After creating the GSLB virtual server and selecting the appropriate load balancing method (Round Robin in this case), bind the GSLB services and domains to complete the step. Go to the advanced settings inside the virtual server and add the Domains section to bind a domain. Go to Advanced > Services and click the arrow to bind a GSLB service; bind all three services (VNet A, VNet B, On-premises) to the virtual server. After binding the GSLB services and domain to the virtual server, it appears as shown here. Check whether the GSLB virtual server is up and 100% healthy. When the monitor shows that the server is up and healthy, it means that the sites are in sync and the back-end services are available.
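The GUI steps above map to a NetScaler CLI configuration along these lines. This is a hedged sketch: all IP addresses, site names, and service names are placeholders taken from this guide's topology, and only one of the three GSLB services is shown; repeat the service and bind lines for the VNet B and on-premises sites.

```
> # ADNS service on the NetScaler (placeholder ADNS IP)
> add service svc_adns 10.8.0.5 ADNS 53

> # Local GSLB site and one GSLB service with an HTTP monitor
> add gslb site site_vnetA 10.8.0.5
> add gslb service gslb_svc_vnetA 10.8.0.20 HTTP 80 -siteName site_vnetA
> bind gslb service gslb_svc_vnetA -monitorName http

> # Round robin GSLB virtual server bound to the service and domain
> add gslb vserver gslb_vs_rr HTTP -lbMethod ROUNDROBIN
> bind gslb vserver gslb_vs_rr -serviceName gslb_svc_vnetA
> bind gslb vserver gslb_vs_rr -domainName rr.ptm.mysite.net -TTL 5
```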
To test the deployment, access the domain URL rr.ptm.mysite.net from either the cloud client machine or the on-premises client machine. For example, when accessing it from the cloud Windows client machine, you can see that even the on-premises App server is accessed in a private DNS zone, without any need for third-party or custom DNS solutions.

Conclusion

NetScaler ADC, the leading application delivery solution, is well suited to provide load balancing and GSLB capabilities for an Azure DNS private zone. By subscribing to Azure DNS Private Zones, businesses can rely on the power and intelligence of NetScaler ADC Global Server Load Balancing (GSLB) to distribute intranet traffic across workloads located in multiple geographies and across data centers, connected via secure VPN tunnels. This combination gives businesses seamless access to the part of their workloads they want to move to the Azure public cloud.
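Round-robin resolution can also be verified from a client behind one of the forwarders. The hostname is from this guide; the answers returned depend on your deployment, and with round robin repeated lookups should cycle through the private IP addresses of the three App servers:

```
# Repeat the query a few times; with the ROUNDROBIN method the
# answer should rotate among the App server private IPs
nslookup rr.ptm.mysite.net
nslookup rr.ptm.mysite.net
nslookup rr.ptm.mysite.net
```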
  16. Overview NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required. NetScaler VPX The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms: Citrix Hypervisor VMware ESX Microsoft Hyper-V Linux KVM Amazon Web Services Microsoft Azure Google Cloud Platform This deployment guide focuses on NetScaler ADC VPX on Amazon Web Services. Amazon Web Services Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS services can offer tools such as compute power, database storage, and content delivery services. 
AWS offers the following essential services:
- AWS Compute Services
- Migration Services
- Storage
- Database Services
- Management Tools
- Security Services
- Analytics
- Networking
- Messaging
- Developer Tools
- Mobile Services

AWS Terminology Here is a brief description of key terms used in this document that users must be familiar with:
- Elastic Network Interface (ENI) - A virtual network interface that users can attach to an instance in a Virtual Private Cloud (VPC).
- Elastic IP (EIP) address - A static, public IPv4 address that users have allocated in Amazon EC2 or Amazon VPC and then attached to an instance. Elastic IP addresses are associated with user accounts, not a specific instance. They are elastic because users can easily allocate, attach, detach, and free them as their needs change.
- Subnet - A segment of the IP address range of a VPC to which EC2 instances can be attached. Users can create subnets to group instances according to security and operational needs.
- Virtual Private Cloud (VPC) - A web service for provisioning a logically isolated section of the AWS cloud where users can launch AWS resources in a virtual network that they define.

Here is a brief description of other terms used in this document that users should be familiar with:
- Amazon Machine Image (AMI) - A machine image that provides the information required to launch an instance, which is a virtual server in the cloud.
- Elastic Block Store (EBS) - Provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud.
- Simple Storage Service (S3) - Storage for the Internet. It is designed to make web-scale computing easier for developers.
- Elastic Compute Cloud (EC2) - A web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
- Elastic Load Balancing (ELB) - Distributes incoming application traffic across multiple EC2 instances in multiple Availability Zones.
ELB increases the fault tolerance of user applications.
- Instance type - Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give users the flexibility to choose the appropriate mix of resources for their applications.
- Identity and Access Management (IAM) - An AWS identity with permission policies that determine what the identity can and cannot do in AWS. Users can use an IAM role to enable applications running on an EC2 instance to securely access their AWS resources. An IAM role is required for deploying VPX instances in a high-availability setup.
- Internet Gateway - Connects a network to the Internet. Users can route traffic for IP addresses outside their VPC to the Internet gateway.
- Key pair - A set of security credentials with which users prove their identity electronically. A key pair consists of a private key and a public key.
- Route table - A set of routing rules that controls the traffic leaving any subnet that is associated with the route table. Users can associate multiple subnets with a single route table, but a subnet can be associated with only one route table at a time.
- Auto Scaling - A web service to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks.
- CloudFormation - A service for writing or changing templates that create and delete related AWS resources together as a unit.

Use Cases Disaster Recovery (DR) A disaster is a sudden disruption of business functions caused by natural calamities or human-caused events. Disasters affect data center operations, after which the resources and data lost at the disaster site must be fully rebuilt and restored. The loss of data or downtime in the data center is critical and breaks business continuity. One of the challenges that customers face today is deciding where to put their DR site.
Businesses are looking for consistency and performance regardless of any underlying infrastructure or network faults. Possible reasons many organizations are deciding to migrate to the cloud include: Usage economics - The capital expense of having a data center on-premises is well documented, and by using the cloud, these businesses can free up time and resources rather than expanding their own systems. Faster recovery times - Much of the automated orchestration enables recovery in mere minutes. Also, there are technologies that help replicate data by providing continuous data protection or continuous snapshots to guard against any outage or attack. Finally, there are use cases where customers need many different types of compliance and security controls, which are already present on the public clouds, making it easier to achieve the compliance they need rather than building it on their own.

Deployment Types

One-NIC Deployment Typical deployments: Standalone. Use cases: Customers typically use One-NIC Deployments to deploy into a non-production environment, to set up an environment for testing, or to stage a new environment before production deployment. One-NIC Deployments are also used to deploy directly to the cloud quickly and efficiently, and when customers seek the simplicity of a single-subnet configuration.

Three-NIC Deployment Typical deployments: Standalone, High Availability. Use cases: Three-NIC Deployments are used to achieve real isolation of data and management traffic, and they improve the scale and performance of the ADC. A Three-NIC Deployment is recommended for network applications where throughput is typically 1 Gbps or higher.

CFT Deployment Customers deploy using CloudFormation Templates when they are customizing or automating their deployments.
Sample NetScaler ADC VPX Deployment on AWS Architecture The preceding figure shows a typical topology of an AWS VPC with a NetScaler ADC VPX deployment. The AWS VPC has:
- A single Internet gateway to route traffic in and out of the VPC.
- Network connectivity between the Internet gateway and the Internet.
- Three subnets, one each for management, client, and server.
- Network connectivity between the Internet gateway and the two subnets (management and client).
- A standalone NetScaler ADC VPX instance deployed within the VPC. The VPX instance has three ENIs, one attached to each subnet.

Deployment Steps

One-NIC Deployment for DR The NetScaler ADC VPX Express instance is available as an Amazon Machine Image (AMI) in the AWS Marketplace. The minimum supported EC2 instance type for NetScaler VPX is m4.large. Download the AMI and create an instance of the VPX using a single VPC subnet. The NetScaler ADC VPX instance requires a minimum of 2 virtual CPUs and 2 GB of memory. The initial configuration performed includes network interface configuration, VIP configuration, and feature configuration. Further configuration can be performed by logging in to the GUI or via SSH (user name: nsroot). The output of the configuration includes:
- InstanceIdNS - Instance ID of the newly created VPX instance. This is the default password for GUI / SSH access.
- ManagementURL - HTTPS URL to the Management GUI (uses a self-signed certificate) to log in to the VPX and configure it further.
- ManagementURL2 - HTTP URL to the Management GUI (if your browser has problems with the self-signed certificate) to log in to the VPX.
- PublicNSIP - Public IP used to SSH into the appliance.
- PublicIpVIP - The public IP where load-balanced applications can be accessed.

The VPX is deployed in single-NIC mode.
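The three-subnet layout and the RFC1918 addressing described here can be sanity-checked before launch with Python's standard ipaddress module. A minimal sketch; the VPC CIDR and the NSIP/VIP/SNIP values are invented for illustration:

```python
import ipaddress

# Hypothetical VPC CIDR, carved into the three subnets a VPX deployment uses.
vpc = ipaddress.ip_network("10.0.0.0/16")
management, client, server = list(vpc.subnets(new_prefix=24))[:3]
print(management, client, server)  # 10.0.0.0/24 10.0.1.0/24 10.0.2.0/24

# In single-NIC mode, all three NetScaler-owned addresses come from one subnet.
single_nic_subnet = management
addresses = {"NSIP": "10.0.0.10", "VIP": "10.0.0.11", "SNIP": "10.0.0.12"}

for role, addr in addresses.items():
    ip = ipaddress.ip_address(addr)
    # is_private covers the RFC1918 blocks (10/8, 172.16/12, 192.168/16).
    assert ip.is_private and ip in single_nic_subnet, f"{role} outside subnet"
print("all addresses are RFC1918 and inside the VPC subnet")
```

In a three-NIC deployment, the same check applies per role, with each address validated against its own subnet instead of a shared one.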
The standard NetScaler IP addresses - NSIP (management IP), VIP (where load-balanced applications are accessed), and SNIP (the IP used to send traffic to back-end instances) - are all provisioned on the single NIC and are drawn from the RFC1918 address space of the provided VPC subnet. The RFC1918 NSIP is mapped to the public IP of the VPX instance, and the RFC1918 VIP is mapped to a public Elastic IP.

Licensing A NetScaler ADC VPX instance on AWS requires a license. The following licensing options are available for NetScaler ADC VPX instances running on AWS:
- Free (unlimited)
- Hourly
- Annual
- Bring your own license
- Free Trial (all NetScaler ADC VPX-AWS subscription offerings are free for 21 days in the AWS Marketplace)

Deployment Options Users can deploy a NetScaler ADC VPX standalone instance on AWS by using the following options:
- AWS web console
- Citrix-authored CloudFormation template
- AWS CLI

Deployment Steps Users can deploy a NetScaler ADC VPX instance on AWS through the AWS web console. The deployment process includes the following steps:
1. Create a key pair.
2. Create a Virtual Private Cloud (VPC).
3. Create the VPX instance.
4. Create a single VPC subnet.
5. Create the network interface configuration.
6. Map the NSIP to the public IP of the VPX instance.
7. Map the VIP to a public Elastic IP.
8. Connect to the VPX instance.

Create a Key Pair Amazon EC2 uses a key pair to encrypt and decrypt logon information. To log on to an instance, users must create a key pair, specify the name of the key pair when they launch the instance, and provide the private key when they connect to the instance. When users review and launch an instance by using the AWS Launch Instance wizard, they are prompted to use an existing key pair or create a new key pair. For more information about how to create a key pair, see Amazon EC2 Key Pairs and Linux Instances.

Create a VPC A NetScaler ADC VPX instance is deployed inside an AWS VPC. A VPC allows users to define virtual networks dedicated to their AWS account.
For more information about AWS VPC, see Getting Started With IPv4 for Amazon VPC. While creating a VPC for a NetScaler ADC VPX instance, keep the following points in mind:
- Use the VPC with a Single Public Subnet Only option to create an AWS VPC in an AWS availability zone.
- Citrix recommends that users map the previously created NSIP and VIP addresses to the public subnet.

Create a NetScaler ADC VPX Instance by using the AWS Express AMI Create a NetScaler ADC VPX instance from the AWS VPX Express AMI:
1. From the AWS dashboard, go to Compute > Launch Instance > AWS Marketplace. Before clicking Launch Instance, users should ensure their region is correct by checking the note that appears under Launch Instance.
2. In the Search AWS Marketplace bar, search with the keyword NetScaler ADC VPX.
3. Select the desired version to deploy and then click Select. For the NetScaler ADC VPX version, users have the following options:
   - A licensed version
   - NetScaler ADC VPX Express appliance (a free virtual appliance, available from NetScaler ADC 12.0 56.20)
   - Bring your own device
4. The Launch Instance wizard starts. Follow the wizard to create an instance. The wizard prompts users to: Choose Instance Type, Configure Instance, Add Storage, Add Tags, and Review.

Allocate and Associate Elastic IPs If users assign a public IP address to an instance, it remains assigned only until the instance is stopped. After that, the address is released back to the pool. When users restart the instance, a new public IP address is assigned. In contrast, an elastic IP (EIP) address remains assigned until the address is disassociated from an instance. Allocate and associate an elastic IP for the management NIC. For more information about how to allocate and associate elastic IP addresses, see these topics: Allocating an Elastic IP Address and Associating an Elastic IP Address with a Running Instance. These steps complete the procedure to create a NetScaler ADC VPX instance on AWS.
It can take a few minutes for the instance to be ready. Check that the instance has passed its status checks. Users can view this information in the Status Checks column on the Instances page.

Connect to the VPX Instance After creating the VPX instance, users can connect to it by using the GUI or an SSH client.

GUI connection The default administrator credentials to access a NetScaler ADC VPX instance are:
- User name: nsroot
- Password: The default password for the nsroot account is set to the AWS instance ID of the NetScaler ADC VPX instance.

SSH client connection From the AWS management console, select the NetScaler ADC VPX instance and click Connect. Follow the instructions given on the Connect to Your Instance page. For more information about how to deploy a NetScaler ADC VPX standalone instance on AWS by using the AWS web console, see Scenario: Standalone Instance and Configure NetScaler ADC VPX Standalone Instance on AWS using CFT.

Three-NIC Deployment for DR The NetScaler ADC VPX instance is available as an Amazon Machine Image (AMI) in the AWS Marketplace, and it can be launched as an Elastic Compute Cloud (EC2) instance within an AWS VPC. The minimum supported EC2 instance type for NetScaler VPX is m4.large. The NetScaler ADC VPX AMI instance requires a minimum of 2 virtual CPUs and 2 GB of memory. An EC2 instance launched within an AWS VPC can also provide the multiple interfaces, multiple IP addresses per interface, and public and private IP addresses needed for VPX configuration. Each VPX instance requires at least three IP subnets:
- A management subnet
- A client-facing subnet (VIP)
- A back-end-facing subnet (SNIP)

Citrix recommends three network interfaces for a standard VPX installation on AWS. AWS currently makes multi-IP functionality available only to instances running within an AWS VPC. A VPX instance in a VPC can be used to load balance servers running in EC2 instances.
An Amazon VPC allows users to create and control a virtual networking environment, including their own IP address range, subnets, route tables, and network gateways. Note: By default, users can create up to 5 VPC instances per AWS region for each AWS account. Users can request higher VPC limits by submitting Amazon's request form: Amazon VPC Request.

Licensing A NetScaler ADC VPX instance on AWS requires a license. The following licensing options are available for NetScaler ADC VPX instances running on AWS:
- Free (unlimited)
- Hourly
- Annual
- Bring your own license
- Free Trial (all NetScaler ADC VPX-AWS subscription offerings are free for 21 days in the AWS Marketplace)

Deployment Options Users can deploy a NetScaler ADC VPX standalone instance on AWS by using the following options:
- AWS web console
- Citrix-authored CloudFormation template
- AWS CLI

Deployment Steps Users can deploy a NetScaler ADC VPX instance on AWS through the AWS web console. The deployment process includes the following steps:
1. Create a key pair.
2. Create a Virtual Private Cloud (VPC).
3. Add more subnets.
4. Create security groups and security rules.
5. Add route tables.
6. Create an internet gateway.
7. Create a NetScaler ADC VPX instance.
8. Create and attach more network interfaces.
9. Attach elastic IPs to the management NIC.
10. Connect to the VPX instance.

Create a Key Pair Amazon EC2 uses a key pair to encrypt and decrypt logon information. To log on to an instance, users must create a key pair, specify the name of the key pair when they launch the instance, and provide the private key when they connect to the instance. When users review and launch an instance by using the AWS Launch Instance wizard, they are prompted to use an existing key pair or create a new key pair. For more information about how to create a key pair, see Amazon EC2 Key Pairs and Linux Instances.

Create a VPC A NetScaler ADC VPX instance is deployed inside an AWS VPC. A VPC allows users to define virtual networks dedicated to their AWS account.
For more information about AWS VPC, see Getting Started With IPv4 for Amazon VPC. While creating a VPC for a NetScaler ADC VPX instance, keep the following points in mind: Use the VPC with a Single Public Subnet Only option to create an AWS VPC in an AWS availability zone. Citrix recommends that users create at least three subnets, of the following types:
- One subnet for management traffic. Place the management IP (NSIP) on this subnet. By default, elastic network interface (ENI) eth0 is used for the management IP.
- One or more subnets for client-access (user-to-NetScaler ADC VPX) traffic, through which clients connect to one or more virtual IP (VIP) addresses assigned to NetScaler ADC load balancing virtual servers.
- One or more subnets for server-access (VPX-to-server) traffic, through which user servers connect to VPX-owned subnet IP (SNIP) addresses.

All subnets must be in the same availability zone.

Add Subnets When the VPC wizard is used for deployment, only one subnet is created. Depending on user requirements, users may want to create more subnets. For more information about how to create more subnets, see VPCs and Subnets.

Create Security Groups and Security Rules To control inbound and outbound traffic, create security groups and add rules to the groups. For more information about how to create groups and add rules, see Security Groups for Your VPC. For NetScaler ADC VPX instances, the EC2 wizard provides default security groups, which are generated by AWS Marketplace and are based on settings recommended by Citrix. However, users can create more security groups based on their requirements. Note: Ports 22, 80, and 443 must be open on the security group for SSH, HTTP, and HTTPS access, respectively.

Add Route Tables Route tables contain a set of rules, called routes, that are used to determine where network traffic is directed. Each subnet in a VPC must be associated with a route table. For more information about how to create a route table, see Route Tables.
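The security-group note above (ports 22, 80, and 443 for SSH, HTTP, and HTTPS) can be captured as data and checked before launch. A minimal sketch; the rule shape loosely mirrors AWS security-group ingress entries, and the source ranges are illustrative, not recommendations:

```python
# Ports the guide requires on the VPX security group.
REQUIRED_PORTS = {22: "SSH", 80: "HTTP", 443: "HTTPS"}

# Hypothetical ingress rules, shaped like AWS security-group entries.
ingress_rules = [
    {"protocol": "tcp", "port": 22,  "source": "203.0.113.0/24"},  # admin SSH
    {"protocol": "tcp", "port": 80,  "source": "0.0.0.0/0"},       # HTTP to VIP
    {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},       # HTTPS to VIP
]

def missing_ports(rules):
    """Return required ports that no TCP ingress rule covers."""
    open_ports = {r["port"] for r in rules if r["protocol"] == "tcp"}
    return sorted(p for p in REQUIRED_PORTS if p not in open_ports)

print(missing_ports(ingress_rules))  # prints [] when all three ports are covered
```

Restricting the SSH source range to an admin network, as sketched here, is a common hardening choice on top of the defaults the EC2 wizard provides.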
Create an Internet Gateway An internet gateway serves two purposes: to provide a target in the VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. Create an internet gateway for internet traffic. For more information about how to create an internet gateway, see the section Creating and Attaching an Internet Gateway.

Create a NetScaler ADC VPX Instance by using the AWS EC2 Service To create a NetScaler ADC VPX instance by using the AWS EC2 service, complete the following steps:
1. From the AWS dashboard, go to Compute > EC2 > Launch Instance > AWS Marketplace. Before clicking Launch Instance, users should ensure their region is correct by checking the note that appears under Launch Instance.
2. In the Search AWS Marketplace bar, search with the keyword NetScaler ADC VPX.
3. Select the version users want to deploy and then click Select. For the NetScaler ADC VPX version, users have the following options:
   - A licensed version
   - NetScaler ADC VPX Express appliance (a free virtual appliance, available from NetScaler ADC 12.0 56.20)
   - Bring your own device
4. The Launch Instance wizard starts. Follow the wizard to create an instance. The wizard prompts users to: Choose Instance Type, Configure Instance, Add Storage, Add Tags, Configure Security Group, and Review.

Create and Attach More Network Interfaces Create two more network interfaces for the VIP and SNIP. For more information about how to create more network interfaces, see the section Creating a Network Interface. After users have created the network interfaces, they must attach the interfaces to the VPX instance: shut down the VPX instance, attach the interfaces, and then power on the instance. For more information about how to attach network interfaces, see the section Attaching a Network Interface When Launching an Instance.
Allocate and Associate Elastic IPs If users assign a public IP address to an EC2 instance, it remains assigned only until the instance is stopped. After that, the address is released back to the pool. When users restart the instance, a new public IP address is assigned. In contrast, an elastic IP (EIP) address remains assigned until the address is disassociated from an instance. Allocate and associate an elastic IP for the management NIC. For more information about how to allocate and associate elastic IP addresses, see these topics: Allocating an Elastic IP Address and Associating an Elastic IP Address with a Running Instance. These steps complete the procedure to create a NetScaler ADC VPX instance on AWS. It can take a few minutes for the instance to be ready. Check that the instance has passed its status checks. Users can view this information in the Status Checks column on the Instances page.

Connect to the VPX Instance After creating the VPX instance, users can connect to it by using the GUI or an SSH client.

GUI connection The default administrator credentials to access a NetScaler ADC VPX instance are:
- User name: nsroot
- Password: The default password for the nsroot account is set to the AWS instance ID of the NetScaler ADC VPX instance.

SSH client connection From the AWS management console, select the NetScaler ADC VPX instance and click Connect. Follow the instructions given on the Connect to Your Instance page. For more information about how to deploy a NetScaler ADC VPX standalone instance on AWS by using the AWS web console, see Scenario: Standalone Instance and Configure NetScaler ADC VPX Standalone Instance on AWS using CFT.

CFT Deployment NetScaler ADC VPX is available as Amazon Machine Images (AMI) in the AWS Marketplace. Before using a CloudFormation template to provision a NetScaler ADC VPX in AWS, the AWS user has to accept the terms and subscribe to the AWS Marketplace product.
Each edition of the NetScaler ADC VPX in the Marketplace requires this step. Each template in the CloudFormation repository has collocated documentation describing the usage and architecture of the template. The templates attempt to codify the recommended deployment architecture of the NetScaler ADC VPX, to introduce the user to the NetScaler ADC, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require full EC2 permissions in addition to permissions to create IAM roles. The CloudFormation templates contain AMI IDs that are specific to a particular release of the NetScaler ADC VPX (for example, release 12.0-56.20) and edition (for example, NetScaler ADC VPX Platinum Edition - 10 Mbps) or NetScaler ADC BYOL. To use a different version or edition of the NetScaler ADC VPX with a CloudFormation template, the user must edit the template and replace the AMI IDs. The latest NetScaler ADC AWS-AMI-IDs are available on GitHub at NetScaler ADC AWS CloudFormation Master.

CFT Single-NIC Deployment The CloudFormation template requires sufficient permissions to create IAM roles and Lambda functions, beyond normal EC2 full privileges. The user of this template also needs to accept the terms and subscribe to the AWS Marketplace product before using this CloudFormation template. This CloudFormation template creates an instance of the VPX Express from the VPX Express AMI using a single VPC subnet. The CloudFormation template also provisions a Lambda function that initializes the VPX instance. The initial configuration performed by the Lambda function includes network interface configuration, VIP configuration, and feature configuration. Further configuration can be performed by logging in to the GUI or via SSH (user name: nsroot). The output of the CloudFormation template includes:
- InstanceIdNS - Instance ID of the newly created VPX instance.
This is the default password for GUI / SSH access.
- ManagementURL2 - HTTP URL to the Management GUI (if your browser has problems with the self-signed certificate) to log in to the VPX.
- PublicNSIP - Public IP used to SSH into the appliance.
- PublicIpVIP - The public IP where load-balanced applications can be accessed.

The CloudFormation template deploys the VPX in single-NIC mode. The standard NetScaler IP addresses - NSIP (management IP), VIP (where load-balanced applications are accessed), and SNIP (the IP used to send traffic to back-end instances) - are all provisioned on the single NIC and are drawn from the RFC1918 address space of the provided VPC subnet. The RFC1918 NSIP is mapped to the public IP of the VPX instance, and the RFC1918 VIP is mapped to a public Elastic IP. If the VPX is restarted, the public NSIP mapping is lost. In this case, the NSIP is only accessible from within the VPC subnet, from another EC2 instance in the same subnet. Other possible architectures include 2- and 3-NIC configurations across multiple VPC subnets.

CFT Three-NIC Deployment This template deploys a VPC with 3 subnets (management, client, server) for 2 Availability Zones. It deploys an Internet Gateway, with a default route on the public subnets. This template also creates an HA pair across Availability Zones with two instances of NetScaler ADC: 3 ENIs associated with 3 VPC subnets (management, client, server) on the primary and 3 ENIs associated with 3 VPC subnets (management, client, server) on the secondary. All the resource names created by this CFT are prefixed with a tagName of the stack name.
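CloudFormation returns stack outputs like the ones listed above as a list of key/value entries (the standard describe-stacks shape). A small sketch of turning that list into a lookup for first login; the output names come from the single-NIC template above, while the values are invented for illustration:

```python
# Hypothetical describe-stacks output for the single-NIC CFT; only the
# OutputKey names come from the template, the values are made up.
raw_outputs = [
    {"OutputKey": "InstanceIdNS", "OutputValue": "i-0abc123def4567890"},
    {"OutputKey": "ManagementURL2", "OutputValue": "http://203.0.113.10"},
    {"OutputKey": "PublicNSIP", "OutputValue": "203.0.113.10"},
    {"OutputKey": "PublicIpVIP", "OutputValue": "203.0.113.20"},
]

# Index the outputs by key for convenient access.
outputs = {o["OutputKey"]: o["OutputValue"] for o in raw_outputs}

# The instance ID doubles as the default nsroot password for first login.
print("ssh nsroot@" + outputs["PublicNSIP"])
print("default password:", outputs["InstanceIdNS"])
```

The same pattern applies to the three-NIC template below; only the output key names differ.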
The output of the CloudFormation template includes:
- PrimaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the primary VPX (uses a self-signed certificate)
- PrimaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the primary VPX
- PrimaryCitrixADCInstanceID - Instance ID of the newly created primary VPX instance
- PrimaryCitrixADCPublicVIP - Elastic IP address of the primary VPX instance associated with the VIP
- PrimaryCitrixADCPrivateNSIP - Private IP (NSIP) used for management of the primary VPX
- PrimaryCitrixADCPublicNSIP - Public IP (NSIP) used for management of the primary VPX
- PrimaryCitrixADCPrivateVIP - Private IP address of the primary VPX instance associated with the VIP
- PrimaryCitrixADCSNIP - Private IP address of the primary VPX instance associated with the SNIP
- SecondaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the secondary VPX (uses a self-signed certificate)
- SecondaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the secondary VPX
- SecondaryCitrixADCInstanceID - Instance ID of the newly created secondary VPX instance
- SecondaryCitrixADCPrivateNSIP - Private IP (NSIP) used for management of the secondary VPX
- SecondaryCitrixADCPublicNSIP - Public IP (NSIP) used for management of the secondary VPX
- SecondaryCitrixADCPrivateVIP - Private IP address of the secondary VPX instance associated with the VIP
- SecondaryCitrixADCSNIP - Private IP address of the secondary VPX instance associated with the SNIP
- SecurityGroup - Security group ID that the VPX belongs to

When providing input to the CFT, an asterisk (*) against any parameter in the CFT implies that it is a mandatory field. For example, VPC ID is a mandatory field. The following prerequisites must be met: the CloudFormation template requires sufficient permissions to create IAM roles, beyond normal EC2 full privileges, and the user of this template also needs to accept the terms and subscribe to the AWS Marketplace product before using this CloudFormation template.
The following should also be present:
- Key pair
- 3 unallocated EIPs: primary management, client VIP, and secondary management

For more information on provisioning NetScaler ADC VPX instances on AWS, users can visit Provisioning NetScaler ADC VPX Instances on AWS.

Prerequisites Before attempting to create a VPX instance in AWS, users should ensure they have the following:
- An AWS account to launch a NetScaler ADC VPX AMI in an Amazon Web Services (AWS) Virtual Private Cloud (VPC). Users can create an AWS account for free at www.aws.amazon.com.
- An AWS Identity and Access Management (IAM) user account to securely control access to AWS services and resources for users. For more information about how to create an IAM user account, see Creating IAM Users (Console).
- An IAM role, which is mandatory for both standalone and high-availability deployments. The IAM role must have the following privileges:
  - ec2:DescribeInstances
  - ec2:DescribeNetworkInterfaces
  - ec2:DetachNetworkInterface
  - ec2:AttachNetworkInterface
  - ec2:StartInstances
  - ec2:StopInstances
  - ec2:RebootInstances
  - ec2:DescribeAddresses
  - ec2:AssociateAddress
  - ec2:DisassociateAddress
  - autoscaling:*
  - sns:*
  - sqs:*
  - iam:SimulatePrincipalPolicy
  - iam:GetRole

  If the Citrix CloudFormation template is used, the IAM role is automatically created. The template does not allow selecting an already created IAM role. Note: When users log on to the VPX instance through the GUI, a prompt to configure the required privileges for the IAM role appears. Ignore the prompt if the privileges have already been configured.
- The AWS CLI, if users want to use the AWS console functionality from their terminal. For more information, see What Is the AWS Command Line Interface? Note: Users also need the AWS CLI to change the network interface type to SR-IOV.

Limitations and Usage Guidelines The following limitations and usage guidelines apply when deploying a NetScaler ADC VPX instance on AWS:
- Users should read the AWS terminology listed above before starting a new deployment.
- The clustering feature is supported only when provisioned with Citrix ADM Auto Scale Groups.
- For the high availability setup to work effectively, associate a dedicated NAT device to the management interface or associate an Elastic IP (EIP) to the NSIP. For more information on NAT, in the AWS documentation, see NAT Instances.
- Data traffic and management traffic must be segregated with ENIs belonging to different subnets.
- Only the NSIP address must be present on the management ENI.
- If a NAT instance is used for security instead of assigning an EIP to the NSIP, appropriate VPC-level routing changes are required. For instructions on making VPC-level routing changes, in the AWS documentation, see Scenario 2: VPC with Public and Private Subnets.
- A VPX instance can be moved from one EC2 instance type to another (for example, from m3.large to an m3.xlarge). For more information, visit Limitations and Usage Guidelines.
- For storage media for VPX on AWS, Citrix recommends EBS, because it is durable and the data is available even after it is detached from the instance.
- Dynamic addition of ENIs to the VPX is not supported. Restart the VPX instance to apply the update. Citrix recommends that users stop the standalone or HA instance, attach the new ENI, and then restart the instance.
- The primary ENI cannot be changed or attached to a different subnet once it is deployed. Secondary ENIs can be detached and changed as needed while the VPX is stopped.
- Users can assign multiple IP addresses to an ENI. The maximum number of IP addresses per ENI is determined by the EC2 instance type. See the section "IP Addresses Per Network Interface Per Instance Type" in Elastic Network Interfaces.
- Users must allocate the IP addresses in AWS before they assign them to ENIs. For more information, see Elastic Network Interfaces.
- Citrix recommends that users avoid using the enable and disable interface commands on NetScaler ADC VPX interfaces.
- The NetScaler ADC set ha node <NODE_ID> -haStatus STAYPRIMARY and set ha node <NODE_ID> -haStatus STAYSECONDARY commands are disabled by default.
- IPv6 is not supported for VPX.
- Due to AWS limitations, these features are not supported:
  - Gratuitous ARP (GARP)
  - L2 mode (bridging). Transparent vServers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP.
  - Tagged VLAN
  - Dynamic routing
  - Virtual MAC
- For RNAT, routing, and transparent vServers to work, ensure that Source/Destination Check is disabled for all ENIs in the data path. For more information, see "Changing the Source/Destination Checking" in Elastic Network Interfaces.
- In a NetScaler ADC VPX deployment on AWS, in some AWS regions, the AWS infrastructure might not be able to resolve AWS API calls. This happens if the API calls are issued through a non-management interface on the NetScaler ADC VPX instance. As a workaround, restrict the API calls to the management interface only. To do that, create an NSVLAN on the VPX instance and bind the management interface to the NSVLAN by using the appropriate commands. For example:
  set ns config -nsvlan <vlan id> -ifnum 1/1 -tagged NO
  save config
  Restart the VPX instance at the prompt. For more information about configuring NSVLAN, see Configuring NSVLAN.
- In the AWS console, the vCPU usage shown for a VPX instance under the Monitoring tab might be high (up to 100 percent), even when the actual usage is much lower. To see the actual vCPU usage, navigate to View all CloudWatch metrics. For more information, see Monitor your Instances using Amazon CloudWatch.
- Alternately, if low latency and performance are not a concern, users may enable the CPU Yield feature, allowing the packet engines to idle when there is no traffic. For more details about the CPU Yield feature and how to enable it, visit the Citrix Support Knowledge Center.
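The IAM role privileges listed under Prerequisites can be expressed as a standard IAM policy document. A minimal sketch of the policy's shape; the Citrix CloudFormation template creates an equivalent role automatically, and the unrestricted Resource scoping here is illustrative only:

```python
import json

# The privileges the guide lists for the VPX IAM role.
ACTIONS = [
    "ec2:DescribeInstances", "ec2:DescribeNetworkInterfaces",
    "ec2:DetachNetworkInterface", "ec2:AttachNetworkInterface",
    "ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances",
    "ec2:DescribeAddresses", "ec2:AssociateAddress", "ec2:DisassociateAddress",
    "autoscaling:*", "sns:*", "sqs:*",
    "iam:SimulatePrincipalPolicy", "iam:GetRole",
]

# Standard IAM policy document shape (Version / Statement / Effect / Action).
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ACTIONS, "Resource": "*"}],
}

print(json.dumps(policy, indent=2))
```

In production, the Resource element would normally be narrowed to the deployment's own instances and ENIs rather than "*".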
AWS-VPX Support

Supported VPX Models on AWS:

NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 200 Mbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 1000 Mbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 3 Gbps
NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 5 Gbps
NetScaler ADC VPX Standard/Advanced/Premium - 10 Mbps
NetScaler ADC VPX Express - 20 Mbps
NetScaler ADC VPX - Customer Licensed

Supported AWS Regions:

US West (Oregon), US West (N. California), US East (Ohio), US East (N. Virginia), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), China (Beijing), China (Ningxia), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), AWS GovCloud (US-East)

Supported AWS Instance Types:

m3.large, m3.xlarge, m3.2xlarge
c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge
m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge
m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.12xlarge, m5.24xlarge
c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.18xlarge, c5.24xlarge
c5n.large, c5n.xlarge, c5n.2xlarge, c5n.4xlarge, c5n.9xlarge, c5n.18xlarge

Supported AWS Services:

EC2, Lambda, S3, VPC, Route 53, ELB, CloudWatch, AWS Auto Scaling, CloudFormation, Simple Queue Service (SQS), Simple Notification Service (SNS), Identity & Access Management (IAM)

For higher bandwidth, Citrix recommends the following instance types:

Instance Type              Bandwidth           Enhanced Networking (SR-IOV)
m4.10xlarge                3 Gbps and 5 Gbps   Yes
c4.8xlarge                 3 Gbps and 5 Gbps   Yes
c5.18xlarge/m5.18xlarge    25 Gbps             ENA
c5n.18xlarge               30 Gbps             ENA

To stay updated about the currently supported VPX models, AWS regions, instance types, and services, visit the VPX-AWS support matrix.
Overview

NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.

NetScaler ADC VPX

The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms. This deployment guide focuses on NetScaler ADC VPX on Azure.

Microsoft Azure

Microsoft Azure is an ever-expanding set of cloud computing services that helps organizations meet their business challenges. Azure gives users the freedom to build, manage, and deploy applications on a massive, global network using their preferred tools and frameworks. With Azure, users can:

Be future-ready with continuous innovation from Microsoft to support their development today, and their product visions for tomorrow.

Operate hybrid cloud seamlessly on-premises, in the cloud, and at the edge; Azure meets users where they are.
Build on their terms with Azure's commitment to open source and support for all languages and frameworks, allowing users to build how they want and deploy where they want.

Trust their cloud with security from the ground up, backed by a team of experts and proactive, industry-leading compliance that is trusted by enterprises, governments, and startups.

Azure Terminology

Here is a brief description of the key terms used in this document that users must be familiar with:

Azure Load Balancer – A resource that distributes incoming traffic among computers in a network. Traffic is distributed among virtual machines defined in a load-balancer set. A load balancer can be external (internet-facing) or internal.

Azure Resource Manager (ARM) – ARM is the management framework for services in Azure. Azure Load Balancer is managed using ARM-based APIs and tools.

Back-End Address Pool – The IP addresses associated with the virtual machine NICs to which load is distributed.

BLOB (Binary Large Object) – Any binary object, such as a file or an image, that can be stored in Azure storage.

Front-End IP Configuration – An Azure Load Balancer can include one or more front-end IP addresses, also known as virtual IPs (VIPs). These IP addresses serve as ingress points for the traffic.

Instance Level Public IP (ILPIP) – A public IP address that users can assign directly to a virtual machine or role instance, rather than to the cloud service in which the virtual machine or role instance resides. It does not take the place of the VIP (virtual IP) assigned to the cloud service; rather, it is an extra IP address that can be used to connect directly to a virtual machine or role instance. Note: In the past, an ILPIP was referred to as a PIP, which stands for public IP.

Inbound NAT Rules – Rules that map a public port on the load balancer to a port for a specific virtual machine in the back-end address pool.
IP-Config – An IP address pair (public IP and private IP) associated with an individual NIC. In an IP-Config, the public IP address can be NULL. Each NIC can have multiple IP configurations associated with it, up to 255.

Load Balancing Rules – A rule property that maps a given front-end IP and port combination to a set of back-end IP address and port combinations. With a single definition of a load balancer resource, users can define multiple load balancing rules, each rule reflecting a combination of a front-end IP and port and a back-end IP and port associated with virtual machines.

Network Security Group (NSG) – An NSG contains a list of Access Control List (ACL) rules that allow or deny network traffic to virtual machine instances in a virtual network. NSGs can be associated with either subnets or individual virtual machine instances within a subnet. When an NSG is associated with a subnet, the ACL rules apply to all the virtual machine instances in that subnet. In addition, traffic to an individual virtual machine can be restricted further by associating an NSG directly with that virtual machine.

Private IP Addresses – Used for communication within an Azure virtual network, and with a user's on-premises network when a VPN gateway is used to extend the network to Azure. Private IP addresses allow Azure resources to communicate with other resources in a virtual network or an on-premises network through a VPN gateway or ExpressRoute circuit, without using an Internet-reachable IP address. In the Azure Resource Manager deployment model, a private IP address is associated with the following types of Azure resources: virtual machines, internal load balancers (ILBs), and application gateways.

Probes – Health probes used to check the availability of virtual machine instances in the back-end address pool. If a particular virtual machine does not respond to health probes for some time, it is taken out of traffic serving.
Probes enable users to keep track of the health of virtual instances. If a health probe fails, the virtual instance is taken out of rotation automatically.

Public IP Addresses (PIP) – PIPs are used for communication with the Internet, including Azure public-facing services, and are associated with virtual machines, Internet-facing load balancers, VPN gateways, and application gateways.

Region – An area within a geography that does not cross national borders and that contains one or more data centers. Pricing, regional services, and offer types are exposed at the region level. A region is typically paired with another region, which can be up to several hundred miles away, to form a regional pair. Regional pairs can be used as a mechanism for disaster recovery and high availability scenarios. Also referred to generally as a location.

Resource Group – A container in Resource Manager that holds related resources for an application. The resource group can include all of the resources for an application, or only those resources that are logically grouped.

Storage Account – An Azure storage account gives users access to the Azure Blob, Queue, Table, and File services in Azure Storage. A storage account provides the unique namespace for a user's Azure storage data objects.

Virtual Machine – The software implementation of a physical computer that runs an operating system. Multiple virtual machines can run simultaneously on the same hardware. In Azure, virtual machines are available in various sizes.

Virtual Network – An Azure virtual network is a representation of a user network in the cloud. It is a logical isolation of the Azure cloud dedicated to a user subscription. Users can fully control the IP address blocks, DNS settings, security policies, and route tables within this network. Users can also further segment their VNet into subnets and launch Azure IaaS virtual machines and cloud services (PaaS role instances).
Also, users can connect the virtual network to their on-premises network using one of the connectivity options available in Azure. In essence, users can expand their network to Azure, with complete control over IP address blocks and the benefit of the enterprise scale Azure provides.

Logical Flow of NetScaler WAF on Azure

Figure 1: Logical Diagram of NetScaler WAF on Azure

The Web Application Firewall can be installed as either a Layer 3 network device or a Layer 2 network bridge between customer servers and customer users, usually behind the customer company's router or firewall. It must be installed in a location where it can intercept traffic between the web servers that users want to protect and the hub or switch through which users access those web servers. Users then configure the network to send requests to the Web Application Firewall instead of directly to their web servers, and responses to the Web Application Firewall instead of directly to their users. The Web Application Firewall filters that traffic before forwarding it to its final destination, using both its internal rule set and the user's additions and modifications. It blocks or renders harmless any activity that it detects as harmful, and then forwards the remaining traffic to the web server. Figure 1 provides an overview of the filtering process.

Note: The figure omits the application of a policy to incoming traffic. It illustrates a security configuration in which the policy is to process all requests. Also, in this configuration, a signatures object has been configured and associated with the profile, and security checks have been configured in the profile.

As the figure shows, when a user requests a URL on a protected website, the Web Application Firewall first examines the request to ensure that it does not match a signature.
If the request matches a signature, the Web Application Firewall either displays the error object (a webpage that is located on the Web Application Firewall appliance and which users can configure by using the imports feature) or forwards the request to the designated error URL (the error page). If a request passes signature inspection, the Web Application Firewall applies the request security checks that have been enabled. The request security checks verify that the request is appropriate for the user website or web service and does not contain material that might pose a threat. For example, security checks examine the request for signs indicating that it might be of an unexpected type, request unexpected content, or contain unexpected and possibly malicious web form data, SQL commands, or scripts. If the request fails a security check, the Web Application Firewall either sanitizes the request and then sends it back to the NetScaler ADC appliance (or NetScaler ADC virtual appliance), or displays the error object. If the request passes the security checks, it is sent back to the NetScaler ADC appliance, which completes any other processing and forwards the request to the protected web server. When the website or web service sends a response to the user, the Web Application Firewall applies the response security checks that have been enabled. The response security checks examine the response for leaks of sensitive private information, signs of website defacement, or other content that should not be present. If the response fails a security check, the Web Application Firewall either removes the content that should not be present or blocks the response. If the response passes the security checks, it is sent back to the NetScaler ADC appliance, which forwards it to the user. 
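The request path just described (signature inspection first, then the enabled security checks, with failing requests blocked or sent to the error object) can be sketched as a simplified decision flow. This is an illustration of the logic only; the function and check names below are invented for the sketch and are not NetScaler ADC APIs.

```python
# Simplified sketch of the WAF request path described above.
# Names (filter_request, the checks) are illustrative, not NetScaler APIs.

def filter_request(request, signatures, security_checks):
    """Return 'error' or 'forwarded' for a request string."""
    # 1. Signature inspection: a match sends the error object / error URL.
    if any(sig in request for sig in signatures):
        return "error"
    # 2. The enabled request security checks run next; a failed check
    #    blocks the request (or sanitizes it, depending on profile action).
    for check in security_checks:
        if not check(request):
            return "error"
    # 3. A clean request is forwarded to the protected web server.
    return "forwarded"

# Example: a naive SQL-injection signature and a crude size limit
# standing in for a buffer overflow check.
signatures = ["' OR 1=1", "<script>"]
checks = [lambda r: len(r) < 2048]

print(filter_request("GET /index.html", signatures, checks))   # forwarded
print(filter_request("GET /?q=' OR 1=1", signatures, checks))  # error
```

The real appliance applies the configured profile action (block, sanitize, or log) per check; the sketch collapses all of that into a single error outcome.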
Use Cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on Azure combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the Azure Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on Azure enables several compelling use cases that not only support the immediate needs of today's enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.

Deployment Types

Multi-NIC Multi-IP Deployment (Three-NIC Deployment)

Typical deployments:
StyleBook driven
With ADM
With GSLB (Azure Traffic Management with no domain registration)

Licensing: Pooled/Marketplace

Use cases:
Achieving true isolation of data and management traffic.
Improving the scale and performance of the ADC.
Network applications where throughput is typically 1 Gbps or higher; a three-NIC deployment is recommended in these cases.

Multi-NIC Multi-IP (Three-NIC) Deployment for High Availability (HA)

Customers would potentially deploy using a three-NIC deployment if they are deploying into a production environment where security, redundancy, availability, capacity, and scalability are critical. With this deployment method, complexity and ease of management are not critical concerns.

Azure Resource Manager Template Deployment

Customers would deploy using ARM (Azure Resource Manager) templates if they are customizing their deployments or automating them.
ARM (Azure Resource Manager) Templates

The GitHub repository for NetScaler ADC ARM (Azure Resource Manager) templates hosts NetScaler ADC custom templates for deploying NetScaler ADC in Microsoft Azure Cloud Services. All of the templates in this repository have been developed and maintained by the NetScaler ADC engineering team. Each template in this repository has co-located documentation describing its usage and architecture. The templates attempt to codify the recommended deployment architecture of the NetScaler ADC VPX, to introduce the user to the NetScaler ADC, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their particular production and testing needs. Most templates require sufficient subscriptions to portal.azure.com to create resources and deploy templates.

NetScaler ADC VPX Azure Resource Manager (ARM) templates are designed to ensure an easy and consistent way of deploying standalone NetScaler ADC VPX instances. These templates increase reliability and system availability with built-in redundancy. The ARM templates support Bring Your Own License (BYOL) or hourly based selections; the choice is either mentioned in the template description or offered during template deployment.

For more information on how to provision a NetScaler ADC VPX instance on Microsoft Azure using ARM (Azure Resource Manager) templates, visit NetScaler ADC Azure templates. For more information on how to deploy a NetScaler ADC VPX instance on Microsoft Azure, see Deploy a NetScaler ADC VPX Instance on Microsoft Azure. For more information on how a NetScaler ADC VPX instance works on Azure, see How a NetScaler ADC VPX Instance Works on Azure.
Deployment Steps

When users deploy a NetScaler ADC VPX instance on Microsoft Azure Resource Manager (ARM), they can use the Azure cloud computing capabilities along with NetScaler ADC load balancing and traffic management features for their business needs. Users can deploy NetScaler ADC VPX instances on Azure Resource Manager either as standalone instances or as high availability pairs in active-standby mode.

Users can deploy a NetScaler ADC VPX instance on Microsoft Azure in either of two ways:

Through the Azure Marketplace. The NetScaler ADC VPX virtual appliance is available as an image in the Microsoft Azure Marketplace. The Azure Resource Manager template is published in the Azure Marketplace and can be used to deploy NetScaler ADC in a standalone or HA pair deployment.

Using the NetScaler ADC Azure Resource Manager (ARM) JSON template available on GitHub. For more information, see the GitHub repository for NetScaler ADC solution templates.

Choosing the Right Azure Instance

VPX virtual appliances on Azure can be deployed on any instance type that has two or more cores and more than 2 GB of memory. The following table lists the recommended Azure instance type for each ADC VPX license:

VPX Model    Azure Instance (recommended)
VPX10        Standard D2s v3
VPX200       Standard D2s v3
VPX1000      Standard D4s v3
VPX3000      Standard D8s v3

Once the license and instance type to be used for the deployment are known, users can provision a NetScaler ADC VPX instance on Azure using the recommended multi-NIC multi-IP architecture.

Multi-NIC Multi-IP Architecture (Three-NIC)

In this deployment type, users can have more than one network interface (NIC) attached to a VPX instance. Any NIC can have one or more IP configurations (static or dynamic public and private IP addresses) assigned to it. The multi-NIC architecture can be used for both standalone and HA pair deployments.
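The VPX-model-to-instance sizing table above can be captured as a simple lookup. This is an illustrative sketch, not an official sizing API; the underscore form (for example, Standard_D2s_v3) is the Azure resource name for the sizes listed.

```python
# Recommended Azure instance type per ADC VPX license, mirroring the
# sizing table above. Illustrative lookup only, not a Citrix or Azure API.
VPX_AZURE_INSTANCE = {
    "VPX10":   "Standard_D2s_v3",
    "VPX200":  "Standard_D2s_v3",
    "VPX1000": "Standard_D4s_v3",
    "VPX3000": "Standard_D8s_v3",
}

def recommended_instance(model: str) -> str:
    """Return the recommended Azure instance type for a VPX license."""
    try:
        return VPX_AZURE_INSTANCE[model]
    except KeyError:
        raise ValueError(f"No sizing guidance for {model!r}") from None

print(recommended_instance("VPX1000"))  # Standard_D4s_v3
```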
The following ARM templates can be used:

NetScaler ADC standalone: ARM Template-Standalone 3-NIC
NetScaler ADC HA pair: ARM Template-HA Pair 3-NIC

Refer to the following use cases:

Configure a High-Availability Setup with Multiple IP Addresses and NICs
Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands

NetScaler ADM Deployment Architecture

The following image provides an overview of how NetScaler ADM connects with Azure to provision NetScaler ADC VPX instances in Microsoft Azure. Users are required to have three subnets to provision and manage NetScaler ADC VPX instances in Microsoft Azure. A security group must be created for each subnet. The rules specified in the Network Security Group (NSG) govern the communication across the subnets. The NetScaler ADM service agent helps users to provision and manage NetScaler ADC VPX instances.

Configure a High-Availability Setup with Multiple IP Addresses and NICs

In a Microsoft Azure deployment, a high-availability configuration of two NetScaler ADC VPX instances is achieved by using the Azure Load Balancer (ALB). This is achieved by configuring a health probe on the ALB, which monitors each VPX instance by sending health probes every 5 seconds to both the primary and secondary instances. In this setup, only the primary node responds to health probes; the secondary does not. Once the primary sends the response to the health probe, the ALB starts sending data traffic to that instance. If the primary instance misses two consecutive health probes, the ALB stops redirecting traffic to that instance. On failover, the new primary starts responding to health probes and the ALB redirects traffic to it. The standard VPX high availability failover time is three seconds, so the total failover time for traffic switching can be a maximum of 13 seconds. Users can deploy a pair of NetScaler ADC VPX instances with multiple NICs in an active-passive high availability (HA) setup on Azure.
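The 13-second maximum quoted above follows directly from the figures given: two missed 5-second probes for ALB detection, plus the 3-second VPX failover. As a worked check (illustrative arithmetic only, not an Azure API):

```python
# Worst-case traffic-switch time for the ALB-monitored HA pair,
# using the figures stated in the text. Illustrative arithmetic only.

PROBE_INTERVAL_S = 5   # ALB probes each instance every 5 seconds
MISSED_PROBES = 2      # ALB marks a node down after 2 consecutive misses
VPX_FAILOVER_S = 3     # standard VPX HA failover time

def max_traffic_switch_time() -> int:
    # ALB detection window plus the VPX pair's own failover time.
    return MISSED_PROBES * PROBE_INTERVAL_S + VPX_FAILOVER_S

print(max_traffic_switch_time())  # 13
```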
Each NIC can contain multiple IP addresses. The following options are available for a multi-NIC high availability deployment:

High availability using Azure Availability Sets
High availability using Azure Availability Zones

For more information about Azure Availability Sets and Availability Zones, see the Azure documentation Manage the Availability of Linux Virtual Machines.

High Availability using Availability Set

A high availability setup using an availability set must meet the following requirements:

An HA Independent Network Configuration (INC)
The Azure Load Balancer (ALB) in Direct Server Return (DSR) mode

All traffic goes through the primary node. The secondary node remains in standby mode until the primary node fails.

Note: For a NetScaler VPX high availability deployment on the Azure cloud to work, users need a floating public IP (PIP) that can be moved between the two VPX nodes. The Azure Load Balancer (ALB) provides that floating PIP, which is moved to the second node automatically in the event of a failover.

In an active-passive deployment, the ALB front-end public IP (PIP) addresses are added as the VIP addresses in each VPX node. In an HA-INC configuration, the VIP addresses are floating and the SNIP addresses are instance specific.

Users can deploy a VPX pair in active-passive high availability mode in two ways:

NetScaler ADC VPX standard high availability template: use this option to configure an HA pair with the default option of three subnets and six NICs.
Windows PowerShell commands: use this option to configure an HA pair according to your subnet and NIC requirements.

This section describes how to deploy a VPX pair in an active-passive HA setup by using the NetScaler template. To deploy with PowerShell commands, see Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands.
Configure HA-INC Nodes by using the NetScaler High Availability Template

Users can quickly and efficiently deploy a pair of VPX instances in HA-INC mode by using the standard template. The template creates two nodes, with three subnets and six NICs. The subnets are for management, client-side, and server-side traffic, and each subnet has two NICs, one for each VPX instance. Complete the following steps to launch the template and deploy a high availability VPX pair using Azure Availability Sets.

1. From the Azure Marketplace, select and initiate the NetScaler solution template. The template appears.
2. Ensure that the deployment type is Resource Manager and select Create. The Basics page appears.
3. Create a Resource Group and select OK. The General Settings page appears.
4. Type the details and select OK. The Network Setting page appears.
5. Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears.
6. Review the configuration and edit it as needed. Select OK to confirm. The Buy page appears.
7. Select Purchase to complete the deployment.

It might take a moment for the Azure Resource Group to be created with the required configurations. After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, and health probes. The high availability pair appears as ns-vpx0 and ns-vpx1. If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal. Next, users need to configure the load-balancing virtual server with the ALB's front-end public IP (PIP) address on the primary node. To find the ALB PIP, select ALB > Frontend IP configuration. See the Resources section for more information about how to configure the load-balancing virtual server.
Resources

The following links provide additional information related to HA deployment and virtual server configuration:

Configuring High Availability Nodes in Different Subnets
Set up Basic Load Balancing

Related resources:

Configure a High-Availability Setup with Multiple IP Addresses and NICs by using PowerShell Commands
Configure GSLB on an Active-Standby High-Availability Setup

High Availability using Availability Zones

Azure Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking and increasing resiliency. Only specific Azure regions support Availability Zones. For more information, see the Azure documentation Regions and Availability Zones in Azure.

Users can deploy a VPX pair in high availability mode by using the template called "NetScaler 13.0 HA using Availability Zones," available in the Azure Marketplace. Complete the following steps to launch the template and deploy a high availability VPX pair using Azure Availability Zones.

1. From the Azure Marketplace, select and initiate the NetScaler solution template.
2. Ensure that the deployment type is Resource Manager and select Create. The Basics page appears.
3. Enter the details and click OK. Note: Ensure that an Azure region that supports Availability Zones is selected. For more information about regions that support Availability Zones, see the Azure documentation Regions and Availability Zones in Azure. The General Settings page appears.
4. Type the details and select OK. The Network Setting page appears.
5. Check the VNet and subnet configurations, edit the required settings, and select OK. The Summary page appears.
6. Review the configuration and edit it as needed. Select OK to confirm. The Buy page appears.
7. Select Purchase to complete the deployment.

It might take a moment for the Azure Resource Group to be created with the required configurations.
After completion, select the Resource Group in the Azure portal to see the configuration details, such as LB rules, back-end pools, and health probes. The high availability pair appears as ns-vpx0 and ns-vpx1, and users can see the location under the Location column. If further modifications are required for the HA setup, such as creating more security rules and ports, users can do that from the Azure portal. For more detailed information on provisioning NetScaler ADC VPX instances on Microsoft Azure, see Provisioning NetScaler ADC VPX Instances on Microsoft Azure.

NetScaler Application Delivery Management

NetScaler Application Delivery Management Service (NetScaler ADM) provides an easy and scalable solution to manage NetScaler ADC deployments that include NetScaler ADC MPX, NetScaler ADC VPX, NetScaler Gateway, NetScaler Secure Web Gateway, NetScaler ADC SDX, NetScaler ADC CPX, and NetScaler SD-WAN appliances that are deployed on-premises or in the cloud. Users can use this cloud solution to manage, monitor, and troubleshoot the entire global application delivery infrastructure from a single, unified, centralized cloud-based console. NetScaler ADM Service provides all the capabilities required to quickly set up, deploy, and manage application delivery in NetScaler ADC deployments, along with rich analytics of application health, performance, and security.

NetScaler ADM Service provides the following benefits:

Agile – Easy to operate, update, and consume. The service model of NetScaler ADM Service is available over the cloud, making it easy to operate, update, and use the features provided by NetScaler ADM Service. The frequency of updates, combined with the automated update feature, quickly enhances user NetScaler ADC deployments.

Faster time to value – Quicker achievement of business goals. Unlike a traditional on-premises deployment, users can set up their NetScaler ADM Service with a few clicks.
Users not only save installation and configuration time, but also avoid wasting time and resources on potential errors.

Multi-Site Management – A single pane of glass for instances across multi-site data centers. With the NetScaler ADM Service, users can manage and monitor NetScaler ADCs in various types of deployments. Users have one-stop management for NetScaler ADCs deployed on-premises and in the cloud.

Operational Efficiency – An optimized and automated way to achieve higher operational productivity. With the NetScaler ADM Service, user operational costs are reduced by saving time, money, and resources on maintaining and upgrading traditional hardware deployments.

How NetScaler ADM Service Works

NetScaler ADM Service is available as a service on Citrix Cloud. After users sign up for Citrix Cloud and start using the service, they install agents in their network environment or initiate the built-in agent in the instances. Then, they add the instances they want to manage to the service. An agent enables communication between the NetScaler ADM Service and the managed instances in the user data center. The agent collects data from the managed instances in the user network and sends it to the NetScaler ADM Service. When users add an instance to the NetScaler ADM Service, it implicitly adds itself as a trap destination and the service collects an inventory of the instance. The service collects instance details such as:

Host name
Software version
Running and saved configuration
Certificates
Entities configured on the instance

NetScaler ADM Service periodically polls managed instances to collect information. The following image illustrates the communication between the service, the agents, and the instances.

Documentation Guide

The NetScaler ADM Service documentation includes information about how to get started with the service, a list of features supported on the service, and configuration specific to this service solution.
NetScaler ADC WAF and OWASP Top Ten – 2017

The Open Web Application Security Project (OWASP) released the OWASP Top 10 for 2017 for web application security. This list documents the most common web application vulnerabilities and is a great starting point to evaluate web security. Here we detail how to configure the NetScaler ADC Web Application Firewall (WAF) to mitigate these flaws. WAF is available as an integrated module in the NetScaler ADC (Premium Edition) and in a complete range of appliances. The full OWASP Top 10 document is available at OWASP Top Ten.

The following list maps each OWASP Top 10 (2017) entry to the NetScaler ADC WAF features that address it:

A1:2017 – Injection: Injection attack prevention (SQL or any other custom injections such as OS command injection, XPath injection, and LDAP injection), auto update signature feature
A2:2017 – Broken Authentication: AAA, Cookie Tampering protection, Cookie Proxying, Cookie Encryption, CSRF tagging, use of SSL
A3:2017 – Sensitive Data Exposure: Credit Card protection, Safe Commerce, Cookie Proxying, and Cookie Encryption
A4:2017 – XML External Entities (XXE): XML protection including WS-I checks, XML message validation, and XML SOAP fault filtering check
A5:2017 – Broken Access Control: AAA, the Authorization security feature within the AAA module of NetScaler, form protections, Cookie Tampering protections, StartURL, and ClosureURL
A6:2017 – Security Misconfiguration: PCI reports, SSL features, signature generation from vulnerability scan reports such as Cenzic, Qualys, AppScan, WebInspect, and WhiteHat; also specific protections such as Cookie encryption, proxying, and tampering
A7:2017 – Cross Site Scripting (XSS): XSS attack prevention; blocks all OWASP XSS cheat sheet attacks
A8:2017 – Insecure Deserialization: XML security checks, GWT content type, custom signatures, XPath for JSON and XML
A9:2017 – Using Components with Known Vulnerabilities: Vulnerability scan reports, Application Firewall templates, and custom signatures
A10:2017 – Insufficient Logging & Monitoring: User-configurable custom logging, NetScaler ADC Management and Analytics System

A1:2017 – Injection

Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into running unintended commands or accessing data without proper authorization.

ADC WAF Protections

The SQL injection prevention feature protects against common injection attacks. Custom injection patterns can be uploaded to protect against any type of injection attack, including XPath and LDAP. This applies to both HTML and XML payloads. The auto update signature feature keeps the injection signatures up to date.

The field format protection feature allows the administrator to restrict any user parameter to a regular expression. For instance, you can enforce that a zip-code field contains integers only, or even 5-digit integers only.

Form field consistency: validate each submitted user form against the user session form signature to ensure the validity of all form elements.

Buffer overflow checks ensure that the URL, headers, and cookies are within the right limits, blocking any attempts to inject large scripts or code.
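The field format protection described above can be illustrated with a short sketch. The field names and regular expressions below are hypothetical examples, not NetScaler's built-in rules:

```python
import re

# Hypothetical per-field format rules, in the spirit of the field format
# protection: each form field is restricted to an administrator-chosen
# regular expression.
FIELD_RULES = {
    "zipcode": re.compile(r"\d{5}"),   # exactly 5 digits
    "quantity": re.compile(r"\d+"),    # integers only
}

def field_format_ok(field: str, value: str) -> bool:
    """Return True if the field has no rule or the value fully matches its rule."""
    rule = FIELD_RULES.get(field)
    return rule is None or rule.fullmatch(value) is not None

print(field_format_ok("zipcode", "30328"))     # True
print(field_format_ok("zipcode", "1 OR 1=1"))  # False
```

In the real Web Application Firewall, such rules are configured per profile (and can be learned from traffic) rather than hard-coded.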
A2:2017 – Broken Authentication

Application functions related to authentication and session management are often implemented incorrectly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities temporarily or permanently.

ADC WAF Protections

The NetScaler ADC AAA module performs user authentication and provides Single Sign-On functionality to back-end applications. It is integrated into the NetScaler ADC AppExpert policy engine to allow custom policies based on user and group information. Using SSL offloading and URL transformation capabilities, the firewall can also help sites use secure transport layer protocols to prevent stealing of session tokens by network sniffing. Cookie Proxying and Cookie Encryption can be employed to completely mitigate cookie stealing.

A3:2017 – Sensitive Data Exposure

Many web applications and APIs do not properly protect sensitive data, such as financial, healthcare, and PII data. Attackers may steal or modify such poorly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data may be compromised without extra protection, such as encryption at rest or in transit, and requires special precautions when exchanged with the browser.

ADC WAF Protections

The Application Firewall protects applications from leaking sensitive data such as credit card details. Sensitive data can be configured as Safe objects in Safe Commerce protection to avoid exposure. Any sensitive data in cookies can be protected by Cookie Proxying and Cookie Encryption.

A4:2017 – XML External Entities (XXE)

Many older or poorly configured XML processors evaluate external entity references within XML documents. External entities can be used to disclose internal files using the file URI handler, internal file shares, internal port scanning, remote code execution, and denial of service attacks.
ADC WAF Protections

In addition to detecting and blocking common application threats that can be adapted for attacking XML-based applications (that is, cross-site scripting, command injection, and so on), the ADC Application Firewall includes a rich set of XML-specific security protections. These include schema validation to thoroughly verify SOAP messages and XML payloads, and a powerful XML attachment check to block attachments containing malicious executables or viruses. Automatic traffic inspection methods block XPath injection attacks on URLs and forms aimed at gaining access. The ADC Application Firewall also thwarts various DoS attacks, including external entity references, recursive expansion, excessive nesting, and malicious messages containing either long or numerous attributes and elements.

A5:2017 – Broken Access Control

Restrictions on what authenticated users are allowed to do are often not properly enforced. Attackers can exploit these flaws to access unauthorized functionality and data, such as accessing other users’ accounts, viewing sensitive files, modifying other users’ data, and changing access rights.

ADC WAF Protections

The AAA feature, which supports authentication, authorization, and auditing for all application traffic, allows a site administrator to manage access controls with the ADC appliance. The Authorization security feature within the AAA module of the ADC appliance enables the appliance to verify which content on a protected server it should allow each user to access.

Form field consistency: if object references are stored as hidden fields in forms, form field consistency lets you validate that these fields are not tampered with on subsequent requests.

Cookie Proxying and Cookie consistency: object references that are stored in cookie values can be validated with these protections.

Start URL check with URL closure: allows user access to a predefined allow list of URLs.
URL closure builds a list of all URLs seen in valid responses during the user session and automatically allows access to them during that session.

A6:2017 – Security Misconfiguration

Security misconfiguration is the most commonly seen issue. It is commonly a result of insecure default configurations, incomplete or improvised configurations, open cloud storage, misconfigured HTTP headers, and verbose error messages containing sensitive information. Not only must all operating systems, frameworks, libraries, and applications be securely configured, but they must also be patched and upgraded in a timely fashion.

ADC WAF Protections

The PCI-DSS report generated by the Application Firewall documents the security settings on the firewall device. Reports from scanning tools are converted to ADC WAF signatures to handle security misconfigurations. ADC WAF supports Cenzic, IBM AppScan (Enterprise and Standard), Qualys, TrendMicro, WhiteHat, and custom vulnerability scan reports.

A7:2017 – Cross Site Scripting (XSS)

XSS flaws occur whenever an application includes untrusted data in a new webpage without proper validation or escaping, or updates an existing webpage with user-supplied data using a browser API that can create HTML or JavaScript. XSS allows attackers to run scripts in the victim’s browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.

ADC WAF Protections

XSS protection guards against common XSS attacks. Custom XSS patterns can be uploaded to modify the default list of allowed tags and attributes. The ADC WAF uses a white list of allowed HTML attributes and tags to detect XSS attacks. This applies to both HTML and XML payloads. ADC WAF blocks all the attacks listed in the OWASP XSS Filter Evaluation Cheat Sheet. The field format check prevents an attacker from sending inappropriate web form data, which can be a potential XSS attack. Form field consistency can also be applied.
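The allow-list approach to XSS detection can be sketched as follows. This is a deliberately simplified illustration: the tag set is an assumed example, and a real WAF parses HTML properly and also inspects attributes:

```python
import re

# Simplified allowlist of HTML tags, illustrating the white-list approach:
# any tag outside the list is flagged as a potential XSS attempt.
ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br"}

# Matches opening and closing tag names, e.g. <script>, </b>.
TAG_RE = re.compile(r"</?\s*([a-zA-Z][a-zA-Z0-9]*)")

def contains_disallowed_tag(payload: str) -> bool:
    """Return True if the payload uses any tag not on the allowlist."""
    return any(m.group(1).lower() not in ALLOWED_TAGS
               for m in TAG_RE.finditer(payload))

print(contains_disallowed_tag("<b>hello</b>"))               # False
print(contains_disallowed_tag("<script>alert(1)</script>"))  # True
```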
A8:2017 – Insecure Deserialization

Insecure deserialization often leads to remote code execution. Even if deserialization flaws do not result in remote code execution, they can be used to perform attacks, including replay attacks, injection attacks, and privilege escalation attacks.

ADC WAF Protections

JSON payload inspection with custom signatures. XML security: protects against XML denial of service (xDoS), XML SQL and XPath injection, and cross-site scripting, with format checks, WS-I basic profile compliance, and the XML attachments check. Field format checks, Cookie Consistency, and Field Consistency can also be used.

A9:2017 – Using Components with Known Vulnerabilities

Components, such as libraries, frameworks, and other software modules, run with the same privileges as the application. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications and APIs using components with known vulnerabilities may undermine application defenses and enable various attacks and impacts.

ADC WAF Protections

NetScaler recommends keeping third-party components up to date. Vulnerability scan reports that are converted to ADC signatures can be used to virtually patch these components. Application Firewall templates that are available for these vulnerable components can be used. Custom signatures can be bound with the firewall to protect these components.

A10:2017 – Insufficient Logging & Monitoring

Insufficient logging and monitoring, coupled with missing or ineffective integration with incident response, allows attackers to further attack systems, maintain persistence, pivot to more systems, and tamper with, extract, or destroy data. Most breach studies show the time to detect a breach is over 200 days, and breaches are typically detected by external parties rather than internal processes or monitoring.
ADC WAF Protections

When the log action is enabled for security checks or signatures, the resulting log messages provide information about the requests and responses that the Application Firewall has observed while protecting your websites and applications. The Application Firewall offers the convenience of using the built-in ADC database for identifying the locations corresponding to the IP addresses from which malicious requests originate. Default format (PI) expressions give the flexibility to customize the information included in the logs, with the option to add the specific data to capture in the Application Firewall-generated log messages. The Application Firewall supports CEF logs.

Application Security Protection – NetScaler ADM

NetScaler Application Delivery Management Service (NetScaler ADM) provides a scalable solution to manage NetScaler ADC deployments that include NetScaler ADC MPX, NetScaler ADC VPX, NetScaler Gateway, NetScaler Secure Web Gateway, NetScaler ADC SDX, NetScaler ADC CPX, and NetScaler SD-WAN appliances deployed on-premises or in the cloud.

NetScaler ADM Application Analytics and Management Features

The following sections summarize the salient features that are key to the ADM role in application security.

Application Analytics and Management

The Application Analytics and Management feature of NetScaler ADM strengthens the application-centric approach to help users address various application delivery challenges. This approach gives users visibility into the health scores of applications, helps users determine the security risks, and helps users detect anomalies in the application traffic flows and take corrective actions. The most important of these capabilities for application security is Application Security Analytics: the App Security Dashboard provides a holistic view of the security status of user applications.
For example, it shows key security metrics such as security violations, signature violations, and threat indexes. The App Security dashboard also displays attack-related information, such as SYN attacks, small window attacks, and DNS flood attacks, for the discovered NetScaler ADC instances.

StyleBooks

StyleBooks simplify the task of managing complex NetScaler ADC configurations for user applications. A StyleBook is a template that users can use to create and manage NetScaler ADC configurations. Here users are primarily concerned with the StyleBook used to deploy the Web Application Firewall. For more information on StyleBooks, see: StyleBooks.

Analytics

Analytics provides an easy and scalable way to look into the various insights of the NetScaler ADC instances’ data to describe, predict, and improve application performance. Users can use one or more analytics features simultaneously. The most important of these for application security are:

Security Insight: provides a single-pane solution to help users assess user application security status and take corrective actions to secure user applications. See: Security Insight.
Bot Insight

For more information on analytics, see: Analytics.

Other features that are important to ADM functionality are:

Event Management

Events represent occurrences or errors on a managed NetScaler ADC instance. For example, when there is a system failure or change in configuration, an event is generated and recorded on NetScaler ADM. The related features that users can configure or view by using NetScaler ADM are creating event rules (Create Event Rules) and viewing and exporting syslog messages (View and Export Syslog Messages). For more information on event management, see: Events.

Instance Management

Instance management enables users to manage the NetScaler ADC, NetScaler Gateway, NetScaler Secure Web Gateway, and NetScaler SD-WAN instances. For more information on instance management, see: Adding Instances.
License Management

License management allows users to manage NetScaler ADC licenses by configuring NetScaler ADM as a license manager.

NetScaler ADC pooled capacity (Pooled Capacity): a common license pool from which a user NetScaler ADC instance can check out one instance license and only as much bandwidth as it needs. When the instance no longer requires these resources, it checks them back in to the common pool, making the resources available to other instances that need them.

NetScaler ADC VPX check-in and check-out licensing (NetScaler ADC VPX Check-in and Check-out Licensing): NetScaler ADM allocates licenses to NetScaler ADC VPX instances on demand. A NetScaler ADC VPX instance can check out a license from NetScaler ADM when the instance is provisioned, or check its license back in to NetScaler ADM when the instance is removed or destroyed.

For more information on license management, see: Pooled Capacity.

Configuration Management

NetScaler ADM allows users to create configuration jobs that help them perform configuration tasks, such as creating entities, configuring features, replicating configuration changes, upgrading systems, and other maintenance activities, with ease on multiple instances. Configuration jobs and templates simplify the most repetitive administrative tasks to a single task on NetScaler ADM. For more information on configuration management, see: Configuration Jobs.

Configuration Audit

Configuration audit enables users to monitor and identify anomalies in the configurations across user instances. Configuration advice (Get Configuration Advice on Network Configuration) allows users to identify any configuration anomaly. Audit templates (Create Audit Templates) allow users to monitor the changes across a specific configuration. For more information on configuration audit, see: Configuration Audit.
Signatures provide the following deployment options to help users optimize the protection of user applications:

Negative Security Model: with the negative security model, users employ a rich set of preconfigured signature rules to apply the power of pattern matching to detect attacks and protect against application vulnerabilities. Users block only what they don’t want and allow the rest. Users can add their own signature rules, based on the specific security needs of user applications, to design their own customized security solutions.

Hybrid Security Model: in addition to using signatures, users can use positive security checks to create a configuration ideally suited for user applications. Use signatures to block what users don’t want, and use positive security checks to enforce what is allowed.

To protect user applications by using signatures, users must configure one or more profiles to use their signatures object. In a hybrid security configuration, the SQL injection and cross-site scripting patterns, and the SQL transformation rules, in the user signatures object are used not only by the signature rules, but also by the positive security checks configured in the Web Application Firewall profile that is using the signatures object.

The Web Application Firewall examines the traffic to user-protected websites and web services to detect traffic that matches a signature. A match is triggered only when every pattern in the rule matches the traffic. When a match occurs, the specified actions for the rule are invoked. Users can display an error page or error object when a request is blocked. Log messages can help users identify attacks being launched against user applications. If users enable statistics, the Web Application Firewall maintains data about requests that match a Web Application Firewall signature or security check. If the traffic matches both a signature and a positive security check, the more restrictive of the two actions is enforced.
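As a small illustration of this precedence rule (the more restrictive of the two matching actions wins), consider the following sketch; the action names and severity ordering are illustrative, not NetScaler's internal representation:

```python
# Toy severity ordering: blocking is more restrictive than logging,
# which is more restrictive than taking no action.
SEVERITY = {"none": 0, "log": 1, "block": 2}

def effective_action(signature_action: str, check_action: str) -> str:
    """Return the more restrictive of the two configured actions."""
    return max(signature_action, check_action, key=lambda a: SEVERITY[a])

# A signature matched with block disabled (log only), while the SQL
# injection positive security check is set to block: the request is blocked.
print(effective_action("log", "block"))  # block
```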
For example, if a request matches a signature rule for which the block action is disabled, but the request also matches an SQL Injection positive security check for which the action is block, the request is blocked. In this case, the signature violation might be logged as <not blocked>, although the request is blocked by the SQL injection check.

Customization: if necessary, users can add their own rules to a signatures object. Users can also customize the SQL/XSS patterns. The option to add their own signature rules, based on the specific security needs of user applications, gives users the flexibility to design their own customized security solutions. Users block only what they don’t want and allow the rest. A specific fast-match pattern in a specified location can significantly reduce processing overhead to optimize performance. Users can add, modify, or remove SQL injection and cross-site scripting patterns. Built-in RegEx and expression editors help users configure user patterns and verify their accuracy.

Use Cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on AWS combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, flexible licensing, and other essential application delivery capabilities in a single VPX instance, conveniently available via the AWS Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on AWS enables several compelling use cases that not only support the immediate needs of today’s enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.
NetScaler Web Application Firewall (WAF)

NetScaler Web Application Firewall (WAF) is an enterprise-grade solution offering state-of-the-art protections for modern applications. NetScaler WAF mitigates threats against public-facing assets, including websites, web applications, and APIs. NetScaler WAF includes IP reputation-based filtering, bot mitigation, OWASP Top 10 application threat protections, Layer 7 DDoS protection, and more. Also included are options to enforce authentication, strong SSL/TLS ciphers, TLS 1.3, rate limiting, and rewrite policies. Using both basic and advanced WAF protections, NetScaler WAF provides comprehensive protection for your applications with unparalleled ease of use. Getting up and running is a matter of minutes. Further, using an automated learning model called dynamic profiling, NetScaler WAF saves users precious time: by automatically learning how a protected application works, NetScaler WAF adapts to the application even as developers deploy and alter it. NetScaler WAF helps with compliance for all major regulatory standards and bodies, including PCI-DSS, HIPAA, and more. With our CloudFormation templates, it has never been easier to get up and running quickly. With auto scaling, users can rest assured that their applications remain protected even as their traffic scales up.

Web Application Firewall Deployment Strategy

The first step in deploying the web application firewall is to evaluate which applications or specific data need maximum security protection, which ones are less vulnerable, and the ones for which security inspection can safely be bypassed. This helps users come up with an optimal configuration, and design appropriate policies and bind points to segregate the traffic.
For example, users might want to configure a policy to bypass security inspection of requests for static web content, such as images, MP3 files, and movies, and configure another policy to apply advanced security checks to requests for dynamic content. Users can use multiple policies and profiles to protect different contents of the same application.

The next step is to baseline the deployment. Start by creating a virtual server and running test traffic through it to get an idea of the rate and amount of traffic flowing through the user system. Then, deploy the Web Application Firewall. Use NetScaler ADM and the Web Application Firewall StyleBook to configure the Web Application Firewall. See the StyleBook section below in this guide for details.

After the Web Application Firewall is deployed and configured with the Web Application Firewall StyleBook, a useful next step is to implement the NetScaler ADC WAF and OWASP Top Ten mitigations.

Finally, three of the Web Application Firewall protections are especially effective against common types of web attacks, and are therefore more commonly used than any of the others. Thus, they should be implemented in the initial deployment. They are:

HTML Cross-Site Scripting: examines requests and responses for scripts that attempt to access or modify content on a different website than the one on which the script is located. When this check finds such a script, it either renders the script harmless before forwarding the request or response to its destination, or it blocks the connection.

HTML SQL Injection: examines requests that contain form field data for attempts to inject SQL commands into a SQL database. When this check detects injected SQL code, it either blocks the request or renders the injected SQL code harmless before forwarding the request to the web server.
Note: if both of the following conditions apply to the user configuration, users should make certain that their Web Application Firewall is correctly configured: users enable the HTML Cross-Site Scripting check or the HTML SQL Injection check (or both), and the protected websites accept file uploads or contain web forms that can contain large POST body data. For more information about configuring the Web Application Firewall to handle this case, see: Configuring the Web App Firewall.

Buffer Overflow: examines requests to detect attempts to cause a buffer overflow on the web server.

Configuring the Web Application Firewall (WAF)

The following steps assume that the WAF is already enabled and functioning correctly. NetScaler recommends that users configure WAF using the Web Application Firewall StyleBook. Most users find it the easiest method to configure the Web Application Firewall, and it is designed to prevent mistakes. Both the GUI and the command line interface are intended for experienced users, primarily to modify an existing configuration or use advanced options.

SQL Injection

The Application Firewall HTML SQL Injection check provides special defenses against the injection of unauthorized SQL code that might break user application security. The NetScaler Web Application Firewall examines the request payload for injected SQL code in three locations: 1) POST body, 2) headers, and 3) cookies. A default set provides the keywords and special characters that are commonly used to launch SQL attacks. Users can add new patterns, and they can edit the default set to customize the SQL check inspection. Several parameters can be configured for SQL injection processing. Users can check for SQL wildcard characters.
Users can change the SQL Injection type and select one of the 4 options (SQLKeyword, SQLSplChar, SQLSplCharANDKeyword, SQLSplCharORKeyword) to indicate how to evaluate the SQL keywords and SQL special characters when processing the payload. The SQL Comments Handling parameter gives users an option to specify the type of comments that need to be inspected or exempted during SQL Injection detection. Users can deploy relaxations to avoid false positives. The learning engine can provide recommendations for configuring relaxation rules.

The following options are available for configuring an optimized SQL Injection protection for the user application:

Block – if users enable block, the block action is triggered only if the input matches the SQL injection type specification. For example, if SQLSplCharANDKeyword is configured as the SQL injection type, a request is not blocked if it contains no keywords, even if SQL special characters are detected in the input. Such a request is blocked if the SQL injection type is set to either SQLSplChar or SQLSplCharORKeyword.

Log – if users enable the log feature, the SQL Injection check generates log messages indicating the actions that it takes. If block is disabled, a separate log message is generated for each input field in which the SQL violation was detected. However, only one message is generated when the request is blocked. Similarly, one log message per request is generated for the transform operation, even when SQL special characters are transformed in multiple fields. Users can monitor the logs to determine whether responses to legitimate requests are getting blocked. A large increase in the number of log messages can indicate attempts to launch an attack.

Stats – if enabled, the stats feature gathers statistics about violations and logs. An unexpected surge in the stats counter might indicate that the user application is under attack.
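The four SQL injection type settings can be sketched as combinations of two pieces of evidence: whether the payload contains a SQL keyword, and whether it contains a SQL special character. The keyword and character sets below are toy examples, not the firewall's default signatures:

```python
import re

# Toy evidence sets; the real defaults come from the signatures object.
SQL_KEYWORDS = {"select", "union", "insert", "drop"}
SQL_SPECIAL = set("';\\")

def violates(payload: str, injection_type: str) -> bool:
    """Evaluate the payload under one of the four SQL injection type settings."""
    words = set(re.findall(r"[a-zA-Z]+", payload.lower()))
    has_kw = bool(words & SQL_KEYWORDS)
    has_sp = any(c in SQL_SPECIAL for c in payload)
    return {
        "SQLKeyword": has_kw,
        "SQLSplChar": has_sp,
        "SQLSplCharANDKeyword": has_kw and has_sp,
        "SQLSplCharORKeyword": has_kw or has_sp,
    }[injection_type]

payload = "name = 'a' OR 1=1"  # special characters present, no keyword
print(violates(payload, "SQLSplCharANDKeyword"))  # False
print(violates(payload, "SQLSplChar"))            # True
```

This mirrors the documented behavior: under SQLSplCharANDKeyword, special characters alone do not trigger a violation.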
If legitimate requests are getting blocked, users might have to revisit the configuration to see whether they need to configure new relaxation rules or modify the existing ones.

Learn – if users are not sure which SQL relaxation rules might be ideally suited for their applications, they can use the learn feature to generate recommendations based on the learned data. The Web Application Firewall learning engine monitors the traffic and provides SQL learning recommendations based on the observed values. To get optimal benefit without compromising performance, users might want to enable the learn option for a short time to get a representative sample of the rules, and then deploy the rules and disable learning.

Transform SQL special characters – the Web Application Firewall considers three characters, single straight quote ('), backslash (\), and semicolon (;), as special characters for SQL security check processing. The SQL Transformation feature modifies the SQL Injection code in an HTML request to ensure that the request is rendered harmless. The modified HTML request is then sent to the server. All default transformation rules are specified in the /netscaler/default_custom_settings.xml file.

The transform operation renders the SQL code inactive by making the following changes to the request:

Single straight quote (') to double straight quote (").
Backslash (\) to double backslash (\\).
Semicolon (;) is dropped completely.

These three characters (special strings) are necessary to issue commands to a SQL server. Unless a SQL command is prefaced with a special string, most SQL servers ignore that command. Therefore, the changes that the Web Application Firewall performs when transformation is enabled prevent an attacker from injecting active SQL. After these changes are made, the request can safely be forwarded to the user-protected website.
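The three transformation rules above can be expressed directly in code. This mirrors the documented rules only; it is not NetScaler's implementation:

```python
def transform_sql(value: str) -> str:
    """Render injected SQL inactive per the documented transform rules:
    single straight quote -> double straight quote, backslash -> double
    backslash, semicolon dropped."""
    return (value.replace("'", '"')
                 .replace("\\", "\\\\")
                 .replace(";", ""))

print(transform_sql("a'; DROP TABLE x;"))  # a" DROP TABLE x
```

After the transform, the quote, backslash, and semicolon can no longer terminate or chain SQL statements, so the request can be forwarded instead of blocked.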
When web forms on the user-protected website can legitimately contain SQL special strings, but the web forms do not rely on the special strings to operate correctly, users can disable blocking and enable transformation to prevent blocking of legitimate web form data without reducing the protection that the Web Application Firewall provides to the user-protected websites. The transform operation works independently of the SQL Injection Type setting. If transform is enabled and the SQL Injection type is specified as SQL keyword, SQL special characters are transformed even if the request does not contain any keywords.

Tip: users normally enable either transformation or blocking, but not both. If the block action is enabled, it takes precedence over the transform action. If users have blocking enabled, enabling transformation is redundant.

Check for SQL Wildcard Characters – wildcard characters can be used to broaden the selections of a SQL SELECT statement. These wildcard operators can be used with the LIKE and NOT LIKE operators to compare a value to similar values. The percent (%) and underscore (_) characters are frequently used as wildcards. The percent sign is analogous to the asterisk (*) wildcard character used with MS-DOS, and matches zero, one, or multiple characters in a field. The underscore is similar to the MS-DOS question mark (?) wildcard character; it matches a single number or character in an expression.

For example, users can use the following query to do a string search to find all customers whose names contain the D character:

SELECT * from customer WHERE name like "%D%";

The following example combines the operators to find any salary values that have 0 in the second and third place:

SELECT * from customer WHERE salary like '_00%';

Different DBMS vendors have extended the wildcard characters by adding extra operators. The NetScaler Web Application Firewall can protect against attacks that are launched by injecting these wildcard characters.
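A minimal sketch of literal wildcard detection, using the five default wildcard characters (%, _, ^, [, ]) as the character set; the real check also supports PCRE wildcards and relaxations:

```python
# The five default literal SQL wildcard characters.
SQL_WILDCARDS = set("%_^[]")

def has_sql_wildcard(value: str) -> bool:
    """Return True if the input contains any literal SQL wildcard character."""
    return any(c in SQL_WILDCARDS for c in value)

print(has_sql_wildcard("salary like '_00%'"))  # True
print(has_sql_wildcard("plain text"))          # False
```

As the document notes, this check is prone to false positives (these characters occur in benign input), which is why it must be used with caution.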
The 5 default wildcard characters are percent (%), underscore (_), caret (^), opening bracket ([), and closing bracket (]). This protection applies to both HTML and XML profiles. The default wildcard characters are a list of literals specified in the default signatures:

<wildchar type="LITERAL">%
<wildchar type="LITERAL">_
<wildchar type="LITERAL">^
<wildchar type="LITERAL">[
<wildchar type="LITERAL">]

Wildcard characters in an attack can be PCRE, like [^A-F]. The Web Application Firewall also supports PCRE wildcards, but the literal wildcard characters above are sufficient to block most attacks.

Note: the SQL wildcard character check is different from the SQL special character check. This option must be used with caution to avoid false positives.

Check Request Containing SQL Injection Type – the Web Application Firewall provides 4 options to implement the desired level of strictness for SQL Injection inspection, based on the individual needs of the application. The request is checked against the injection type specification to detect SQL violations. The 4 SQL injection type options are:

SQL Special Character and Keyword – both a SQL keyword and a SQL special character must be present in the input to trigger a SQL violation. This least restrictive setting is also the default setting.
SQL Special Character – at least one of the special characters must be present in the input to trigger a SQL violation.
SQL Keyword – at least one of the specified SQL keywords must be present in the input to trigger a SQL violation. Do not select this option without due consideration. To avoid false positives, make sure that none of the keywords are expected in the inputs.
SQL Special Character or Keyword – either the keyword or the special character string must be present in the input to trigger the security check violation.
Tip: If users configure the Web Application Firewall to check for inputs that contain a SQL special character, the Web Application Firewall skips web form fields that do not contain any special characters. Since most SQL servers do not process SQL commands that are not preceded by a special character, enabling this option can significantly reduce the load on the Web Application Firewall and speed up processing without placing the user protected websites at risk. SQL comments handling — By default, the Web Application Firewall checks all SQL comments for injected SQL commands. Many SQL servers, however, ignore anything in a comment, even if preceded by an SQL special character. For faster processing, if your SQL server ignores comments, you can configure the Web Application Firewall to skip comments when examining requests for injected SQL. The SQL comments handling options are: ANSI — Skip ANSI-format SQL comments, which are normally used by UNIX-based SQL databases. For example: -- (two hyphens): a comment that begins with two hyphens and ends at the end of the line. {} (braces): braces enclose the comment; the { precedes the comment and the } follows it. Braces can delimit single-line or multiple-line comments, but comments cannot be nested. /**/ : C-style comments (nested comments are not allowed). Note that /*! ... */ (a comment that begins with a slash followed by an asterisk and an exclamation mark) is not treated as a comment: MySQL Server supports this variant of C-style comments, which enables users to write code that includes MySQL extensions but is still portable, by using comments of the form /*! MySQL-specific code */. # : MySQL comments: a comment that begins with the # character and ends at the end of the line. Nested — Skip nested SQL comments, which are normally used by Microsoft SQL Server. For example: -- (two hyphens), and /**/ (nested comments are allowed). ANSI/Nested — Skip comments that adhere to both the ANSI and nested SQL comment standards.
Comments that match only the ANSI standard, or only the nested standard, are still checked for injected SQL. Check all Comments — Check the entire request for injected SQL without skipping anything. This is the default setting. Tip: Usually, users should not choose the Nested or the ANSI/Nested option unless their back-end database runs on Microsoft SQL Server. Most other types of SQL server software do not recognize nested comments. If nested comments appear in a request directed to another type of SQL server, they might indicate an attempt to breach security on that server. Check Request headers — Enable this option if, in addition to examining the input in the form fields, users want to examine the request headers for HTML SQL Injection attacks. If users use the GUI, they can enable this parameter in the Advanced Settings -> Profile Settings pane of the Web Application Firewall profile. Note: If users enable the Check Request header flag, they might have to configure a relaxation rule for the User-Agent header. The presence of the SQL keyword "like" and the SQL special character semicolon (;) might trigger false positives and block requests that contain this header. Warning: If users enable both request header checking and transformation, any SQL special characters found in headers are also transformed. The Accept, Accept-Charset, Accept-Encoding, Accept-Language, Expect, and User-Agent headers normally contain semicolons (;). Enabling both request header checking and transformation simultaneously might cause errors. InspectQueryContentTypes — Configure this option if users want to examine the request query portion for SQL Injection attacks for the specific content-types. If users use the GUI, they can configure this parameter in the Advanced Settings -> Profile Settings pane of the Application Firewall profile.
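Assuming a recent NetScaler release, the SQL Injection options discussed above map onto Web App Firewall profile parameters along the following lines. This is a hedged sketch, not taken from this document: my_profile is a placeholder profile name, and the parameter names and enum values should be verified against the CLI reference for your build.

```shell
# Select the injection type (SQLSplChar, SQLKeyword,
# SQLSplCharORKeyword, or the default SQLSplCharANDKeyword)
set appfw profile my_profile -SQLInjectionType SQLSplCharANDKeyword

# Enable the SQL wildcard character check (use with caution;
# see the false-positive note above)
set appfw profile my_profile -SQLInjectionCheckSQLWildChars ON

# SQL comments handling: checkall (default), ansi, nested, or ansinested
set appfw profile my_profile -SQLInjectionParseComments checkall

# Transform SQL special characters instead of relying on block alone
set appfw profile my_profile -SQLInjectionTransformSpecialChars ON
```

As the Tip above notes, transformation and blocking are normally not enabled together, since block takes precedence and makes the transform redundant.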
Cross-Site Scripting The HTML Cross-Site Scripting (cross-site scripting) check examines both the headers and the POST bodies of user requests for possible cross-site scripting attacks. If it finds a cross-site script, it either modifies (transforms) the request to render the attack harmless, or blocks the request. Note: The HTML Cross-Site Scripting (cross-site scripting) check works only for content type, content length, and so forth. It does not work for cookies. Also ensure that the checkRequestHeaders option is enabled in the user Web Application Firewall profile. To prevent misuse of the scripts on user protected websites to breach security on user websites, the HTML Cross-Site Scripting check blocks scripts that violate the same origin rule, which states that scripts should not access or modify content on any server but the server on which they are located. Any script that violates the same origin rule is called a cross-site script, and the practice of using scripts to access or modify content on another server is called cross-site scripting. The reason cross-site scripting is a security issue is that a web server that allows cross-site scripting can be attacked with a script that is not on that web server, but on a different web server, such as one owned and controlled by the attacker. Unfortunately, many companies have a large installed base of JavaScript-enhanced web content that violates the same origin rule. If users enable the HTML Cross-Site Scripting check on such a site, they have to generate the appropriate exceptions so that the check does not block legitimate activity. The Web Application Firewall offers various action options for implementing HTML Cross-Site Scripting protection. In addition to the Block, Log, Stats, and Learn actions, users also have the option to Transform cross-site scripts to render an attack harmless by entity-encoding the script tags in the submitted request.
Users can configure the Check complete URLs for cross-site scripting parameter to specify whether they want to inspect not just the query parameters but the entire URL to detect a cross-site scripting attack. Users can configure the InspectQueryContentTypes parameter to inspect the request query portion for a cross-site scripting attack for the specific content-types. Users can deploy relaxations to avoid false positives. The Web Application Firewall learning engine can provide recommendations for configuring relaxation rules. The following options are available for configuring an optimized HTML Cross-Site Scripting protection for the user application: Block — If users enable block, the block action is triggered if cross-site scripting tags are detected in the request. Log — If users enable the log feature, the HTML Cross-Site Scripting check generates log messages indicating the actions that it takes. If block is disabled, a separate log message is generated for each header or form field in which the cross-site scripting violation was detected. However, only one message is generated when the request is blocked. Similarly, one log message per request is generated for the transform operation, even when cross-site scripting tags are transformed in multiple fields. Users can monitor the logs to determine whether responses to legitimate requests are getting blocked. A large increase in the number of log messages can indicate attempts to launch an attack. Stats — If enabled, the stats feature gathers statistics about violations and logs. An unexpected surge in the stats counter might indicate that the user application is under attack. If legitimate requests are getting blocked, users might have to revisit the configuration to see if they must configure new relaxation rules or modify the existing ones.
Learn — If users are not sure which relaxation rules might be ideally suited for their application, they can use the learn feature to generate HTML Cross-Site Scripting rule recommendations based on the learned data. The Web Application Firewall learning engine monitors the traffic and provides learning recommendations based on the observed values. To get optimal benefit without compromising performance, users might want to enable the learn option for a short time to get a representative sample of the rules, and then deploy the rules and disable learning. Transform cross-site scripts — If enabled, the Web Application Firewall makes the following changes to requests that match the HTML Cross-Site Scripting check: Left angle bracket (<) to its HTML character entity equivalent (&lt;) Right angle bracket (>) to its HTML character entity equivalent (&gt;) This ensures that browsers do not interpret unsafe HTML tags, such as <script>, and thereby run malicious code. If users enable both request-header checking and transformation, any special characters found in request headers are also modified as described above. If scripts on the user protected website contain cross-site scripting features, but the user website does not rely upon those scripts to operate correctly, users can safely disable blocking and enable transformation. This configuration ensures that no legitimate web traffic is blocked, while stopping any potential cross-site scripting attacks. Check complete URLs for cross-site scripting — If checking of complete URLs is enabled, the Web Application Firewall examines entire URLs for HTML cross-site scripting attacks instead of checking just the query portions of URLs. Check Request headers — If Request header checking is enabled, the Web Application Firewall examines the headers of requests for HTML cross-site scripting attacks, instead of just URLs. If users use the GUI, they can enable this parameter in the Settings tab of the Web Application Firewall profile.
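The transform action described above amounts to entity-encoding the two angle brackets so that browsers render injected markup as text instead of executing it. A minimal sketch (the function name is hypothetical, for illustration only):

```python
def transform_cross_site_script(value: str) -> str:
    """Entity-encode angle brackets, as the transform action does:
    < becomes &lt; and > becomes &gt;, neutralizing tags like <script>."""
    return value.replace("<", "&lt;").replace(">", "&gt;")
```

For example, `transform_cross_site_script('<script>alert(1)</script>')` yields `&lt;script&gt;alert(1)&lt;/script&gt;`, which a browser displays as literal text rather than running as a script.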
InspectQueryContentTypes — If Request query inspection is configured, the Application Firewall examines the query of requests for cross-site scripting attacks for the specific content-types. If users use the GUI, they can configure this parameter in the Settings tab of the Application Firewall profile. Important: As part of the streaming changes, the Web Application Firewall processing of cross-site scripting tags has changed. In earlier releases, the presence of an open bracket (<), or a close bracket (>), or both open and close brackets (<>) was flagged as a cross-site scripting violation. The behavior has changed in the builds that include support for request-side streaming. The close bracket character (>) alone is no longer considered an attack. Requests are blocked when an open bracket character (<) is present, which is considered an attack, and the cross-site scripting violation is flagged. Buffer Overflow Check The Buffer Overflow check detects attempts to cause a buffer overflow on the web server. If the Web Application Firewall detects that the URL, cookies, or header are longer than the configured length, it blocks the request because it can cause a buffer overflow. The Buffer Overflow check prevents attacks against insecure operating-system or web-server software that can crash or behave unpredictably when it receives a data string that is larger than it can handle. Proper programming techniques prevent buffer overflows by checking incoming data and either rejecting or truncating overlong strings. Many programs, however, do not check all incoming data and are therefore vulnerable to buffer overflows. This issue especially affects older versions of web-server software and operating systems, many of which are still in use. The Buffer Overflow security check allows users to configure the Block, Log, and Stats actions. In addition, users can also configure the following parameters: Maximum URL Length.
The maximum length the Web Application Firewall allows in a requested URL. Requests with longer URLs are blocked. Possible Values: 0–65535. Default: 1024
Maximum Cookie Length. The maximum length the Web Application Firewall allows for all cookies in a request. Requests with longer cookies trigger the violation. Possible Values: 0–65535. Default: 4096
Maximum Header Length. The maximum length the Web Application Firewall allows for HTTP headers. Requests with longer headers are blocked. Possible Values: 0–65535. Default: 4096
Query string length. Maximum length allowed for a query string in an incoming request. Requests with longer queries are blocked. Possible Values: 0–65535. Default: 1024
Total request length. Maximum request length allowed for an incoming request. Requests with a longer length are blocked. Possible Values: 0–65535. Default: 24820
Virtual Patching/Signatures The signatures provide specific, configurable rules to simplify the task of protecting user websites against known attacks. A signature represents a pattern that is a component of a known attack on an operating system, web server, website, XML-based web service, or other resource. A rich set of preconfigured built-in or native rules offers an easy-to-use security solution, applying the power of pattern matching to detect attacks and protect against application vulnerabilities. Users can create their own signatures or use signatures in the built-in templates. The Web Application Firewall has two built-in templates: Default Signatures: This template contains a preconfigured list of over 1,300 signatures, in addition to a complete list of SQL injection keywords, SQL special strings, SQL transform rules, and SQL wildcard characters. It also contains denied patterns for cross-site scripting, and allowed attributes and tags for cross-site scripting. This is a read-only template. Users can view the contents, but they cannot add, edit, or delete anything in this template.
To use it, users must make a copy. In their own copy, users can enable the signature rules that they want to apply to their traffic, and specify the actions to be taken when the signature rules match the traffic. The signatures are derived from the rules published by Snort, an open-source intrusion prevention system capable of performing real-time traffic analysis to detect various attacks and probes. Xpath Injection Patterns: This template contains a preconfigured set of literal and PCRE keywords and special strings that are used to detect XPath (XML Path Language) injection attacks. Blank Signatures: In addition to making a copy of the built-in Default Signatures template, users can use a blank signatures template to create a signature object. The signature object that users create with the blank signatures option does not have any native signature rules, but, just like the Default Signatures template, it has all the SQL/XSS built-in entities. External-Format Signatures: The Web Application Firewall also supports external-format signatures. Users can import third-party scan reports by using the XSLT files that are supported by the NetScaler Web Application Firewall. A set of built-in XSLT files is available for selected scan tools to translate external-format files to native format (see the list of built-in XSLT files later in this section). While signatures help users reduce the risk of exposed vulnerabilities and protect mission-critical web servers, they come at a cost of additional CPU processing. It is important to choose the right signatures for the user application's needs, and to enable only the signatures that are relevant to the customer application and environment. NetScaler offers signatures in more than 10 different categories across platforms, operating systems, and technologies. The signature rules database is substantial, as attack information has built up over the years.
Most of the old rules may not be relevant for all networks, as software developers may have already patched the vulnerabilities or customers are running a more recent version of the OS. Signatures Updates NetScaler Web Application Firewall supports both automatic and manual update of signatures. Enabling auto-update is suggested so that signatures stay up to date. The signature files are hosted in an AWS environment, and it is important to allow outbound access from network firewalls to the NetScaler IPs so that the appliance can fetch the latest signature files. Updating signatures on the ADC has no effect on the processing of real-time traffic. Application Security Analytics The Application Security Dashboard provides a holistic view of the security status of user applications. For example, it shows key security metrics such as security violations, signature violations, and threat indexes. The Application Security dashboard also displays attack-related information such as SYN attacks, small window attacks, and DNS flood attacks for the discovered NetScaler ADC instances. Note: To view the metrics of the Application Security Dashboard, AppFlow for Security Insight should be enabled on the NetScaler ADC instances that users want to monitor. To view the security metrics of a NetScaler ADC instance on the application security dashboard: Log on to NetScaler ADM using the administrator credentials. Navigate to Applications > App Security Dashboard, and select the instance IP address from the Devices list. Users can further drill down on the discrepancies reported on the Application Security Investigator by clicking the bubbles plotted on the graph. Centralized Learning on ADM NetScaler Web Application Firewall (WAF) protects user web applications from malicious attacks such as SQL injection and cross-site scripting (XSS). To prevent data breaches and provide the right security protection, users must monitor their traffic for threats and real-time actionable data on attacks.
Sometimes, the attacks reported might be false positives, and those need to be provided as exceptions. The Centralized Learning engine on NetScaler ADM is a repetitive pattern filter that enables WAF to learn the behavior (the normal activities) of user web applications. Based on monitoring, the engine generates a list of suggested rules or exceptions for each security check applied on the HTTP traffic. It is much easier to deploy relaxation rules using the learning engine than to deploy them manually. To deploy the learning feature, users must first configure a Web Application Firewall profile (set of security settings) on the user NetScaler ADC appliance. For more information, see Creating Web App Firewall Profiles. NetScaler ADM generates a list of exceptions (relaxations) for each security check. As an administrator, users can review the list of exceptions in NetScaler ADM and decide to deploy or skip. Using the WAF learning feature in NetScaler ADM, users can: Configure a learning profile with the following security checks: Buffer Overflow HTML Cross-Site Scripting Note: For the cross-site scripting check, learning is limited to the FormField location. HTML SQL Injection Note: For the HTML SQL Injection check, users must configure set -sqlinjectionTransformSpecialChars ON and set -sqlinjectiontype sqlspclcharorkeywords in the NetScaler ADC instance.
Check the relaxation rules in NetScaler ADM and decide to take the necessary action (deploy or skip) Get notifications through email, Slack, and ServiceNow Use the dashboard to view relaxation details To use the WAF learning in NetScaler ADM: Configure the learning profile: Configure the Learning Profile See the relaxation rules: View Relaxation Rules and Idle Rules Use the WAF learning dashboard: View WAF Learning Dashboard StyleBook NetScaler Web Application Firewall is a Web Application Firewall (WAF) that protects web applications and sites from both known and unknown attacks, including all application-layer and zero-day threats. NetScaler ADM now provides a default StyleBook with which users can more conveniently create an application firewall configuration on NetScaler ADC instances. Deploying Application Firewall Configurations The following task assists you in deploying a load balancing configuration along with the application firewall and IP reputation policy on NetScaler ADC instances in your business network. To Create a LB Configuration with Application Firewall Settings In NetScaler ADM, navigate to Applications > Configurations > StyleBooks. The StyleBooks page displays all the StyleBooks available for customer use in NetScaler ADM. Scroll down and find the HTTP/SSL Load Balancing StyleBook with application firewall policy and IP reputation policy. Users can also search for the StyleBook by typing the name as lb-appfw. Click Create Configuration. The StyleBook opens as a user interface page on which users can enter the values for all the parameters defined in this StyleBook. Enter values for the following parameters: Load Balanced Application Name. Name of the load balanced configuration with an application firewall to deploy in the user network. Load Balanced App Virtual IP Address. Virtual IP address at which the NetScaler ADC instance receives client requests. Load Balanced App Virtual Port.
The TCP port to be used by the users in accessing the load balanced application. Load Balanced App Protocol. Select the front-end protocol from the list. Application Server Protocol. Select the protocol of the application server. As an option, users can enable and configure the Advanced Load Balancer Settings. Optionally, users can also set up an authentication server for authenticating traffic for the load balancing virtual server. Click "+" in the Server IPs and Ports section to create application servers and the ports that they can be accessed on. Users can also create FQDN names for application servers. Users can also specify the details of the SSL certificate. Users can also create monitors in the target NetScaler ADC instance. To configure an application firewall on the virtual server, enable WAF Settings. Ensure that the application firewall policy rule is true if users want to apply the application firewall settings to all traffic on that VIP. Otherwise, specify the NetScaler ADC policy rule to select a subset of requests to which to apply the application firewall settings. Next, select the type of profile to be applied: HTML or XML. Optionally, users can configure detailed application firewall profile settings by enabling the application firewall Profile Settings check box. Optionally, if users want to configure application firewall signatures, enter the name of the signature object that is created on the NetScaler ADC instance where the virtual server is to be deployed. Note: Users cannot create signature objects by using this StyleBook. Next, users can also configure any other application firewall profile settings such as StartURL settings, DenyURL settings, and others. For more information on application firewall and configuration settings, see Application Firewall. In the Target Instances section, select the NetScaler ADC instance on which to deploy the load balancing virtual server with the application firewall.
Note: Users can also click the refresh icon to add recently discovered NetScaler ADC instances in NetScaler ADM to the available list of instances in this window. Users can also enable IP Reputation check to identify the IP address that is sending unwanted requests. Users can use the IP reputation list to preemptively reject requests that are coming from the IP with the bad reputation. Tip: NetScaler recommends that users select Dry Run to check the configuration objects that must be created on the target instance before they run the actual configuration on the instance. When the configuration is successfully created, the StyleBook creates the required load balancing virtual server, application server, services, service groups, application firewall labels, application firewall policies, and binds them to the load balancing virtual server. The following figure shows the objects created in each server: To see the ConfigPack created on NetScaler ADM, navigate to Applications > Configurations. Security Insight Analytics Web and web service applications that are exposed to the Internet have become increasingly vulnerable to attacks. To protect applications from attack, users need visibility into the nature and extent of past, present, and impending threats, real-time actionable data on attacks, and recommendations on countermeasures. Security Insight provides a single-pane solution to help users assess user application security status and take corrective actions to secure user applications. How Security Insight Works Security Insight is an intuitive dashboard-based security analytics solution that gives users full visibility into the threat environment associated with user applications. Security insight is included in NetScaler ADM, and it periodically generates reports based on the user Application Firewall and ADC system security configurations. The reports include the following information for each application: Threat index. 
A single-digit rating system that indicates the criticality of attacks on the application, regardless of whether the application is protected by an ADC appliance. The more critical the attacks on an application, the higher the threat index for that application. Values range from 1 through 7. The threat index is based on attack information. The attack-related information, such as violation type, attack category, location, and client details, gives users insight into the attacks on the application. Violation information is sent to NetScaler ADM only when a violation or attack occurs. Many breaches and vulnerabilities lead to a high threat index value. Safety index. A single-digit rating system that indicates how securely users have configured the ADC instances to protect applications from external threats and vulnerabilities. The lower the security risks for an application, the higher the safety index. Values range from 1 through 7. The safety index considers both the application firewall configuration and the ADC system security configuration. For a high safety index value, both configurations must be strong. For example, if rigorous application firewall checks are in place but ADC system security measures, such as a strong password for the nsroot user, have not been adopted, applications are assigned a low safety index value. Actionable Information. Information that users need for lowering the threat index and increasing the safety index, which significantly improves application security. For example, users can review information about violations, existing and missing security configurations for the application firewall and other security features, the rate at which the applications are being attacked, and so on. Configuring Security Insight Note: Security Insight is supported on ADC instances with Premium license or ADC Advanced with AppFirewall license only. 
To configure security insight on an ADC instance, first configure an application firewall profile and an application firewall policy, and then bind the application firewall policy globally. Then, enable the AppFlow feature, configure an AppFlow collector, action, and policy, and bind the policy globally. When users configure the collector, they must specify the IP address of the NetScaler ADM service agent on which they want to monitor the reports. Configure Security Insight on an ADC Instance Run the following commands to configure an application firewall profile and policy, and bind the application firewall policy globally or to the load balancing virtual server.
add appfw profile <name> [-defaults ( basic or advanced )]
set appfw profile <name> [-startURLAction <startURLAction> ...]
add appfw policy <name> <rule> <profileName>
bind appfw global <policyName> <priority>
or,
bind lb vserver <lb vserver> -policyName <policy> -priority <priority>
Sample:
add appfw profile pr_appfw -defaults advanced
set appfw profile pr_appfw -startURLaction log stats learn
add appfw policy pr_appfw_pol "HTTP.REQ.HEADER(\"Host\").EXISTS" pr_appfw
bind appfw global pr_appfw_pol 1
or,
bind lb vserver outlook -policyName pr_appfw_pol -priority "20"
Run the following commands to enable the AppFlow feature, configure an AppFlow collector, action, and policy, and bind the policy globally or to the load balancing virtual server:
add appflow collector <name> -IPAddress <ipaddress>
set appflow param [-SecurityInsightRecordInterval <secs>] [-SecurityInsightTraffic ( ENABLED or DISABLED )]
add appflow action <name> -collectors <string>
add appflow policy <name> <rule> <action>
bind appflow global <policyName> <priority> [<gotoPriorityExpression>] [-type <type>]
or,
bind lb vserver <vserver> -policyName <policy> -priority <priority>
Sample:
add appflow collector col -IPAddress 10.102.63.85
set appflow param -SecurityInsightRecordInterval 600 -SecurityInsightTraffic ENABLED
add appflow action act1
-collectors col
add appflow action af_action_Sap_10.102.63.85 -collectors col
add appflow policy pol1 true act1
add appflow policy af_policy_Sap_10.102.63.85 true af_action_Sap_10.102.63.85
bind appflow global pol1 1 END -type REQ_DEFAULT
or,
bind lb vserver Sap -policyName af_action_Sap_10.102.63.85 -priority "20"
Enable Security Insight from NetScaler ADM Navigate to Networks > Instances > NetScaler ADC and select the instance type. For example, VPX. Select the instance and, from the Select Action list, select Configure Analytics. On the Configure Analytics on virtual server window: Select the virtual servers for which you want to enable security insight and click Enable Analytics. The Enable Analytics window is displayed. Select Security Insight. Under Advanced Options, select Logstream or IPFIX as the Transport Mode. The Expression is true by default. Click OK. Note: If users select virtual servers that are not licensed, then NetScaler ADM first licenses those virtual servers and then enables analytics. For admin partitions, only Web Insight is supported. After users click OK, NetScaler ADM proceeds to enable analytics on the selected virtual servers. Note: When users create a group, they can assign roles to the group, provide application-level access to the group, and assign users to the group. NetScaler ADM analytics now supports virtual IP address-based authorization. Customer users can now see reports for all Insights for only the applications (virtual servers) for which they are authorized. For more information on groups and assigning users to the group, see Configure Groups on NetScaler ADM. Thresholds Users can set and view thresholds on the safety index and threat index of applications in Security Insight. To set a threshold: Navigate to System > Analytics Settings > Thresholds, and select Add.
Select the traffic type as Security in the Traffic Type field, and enter the required information in the other appropriate fields such as Name, Duration, and entity. In the Rule section, use the Metric, Comparator, and Value fields to set a threshold. For example, "Threat Index" ">" "5". Click Create. To view the threshold breaches: Navigate to Analytics > Security Insight > Devices, and select the ADC instance. In the Application section, users can view the number of threshold breaches that have occurred for each virtual server in the Threshold Breach column. Security Insight Use Case The following use cases describe how users can use Security Insight to assess the threat exposure of applications and improve security measures. Obtain an Overview of the Threat Environment In this use case, users have a set of applications that are exposed to attacks, and they have configured NetScaler ADM to monitor the threat environment. Users need to frequently review the threat index, safety index, and the type and severity of any attacks that the applications might have experienced, so that they can focus first on the applications that need the most attention. The Security Insight dashboard provides a summary of the threats experienced by the user applications over a time period of the user's choosing, and for a selected ADC device. It displays the list of applications, their threat and safety indexes, and the total number of attacks for the chosen time period. For example, users might be monitoring Microsoft Outlook, Microsoft Lync, SharePoint, and an SAP application, and users might want to review a summary of the threat environment for these applications. To obtain a summary of the threat environment, log on to NetScaler ADM, and then navigate to Analytics > Security Insight. Key information is displayed for each application. The default time period is 1 hour. To view information for a different time period, from the list at the top-left, select a time period.
To view a summary for a different ADC instance, under Devices, click the IP address of the ADC instance. To sort the application list by a given column, click the column header. Determine the Threat Exposure of an Application After reviewing a summary of the threat environment on the Security Insight dashboard to identify the applications that have a high threat index and a low safety index, users want to determine their threat exposure before deciding how to secure them. That is, users want to determine the type and severity of the attacks that have degraded their index values. Users can determine the threat exposure of an application by reviewing the application summary. In this example, Microsoft Outlook has a threat index value of 6, and users want to know what factors are contributing to this high threat index. To determine the threat exposure of Microsoft Outlook, on the Security Insight dashboard, click Outlook. The application summary includes a map that identifies the geographic location of the server. Click Threat Index > Security Check Violations and review the violation information that appears. Click Signature Violations and review the violation information that appears. Determine Existing and Missing Security Configuration for an Application After reviewing the threat exposure of an application, users want to determine what application security configurations are in place and what configurations are missing for that application. Users can obtain this information by drilling down into the application’s safety index summary. The safety index summary gives users information about the effectiveness of the following security configurations: Application Firewall Configuration. Shows how many signature and security entities are not configured. NetScaler ADM System Security. Shows how many system security settings are not configured. In the previous use case, users reviewed the threat exposure of Microsoft Outlook, which has a threat index value of 6. 
Now, users want to know what security configurations are in place for Outlook and what configurations can be added to improve its threat index. On the Security Insight dashboard, click Outlook, and then click the Safety Index tab. Review the information provided in the Safety Index Summary area. On the Application Firewall Configuration node, click Outlook_Profile and review the security check and signature violation information in the pie charts. Review the configuration status of each protection type in the application firewall summary table. To sort the table on a column, click the column header. Click the NetScaler ADM System Security node and review the system security settings and NetScaler recommendations to improve the application safety index. Identify Applications That Require Immediate Attention The applications that need immediate attention are those having a high threat index and a low safety index. In this example, both Microsoft Outlook and Microsoft Lync have a high threat index value of 6, but Lync has the lower of the two safety indexes. Therefore, users might have to focus their attention on Lync before improving the threat environment for Outlook. Determine the Number of Attacks in a Given Period of Time Users might want to determine how many attacks occurred on a given application at a given point in time, or they might want to study the attack rate for a specific time period. On the Security Insight page, click any application and in the Application Summary, click the number of violations. The Total Violations page displays the attacks in a graphical manner for one hour, one day, one week, and one month. The Application Summary table provides the details about the attacks. Some of them are as follows: Attack time IP address of the client from which the attack happened Severity Category of violation URL from which the attack originated, and other details. 
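As a rough illustration of how per-window attack counts like these can be derived from individual violation records, the following Python sketch counts hypothetical records that fall inside each reporting window. The field names and sample data are invented for illustration; they are not an actual NetScaler ADM export format.

```python
from datetime import datetime, timedelta

# Illustrative violation records; the fields mirror the Application Summary
# columns described above (attack time, client IP, severity, category, URL)
# but are hypothetical, not a real NetScaler ADM export.
violations = [
    {"time": datetime(2024, 5, 1, 10, 15), "client_ip": "203.0.113.7",
     "severity": "High", "category": "SQL Injection", "url": "/login"},
    {"time": datetime(2024, 5, 1, 10, 40), "client_ip": "203.0.113.9",
     "severity": "Medium", "category": "XSS", "url": "/search"},
    {"time": datetime(2024, 4, 28, 9, 5), "client_ip": "198.51.100.2",
     "severity": "Critical", "category": "Buffer Overflow", "url": "/api"},
]

def attacks_in_window(records, now, window):
    """Count attacks that occurred within `window` before `now`."""
    return sum(1 for r in records if now - r["time"] <= window)

now = datetime(2024, 5, 1, 11, 0)
print(attacks_in_window(violations, now, timedelta(hours=1)))  # one-hour view
print(attacks_in_window(violations, now, timedelta(days=1)))   # one-day view
print(attacks_in_window(violations, now, timedelta(weeks=1)))  # one-week view
```

The same sliding-window count underlies the one-hour, one-day, one-week, and one-month views that the Total Violations page displays.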
While users can always view the time of attack in an hourly report, as seen in the image above, they can now also view the attack time range in aggregated daily or weekly reports. If users select “1 Day” from the time-period list, the Security Insight report displays all attacks aggregated, with the attack time shown in a one-hour range. If users choose “1 Week” or “1 Month,” all attacks are aggregated and the attack time is displayed in a one-day range.

Obtain Detailed Information about Security Breaches

Users might want to view a list of the attacks on an application and gain insights into the type and severity of attacks, actions taken by the ADC instance, resources requested, and the source of the attacks. For example, users might want to determine how many attacks on Microsoft Lync were blocked, what resources were requested, and the IP addresses of the sources. On the Security Insight dashboard, click Lync > Total Violations. In the table, click the filter icon in the Action Taken column header, and then select Blocked. For information about the resources that were requested, review the URL column. For information about the sources of the attacks, review the Client IP column.

View Log Expression Details

NetScaler ADC instances use log expressions configured with the Application Firewall profile to take action for the attacks on an application in the user enterprise. In Security Insight, users can view the values returned for the log expressions used by the ADC instance. These values include the request header, request body, and so on. In addition to the log expression values, users can also view the name of the log expression and the comment for the log expression defined in the Application Firewall profile that the ADC instance used to take action for the attack.

Prerequisites: Ensure that users: Configure log expressions in the Application Firewall profile. For more information, see Application Firewall.
Enable log expression-based Security Insights settings in NetScaler ADM. Do the following: Navigate to Analytics > Settings, and click Enable Features for Analytics. In the Enable Features for Analytics page, select Enable Security Insight under the Log Expression Based Security Insight Setting section and click OK. For example, users might want to view the values of the log expression returned by the ADC instance for the action it took for an attack on Microsoft Lync in the user enterprise. On the Security Insight dashboard, navigate to Lync > Total Violations. In the Application Summary table, click the URL to view the complete details of the violation in the Violation Information page including the log expression name, comment, and the values returned by the ADC instance for the action. Determine the Safety Index before Deploying the Configuration Security breaches occur after users deploy the security configuration on an ADC instance, but users might want to assess the effectiveness of the security configuration before they deploy it. For example, users might want to assess the safety index of the configuration for the SAP application on the ADC instance with IP address 10.102.60.27. On the Security Insight dashboard, under Devices, click the IP address of the ADC instance that users configured. Users can see that both the threat index and the total number of attacks are 0. The threat index is a direct reflection of the number and type of attacks on the application. Zero attacks indicate that the application is not under any threat. Click Sap > Safety Index > SAP_Profile and assess the safety index information that appears. In the application firewall summary, users can view the configuration status of different protection settings. If a setting is set to log or if a setting is not configured, the application is assigned a lower safety index. 
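The relationship described above, where log-only or unconfigured protections pull the safety index down, can be illustrated with a toy scoring function. This is a hypothetical sketch, not NetScaler ADM's actual safety index formula; the weights and the profile contents are invented.

```python
# Hypothetical weights: fully enforced (block) settings score highest,
# log-only settings score partial credit, and unconfigured settings score
# nothing. Illustrative only -- not NetScaler ADM's real computation.
WEIGHTS = {"block": 1.0, "log": 0.5, "not_configured": 0.0}

def safety_index(settings):
    """Return a 0-100 score from a mapping of protection name -> state."""
    if not settings:
        return 0
    score = sum(WEIGHTS[state] for state in settings.values())
    return round(100 * score / len(settings))

# Invented example profile: two enforced checks, one log-only, one missing.
sap_profile = {
    "Start URL": "block",
    "Deny URL": "log",               # log-only lowers the index
    "HTML SQL Inject": "block",
    "Field Formats": "not_configured",
}
print(safety_index(sap_profile))     # lower than 100 due to log/unconfigured
```

A profile with every check set to block would score 100 under this toy model, mirroring the idea that full enforcement yields the highest safety index.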
Security Violations

View Application Security Violation Details

Web applications that are exposed to the internet have become drastically more vulnerable to attacks. NetScaler ADM enables users to visualize actionable violation details to protect applications from attacks. Navigate to Security > Security Violations for a single-pane solution to: Access the application security violations based on their categories such as Network, Bot, and WAF. Take corrective actions to secure the applications.

To view the security violations in NetScaler ADM, ensure: Users have a premium license for the NetScaler ADC instance (for WAF and BOT violations). Users have applied a license on the load balancing or content switching virtual servers (for WAF and BOT). For more information, refer to: Manage Licensing on Virtual Servers. Users enable more settings. For more information, see the procedure available at the Setting up section in the NetScaler product documentation: Setting up.

Violation Categories

NetScaler ADM enables users to view the following violations:

Network: HTTP Slow Loris, DNS Slow Loris, HTTP Slow Post, NXDomain Flood Attack, HTTP Desync Attack, Bleichenbacher Attack, Segment Smack Attack, Syn Flood Attack

Bot: Excessive Client Connections, Account Takeover**, Unusually High Upload Volume, Unusually High Request Rate, Unusually High Download Volume

WAF: Unusually High Upload Transactions, Unusually High Download Transactions, Excessive Unique IPs, Excessive Unique IPs Per Geo

** - Users must configure the account takeover setting in NetScaler ADM. See the prerequisite mentioned in Account Takeover: Account Takeover.
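As a minimal sketch of how a dashboard might bucket observed violations under these categories, the following snippet uses a plain lookup table built from the list above. The function and variable names are illustrative, not part of any NetScaler API.

```python
# Hypothetical lookup grouping violation names under dashboard categories.
# The names come from the category list above; the dict is illustrative.
VIOLATION_CATEGORY = {
    "HTTP Slow Loris": "Network",
    "DNS Slow Loris": "Network",
    "Syn Flood Attack": "Network",
    "Excessive Client Connections": "Bot",
    "Account Takeover": "Bot",
    "Unusually High Request Rate": "Bot",
    "Unusually High Upload Transactions": "WAF",
    "Excessive Unique IPs": "WAF",
}

def by_category(violation_names):
    """Bucket a list of observed violations by their category."""
    buckets = {}
    for name in violation_names:
        category = VIOLATION_CATEGORY.get(name, "Unknown")
        buckets.setdefault(category, []).append(name)
    return buckets

observed = ["Syn Flood Attack", "Account Takeover", "Excessive Unique IPs"]
print(by_category(observed))
```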
Apart from these violations, users can also view the following Security Insight and Bot Insight violations under the WAF and Bot categories respectively:

WAF: Buffer Overflow, Content Type, Cookie Consistency, CSRF Form Tagging, Deny URL, Form Field Consistency, Field Formats, Maximum Uploads, Referrer Header, Safe Commerce, Safe Object, HTML SQL Inject, Start URL, XSS, XML DoS, XML Format, XML WSI, XML SSL, XML Attachment, XML SOAP Fault, XML Validation, Others, IP Reputation, HTTP DOS, TCP Small Window, Signature Violation, File Upload Type, JSON XSS, JSON SQL, JSON DOS, Command Injection, Infer Content Type XML, Cookie Hijack

Bot: Crawler, Feed Fetcher, Link Checker, Marketing, Scraper, Screenshot Creator, Search Engine, Service Agent, Site Monitor, Speed Tester, Tool, Uncategorized, Virus Scanner, Vulnerability Scanner, DeviceFP Wait Exceeded, Invalid DeviceFP, Invalid Captcha Response, Captcha Attempts Exceeded, Valid Captcha Response, Captcha Client Muted, Captcha Wait Time Exceeded, Request Size Limit Exceeded, Rate Limit Exceeded, Blacklist (IP, subnet, policy expression), Whitelist (IP, subnet, policy expression), Zero Pixel Request, Source IP, Host, Geo Location, URL

Setting up

Users must enable Advanced Security Analytics and set Web Transaction Settings to All to view the following violations in NetScaler ADM: Unusually High Upload Transactions (WAF), Unusually High Download Transactions (WAF), Excessive Unique IPs (WAF), Account Takeover (BOT).

For other violations, ensure that Metrics Collector is enabled. By default, Metrics Collector is enabled on the NetScaler ADC instance. For more information, see: Configure Intelligent App Analytics.

Enable Advanced Security Analytics

Navigate to Networks > Instances > NetScaler ADC, and select the instance type. For example, MPX. Select the NetScaler ADC instance and from the Select Action list, select Configure Analytics. Select the virtual server and click Enable Analytics. On the Enable Analytics window: Select Web Insight.
After users select Web Insight, the read-only Advanced Security Analytics option is enabled automatically. Note: The Advanced Security Analytics option is displayed only for premium licensed ADC instances. Select Logstream as the Transport Mode. The Expression is true by default. Click OK.

Enable Web Transaction settings

Navigate to Analytics > Settings. The Settings page is displayed. Click Enable Features for Analytics. Under Web Transaction Settings, select All. Click OK.

Security violations dashboard

In the security violations dashboard, users can view: Total violations that occurred across all ADC instances and applications. The total violations are displayed based on the selected time duration. Total violations under each category. Total ADCs affected, total applications affected, and top violations based on the total occurrences and the affected applications.

Violation details

For each violation, NetScaler ADM monitors the behavior for a specific time duration and detects violations for unusual behaviors. Click each tab to view the violation details. Users can view details such as: The total occurrences, last occurred, and total applications affected. Under event details, users can view: The affected application. Users can also select the application from the list if two or more applications are affected with violations. The graph indicating violations. Drag and select on the graph that lists the violations to narrow down the violation search. Click Reset Zoom to reset the zoom result. Recommended actions that suggest how users can troubleshoot the issue. Other violation details such as the violation occurrence time and detection message.

Bot Insight

Using Bot Insight in NetScaler ADM

After users configure bot management in NetScaler ADC, they must enable Bot Insight on virtual servers to view insights in NetScaler ADM. To enable Bot Insight: Navigate to Networks > Instances > NetScaler ADC and select the instance type. For example, VPX.
Select the instance and from the Select Action list, select Configure Analytics. Select the virtual server and click Enable Analytics. On the Enable Analytics window: Select Bot Insight. Under Advanced Option, select Logstream. Click OK.

After enabling Bot Insight, navigate to Analytics > Security > Bot Insight. Use the time list to view bot details, or drag the slider to select a specific time range and click Go to display the customized results. The page shows the total instances affected by bots and, for the selected instance, the virtual server with total bot attacks:

Total Bots – Indicates the total bot attacks (inclusive of all bot categories) found for the virtual server. Total Human Browsers – Indicates the total human users accessing the virtual server. Bot Human Ratio – Indicates the ratio between human users and bots accessing the virtual server. Signature Bots, Fingerprinted Bots, Rate Based Bots, IP Reputation Bots, Allow List Bots, and Block List Bots – Indicate the total bot attacks that occurred for the configured bot category. For more information about bot categories, see: Configure Bot Detection Techniques in NetScaler ADC. Click > to view bot details in a graph format.

View events history

Users can view the bot signature updates in the Events History when: New bot signatures are added in NetScaler ADC instances. Existing bot signatures are updated in NetScaler ADC instances. Users can select the time duration on the Bot Insight page to view the events history.

The following diagram shows how the bot signatures are retrieved from the AWS cloud, updated on NetScaler ADC, and how the signature update summary is viewed on NetScaler ADM. The bot signature auto update scheduler: Retrieves the mapping file from the AWS URI. Checks the latest signatures in the mapping file against the existing signatures in the ADC appliance. Downloads the new signatures from AWS and verifies the signature integrity. Updates the existing bot signatures with the new signatures in the bot signature file.
Generates an SNMP alert and sends the signature update summary to NetScaler ADM.

View Bots

Click the virtual server to view the Application Summary. It provides details such as: Average RPS – Indicates the average bot transaction requests per second (RPS) received on virtual servers. Bots by Severity – Indicates the highest bot transactions that occurred based on severity. The severity is categorized as Critical, High, Medium, and Low. For example, if the virtual servers have 11770 high severity bots and 1550 critical severity bots, then NetScaler ADM displays Critical 1.55 K under Bots by Severity. Largest Bot Category – Indicates the highest bot attacks that occurred based on the bot category. For example, if the virtual servers have 8000 block listed bots, 5000 allow listed bots, and 10000 Rate Limit Exceeded bots, then NetScaler ADM displays Rate Limit Exceeded 10 K under Largest Bot Category. Largest Geo Source – Indicates the highest bot attacks that occurred based on a region. For example, if the virtual servers have 5000 bot attacks in Santa Clara, 7000 bot attacks in London, and 9000 bot attacks in Bangalore, then NetScaler ADM displays Bangalore 9 K under Largest Geo Source. Average % Bot Traffic – Indicates the average percentage of bot traffic relative to human traffic.

The map view displays the severity of the bot attacks based on locations. The page also displays the types of bot attacks (Good, Bad, and All) and the total bot attacks along with the corresponding configured actions.
For example, if you have configured: An IP address range (192.140.14.9 to 192.140.14.254) as block list bots and selected Drop as the action for this IP address range. An IP range (192.140.15.4 to 192.140.15.254) as block list bots and selected to create a log message as the action for this IP range. In this scenario, NetScaler ADM displays: Total block listed bots. Total bots under Dropped. Total bots under Log.

View CAPTCHA bots

In webpages, CAPTCHAs are designed to identify if the incoming traffic is from a human or an automated bot. To view the CAPTCHA activities in NetScaler ADM, users must configure CAPTCHA as a bot action for the IP reputation and device fingerprint detection techniques in a NetScaler ADC instance. For more information, see: Configure Bot Management. The following are the CAPTCHA activities that NetScaler ADM displays in Bot Insight: Captcha attempts exceeded – Denotes the maximum number of CAPTCHA attempts made after login failures. Captcha client muted – Denotes the number of client requests that are dropped or redirected because these requests were detected as bad bots earlier with the CAPTCHA challenge. Human – Denotes the CAPTCHA entries performed by human users. Invalid captcha response – Denotes the number of incorrect CAPTCHA responses received from a bot or human when NetScaler ADC sends a CAPTCHA challenge.

View bot traps

To view bot traps in NetScaler ADM, you must configure the bot trap in a NetScaler ADC instance. For more information, see: Configure Bot Management. To identify bot traps, a script that is hidden from humans but not from bots is enabled in the webpage. NetScaler ADM identifies and reports the bot traps when this script is accessed by bots. Click the virtual server and select Zero Pixel Request.

View bot details

For further details, click the bot attack type under Bot Category. The details such as attack time and total number of bot attacks for the selected captcha category are displayed.
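The bot-trap mechanism described above, a hidden resource that only automated clients fetch, can be sketched in a few lines. The trap URL and log format here are invented for illustration; on the appliance the trap is configured through bot management, not implemented by hand.

```python
# Hypothetical hidden zero-pixel resource; humans never see or fetch it,
# so any client that requests it is flagged as a bot.
TRAP_URL = "/assets/px.gif"

def flag_bot_clients(access_log):
    """Return the set of client IPs that requested the hidden trap URL."""
    return {entry["client_ip"] for entry in access_log
            if entry["url"] == TRAP_URL}

# Invented access-log entries for illustration.
access_log = [
    {"client_ip": "192.0.2.10", "url": "/index.html"},
    {"client_ip": "192.0.2.20", "url": TRAP_URL},   # bot: fetched the trap
    {"client_ip": "192.0.2.20", "url": "/catalog"},
    {"client_ip": "192.0.2.30", "url": "/index.html"},
]
print(flag_bot_clients(access_log))
```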
Users can also drag the bar graph to select a specific time range to display the bot attacks. To get additional information about the bot attack, click to expand. Instance IP – Indicates the NetScaler ADC instance IP address. Total Bots – Indicates the total bot attacks that occurred during that particular time. HTTP Request URL – Indicates the URL that is configured for captcha reporting. Country Code – Indicates the country where the bot attack occurred. Region – Indicates the region where the bot attack occurred. Profile Name – Indicates the profile name that users provided during the configuration.

Advanced search

Users can also use the search text box and time duration list to view bot details as required. When users click the search box, it gives them the following list of search suggestions: Instance IP – NetScaler ADC instance IP address. Client-IP – Client IP address. Bot-Type – Bot type such as Good or Bad. Severity – Severity of the bot attack. Action-Taken – Action taken after the bot attack such as Drop, No action, Redirect. Bot-Category – Category of the bot attack such as block list, allow list, fingerprint, and so on. Based on a category, users can associate a bot action to it. Bot-Detection – Bot detection types (block list, allow list, and so on) that users have configured on the NetScaler ADC instance. Location – Region/country where the bot attack occurred. Request-URL – URL that has the possible bot attacks.

Users can also use operators in the search queries to narrow the focus of the search.
For example, if users want to view all bad bots: Click the search box and select Bot-Type. Click the search box again and select the operator =. Click the search box again and select Bad. Click Search to display the results.

Bot violation details

Excessive Client Connections

When a client tries to access the web application, the client request is processed in the NetScaler ADC appliance instead of connecting to the server directly. Web traffic comprises bots, and bots can perform various actions at a faster rate than a human. Using the Excessive Client Connections indicator, users can analyze scenarios when an application receives unusually high client connections through bots. Under Event Details, users can view: The affected application. Users can also select the application from the list if two or more applications are affected with violations. The graph indicating all violations. The violation occurrence time. The detection message for the violation, indicating the total IP addresses transacting with the application. The accepted IP address range that the application can receive.

Account Takeover

Note: Ensure users enable the advanced security analytics and web transaction options. For more information, see Setting up: Setting up.

Some malicious bots can steal user credentials and perform various kinds of cyberattacks. These malicious bots are known as bad bots. It is essential to identify bad bots and protect the user appliance from any form of advanced security attacks.

Prerequisite

Users must configure the Account Takeover settings in NetScaler ADM. Navigate to Analytics > Settings > Security Violations. Click Add. On the Add Application page, specify the following parameters: Application - Select the virtual server from the list. Method - Select the HTTP method type from the list. The available options are GET, PUSH, POST, and UPDATE.
Login URL and Success response code - Specify the URL of the web application and specify the HTTP status code (for example, 200) for which users want NetScaler ADM to report the account takeover violation from bad bots. Click Add.

After users configure the settings, using the Account Takeover indicator, users can analyze whether bad bots attempted to take over the user account by issuing multiple requests along with credentials. Under Event Details, users can view: The affected application. Users can also select the application from the list if two or more applications are affected with violations. The graph indicating all violations. The violation occurrence time. The detection message for the violation, indicating total unusual failed login activity, successful logins, and failed logins. The bad bot IP address. Click to view details such as time, IP address, total successful logins, total failed logins, and total requests made from that IP address.

Unusually High Upload Volume

Web traffic also comprises data that is processed for uploading. For example, if the average upload volume per day is 500 MB and users upload 2 GB of data, this can be considered an unusually high upload data volume. Bots are also capable of uploading data more quickly than humans. Using the Unusually High Upload Volume indicator, users can analyze abnormal scenarios of upload data to the application through bots. Under Event Details, users can view: The affected application. Users can also select the application from the list if two or more applications are affected with violations. The graph indicating all violations. The violation occurrence time. The detection message for the violation, indicating the total upload data volume processed. The accepted range of upload data to the application.

Unusually High Download Volume

Similar to high upload volume, bots can also perform downloads more quickly than humans.
Using the Unusually High Download Volume indicator, users can analyze abnormal scenarios of download data from the application through bots. Under Event Details, users can view: The affected application. Users can also select the application from the list if two or more applications are affected with violations. The graph indicating all violations. The violation occurrence time. The detection message for the violation, indicating the total download data volume processed. The accepted range of download data from the application.

Unusually High Request Rate

Users can control the incoming and outgoing traffic from or to an application. A bot attack can produce an unusually high request rate. For example, if users configure an application to allow 100 requests/minute and they observe 350 requests, then it might be a bot attack. Using the Unusually High Request Rate indicator, users can analyze the unusual request rate received by the application. Under Event Details, users can view: The affected application. Users can also select the application from the list if two or more applications are affected with violations. The graph indicating all violations. The violation occurrence time. The detection message for the violation, indicating the total requests received and the percentage of excess requests compared to the expected request rate. The accepted range of the expected request rate for the application.

Use Cases

Bot

Sometimes the incoming web traffic comprises bots, and most organizations suffer from bot attacks. Web and mobile applications are significant revenue drivers for business, and most companies are under the threat of advanced cyberattacks, such as bots. A bot is a software program that automatically performs certain actions repeatedly at a much faster rate than a human. Bots can interact with webpages, submit forms, execute actions, scan texts, or download content. They can access videos, post comments, and tweet on social media platforms.
Some bots, known as chatbots, can hold basic conversations with human users. A bot that performs a helpful service, such as customer service, automated chat, or search engine crawling, is a good bot. At the same time, a bot that scrapes or downloads content from a website, steals user credentials, spams content, or performs other kinds of cyberattacks is a bad bot. With a good number of bad bots performing malicious tasks, it is essential to manage bot traffic and protect user web applications from bot attacks. By using NetScaler bot management, users can detect the incoming bot traffic and mitigate bot attacks to protect their web applications. NetScaler bot management helps identify bad bots and protect the user appliance from advanced security attacks. It detects good and bad bots and identifies if incoming traffic is a bot attack. By using bot management, users can mitigate attacks and protect their web applications.

NetScaler ADC bot management provides the following benefits: Defends against bots, scripts, and toolkits. Provides real-time threat mitigation using static signature-based defense and device fingerprinting. Neutralizes automated basic and advanced attacks. Prevents attacks, such as App layer DDoS, password spraying, password stuffing, price scrapers, and content scrapers. Protects user APIs and investments. Protects user APIs from unwarranted misuse and protects infrastructure investments from automated traffic.

Some use cases where users can benefit by using the NetScaler bot management system are: Brute force login. A government web portal is constantly under attack by bots attempting brute force user logins. The organization discovers the attack by looking through web logs and seeing specific users being attacked repeatedly with rapid login attempts and passwords incrementing using a dictionary attack approach. By law, they must protect themselves and their users.
By deploying the NetScaler bot management, they can stop brute force login using device fingerprinting and rate limiting techniques. Block bad bots and device fingerprint unknown bots. A web entity gets 100,000 visitors each day. They have to upgrade the underlying footprint and they are spending a fortune. In a recent audit, the team discovered that 40 percent of the traffic came from bots, scraping content, picking news, checking user profiles, and more. They want to block this traffic to protect their users and reduce their hosting costs. Using bot management, they can block known bad bots, and fingerprint unknown bots that are hammering their site. By blocking these bots, they can reduce bot traffic by 90 percent. Permit good bots. “Good” bots are designed to help businesses and consumers. They have been around since the early 1990s when the first search engine bots were developed to crawl the Internet. Google, Yahoo, and Bing would not exist without them. Other examples of good bots—mostly consumer-focused—include: Chatbots (a.k.a. chatterbots, smart bots, talk bots, IM bots, social bots, conversation bots) interact with humans through text or sound. One of the first text uses was for online customer service and text messaging apps like Facebook Messenger and iPhone Messages. Siri, Cortana, and Alexa are chatbots; but so are mobile apps that let users order coffee and then tell them when it will be ready, let users watch movie trailers and find local theater showtimes, or send users a picture of the car model and license plate when they request a ride service. Shopbots scour the Internet looking for the lowest prices on items users are searching for. Monitoring bots check on the health (availability and responsiveness) of websites. Downdetector is an example of an independent site that provides real-time status information, including outages, of websites and other kinds of services. For more information on Downdetector, see: Downdetector. 
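The rate-limiting half of the brute force login use case above can be sketched as a sliding-window counter keyed by device fingerprint. The thresholds and the fingerprint key are illustrative; on a NetScaler appliance this is configured as a bot rate-limiting technique rather than written as application code.

```python
from collections import defaultdict, deque

# Illustrative thresholds -- not NetScaler defaults.
WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5

class LoginRateLimiter:
    """Sliding-window login throttle keyed by a device fingerprint."""

    def __init__(self):
        self.attempts = defaultdict(deque)  # fingerprint -> attempt times

    def allow(self, fingerprint, now):
        """Return True if this login attempt is within the allowed rate."""
        q = self.attempts[fingerprint]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()                     # drop attempts outside the window
        if len(q) >= MAX_ATTEMPTS:
            return False                    # rate exceeded: likely a bot
        q.append(now)
        return True

limiter = LoginRateLimiter()
# Eight rapid attempts from the same fingerprint: the first five pass,
# the rest are throttled.
results = [limiter.allow("device-abc", t) for t in range(8)]
print(results)
```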
Bot Detection

Configuring Bot Management by using the NetScaler ADC GUI

Users can configure NetScaler ADC bot management by first enabling the feature on the appliance. Once users enable the feature, they can create a bot policy to evaluate the incoming traffic as bot and send the traffic to the bot profile. Then, users create a bot profile and bind the profile to a bot signature. As an alternative, users can also clone the default bot signature file and use that signature file to configure the detection techniques. After creating the signature file, users can import it into the bot profile. All these steps are performed in the following sequence: Enable the bot management feature. Configure bot management settings. Clone the NetScaler bot default signature. Import the NetScaler bot signature. Configure bot signature settings. Create a bot profile. Create a bot policy.

Enable Bot Management Feature

Follow the steps given below to enable bot management: On the navigation pane, expand System and then click Settings. On the Configure Advanced Features page, select the Bot Management check box. Click OK, and then click Close.

Clone Bot Signature File

Follow the steps given below to clone the bot signature file: Navigate to Security > NetScaler Bot Management and Signatures. On the NetScaler Bot Management Signatures page, select the default bot signatures record and click Clone. On the Clone Bot Signature page, enter a name and edit the signature data. Click Create.

Import Bot Signature File

If users have their own signature file, they can import it as a file, text, or URL. Perform the following steps to import the bot signature file: Navigate to Security > NetScaler Bot Management and Signatures. On the NetScaler Bot Management Signatures page, import the file as URL, File, or text. Click Continue. On the Import NetScaler Bot Management Signature page, set the following parameters. Name. Name of the bot signature file. Comment. Brief description about the imported file. Overwrite.
Select the check box to allow overwriting of data during a file update. Signature Data. Modify the signature parameters. Click Done.

IP Reputation

Configure IP Reputation by using the NetScaler ADC GUI

This configuration is a prerequisite for the bot IP reputation feature. The detection technique enables users to identify any malicious activity from an incoming IP address. As part of the configuration, users set different malicious bot categories and associate a bot action with each of them. Follow the steps below to configure the IP reputation technique: Navigate to Security > NetScaler Bot Management and Profiles. On the NetScaler Bot Management Profiles page, select a signature file and click Edit. On the NetScaler Bot Management Profile page, go to the Signature Settings section and click IP Reputation. In the IP Reputation section, set the following parameters: Enabled. Select the check box to validate incoming bot traffic as part of the detection process. Configure Categories. Users can use the IP reputation technique for incoming bot traffic under different categories. Based on the configured category, users can drop or redirect the bot traffic. Click Add to configure a malicious bot category. In the Configure NetScaler Bot Management Profile IP Reputation Binding page, set the following parameters: Category. Select a malicious bot category from the list, and associate a bot action based on the category. Enabled. Select the check box to validate the IP reputation signature detection. Bot action. Based on the configured category, users can assign no action, drop, redirect, or CAPTCHA action. Log. Select the check box to store log entries. Log Message. Brief description of the log. Comments. Brief description about the bot category. Click OK. Click Update. Click Done.

Auto Update for Bot Signatures

The bot static signature technique uses a signature lookup table with a list of good bots and bad bots. The bots are categorized based on user-agent string and domain names.
If the user-agent string and domain name in incoming bot traffic match a value in the lookup table, a configured bot action is applied. The bot signature updates are hosted on the AWS cloud, and the signature lookup table communicates with the AWS database for signature updates. The auto signature update scheduler runs every hour to check the AWS database and update the signature table in the ADC appliance. The bot signature mapping auto update URL to configure signatures is: Bot Signature Mapping.

Note: Users can also configure a proxy server and periodically update signatures from the AWS cloud to the ADC appliance through the proxy. For proxy configuration, users must set the proxy IP address and port in the bot settings.

Configure Bot Signature Auto Update

To configure bot signature auto update, complete the following steps:

Enable Bot Signature Auto Update

Users must enable the auto update option in the bot settings on the ADC appliance. At the command prompt, type:

set bot settings -signatureAutoUpdate ON

Configure Bot Signature Auto Update using the NetScaler ADC GUI

Complete the following steps to configure bot signature auto update:

1. Navigate to Security > NetScaler Bot Management.
2. In the details pane, under Settings, click Change NetScaler Bot Management Settings.
3. In Configure NetScaler Bot Management Settings, select the Auto Update Signature check box.
4. Click OK and Close.

For more information on configuring IP Reputation using the CLI, see: Configure the IP Reputation Feature Using the CLI.

References

For information on using SQL Fine Grained Relaxations, see: SQL Fine Grained Relaxations. For information on how to configure the SQL Injection Check using the Command Line, see: HTML SQL Injection Check. For information on how to configure the SQL Injection Check using the GUI, see: Using the GUI to Configure the SQL Injection Security Check.
For information on using the Learn Feature with the SQL Injection Check, see: Using the Learn Feature with the SQL Injection Check. For information on using the Log Feature with the SQL Injection Check, see: Using the Log Feature with the SQL Injection Check. For information on Statistics for the SQL Injection violations, see: Statistics for the SQL Injection Violations. For information on SQL Injection Check Highlights, see: Highlights. For information about XML SQL Injection Checks, see: XML SQL Injection Check. For information on using Cross-Site Scripting Fine Grained Relaxations, see: Cross-Site Scripting Fine Grained Relaxations. For information on configuring HTML Cross-Site Scripting using the command line, see: Using the Command Line to Configure the HTML Cross-Site Scripting Check. For information on configuring HTML Cross-Site Scripting using the GUI, see: Using the GUI to Configure the HTML Cross-Site Scripting Check. For information on using the Learn Feature with the HTML Cross-Site Scripting Check, see: Using the Learn Feature with the HTML Cross-Site Scripting Check. For information on using the Log Feature with the HTML Cross-Site Scripting Check, see: Using the Log Feature with the HTML Cross-Site Scripting Check. For information on statistics for the HTML Cross-Site Scripting violations, see: Statistics for the HTML Cross-Site Scripting Violations. For information on HTML Cross-Site Scripting highlights, see: Highlights. For information about XML Cross-Site Scripting, visit: XML Cross-Site Scripting Check. For information on using the command line to configure the Buffer Overflow Security Check, see: Using the Command Line to Configure the Buffer Overflow Security Check. For information on using the GUI to configure the Buffer Overflow Security Check, see: Configure Buffer Overflow Security Check by using the NetScaler ADC GUI.
For information on using the Log Feature with the Buffer Overflow Security Check, see: Using the Log Feature with the Buffer Overflow Security Check. For information on Statistics for the Buffer Overflow violations, see: Statistics for the Buffer Overflow Violations. For information on the Buffer Overflow Security Check Highlights, see: Highlights. For information on Adding or Removing a Signature Object, see: Adding or Removing a Signature Object. For information on creating a signatures object from a template, see: To Create a Signatures Object from a Template. For information on creating a signatures object by importing a file, see: To Create a Signatures Object by Importing a File. For information on creating a signatures object by importing a file using the command line, see: To Create a Signatures Object by Importing a File using the Command Line. For information on removing a signatures object by using the GUI, see: To Remove a Signatures Object by using the GUI. For information on removing a signatures object by using the command line, see: To Remove a Signatures Object by using the Command Line. For information on configuring or modifying a signatures object, see: Configuring or Modifying a Signatures Object. For more information on updating a signature object, see: Updating a Signature Object. For information on using the command line to update Web Application Firewall Signatures from the source, see: To Update the Web Application Firewall Signatures from the Source by using the Command Line. For information on updating a signatures object from a Citrix format file, see: Updating a Signatures Object from a NetScaler Format File. For information on updating a signatures object from a supported vulnerability scanning tool, see: Updating a Signatures Object from a Supported Vulnerability Scanning Tool. For information on Snort Rule Integration, see: Snort Rule Integration. For information on configuring Snort Rules, see: Configure Snort Rules. 
For information about configuring Bot Management using the command line, see: Configure Bot Management. For information about configuring bot management settings for the device fingerprint technique, see: Configure Bot Management Settings for Device Fingerprint Technique. For information on configuring bot allow lists by using the NetScaler ADC GUI, see: Configure Bot White List by using NetScaler ADC GUI. For information on configuring bot block lists by using the NetScaler ADC GUI, see: Configure Bot Black List by using NetScaler ADC GUI. For more information on configuring bot management, see: Configure Bot Management.

Prerequisites

Users need some prerequisite knowledge before deploying a NetScaler VPX instance on Azure:

Familiarity with Azure terminology and network details. For information, see the Azure terminology above.
Knowledge of a NetScaler ADC appliance. For detailed information about the NetScaler ADC appliance, see: NetScaler ADC 13.0.
Knowledge of NetScaler ADC networking. See: Networking.

Azure Prerequisites

This section describes the prerequisites that users must complete in Microsoft Azure and NetScaler ADM before they provision NetScaler ADC VPX instances. This document assumes the following:

Users possess a Microsoft Azure account that supports the Azure Resource Manager deployment model.
Users have a resource group in Microsoft Azure.

For more information on how to create an account and other tasks, visit the Microsoft Azure documentation: Microsoft Azure Documentation.

Limitations

Running the NetScaler ADC VPX load balancing solution on ARM imposes the following limitations:

The Azure architecture does not support the following NetScaler ADC features:
Clustering
IPv6
Gratuitous ARP (GARP)
L2 Mode (bridging). Transparent virtual servers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP.
Tagged VLAN
Dynamic Routing
Virtual MAC
USIP
Jumbo Frames

If users think that they might have to shut down and temporarily deallocate the NetScaler ADC VPX virtual machine at any time, they should assign a static internal IP address while creating the virtual machine. If they do not assign a static internal IP address, Azure might assign the virtual machine a different IP address each time it restarts, and the virtual machine might become inaccessible.

In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000. For more information, see the NetScaler ADC VPX Data Sheet. If a NetScaler ADC VPX instance with a model number higher than VPX 3000 is used, the network throughput might not be the same as specified by the instance's license. However, other features, such as SSL throughput and SSL transactions per second, might improve.

The "deployment ID" that is generated by Azure during virtual machine provisioning is not visible to the user in ARM. Users cannot use the deployment ID to deploy a NetScaler ADC VPX appliance on ARM.

The NetScaler ADC VPX instance supports 20 Mb/s throughput and standard edition features when it is initialized.

For a XenApp and XenDesktop deployment, a VPN virtual server on a VPX instance can be configured in the following modes:
Basic mode, where the ICAOnly VPN virtual server parameter is set to ON. The Basic mode works fully on an unlicensed NetScaler ADC VPX instance.
Smart-Access mode, where the ICAOnly VPN virtual server parameter is set to OFF. The Smart-Access mode works for only 5 NetScaler AAA session users on an unlicensed NetScaler ADC VPX instance.

Note: To configure the Smart Control feature, users must apply a Premium license to the NetScaler ADC VPX instance.

Azure-VPX Supported Models and Licensing

In an Azure deployment, only the following NetScaler ADC VPX models are supported: VPX 10, VPX 200, VPX 1000, and VPX 3000.
For more information, see the NetScaler ADC VPX Data Sheet. A NetScaler ADC VPX instance on Azure requires a license. The following licensing options are available for NetScaler ADC VPX instances running on Azure. Users can choose one of these methods to license NetScaler ADCs provisioned by NetScaler ADM: Using ADC licenses present in NetScaler ADM: Configure pooled capacity, VPX licenses, or virtual CPU licenses while creating the autoscale group. So, when a new instance is provisioned for an autoscale group, the already configured license type is automatically applied to the provisioned instance. Pooled Capacity: Allocates bandwidth to every provisioned instance in the autoscale group. Ensure users have the necessary bandwidth available in NetScaler ADM to provision new instances. For more information, see: Configure Pooled Capacity. Each ADC instance in the autoscale group checks out one instance license and the specified bandwidth from the pool. VPX licenses: Applies the VPX licenses to newly provisioned instances. Ensure users have the necessary number of VPX licenses available in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the license from the NetScaler ADM. For more information, see: NetScaler ADC VPX Check-in and Check-out Licensing. Virtual CPU licenses: Applies virtual CPU licenses to newly provisioned instances. This license specifies the number of CPUs entitled to a NetScaler ADC VPX instance. Ensure users have the necessary number of Virtual CPUs in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the virtual CPU license from the NetScaler ADM. For more information, see: NetScaler ADC Virtual CPU Licensing. When the provisioned instances are destroyed or de-provisioned, the applied licenses are automatically returned to NetScaler ADM. To monitor the consumed licenses, navigate to the Networks > Licenses page. 
Using Microsoft Azure subscription licenses: Configure NetScaler ADC licenses available in Azure Marketplace while creating the autoscale group. When a new instance is provisioned for the autoscale group, the license is obtained from Azure Marketplace.

Supported NetScaler ADC Azure Virtual Machine Images for Provisioning

Use an Azure virtual machine image that supports a minimum of three NICs. Provisioning a NetScaler ADC VPX instance is supported only on the Premium and Advanced editions. For more information on Azure virtual machine image types, see: General Purpose Virtual Machine Sizes. The following are the recommended VM sizes for provisioning:

Standard_DS3_v2
Standard_B2ms
Standard_DS4_v2

Port Usage Guidelines

Users can configure more inbound and outbound rules in the NSG while creating the NetScaler VPX instance or after the virtual machine is provisioned. Each inbound and outbound rule is associated with a public port and a private port. Before configuring NSG rules, note the following guidelines regarding the port numbers users can use:

The NetScaler VPX instance reserves the following ports. Users cannot define these as private ports when using the Public IP address for requests from the internet: ports 21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003, 3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000. However, if users want internet-facing services such as the VIP to use a standard port (for example, port 443), they have to create a port mapping by using the NSG. The standard port is then mapped to a different port that is configured on the NetScaler ADC VPX for this VIP service. For example, a VIP service might be running on port 8443 on the VPX instance but be mapped to public port 443. So, when the user accesses port 443 through the Public IP, the request is directed to private port 8443.
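The public-to-private port mapping described above can be sketched as a small lookup. This is an illustrative Python model, not NetScaler or Azure code; the reserved-port set is taken from the list above, and the 443-to-8443 mapping is the document's own example.

```python
# Illustrative model of NSG port mapping for a NetScaler VPX in Azure.
# Reserved ports cannot be used as private ports for internet-facing
# requests; a standard public port maps to a different private port.
RESERVED_PORTS = {21, 22, 80, 443, 8080, 67, 161, 179, 500, 520, 3003,
                  3008, 3009, 3010, 3011, 4001, 5061, 9000, 7000}
PORT_MAP = {443: 8443}  # public port -> private port (example mapping)

def resolve_private_port(public_port: int) -> int:
    """Return the private port a request on a public port is directed to."""
    private = PORT_MAP.get(public_port, public_port)
    if private in RESERVED_PORTS:
        raise ValueError(f"port {private} is reserved by the VPX instance")
    return private
```

With this model, a request to public port 443 resolves to private port 8443, while a port with no mapping that falls on a reserved port (for example, 22) is rejected.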
The Public IP address does not support protocols in which port mapping is opened dynamically, such as passive FTP or ALG. High availability does not work for traffic that uses a public IP address (PIP) associated with a VPX instance, instead of a PIP configured on the Azure load balancer. For more information, see: Configure a High-Availability Setup with a Single IP Address and a Single NIC. In a NetScaler Gateway deployment, users need not configure a SNIP address, because the NSIP can be used as a SNIP when no SNIP is configured. Users must configure the VIP address by using the NSIP address and some nonstandard port number. For call-back configuration on the back-end server, the VIP port number has to be specified along with the VIP URL (for example, url: port). Note: In Azure Resource Manager, a NetScaler ADC VPX instance is associated with two IP addresses - a public IP address (PIP) and an internal IP address. While the external traffic connects to the PIP, the internal IP address or the NSIP is non-routable. To configure a VIP in VPX, use the internal IP address (NSIP) and any of the free ports available. Do not use the PIP to configure a VIP. For example, if NSIP of a NetScaler ADC VPX instance is 10.1.0.3 and an available free port is 10022, then users can configure a VIP by providing the 10.1.0.3:10022 (NSIP address + port) combination.
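The note above reduces to a simple rule: build the VIP endpoint from the internal NSIP address plus a free port, never from the PIP. A hypothetical helper illustrating that rule (the function name and signature are illustrative, not a NetScaler API):

```python
def vip_endpoint(nsip: str, free_port: int, pip: str) -> str:
    """Build a VIP endpoint from the NSIP and a free port; reject the PIP."""
    if nsip == pip:
        raise ValueError("do not use the PIP to configure a VIP")
    return f"{nsip}:{free_port}"
```

With the document's example values, an NSIP of 10.1.0.3 and free port 10022 yield the endpoint 10.1.0.3:10022.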
Overview

NetScaler ADC is an application delivery and load balancing solution that provides a high-quality user experience for web, traditional, and cloud-native applications regardless of where they are hosted. It comes in a wide variety of form factors and deployment options without locking users into a single configuration or cloud. Pooled capacity licensing enables the movement of capacity among cloud deployments. As an undisputed leader of service and application delivery, NetScaler ADC is deployed in thousands of networks around the world to optimize, secure, and control the delivery of all enterprise and cloud services. Deployed directly in front of web and database servers, NetScaler ADC combines high-speed load balancing and content switching, HTTP compression, content caching, SSL acceleration, application flow visibility, and a powerful application firewall into an integrated, easy-to-use platform. Meeting SLAs is greatly simplified with end-to-end monitoring that transforms network data into actionable business intelligence. NetScaler ADC allows policies to be defined and managed using a simple declarative policy engine with no programming expertise required.

NetScaler VPX

The NetScaler ADC VPX product is a virtual appliance that can be hosted on a wide variety of virtualization and cloud platforms:

XenServer
VMware ESX
Microsoft Hyper-V
Linux KVM
Amazon Web Services
Microsoft Azure
Google Cloud Platform

This deployment guide focuses on NetScaler ADC VPX on Amazon Web Services.

Amazon Web Services

Amazon Web Services (AWS) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS), and packaged software as a service (SaaS) offerings. AWS services can offer tools such as compute power, database storage, and content delivery services.
AWS offers the following essential services:

AWS Compute Services
Migration Services
Storage
Database Services
Management Tools
Security Services
Analytics
Networking
Messaging
Developer Tools
Mobile Services

AWS Terminology

Here is a brief description of the key terms used in this document that users must be familiar with:

Amazon Machine Image (AMI) - A machine image, which provides the information required to launch an instance, which is a virtual server in the cloud.
Auto Scaling - A web service to launch or terminate Amazon EC2 instances automatically based on user-defined policies, schedules, and health checks.
AWS Auto Scaling Group - An AWS auto scaling group is a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management.
Elastic Block Store - Provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud.
Elastic Compute Cloud (EC2) - A web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
Elastic Load Balancing (ELB) - Distributes incoming application traffic across multiple EC2 instances, in multiple Availability Zones. Distributing the traffic increases the fault tolerance of user applications.
Elastic Network Interface (ENI) - A virtual network interface that users can attach to an instance in a Virtual Private Cloud (VPC).
Elastic IP (EIP) address - A static, public IPv4 address that users have allocated in Amazon EC2 or Amazon VPC and then attached to an instance. Elastic IP addresses are associated with user accounts, not a specific instance. They are elastic because users can easily allocate, attach, detach, and free them as their needs change.
IAM-Instance-Profile - An identity provided to the NetScaler ADC instances provisioned in a cluster in AWS.
The profile allows the instances to access AWS services when it starts to load balance the client requests. Identity and Access Management (IAM) - An AWS identity with permission policies that determine what the identity can and cannot do in AWS. Users can use an IAM role to enable applications running on an EC2 instance to securely access their AWS resources. IAM role is required for deploying VPX instances in a high-availability setup. Instance type - Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give users the flexibility to choose the appropriate mix of resources for their applications. Listener - A listener is a process that checks for connection requests, using the protocol and port that you configure. The rules that you define for a listener determine how the load balancer routes requests to the targets in one or more target groups. NLB - Network load balancer. NLB is an L4 load balancer available in the AWS environment. Route 53 - Route 53 is Amazon’s highly available and scalable cloud domain name system (DNS) web service. Security groups - A named set of allowed inbound network connections for an instance. Subnet - A segment of the IP address range of a VPC with which EC2 instances can be attached. Users can create subnets to group instances according to security and operational needs. Virtual Private Cloud (VPC) - A web service for provisioning a logically isolated section of the AWS cloud where users can launch AWS resources in a virtual network that they define. Here is a brief description of other terms used in this document that we recommend you are familiar with: Autoscale Groups - An Autoscale group is a group of NetScaler ADC instances that load balance applications as a single entity and trigger autoscaling when the threshold parameters breach the limits. 
NetScaler ADC instances scale out or scale in dynamically based on the autoscale group's configuration.

Note: A NetScaler autoscale group is called an autoscale group throughout this document, whereas the AWS autoscale group is explicitly called an AWS autoscale group.

NetScaler ADC Clusters - A NetScaler ADC cluster is a group of NetScaler ADC VPX instances, and each instance is called a node. The client traffic is distributed across the nodes to provide high availability, high throughput, and scalability.
CloudFormation - A service for writing or changing templates that create and delete related AWS resources together as a unit.
Cooldown period - After a scale-out, the cooldown period is the time for which evaluation of the statistics is stopped. The cooldown period ensures organic growth of an autoscale group by allowing current traffic to stabilize and average out on the current set of instances before the next scaling decision is made. The default cooldown period value is 10 minutes and is configurable.

Note: The default value is determined based on the time required for the system to stabilize after a scale-out (approximately 4 minutes) plus NetScaler ADC configuration and DNS advertisement time.

Drain Connection Timeout - During scale-in, once an instance is selected for deprovisioning, NetScaler ADM removes the instance from processing new connections to the autoscale group and waits until the specified drain connection timeout period expires before deprovisioning. This timeout allows existing connections to this instance to be drained before it gets deprovisioned. Even if the connections are drained before the drain connection timeout expires, NetScaler ADM still waits for the timeout period to expire before starting a new evaluation.

Note: If the connections are not drained even after the drain connection timeout expires, NetScaler ADM removes the instance anyway, which might impact the application. The default value is 5 minutes and is configurable.
Key pair - A set of security credentials with which users prove their identity electronically. A key pair consists of a private key and a public key. Route table - A set of routing rules that controls the traffic leaving any subnet that is associated with the route table. Users can associate multiple subnets with a single route table, but a subnet can be associated with only one route table at a time. Simple Storage Service (S3) - Storage for the Internet. It is designed to make web-scale computing easier for developers. Tags - Each autoscale group is assigned a tag which is a key and value pair. You can apply tags to the resources that enable you to organize and identify resources easily. The tags are applied to both AWS and NetScaler ADM. Example: Key= name, Value = webserver. Use a consistent set of tags to easily track the autoscale groups that might belong to various groups such as development, production, testing. Threshold Parameters - Parameters that are monitored for triggering scale-out or scale-in. The parameters are CPU usage, memory usage, and throughput. You can select one parameter or more than one parameter for monitoring. Time to Live (TTL) - Specifies the time interval that the DNS resource record may be cached before the source of the information should again be consulted. Default TTL value is 30 seconds and is configurable. Watch Time - The time for which the scale parameter’s threshold has to stay breached for a scaling to happen. If the threshold is breached on all the samples collected in this specified time, then a scaling happens. If the threshold parameters remain at a value higher than the maximum threshold value throughout this duration, a scale-out is triggered. If the threshold parameters operate at a value lower than the minimum threshold value, a scale-in is triggered. Default value is 3 minutes and is configurable. 
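The autoscale timing terms defined above (watch time, cooldown period, drain connection timeout) can be sketched as a small model. This is illustrative Python, not NetScaler ADM code; the defaults follow the document (watch time 3 minutes, cooldown 10 minutes, drain timeout 5 minutes), and all three are configurable.

```python
# Illustrative model of the autoscale evaluation terms defined above.
WATCH_TIME_S = 180      # threshold must stay breached for this long
COOLDOWN_S = 600        # pause after a scale-out before re-evaluating
DRAIN_TIMEOUT_S = 300   # wait after selecting an instance for scale-in

def scaling_decision(samples, min_threshold, max_threshold):
    """Scale only if every sample in the watch window breaches a threshold."""
    if samples and all(s > max_threshold for s in samples):
        return "scale-out"
    if samples and all(s < min_threshold for s in samples):
        return "scale-in"
    return "none"

def pause_after(decision):
    """How long ADM pauses before the next evaluation.

    After a scale-in, the full drain timeout is waited even if existing
    connections drain early.
    """
    if decision == "scale-out":
        return COOLDOWN_S
    if decision == "scale-in":
        return DRAIN_TIMEOUT_S
    return 0
```

For example, CPU samples of 90, 95, and 92 percent against a maximum threshold of 80 trigger a scale-out, while a single non-breaching sample in the window (90, 50, 92) triggers nothing.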
Use Cases

Compared to alternative solutions that require each service to be deployed as a separate virtual appliance, NetScaler ADC on AWS combines L4 load balancing, L7 traffic management, server offload, application acceleration, application security, and other essential application delivery capabilities in a single VPX instance, conveniently available via the AWS Marketplace. Furthermore, everything is governed by a single policy framework and managed with the same, powerful set of tools used to administer on-premises NetScaler ADC deployments. The net result is that NetScaler ADC on AWS enables several compelling use cases that not only support the immediate needs of today's enterprises, but also the ongoing evolution from legacy computing infrastructures to enterprise cloud data centers.

Data Center Expansion with Autoscale

Organizations looking to expand their NetScaler footprint in the public cloud are also considering native public cloud services. A common use case is for businesses to migrate to the cloud at their own pace so that they can focus on higher-ROI workloads or applications. Our solution, which often includes pooled capacity licensing to keep workloads both on-premises and in the cloud by decoupling bandwidth from instances, lets them move to whichever cloud they choose at their own pace. The public cloud also provides elasticity, which is a significant use case for customers who want to host applications on demand without worrying about over- or under-provisioning of resources. Efficient hosting of applications in a cloud involves easy and cost-effective management of resources depending on the application demand. For example, consider a business with an e-commerce web portal running on AWS. This portal sometimes offers large discounts, during which there is a spike in application traffic.
When application traffic increases during these offers, the applications must be scaled out dynamically, and network resources might also need to be increased. The NetScaler ADM autoscaling feature supports provisioning and autoscaling of NetScaler ADC instances in AWS. It constantly monitors threshold parameters such as memory usage, CPU usage, and throughput. Users can select one or more of these parameters for monitoring. The parameter values are then compared to the user-configured values, and if the values breach the limits, a scale-out or scale-in is triggered as needed. The NetScaler ADM autoscale feature is designed so that users can configure the minimum and maximum number of instances for each autoscale group. Pre-setting these numbers ensures that each application is always up and running and aligned to customer demand.

Benefits of Autoscaling

High availability of applications. Autoscaling ensures that your application always has the right number of NetScaler ADC VPX instances to handle the traffic demands, so that your application is up and running all the time irrespective of traffic demands.

Smart scaling decisions and zero-touch configuration. Autoscaling continuously monitors your application and adds or removes NetScaler ADC instances dynamically depending on the demand. When demand spikes upward, instances are automatically added; when demand drops, instances are automatically removed. The addition and removal of NetScaler ADC instances happens automatically, with no manual configuration required.

Automatic DNS management. The NetScaler ADM autoscale feature offers automatic DNS management. Whenever new NetScaler ADC instances are added, the domain names are updated automatically.

Graceful connection termination.
During a scale-in, the NetScaler ADC instances are gracefully removed, avoiding the loss of client connections.

Better cost management. Autoscaling dynamically increases or decreases NetScaler ADC instances as needed. Running only the needed instances enables users to optimize costs: users save money by launching instances only when they are needed and terminating them when they are not. Thus, users pay only for the resources they use.

Observability. Observability is essential for application dev-ops or IT personnel to monitor the health of the application. The NetScaler ADM autoscale dashboard enables users to visualize the threshold parameter values, autoscale trigger time stamps, events, and the instances participating in autoscale.

Deployment Types

Three-NIC Deployment

Typical Deployments:
StyleBook driven
With ADM
With GSLB (Route53 with domain registration)
Licensing - Pooled/Marketplace

Use Cases

Three-NIC deployments are used to achieve real isolation of data and management traffic, and they also improve the scale and performance of the ADC. A Three-NIC deployment is recommended for network applications where throughput is typically 1 Gbps or higher.

CFT Deployment

Customers deploy using CloudFormation Templates when they want to customize or automate their deployments.

Deployment Steps

Three-NIC Deployment for Data Center Expansion with Autoscale

The NetScaler ADC VPX instance is available as an Amazon Machine Image (AMI) in the AWS Marketplace, and it can be launched as an Elastic Compute Cloud (EC2) instance within an AWS VPC. The minimum supported EC2 instance type for the NetScaler VPX AMI is m4.large. The NetScaler ADC VPX AMI instance requires a minimum of 2 virtual CPUs and 2 GB of memory.
An EC2 instance launched within an AWS VPC can also provide the multiple interfaces, multiple IP addresses per interface, and public and private IP addresses needed for VPX configuration. Each VPX instance requires at least three IP subnets:

A management subnet
A client-facing subnet (VIP)
A back-end facing subnet (SNIP)

NetScaler recommends three network interfaces for a standard VPX installation on AWS. AWS currently makes multi-IP functionality available only to instances running within an AWS VPC. A VPX instance in a VPC can be used to load balance servers running in EC2 instances. An Amazon VPC allows users to create and control a virtual networking environment, including their own IP address range, subnets, route tables, and network gateways.

Note: By default, users can create up to 5 VPC instances per AWS region for each AWS account. Users can request higher VPC limits by submitting Amazon's request form: Amazon VPC Request.

Licensing Requirements

The NetScaler ADC instances that are created for the NetScaler autoscale group use NetScaler ADC Advanced or Premium licenses. The NetScaler ADC clustering feature is included in the Advanced and Premium licenses. Users can choose one of the following methods to license NetScaler ADCs provisioned by NetScaler ADM:

Using ADC licenses present in NetScaler ADM: Configure pooled capacity, VPX licenses, or virtual CPU licenses while creating the autoscale group. When a new instance is provisioned for an autoscale group, the already configured license type is automatically applied to the provisioned instance.

Pooled Capacity: Allocates bandwidth to every provisioned instance in the autoscale group. Ensure users have the necessary bandwidth available in NetScaler ADM to provision new instances. For more information, see: Configure Pooled Capacity. Each ADC instance in the autoscale group checks out one instance license and the specified bandwidth from the pool.
VPX licenses: Applies VPX licenses to newly provisioned instances. Ensure that the necessary number of VPX licenses is available in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the license from NetScaler ADM. For more information, see: NetScaler ADC VPX Check-in and Check-out Licensing.

Virtual CPU licenses: Applies virtual CPU licenses to newly provisioned instances. This license specifies the number of CPUs that a NetScaler ADC VPX instance is entitled to. Ensure that the necessary number of virtual CPUs is available in NetScaler ADM to provision new instances. When a NetScaler ADC VPX instance is provisioned, the instance checks out the virtual CPU license from NetScaler ADM. For more information, see: NetScaler ADC Virtual CPU Licensing.

When the provisioned instances are destroyed or deprovisioned, the applied licenses are automatically returned to NetScaler ADM. To monitor the consumed licenses, navigate to the Networks > Licenses page.

Using AWS subscription licenses: Configure NetScaler ADC licenses available in the AWS Marketplace while creating the autoscale group. When a new instance is provisioned for the autoscale group, the license is obtained from the AWS Marketplace.

Deploying NetScaler ADC VPX Instances on AWS

When customers move their applications to the cloud, the components that are part of their application increase, become more distributed, and need to be dynamically managed. With NetScaler ADC VPX instances on AWS, users can seamlessly extend their L4-L7 network stack to AWS, making AWS a natural extension of their on-premises IT infrastructure. Customers can use NetScaler ADC VPX on AWS to combine the elasticity and flexibility of the cloud with the same optimization, security, and control features that support the most demanding websites and applications in the world.
With NetScaler Application Delivery Management (ADM) monitoring their NetScaler ADC instances, users gain visibility into the health, performance, and security of their applications. They can automate the setup, deployment, and management of their application delivery infrastructure across hybrid multi-cloud environments.

Architecture Diagram

The following image provides an overview of how NetScaler ADM connects with AWS to provision NetScaler ADC VPX instances in AWS.

Configuration Tasks

Perform the following tasks on AWS before provisioning NetScaler ADC VPX instances in NetScaler ADM:

Create subnets
Create security groups
Create an IAM role and define a policy

Perform the following tasks on NetScaler ADM to provision the instances on AWS:

Create a site
Provision the NetScaler ADC VPX instance on AWS

To Create Subnets

Create three subnets in a VPC. The three subnets required to provision NetScaler ADC VPX instances in a VPC are management, client, and server. Specify an IPv4 CIDR block from the range defined in the VPC for each of the subnets. Specify the availability zone in which each subnet is to reside, and create all three subnets in the same availability zone. The following image illustrates the three subnets created in the customer region and their connectivity to the client system.

For more information on VPCs and subnets, see: VPCs and Subnets.

To Create Security Groups

Create a security group to control inbound and outbound traffic in the NetScaler ADC VPX instance. A security group acts as a virtual firewall for a user instance. Create security groups at the instance level, not at the subnet level. It is possible to assign each instance in a subnet in the user VPC to a different set of security groups. Add rules for each security group to control the inbound traffic that passes through the client subnet to instances.
Users can also add a separate set of rules that control the outbound traffic that passes through the server subnet to the application servers. Although users can use the default security group for their instances, they might want to create their own groups. Create three security groups, one for each subnet, and create rules for both the incoming and outgoing traffic that users want to control. Users can add as many rules as they want.

For more information on security groups, see: Security Groups for Your VPC.

To Create an IAM Role and Define a Policy

Create an IAM role so that customers can establish a trust relationship between their users and the NetScaler trusted AWS account, and create a policy with NetScaler permissions.

In AWS, click Services. In the left navigation pane, select IAM > Roles > Create role.
Users are connecting their AWS account with the AWS account in NetScaler ADM, so select Another AWS account to allow NetScaler ADM to perform actions in the AWS account.
Type in the 12-digit NetScaler ADM AWS account ID. The NetScaler ID is 835822366011. Users can also find the NetScaler ID in NetScaler ADM when they create the cloud access profile.
Enable Require external ID to connect to a third-party account. Users can increase the security of their roles by requiring an optional external identifier. Type an ID that can be a combination of any characters.
Click Permissions.
On the Attach permissions policies page, click Create policy.
The list of permissions from NetScaler is provided in the following box:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances", "ec2:DescribeImageAttribute", "ec2:DescribeInstanceAttribute", "ec2:DescribeRegions",
        "ec2:DescribeDhcpOptions", "ec2:DescribeSecurityGroups", "ec2:DescribeHosts", "ec2:DescribeImages",
        "ec2:DescribeVpcs", "ec2:DescribeSubnets", "ec2:DescribeNetworkInterfaces", "ec2:DescribeAvailabilityZones",
        "ec2:DescribeNetworkInterfaceAttribute", "ec2:DescribeInstanceStatus", "ec2:DescribeAddresses", "ec2:DescribeKeyPairs",
        "ec2:DescribeTags", "ec2:DescribeVolumeStatus", "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute",
        "ec2:CreateTags", "ec2:DeleteTags", "ec2:CreateKeyPair", "ec2:DeleteKeyPair",
        "ec2:ResetInstanceAttribute", "ec2:RunScheduledInstances", "ec2:ReportInstanceStatus", "ec2:StartInstances",
        "ec2:RunInstances", "ec2:StopInstances", "ec2:UnmonitorInstances", "ec2:MonitorInstances",
        "ec2:RebootInstances", "ec2:TerminateInstances", "ec2:ModifyInstanceAttribute", "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses", "ec2:CreateNetworkInterface", "ec2:AttachNetworkInterface", "ec2:DetachNetworkInterface",
        "ec2:DeleteNetworkInterface", "ec2:ResetNetworkInterfaceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:AssociateAddress",
        "ec2:AllocateAddress", "ec2:ReleaseAddress", "ec2:DisassociateAddress", "ec2:GetConsoleOutput"
      ],
      "Resource": "*"
    }
  ]
}

Copy and paste the list of permissions in the JSON tab and click Review policy.
On the Review policy page, type a name for the policy, enter a description, and click Create policy.

To Create a Site in NetScaler ADM

Create a site in NetScaler ADM and add the details of the VPC associated with the AWS role.

In NetScaler ADM, navigate to Networks > Sites.
Click Add.
Select the service type as AWS and enable Use existing VPC as a site.
Select the cloud access profile.
If the cloud access profile does not exist in the field, click Add to create a profile.
In the Create Cloud Access Profile page, type the name of the profile with which users want to access AWS.
Type the ARN associated with the role that users have created in AWS.
Type the external ID that users provided while creating an Identity and Access Management (IAM) role in AWS. See step 4 in the "To create an IAM role and define a policy" task. Ensure that the IAM role name specified in AWS starts with "NetScaler-ADM-" and that it correctly appears in the Role ARN.
The details of the VPC associated with the user IAM role in AWS, such as the region, VPC ID, name, and CIDR block, are imported into NetScaler ADM.
Type a name for the site.
Click Create.

To Provision NetScaler ADC VPX on AWS

Use the site created earlier to provision the NetScaler ADC VPX instances on AWS. Provide NetScaler ADM service agent details to provision those instances that are bound to that agent.

In NetScaler ADM, navigate to Networks > Instances > NetScaler ADC.
In the VPX tab, click Provision. This option displays the Provision NetScaler ADC VPX on Cloud page.
Select Amazon Web Services (AWS) and click Next.
In Basic Parameters, select the Type of Instance from the list.
Standalone: This option provisions a standalone NetScaler ADC VPX instance on AWS.
HA: This option provisions high availability NetScaler ADC VPX instances on AWS. To provision the NetScaler ADC VPX instances in the same zone, select the Single Zone option under Zone Type. To provision the NetScaler ADC VPX instances across multiple zones, select the Multi Zone option under Zone Type. In the Cloud Parameters tab, make sure to specify the network details for each zone that is created on AWS.
Specify the name of your NetScaler ADC VPX instance.
In Site, select the site that you created earlier.
In Agent, select the agent that is created to manage your NetScaler ADC VPX instance.
In Cloud Access Profile, select the cloud access profile created during site creation.
In Device Profile, select the profile to provide authentication. NetScaler ADM uses the device profile when it needs to log on to the NetScaler ADC VPX instance.
Click Next.
In Cloud Parameters, select the NetScaler IAM Role created in AWS. An IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS.
In the Product field, select the NetScaler ADC product version that users want to provision.
Select the EC2 instance type from the Instance Type list.
Select the Version of NetScaler ADC that users want to provision. Select both the Major and Minor version of NetScaler ADC.
In Security Groups, select the Management, Client, and Server security groups that users created in their virtual network.
In IPs in Server Subnet per Node, select the number of IP addresses in the server subnet per node for the security group.
In Subnets, select the Management, Client, and Server subnets for each zone that are created in AWS. Users can also select the region from the Availability Zone list.
Click Finish.

The NetScaler ADC VPX instance is now provisioned on AWS.

Note: Currently, NetScaler ADM does not support deprovisioning of NetScaler ADC instances from AWS.

To View the NetScaler ADC VPX Provisioned in AWS

From the AWS home page, navigate to Services and click EC2. On the Resources page, click Running Instances. Users can view the NetScaler ADC VPX provisioned in AWS. The name of the NetScaler ADC VPX instance is the same name users provided while provisioning the instance in NetScaler ADM.

To View the NetScaler ADC VPX Provisioned in NetScaler ADM

In NetScaler ADM, navigate to Networks > Instances > NetScaler ADC. Select the NetScaler ADC VPX tab. The NetScaler ADC VPX instance provisioned in AWS is listed here.
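Because the provisioned VPX keeps the name given in NetScaler ADM, it can also be located programmatically in the output of the EC2 DescribeInstances API by its Name tag. A minimal sketch (the helper name and the sample response are invented, and a real response contains many more fields):

```python
# Hypothetical sketch: pick the provisioned VPX out of a (heavily truncated)
# EC2 DescribeInstances-style response by matching the Name tag that was
# assigned during provisioning in NetScaler ADM. The sample data is invented.
def find_instances_by_name(reservations, name):
    """Return the instance IDs whose Name tag equals the given name."""
    matches = []
    for res in reservations:
        for inst in res.get("Instances", []):
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if tags.get("Name") == name:
                matches.append(inst["InstanceId"])
    return matches

sample = [{"Instances": [
    {"InstanceId": "i-0abc123", "Tags": [{"Key": "Name", "Value": "my-vpx"}]},
    {"InstanceId": "i-0def456", "Tags": [{"Key": "Name", "Value": "other"}]},
]}]
print(find_instances_by_name(sample, "my-vpx"))  # ['i-0abc123']
```

With real AWS access, the `reservations` list would come from a DescribeInstances call (for example, boto3's `ec2.describe_instances()["Reservations"]`).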
Autoscaling of NetScaler ADC in AWS using NetScaler ADM

Autoscaling Architecture

The following diagram illustrates the architecture of the autoscaling feature with DNS as the traffic distributor. The following diagram illustrates the architecture of the autoscaling feature with NLB as the traffic distributor.

NetScaler Application Delivery Management (ADM)

NetScaler Application Delivery Management is a web-based solution for managing all NetScaler ADC deployments, whether deployed on premises or in the cloud. You can use this cloud solution to manage, monitor, and troubleshoot the entire global application delivery infrastructure from a single, unified, centralized cloud-based console. NetScaler ADM provides all the capabilities required to quickly set up, deploy, and manage application delivery in NetScaler ADC deployments, along with rich analytics of application health, performance, and security. The autoscale groups are created in NetScaler ADM, and the NetScaler ADC VPX instances are provisioned from NetScaler ADM. The application is then deployed through StyleBooks in NetScaler ADM.

Traffic Distributors (NLB or DNS/Route53)

NLB or DNS/Route53 is used to distribute traffic across all the nodes in an autoscale group. See Autoscale traffic distribution modes for more information. NetScaler ADM communicates with the traffic distributor to update the application domain and the IP addresses of the load balancing virtual servers that front-end the application.

NetScaler ADM Autoscale Group

An autoscale group is a group of NetScaler ADC instances that load balance applications as a single entity and trigger autoscaling based on the configured threshold parameter values.

NetScaler ADC Clusters

A NetScaler ADC cluster is a group of NetScaler ADC VPX instances in which each instance is called a node. The client traffic is distributed across the nodes to provide high availability, high throughput, and scalability.
Note: Autoscaling decisions are made at the cluster level, not at the node level. Independent clusters are hosted in different availability zones, so support for some of the shared-state features is limited. Persistence sessions, such as source IP persistence (with the exception of cookie-based persistence), cannot be shared across clusters. However, all the stateless features, such as load balancing methods, work as expected across the multiple availability zones.

AWS Auto Scaling Groups

An AWS auto scaling group is a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management.

AWS Availability Zones

An AWS availability zone is an isolated location inside a region. Each region is made up of several availability zones, and each availability zone belongs to a single region.

Traffic Distribution Modes

As users move their application deployment to the cloud, autoscaling becomes a part of the infrastructure. As the applications scale out or scale in using autoscaling, these changes must be propagated to the client. This propagation is achieved using DNS-based or NLB-based autoscaling.

Traffic Distribution Use Cases

Feature      | Supported for NLB | Supported for DNS
HTTPS        | Supported         | Supported
WAF          | Supported         | Supported
Gateway      | Not Supported     | Not Supported *
ICA Proxy    | Not Supported     | Not Supported *
EDT Support  | Not Supported     | Not Supported

NLB-based Autoscaling

In NLB-based deployment mode, the distribution tier to the cluster nodes is the AWS Network Load Balancer. In NLB-based autoscaling, only one static IP address is offered per availability zone. This is the public IP address that is added to Route53, and the back-end IP addresses can be private. With this public IP address, any new NetScaler ADC instance provisioned during autoscaling operates using private IP addresses and does not require extra public IP addresses. Use NLB-based autoscaling to manage TCP traffic.
Use DNS-based autoscaling to manage UDP traffic.

DNS-based Autoscaling

In DNS-based autoscaling, DNS acts as the distribution layer to the NetScaler ADC cluster nodes. The scaling changes are propagated to the client by updating the domain name corresponding to the application. Currently, the DNS provider is AWS Route53.

Note: In DNS-based autoscaling, each NetScaler ADC instance requires a public IP address.

Important: Autoscaling supports all the NetScaler ADC features except the following features, which require a spotted configuration on cluster nodes:

GSLB Virtual Servers
NetScaler Gateway and its features
Telco features

For more information on spotted configuration, see Striped, Partially Striped, and Spotted Configurations.

How Autoscaling Works

The following flowchart illustrates the autoscaling workflow. NetScaler ADM collects statistics (CPU usage, memory usage, and throughput) from the autoscale-provisioned clusters at an interval of one minute. The statistics are evaluated against the configured thresholds. Depending on whether the statistics exceed the maximum threshold or stay below the minimum threshold, a scale-out or a scale-in is triggered, respectively.

If a scale-out is triggered:
New nodes are provisioned.
The nodes are attached to the cluster, and the configuration is synchronized from the cluster to the new nodes.
The nodes are registered with NetScaler ADM.
The new node IP addresses are updated in DNS/NLB.

When the application is deployed, an IPset is created on the clusters in each availability zone, and the domain and the instance IP addresses are registered with DNS/NLB.

If a scale-in is triggered:
The IP addresses of the nodes identified for removal are removed.
The nodes are detached from the cluster, deprovisioned, and then deregistered from NetScaler ADM.

When the application is removed, the domain and the instance IP addresses are deregistered from DNS/NLB and the IPset is deleted.
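A hedged sketch of the evaluation rule described above (the function and parameter names are invented, and the cooldown, drain, and TTL timers that follow a trigger are omitted): a scale-out fires when any monitored parameter breaches its maximum for every sample in the watch window, while a scale-in fires only when all monitored parameters stay below their minimums for the whole window.

```python
# Illustrative sketch of the autoscale trigger rule; the data and names
# are invented. `samples` holds per-minute readings for each enabled
# threshold parameter; `watch` is the watch time in samples.
def autoscale_decision(samples, limits, watch=3):
    """samples: {param: [readings, newest last]}; limits: {param: (min, max)}."""
    recent = {p: v[-watch:] for p, v in samples.items()}
    # Scale out if ANY parameter breached its maximum on every sample in the window.
    if any(all(x > limits[p][1] for x in vals) for p, vals in recent.items()):
        return "scale-out"
    # Scale in only if ALL parameters stayed below their minimums for the window.
    if all(all(x < limits[p][0] for x in vals) for p, vals in recent.items()):
        return "scale-in"
    return "no-op"

limits = {"memory": (40, 85), "cpu": (30, 70)}
print(autoscale_decision({"memory": [90, 92, 95], "cpu": [50, 55, 52]}, limits))  # scale-out
print(autoscale_decision({"memory": [10, 12, 11], "cpu": [5, 8, 6]}, limits))     # scale-in
```

The asymmetry is deliberate: one overloaded metric is reason enough to grow, but the group shrinks only when every metric indicates spare capacity.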
Example

Consider that users have created an autoscale group named asg_arn in a single availability zone with the following configuration:

Threshold parameter – memory usage
Minimum limit: 40
Maximum limit: 85
Watch time – 3 minutes
Cooldown period – 10 minutes
Drain connection timeout – 10 minutes
TTL timeout – 60 seconds

After the autoscale group is created, statistics are collected from the autoscale group. The autoscale policy also evaluates whether an autoscale event is in progress and, if one is, waits for that event to complete before collecting the statistics.

Sequence of Events

T1, T2: Memory usage exceeds the maximum threshold limit.
T3: Memory usage is below the maximum threshold limit.
T4, T5, T6: Memory usage has breached the maximum threshold limit consecutively for three watch-time durations. A scale-out is triggered, nodes are provisioned, and the cooldown period takes effect.
T7 – T16: Autoscale evaluation is skipped for this availability zone from T7 through T16 because the cooldown period is in effect.
T18, T19, T20: Memory usage has stayed below the minimum threshold limit consecutively for three watch-time durations. A scale-in is triggered, the drain connection timeout takes effect, and the IP addresses are removed from the DNS/NLB.
T21 – T30: Autoscale evaluation is skipped for this availability zone from T21 through T30 because the drain connection timeout is in effect.
T31: For DNS-based autoscaling, the TTL takes effect. For NLB-based autoscaling, deprovisioning of the instances occurs.
T32: For NLB-based autoscaling, evaluation of the statistics starts. For DNS-based autoscaling, deprovisioning of the instances occurs.
T33: For DNS-based autoscaling, evaluation of the statistics starts.

Autoscale Configuration

To start autoscaling NetScaler ADC VPX instances in AWS, users must complete the following steps:

Complete all the prerequisites on AWS listed in the AWS Prerequisites section of this guide.
Complete all the prerequisites listed in the NetScaler ADM Prerequisites section of this guide.
Create autoscale groups:
Initialize the autoscale configuration.
Configure autoscale parameters.
Check out licenses.
Configure cloud parameters.
Deploy the application.

The next few sections assist users in performing all the necessary tasks in AWS before they create autoscale groups in NetScaler ADM. The tasks that users must complete are as follows:

Subscribe to the required NetScaler ADC VPX instance on AWS.
Create the required VPC or select an existing VPC.
Define the corresponding subnets and security groups.
Create two IAM roles, one for NetScaler ADM and one for the NetScaler ADC VPX instance.

Tip: Users can use AWS CloudFormation Templates to automate the AWS prerequisites for NetScaler ADC autoscaling by visiting: citrix-adc-aws-cloudformation/templates. For more information on how to create the VPC, subnets, and security groups, refer to: AWS Documentation.

Subscribe to NetScaler ADC VPX License in AWS

Go to: AWS Marketplace. Log on with your credentials. Search for the NetScaler ADC VPX Customer Licensed, Premium, or Advanced edition. Subscribe to either the NetScaler ADC VPX Customer Licensed, Premium Edition, or NetScaler ADC VPX Advanced Edition licenses.

Note: If users choose the Customer Licensed edition, the autoscale group checks out the licenses from NetScaler ADM while provisioning NetScaler ADC instances.

Create Subnets

Create three subnets in the user VPC, one each for the management, client, and server connections. Specify an IPv4 CIDR block from the range defined in the user VPC for each of the subnets. Specify the availability zone in which users want each subnet to reside. Create all three subnets in each of the availability zones where servers are present.

Management. Existing subnet in the user Virtual Private Cloud (VPC) dedicated for management. NetScaler ADC must contact AWS services and requires internet access.
Configure a NAT gateway and add a route table entry to allow internet access from this subnet.

Client. Existing subnet in the user Virtual Private Cloud (VPC) dedicated for the client side. Typically, NetScaler ADC receives client traffic for the application via a public subnet from the internet. Associate the client subnet with a route table that has a route to an internet gateway. This allows NetScaler ADC to receive application traffic from the internet.

Server. A server subnet where the application servers are provisioned. All user application servers are present in this subnet and receive application traffic from the NetScaler ADC through this subnet.

Create Security Groups

Management. Existing security group in your account dedicated for management of the NetScaler ADC VPX. Inbound rules must be allowed on the following TCP and UDP ports:
TCP: 80, 22, 443, 3008–3011, 4001
UDP: 67, 123, 161, 500, 3003, 4500, 7000
Ensure that the security group allows the NetScaler ADM agent to access the VPX.

Client. Existing security group in the user account dedicated for client-side communication of the NetScaler ADC VPX instances. Typically, inbound rules are allowed on TCP ports 80, 22, and 443.

Server. Existing security group in the user account dedicated for server-side communication of the NetScaler ADC VPX.

Create IAM Roles

Along with creating an IAM role and defining a policy, users must also create an instance profile in AWS. IAM roles allow NetScaler ADM to provision NetScaler ADC instances and to create or delete Route53 entries. While roles define "what can I do?", they do not define "who am I?" AWS EC2 uses an instance profile as a container for an IAM role: it passes role information to an EC2 instance when the instance starts. When users create an IAM role using the console, the console creates an instance profile automatically and gives it the same name as the role it corresponds to.
Roles provide a mechanism to define a collection of permissions. An IAM user represents a person, and an instance profile represents EC2 instances. If a user has role "A," and an instance has an instance profile attached to "A," these two principals can access the same resources in the same way.

Note: Ensure that the role names start with "NetScaler-ADM-" and the instance profile name starts with "NetScaler-ADC-."

To Create an IAM Role

Create an IAM role so that you can establish a trust relationship between your users and the NetScaler trusted AWS account, and create a policy with NetScaler permissions.

In AWS, click Services. In the left navigation pane, select IAM > Roles > Create role.
Users are connecting the user AWS account with the AWS account in NetScaler ADM, so select Another AWS account to allow NetScaler ADM to perform actions in the user AWS account.
Type in the 12-digit NetScaler ADM AWS account ID. The NetScaler ID is 835822366011. Users can also find the NetScaler ID in NetScaler ADM when they create the cloud access profile.
Click Permissions.
On the Attach permissions policies page, click Create policy. Users can create and edit a policy in the visual editor or by using JSON.
The list of permissions from NetScaler for NetScaler ADM is provided in the following box:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeInstances", "ec2:UnmonitorInstances", "ec2:MonitorInstances", "ec2:CreateKeyPair",
        "ec2:ResetInstanceAttribute", "ec2:ReportInstanceStatus", "ec2:DescribeVolumeStatus", "ec2:StartInstances",
        "ec2:DescribeVolumes", "ec2:UnassignPrivateIpAddresses", "ec2:DescribeKeyPairs", "ec2:CreateTags",
        "ec2:ResetNetworkInterfaceAttribute", "ec2:ModifyNetworkInterfaceAttribute", "ec2:DeleteNetworkInterface", "ec2:RunInstances",
        "ec2:StopInstances", "ec2:AssignPrivateIpAddresses", "ec2:DescribeVolumeAttribute", "ec2:DescribeInstanceCreditSpecifications",
        "ec2:CreateNetworkInterface", "ec2:DescribeImageAttribute", "ec2:AssociateAddress", "ec2:DescribeSubnets",
        "ec2:DeleteKeyPair", "ec2:DisassociateAddress", "ec2:DescribeAddresses", "ec2:DeleteTags",
        "ec2:RunScheduledInstances", "ec2:DescribeInstanceAttribute", "ec2:DescribeRegions", "ec2:DescribeDhcpOptions",
        "ec2:GetConsoleOutput", "ec2:DescribeNetworkInterfaces", "ec2:DescribeAvailabilityZones", "ec2:DescribeNetworkInterfaceAttribute",
        "ec2:ModifyInstanceAttribute", "ec2:DescribeInstanceStatus", "ec2:ReleaseAddress", "ec2:RebootInstances",
        "ec2:TerminateInstances", "ec2:DetachNetworkInterface", "ec2:DescribeIamInstanceProfileAssociations", "ec2:DescribeTags",
        "ec2:AllocateAddress", "ec2:DescribeSecurityGroups", "ec2:DescribeHosts", "ec2:DescribeImages",
        "ec2:DescribeVpcs", "ec2:AttachNetworkInterface", "ec2:AssociateIamInstanceProfile", "ec2:DescribeAccountAttributes",
        "ec2:DescribeInternetGateways"
      ],
      "Resource": "*",
      "Effect": "Allow",
      "Sid": "VisualEditor0"
    },
    {
      "Action": [ "iam:GetRole", "iam:PassRole", "iam:CreateServiceLinkedRole" ],
      "Resource": "*",
      "Effect": "Allow",
      "Sid": "VisualEditor1"
    },
    {
      "Action": [
        "route53:CreateHostedZone", "route53:CreateHealthCheck", "route53:GetHostedZone", "route53:ChangeResourceRecordSets",
        "route53:ChangeTagsForResource", "route53:DeleteHostedZone", "route53:DeleteHealthCheck", "route53:ListHostedZonesByName",
        "route53:GetHealthCheckCount"
      ],
      "Resource": "*",
      "Effect": "Allow",
      "Sid": "VisualEditor2"
    },
    {
      "Action": [ "iam:ListInstanceProfiles", "iam:ListAttachedRolePolicies", "iam:SimulatePrincipalPolicy" ],
      "Resource": "*",
      "Effect": "Allow",
      "Sid": "VisualEditor3"
    },
    {
      "Action": [
        "ec2:ReleaseAddress", "elasticloadbalancing:DeleteLoadBalancer", "ec2:DescribeAddresses", "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:DeregisterTargets",
        "ec2:DescribeSubnets", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:ModifyTargetGroupAttributes", "ec2:AllocateAddress"
      ],
      "Resource": "*",
      "Effect": "Allow",
      "Sid": "VisualEditor4"
    }
  ]
}

Copy and paste the list of permissions in the JSON tab and click Review policy.
On the Review policy page, type a name for the policy, enter a description, and click Create policy.
Note: Ensure that the policy name starts with "NetScaler-ADM-."
On the Create Role page, enter the name of the role.
Note: Ensure that the role name starts with "NetScaler-ADM-."
Click Create Role.

Similarly, create a profile for the NetScaler ADC instances by providing a different name starting with "NetScaler-ADC-". Attach a policy with the permissions provided by NetScaler for AWS to access the NetScaler ADC instances. Ensure that users select AWS service > EC2, and then click Permissions to create an instance profile. Add the list of permissions provided by NetScaler.
The list of permissions from NetScaler for NetScaler ADC instances is provided in the following box:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole", "iam:SimulatePrincipalPolicy",
        "autoscaling:*", "sns:*", "sqs:*", "cloudwatch:*",
        "ec2:AssignPrivateIpAddresses", "ec2:DescribeInstances", "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface", "ec2:AttachNetworkInterface", "ec2:StartInstances", "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}

Register the DNS Domain

Users must also ensure that they have registered the DNS domain for hosting their applications.

Assess the number of elastic IPs (EIPs) required in the user network. The number of EIPs required varies based on whether users are deploying DNS-based or NLB-based autoscaling. To increase the number of EIPs, create a case with AWS.

For DNS-based autoscaling, the number of EIPs required per availability zone is equal to the number of applications multiplied by the maximum number of VPX instances users want to configure in the autoscale groups.
For NLB-based autoscaling, the number of EIPs required is equal to the number of applications multiplied by the number of availability zones in which the applications are deployed.

Assess the Instance Limit Requirements

When assessing instance limits, ensure that users also consider the space requirements for NetScaler ADC instances.

Create Autoscale Groups

Initialize Autoscale Configuration

In NetScaler ADM, navigate to Networks > AutoScale Groups.
Click Add to create autoscale groups. The Create AutoScale Group page appears. Enter the following details.
Name. Type a name for the autoscale group.
Site. Select the site that users have created to provision the NetScaler ADC VPX instances on AWS.
Agent. Select the NetScaler ADM agent that manages the provisioned instances.
Cloud Access Profile. Select the cloud access profile.
Note: If the cloud access profile does not exist in the field, click Add to create a profile. Type the ARN associated with the role that you have created in AWS, and type the external ID that users provided while creating an Identity and Access Management (IAM) role in AWS. Depending on the cloud access profile that users select, the availability zones are populated.

Device Profile. Select the device profile from the list. The device profile is used by NetScaler ADM whenever it must log on to the instance.
Traffic Distribution Mode. The Load Balancing using NLB option is selected as the default traffic distribution mode. If applications use UDP traffic, select DNS using AWS Route53.
Note: After the autoscale configuration is set up, new availability zones cannot be added and existing availability zones cannot be removed.
Enable AutoScale Group. Enable or disable the status of the autoscale groups. This option is enabled by default. If this option is disabled, autoscaling is not triggered.
Availability Zones. Select the zones in which you want to create the autoscale groups. Depending on the cloud access profile that you have selected, availability zones specific to that profile are populated.
Tags. Type the key-value pairs for the autoscale group tags. A tag consists of a case-sensitive key-value pair. Tags enable you to organize and identify the autoscale groups easily. The tags are applied to both AWS and NetScaler ADM.
Click Next.

Configure Autoscale Parameters

In the AutoScale Parameters tab, enter the following details.

Select one or more of the following threshold parameters whose values must be monitored to trigger a scale-out or a scale-in:
Enable CPU Usage Threshold: Monitor the metrics based on CPU usage.
Enable Memory Usage Threshold: Monitor the metrics based on memory usage.
Enable Throughput Threshold: Monitor the metrics based on throughput.

Note: The default minimum threshold limit is 30 and the maximum threshold limit is 70.
However, users can modify the limits. The minimum threshold limit must be equal to or less than half of the maximum threshold limit. More than one threshold parameter can be selected for monitoring. In such cases, a scale-out is triggered if at least one of the threshold parameters is above the maximum threshold. However, a scale-in is triggered only if all the threshold parameters are operating below their minimum thresholds.

Minimum Instances. Select the minimum number of instances to be provisioned for this autoscale group. By default, the minimum number of instances is equal to the number of zones selected. Users can increment the minimum instances in multiples of the number of zones. For example, if the number of availability zones is 4, the minimum instances value is 4 by default, and users can increase the minimum instances to 8, 12, or 16.

Maximum Instances. Select the maximum number of instances to be provisioned for this autoscale group. The maximum number of instances must be greater than or equal to the minimum instances value. The maximum number of instances that can be configured is equal to the number of availability zones multiplied by 32 (maximum number of instances = number of availability zones * 32).

Drain Connection Timeout (minutes). Select the drain connection timeout period. During a scale-in, once an instance is selected for deprovisioning, NetScaler ADM removes the instance from processing new connections to the autoscale group and waits until the specified time expires before deprovisioning it. This allows existing connections to this instance to drain out before it is deprovisioned.

Cooldown Period (minutes). Select the cooldown period. During a scale-out, the cooldown period is the time for which evaluation of the statistics has to be stopped after a scale-out occurs.
This ensures organic growth of the instances in an autoscale group by allowing current traffic to stabilize and average out on the current set of instances before the next scaling decision is made. DNS Time To Live (seconds). Select the amount of time (in seconds) that a packet is set to exist inside a network before being discarded by a router. This parameter is applicable only when the traffic distribution mode is DNS using AWS Route 53. Watch-Time (minutes). Select the watch-time duration, which is the time for which the scaling parameter's threshold must stay breached for a scaling to happen. If the threshold is breached on all the samples collected in this specified time, then a scaling happens. Click Next. Configure Licenses for Provisioning NetScaler ADC Instances Select one of the following modes to license NetScaler ADC instances that are part of your autoscale group: Using NetScaler ADM: While provisioning NetScaler ADC instances, the autoscale group checks out the licenses from NetScaler ADM. Using AWS Cloud: The Allocate from Cloud option uses the NetScaler product licenses available in the AWS Marketplace. While provisioning NetScaler ADC instances, the autoscale group uses the licenses from the marketplace. If users choose to use licenses from the AWS Marketplace, specify the product or license in the Cloud Parameters tab. For more information, see: Licensing Requirements. Use Licenses from NetScaler ADM In the License tab, select Allocate from ADM. In License Type, select one of the following options from the list: Bandwidth Licenses: Users can select one of the following options from the Bandwidth License Types list: Pooled Capacity: Specify the capacity to allocate for every new instance in the autoscale group. From the common pool, each ADC instance in the autoscale group checks out one instance license and only as much bandwidth as is specified. 
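The watch-time rule above — a scaling action fires only when every sample collected within the window breaches the threshold — can be sketched as a simple window check. This is a simplified illustration; the sampling interval and data structures are assumptions:

```python
# Simplified sketch of the watch-time rule described above: scaling
# happens only when ALL samples in the watch window breach the
# threshold. Sample collection details are hypothetical.

def breach_sustained(samples, threshold, watch_samples):
    """samples: newest-last list of metric readings.
    Returns True when the most recent `watch_samples` readings
    all exceed the threshold."""
    if len(samples) < watch_samples:
        return False  # not enough history collected yet
    window = samples[-watch_samples:]
    return all(value > threshold for value in window)
```

A single sample dipping back under the threshold resets the decision, which is what prevents short traffic spikes from triggering a scale-out.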
VPX Licenses: When a NetScaler ADC VPX instance is provisioned, the instance checks out the license from NetScaler ADM. Virtual CPU Licenses: The provisioned NetScaler ADC VPX instance checks out licenses depending on the number of active CPUs running in the autoscale group. Note: When the provisioned instances are removed or destroyed, the applied licenses return to the NetScaler ADM license pool. These licenses can be reused to provision new instances during your next autoscale. In License Edition, select the license edition. The autoscale group uses the specified edition to provision instances. Click Next. Configure Cloud Parameters In the Cloud Parameters tab, enter the following details. IAM Role: Select the IAM role that users have created in AWS. An IAM role is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. Product: Select the NetScaler ADC product version that users want to provision. Version: Select the NetScaler ADC product release version and build number. The release versions and build numbers are auto-populated based on the product that users have selected. AWS AMI ID: Enter the AMI ID specific to the region that users have selected. Instance Type: Select the EC2 instance type. Note: The recommended instance type for the selected product is auto-populated by default. Security Groups: Security groups control the inbound and outbound traffic in the NetScaler ADC VPX instance. Users create rules for both incoming and outgoing traffic that they want to control. Select appropriate values for the following security groups. Management. Existing security group in the user account dedicated for management of NetScaler ADC VPX instances. Inbound rules must be allowed on the following TCP and UDP ports. TCP: 80, 22, 443, 3008–3011, 4001 UDP: 67, 123, 161, 500, 3003, 4500, 7000 Ensure that the security group allows the NetScaler ADM agent to access the VPX. Client. 
Existing security group in the user account dedicated for client-side communication of NetScaler ADC VPX instances. Typically, inbound rules are allowed on TCP ports 80, 22, and 443. Server. Existing security group in the user account dedicated for server-side communication of NetScaler ADC VPX. IPs in server subnet per node: Select the number of IP addresses in the server subnet per node for the security group. Zone: The number of zones that are populated is equal to the number of availability zones that users have selected. For each zone, select the appropriate values for the following subnets. Management. Existing subnet in the user Virtual Private Cloud (VPC) dedicated for management. NetScaler ADC needs to contact AWS services and requires internet access. Configure a NAT gateway and add a route table entry to allow internet access from this subnet. Client. Existing subnet in the user Virtual Private Cloud (VPC) dedicated for the client side. Typically, NetScaler ADC receives client traffic for the application via a public subnet from the internet. Associate the client subnet with a route table that has a route to an internet gateway. This allows NetScaler ADC to receive application traffic from the internet. Server. Application servers are provisioned in a server subnet. All user application servers are present in this subnet and receive application traffic from the NetScaler ADC via this subnet. Click Finish. A progress window with the status for creating the autoscale group appears. It might take several minutes to create and provision the autoscale groups. Configure Application using StyleBooks In NetScaler ADM, navigate to Networks > Autoscale Groups. Select the autoscale group that users created and click Configure. The Choose StyleBook page displays all the StyleBooks available for customer use to deploy configurations in the autoscale clusters. Select the appropriate StyleBook. 
For example, users can use the HTTP/SSL Load Balancing StyleBook. Users can also import new StyleBooks. Click the StyleBook to create the required configuration. The StyleBook opens as a user interface page on which users can enter the values for all the parameters defined in this StyleBook. Enter values for all the parameters. If users are creating back-end servers in AWS, select Backend Server Configuration. Then select AWS EC2 Autoscaling > Cloud and enter the values for all the parameters. There might be a few optional configurations required depending on the StyleBook that users have chosen. For example, users might have to create monitors, provide SSL certificate settings, and so on. Click Create to deploy the configuration on the NetScaler ADC cluster. The FQDN of the application or the virtual server cannot be modified after it is configured and deployed. The FQDN of the application is resolved to the IP address using DNS. As this DNS record might be cached across various name servers, changing the FQDN might cause the traffic to be blackholed. SSL session sharing works as expected within an availability zone, but across availability zones reauthentication is required. SSL sessions are synchronized within the cluster. As an autoscale group spanning availability zones has a separate cluster in each zone, SSL sessions cannot be synchronized across zones. Shared limits such as max client and spill-over are set statically based on the number of availability zones. This limit has to be set after calculating it manually: limit = "limit required" divided by "number of zones". Shared limits are distributed automatically across nodes within a cluster. As an autoscale group spanning availability zones has a separate cluster in each zone, these limits have to be calculated manually. Upgrade NetScaler ADC Clusters Users must manually upgrade the cluster nodes. Users first upgrade the image of the existing nodes and then update the AMI from NetScaler ADM. 
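The per-zone shared-limit arithmetic above can be illustrated with a short sketch. The function name is illustrative, not an ADM setting:

```python
# Illustrative sketch of the shared-limit rule above: because each
# availability zone runs its own cluster, shared limits (max client,
# spill-over) must be divided by the number of zones by hand.

def per_zone_limit(limit_required, zones):
    """Each zone's cluster is configured with an equal share of the
    overall limit. Integer division keeps the aggregate at or below
    the requested limit."""
    if zones < 1:
        raise ValueError("at least one availability zone is required")
    return limit_required // zones
```

For instance, an overall max-client limit of 12000 across 4 availability zones means configuring each zone's cluster with a limit of 3000.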
Important: Ensure the following during an upgrade: No scale-in or scale-out is triggered. No configuration changes are performed on the cluster in the autoscale group. Users keep a backup of the ns.conf file of the previous version, so that in case an upgrade fails, users can fall back to the previous version. Perform the following steps to upgrade the NetScaler ADC cluster nodes. Disable the autoscale group on the ADM autoscale group portal. Select one of the clusters within the autoscale group for upgrade. Follow the steps documented in the topic: Upgrading or Downgrading the NetScaler ADC Cluster. Note: Upgrade one node in the cluster and monitor the application traffic for any failures. If users encounter any issues or failures, downgrade the node that was previously upgraded. Otherwise, continue with the upgrade of all nodes. Continue upgrading the nodes in all the clusters in the autoscale group. Note: If the upgrade for any cluster fails, downgrade all the clusters in the autoscale group to the previous version. Follow the steps documented in the topic Upgrading or Downgrading the NetScaler ADC Cluster. After a successful upgrade of all clusters, update the AMI on the ADM autoscale group portal. The AMI must be of the same version as the image used for the upgrade. Edit the autoscale group and type the AMI that corresponds to the upgraded version. Enable the autoscale group on the ADM portal. Modify Autoscale Groups Configuration Users can modify an autoscale group configuration or delete an autoscale group. Users can modify only the following autoscale group parameters: Traffic distribution mode. Maximum and minimum limits of the threshold parameters. Minimum and maximum instance values. Drain connection period value. Cooldown period value. Time to live value (if the traffic distribution mode is DNS). Watch duration value. Users can also delete the autoscale groups after they are created. 
When users delete an autoscale group, all the domains and IP addresses are deregistered from DNS/NLB and the cluster nodes are deprovisioned. CloudFormation Template Deployment NetScaler ADC VPX is available as Amazon Machine Images (AMI) in the AWS Marketplace. Before using a CloudFormation template to provision a NetScaler ADC VPX in AWS, the AWS user has to accept the terms and subscribe to the AWS Marketplace product. Each edition of the NetScaler ADC VPX in the Marketplace requires this step. Each template in the CloudFormation repository has co-located documentation describing the usage and architecture of the template. The templates attempt to codify the recommended deployment architecture of the NetScaler ADC VPX, to introduce the user to the NetScaler ADC, or to demonstrate a particular feature, edition, or option. Users can reuse, modify, or enhance the templates to suit their production and testing needs. Most templates require full EC2 permissions in addition to permissions to create IAM roles. The CloudFormation templates contain AMI IDs that are specific to a particular release of the NetScaler ADC VPX (for example, release 12.0–56.20) and edition (for example, NetScaler ADC VPX Platinum Edition - 10 Mbps), or NetScaler ADC BYOL. To use a different version or edition of the NetScaler ADC VPX with a CloudFormation template, the user must edit the template and replace the AMI IDs. The latest NetScaler ADC AWS AMI IDs are available from the NetScaler ADC CloudFormation Templates on GitHub: citrix-adc-aws-cloudformation/templates. CFT Three-NIC Deployment This template deploys a VPC with 3 subnets (management, client, server) for 2 Availability Zones. It deploys an Internet Gateway with a default route on the public subnets. 
This template also creates an HA pair across Availability Zones with two instances of NetScaler ADC: 3 ENIs associated with 3 VPC subnets (Management, Client, Server) on the primary and 3 ENIs associated with 3 VPC subnets (Management, Client, Server) on the secondary. All the resource names created by this CFT are prefixed with the tagName of the stack name. The output of the CloudFormation template includes: PrimaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the Primary VPX (uses a self-signed certificate). PrimaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the Primary VPX. PrimaryCitrixADCInstanceID - Instance ID of the newly created Primary VPX instance. PrimaryCitrixADCPublicVIP - Elastic IP address of the Primary VPX instance associated with the VIP. PrimaryCitrixADCPrivateNSIP - Private IP (NSIP) used for management of the Primary VPX. PrimaryCitrixADCPublicNSIP - Public IP (NSIP) used for management of the Primary VPX. PrimaryCitrixADCPrivateVIP - Private IP address of the Primary VPX instance associated with the VIP. PrimaryCitrixADCSNIP - Private IP address of the Primary VPX instance associated with the SNIP. SecondaryCitrixADCManagementURL - HTTPS URL to the Management GUI of the Secondary VPX (uses a self-signed certificate). SecondaryCitrixADCManagementURL2 - HTTP URL to the Management GUI of the Secondary VPX. SecondaryCitrixADCInstanceID - Instance ID of the newly created Secondary VPX instance. SecondaryCitrixADCPrivateNSIP - Private IP (NSIP) used for management of the Secondary VPX. SecondaryCitrixADCPublicNSIP - Public IP (NSIP) used for management of the Secondary VPX. SecondaryCitrixADCPrivateVIP - Private IP address of the Secondary VPX instance associated with the VIP. SecondaryCitrixADCSNIP - Private IP address of the Secondary VPX instance associated with the SNIP. SecurityGroup - Security group ID to which the VPX belongs. When providing input to the CFT, the * against any parameter in the CFT implies that it is a mandatory field. 
For example, VPC ID* is a mandatory field. The following prerequisites must be met. The CloudFormation template requires sufficient permissions to create IAM roles, beyond normal EC2 full privileges. The user of this template also needs to accept the terms and subscribe to the AWS Marketplace product before using this CloudFormation template. The following must also be present: a key pair, and 3 unallocated EIPs (for the primary management interface, the client VIP, and the secondary management interface). For more information on provisioning NetScaler ADC VPX instances on AWS, users can visit: Provisioning NetScaler ADC VPX Instances on AWS. For more information on autoscaling of NetScaler ADC in AWS using NetScaler ADM, visit: Autoscaling of NetScaler ADC in AWS using NetScaler ADM. For information on adding the AWS autoscaling service to a NetScaler ADC VPX instance, visit: Add Back-end AWS Autoscaling Service. AWS Prerequisites Before attempting to create a VPX instance in AWS, users should ensure they have the following: An AWS account to launch a NetScaler ADC VPX AMI in an Amazon Web Services (AWS) Virtual Private Cloud (VPC). Users can create an AWS account for free at Amazon Web Services: AWS. The NetScaler ADM service agent has been added in AWS. A VPC has been created and availability zones have been selected. For more information on how to create an account and other tasks, see: AWS Documentation. For more information on how to install the NetScaler ADM service agent on AWS, see: Install NetScaler ADM Agent on AWS. An AWS Identity and Access Management (IAM) user account to securely control access to AWS services and resources for users. For more information about how to create an IAM user account, see the topic: Creating IAM Users (Console). An IAM admin user with all administrative permissions has been created. An IAM role is mandatory for both standalone and high availability deployments. 
The IAM role must have the following privileges: ec2:DescribeInstances ec2:DescribeNetworkInterfaces ec2:DetachNetworkInterface ec2:AttachNetworkInterface ec2:StartInstances ec2:StopInstances ec2:RebootInstances ec2:DescribeAddresses ec2:AssociateAddress ec2:DisassociateAddress autoscaling:* sns:* sqs:* iam:SimulatePrincipalPolicy iam:GetRole If the NetScaler CloudFormation template is used, the IAM role is created automatically. The template does not allow selecting an already created IAM role. Note: When users log on to the VPX instance through the GUI, a prompt to configure the required privileges for the IAM role appears. Ignore the prompt if the privileges have already been configured. The AWS CLI is required to use all the functionality provided by the AWS Management Console from the terminal program. For more information, see: What Is the AWS Command Line Interface?. Users also need the AWS CLI to change the network interface type to SR-IOV. ADM Prerequisites Users must ensure that they have completed all the prerequisites on the NetScaler ADM to use the autoscale feature. Create a Site Create a site in NetScaler ADM and add the details of the VPC associated with the user AWS role. In NetScaler ADM, navigate to Networks > Sites. Click Add. Select the service type as AWS and enable Use existing VPC as a site. Select the cloud access profile. If the cloud access profile doesn't exist in the field, click Add to create a profile. In the Create Cloud Access Profile page, type the name of the profile with which users want to access AWS. Type the ARN associated with the role that users have created in AWS. Copy the autogenerated External ID to update the IAM role. Click Create. Again, click Create to create the site. 
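The privilege list above maps onto a standard IAM permission policy document. The following sketch assembles one as JSON; the statement ID and the broad `"Resource": "*"` scope are illustrative assumptions — validate the exact policy against the official NetScaler documentation before use:

```python
import json

# Hypothetical helper that renders the IAM privileges listed above as a
# standard IAM policy document. The Sid is an illustrative placeholder.
REQUIRED_ACTIONS = [
    "ec2:DescribeInstances", "ec2:DescribeNetworkInterfaces",
    "ec2:DetachNetworkInterface", "ec2:AttachNetworkInterface",
    "ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances",
    "ec2:DescribeAddresses", "ec2:AssociateAddress",
    "ec2:DisassociateAddress", "autoscaling:*", "sns:*", "sqs:*",
    "iam:SimulatePrincipalPolicy", "iam:GetRole",
]

def build_policy(actions=REQUIRED_ACTIONS):
    """Return the policy as a JSON string ready to attach to the role."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "NetScalerVpxAutoscale",  # placeholder statement ID
            "Effect": "Allow",
            "Action": actions,
            "Resource": "*",
        }],
    }, indent=2)
```

Keeping the action list in one place makes it easy to diff against the documented requirements when a new NetScaler release adds privileges.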
Update the IAM role in AWS using the auto-generated External ID: Log in to the user AWS account and navigate to the role that users want to update. In the Trust relationships tab, click Edit trust relationship and append the following condition within the Statement block: "Condition": { "StringEquals": { "sts:ExternalId": "<External-ID>" } } Enabling an external ID for an IAM role in AWS allows users to connect to a third-party account. The external ID increases the security of the user role. The details of the VPC, such as the region, VPC ID, name, and CIDR block, associated with the user IAM role in AWS are imported into NetScaler ADM. Provision NetScaler ADM Agent on AWS The NetScaler ADM service agent works as an intermediary between the NetScaler ADM and the discovered instances in the data center or on the cloud. Navigate to Networks > Agents. Click Provision. Select AWS and click Next. In the Cloud Parameters tab, specify the following: Name - Specify the NetScaler ADM agent name. Site - Select the site users have created to provision an agent and ADC VPX instances. Cloud Access Profile - Select the cloud access profile from the list. Availability Zone - Select the zones in which users want to create the autoscale groups. Depending on the cloud access profile that users have selected, availability zones specific to that profile are populated. Security Group - Security groups control the inbound and outbound traffic in the NetScaler ADC agent. Users create rules for both incoming and outgoing traffic that they want to control. Subnet - Select the management subnet where users want to provision an agent. Tags - Type the key-value pair for the autoscale group tags. A tag consists of a case-sensitive key-value pair. These tags enable users to organize and identify the autoscale groups easily. The tags are applied to both AWS and NetScaler ADM. Click Finish. Alternatively, users can install the NetScaler ADM agent from the AWS marketplace. 
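To show where the sts:ExternalId condition above sits inside a complete trust policy, here is a hedged sketch. The account ARN and external ID are placeholders, and the document shape is the standard AWS trust-policy format rather than anything Citrix-specific:

```python
import json

# Sketch of a complete IAM trust policy embedding the sts:ExternalId
# condition described above. ARN and external ID are placeholders.
def build_trust_policy(trusted_account_arn, external_id):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": trusted_account_arn},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"sts:ExternalId": external_id}
            },
        }],
    }

# Example with placeholder values:
policy = build_trust_policy(
    "arn:aws:iam::111122223333:root",  # placeholder account ARN
    "example-external-id",             # placeholder external ID
)
print(json.dumps(policy, indent=2))
```

The condition means the role can be assumed only when the caller presents the matching external ID, which is what protects the account against the confused-deputy problem mentioned above.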
For more information, see: Install NetScaler ADM Agent on AWS. Limitations and Usage Guidelines The following limitations and usage guidelines apply when deploying a NetScaler ADC VPX instance on AWS: Users should read the AWS terminology listed previously in this article before starting a new deployment. The clustering feature is supported only when provisioned with NetScaler ADM Autoscale Groups. For the high availability setup to work effectively, associate a dedicated NAT device with the management interface or associate an Elastic IP (EIP) with the NSIP. For more information on NAT, in the AWS documentation, see: NAT Instances. Data traffic and management traffic must be segregated with ENIs belonging to different subnets. Only the NSIP address must be present on the management ENI. If a NAT instance is used for security instead of assigning an EIP to the NSIP, appropriate VPC-level routing changes are required. For instructions on making VPC-level routing changes, in the AWS documentation, see: Scenario 2: VPC with Public and Private Subnets. A VPX instance can be moved from one EC2 instance type to another (for example, from m3.large to m3.xlarge). For more information, visit: Limitations and Usage Guidelines. For storage media for VPX on AWS, NetScaler recommends EBS, because it is durable and the data is available even after it is detached from the instance. Dynamic addition of ENIs to VPX is not supported; restart the VPX instance to apply the update. NetScaler recommends that users stop the standalone or HA instance, attach the new ENI, and then restart the instance. The primary ENI cannot be changed or attached to a different subnet once it is deployed. Secondary ENIs can be detached and changed as needed while the VPX is stopped. Users can assign multiple IP addresses to an ENI. 
The maximum number of IP addresses per ENI is determined by the EC2 instance type; see the section “IP Addresses Per Network Interface Per Instance Type” in: Elastic Network Interfaces. Users must allocate the IP addresses in AWS before they assign them to ENIs. For more information, see: Elastic Network Interfaces. NetScaler recommends that users avoid using the enable and disable interface commands on NetScaler ADC VPX interfaces. The NetScaler ADC set ha node <NODE_ID> -haStatus STAYPRIMARY and set ha node <NODE_ID> -haStatus STAYSECONDARY commands are disabled by default. IPv6 is not supported for VPX. Due to AWS limitations, these features are not supported: Gratuitous ARP (GARP). L2 mode (bridging). Transparent vServers are supported with L2 (MAC rewrite) for servers in the same subnet as the SNIP. Tagged VLAN. Dynamic routing. Virtual MAC. For RNAT, routing, and transparent vServers to work, ensure that Source/Destination Check is disabled for all ENIs in the data path. For more information, see “Changing the Source/Destination Checking” in: Elastic Network Interfaces. In a NetScaler ADC VPX deployment on AWS, in some AWS regions, the AWS infrastructure might not be able to resolve AWS API calls. This happens if the API calls are issued through a non-management interface on the NetScaler ADC VPX instance. As a workaround, restrict the API calls to the management interface only. To do that, create an NSVLAN on the VPX instance and bind the management interface to the NSVLAN by using the appropriate commands. For example: set ns config -nsvlan <vlan id> -ifnum 1/1 -tagged NO save config Restart the VPX instance at the prompt. For more information about configuring NSVLAN, see Configuring NSVLAN. In the AWS console, the vCPU usage shown for a VPX instance under the Monitoring tab might be high (up to 100 percent), even when the actual usage is much lower. To see the actual vCPU usage, navigate to View all CloudWatch metrics. 
For more information, see: Monitor your Instances using Amazon CloudWatch. Alternatively, if low latency and performance are not a concern, users can enable the CPU Yield feature, allowing the packet engines to idle when there is no traffic. For more details about the CPU Yield feature and how to enable it, visit: Citrix Support Knowledge Center. AWS-VPX Support Matrix The following sections list the supported VPX models and AWS regions, instance types, and services. Supported VPX Models on AWS NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 200 Mbps NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 1000 Mbps NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 3 Gbps NetScaler ADC VPX Standard/Enterprise/Platinum Edition - 5 Gbps NetScaler ADC VPX Standard/Advanced/Premium - 10 Mbps NetScaler ADC VPX Express - 20 Mbps NetScaler ADC VPX - Customer Licensed Supported AWS Regions US West (Oregon) Region US West (N. California) Region US East (Ohio) Region US East (N. Virginia) Region Asia Pacific (Seoul) Region Asia Pacific (Singapore) Region Asia Pacific (Sydney) Region Asia Pacific (Tokyo) Region Asia Pacific (Hong Kong) Region Canada (Central) Region China (Beijing) Region China (Ningxia) Region EU (Frankfurt) Region EU (Ireland) Region EU (London) Region EU (Paris) Region South America (São Paulo) Region AWS GovCloud (US-East) Region Supported AWS Instance Types m3.large, m3.xlarge, m3.2xlarge c4.large, c4.xlarge, c4.2xlarge, c4.4xlarge, c4.8xlarge m4.large, m4.xlarge, m4.2xlarge, m4.4xlarge, m4.10xlarge m5.large, m5.xlarge, m5.2xlarge, m5.4xlarge, m5.12xlarge, m5.24xlarge c5.large, c5.xlarge, c5.2xlarge, c5.4xlarge, c5.9xlarge, c5.18xlarge, c5.24xlarge c5n.large, c5n.xlarge, c5n.2xlarge, c5n.4xlarge, c5n.9xlarge, c5n.18xlarge Supported AWS Services EC2, Lambda, S3, VPC, Route 53, ELB, CloudWatch, AWS Autoscaling, CloudFormation, Simple Queue Service (SQS), Simple Notification Service (SNS), Identity & Access Management (IAM) 
For higher bandwidth, NetScaler recommends the following instance types:

Instance Type | Bandwidth | Enhanced Networking
M4.10xlarge | 3 Gbps and 5 Gbps | SR-IOV
C4.8xlarge | 3 Gbps and 5 Gbps | SR-IOV
C5.18xlarge / M5.18xlarge | 25 Gbps | ENA
C5n.18xlarge | 30 Gbps | ENA

To remain updated about the currently supported VPX models and AWS regions, instance types, and services, visit the VPX-AWS support matrix.
Overview This guide aims to provide an overview of using Terraform to create a complete Citrix DaaS Resource Location on Amazon EC2. At the end of the process, you will have created: A new Citrix Cloud Resource Location (RL) running on Amazon EC2 2 Cloud Connector Virtual Machines registered with the Domain and the Resource Location A Hypervisor Connection and a Hypervisor Pool pointing to the new Resource Location in Amazon EC2 A Machine Catalog based on the uploaded Master Image VHD or on an Amazon EC2-based Master Image A Delivery Group based on the Machine Catalog with full Autoscale support Example policies and a policy scope bound to the Delivery Group What is Terraform Terraform is an Infrastructure-as-Code (IaC) tool that defines cloud and on-premises resources in easily readable configuration files rather than through a GUI. IaC allows you to build, change, and manage your infrastructure safely and consistently by defining resource configurations. These configurations can be versioned, reused, and shared, and are written in Terraform's native declarative configuration language, known as HashiCorp Configuration Language (HCL), or optionally in JSON. Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Terraform providers are compatible with virtually any platform or service with an accessible API. More information about Terraform can be found at https://developer.hashicorp.com/terraform/intro. Installation HashiCorp distributes Terraform as a binary package. You can also install Terraform using popular package managers. In this example, we use Chocolatey for Windows to deploy Terraform. Chocolatey is a free and open-source package management system for Windows. Install the Terraform package from the CLI. 
Installation of Chocolatey Open a PowerShell shell with Administrative rights and paste the following command: Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) Chocolatey downloads and installs all necessary components automatically: PS C:\TACG> Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString ('https://community.chocolatey.org/install.ps1')) Forcing web requests to allow TLS v1.2 (Required for requests to Chocolatey.org) Getting latest version of the Chocolatey package for download. Not using proxy. Getting Chocolatey from https://community.chocolatey.org/api/v2/package/chocolatey/2.2.2. Downloading https://community.chocolatey.org/api/v2/package/chocolatey/2.2.2 to C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip Not using proxy. Extracting C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip to C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall Installing Chocolatey on the local machine Creating ChocolateyInstall as an environment variable (targeting 'Machine') Setting ChocolateyInstall to 'C:\ProgramData\chocolatey' WARNING: It's very likely you will need to close and reopen your shell before you can use choco. 
Restricting write permissions to Administrators We are setting up the Chocolatey package repository. The packages themselves go to 'C:\ProgramData\chocolatey\lib' (i.e. C:\ProgramData\chocolatey\lib\yourPackageName). A shim file for the command line goes to 'C:\ProgramData\chocolatey\bin' and points to an executable in 'C:\ProgramData\chocolatey\lib\yourPackageName'. Creating Chocolatey folders if they do not already exist. chocolatey.nupkg file not installed in lib. Attempting to locate it from bootstrapper. PATH environment variable does not have C:\ProgramData\chocolatey\bin in it. Adding... WARNING: Not setting tab completion: Profile file does not exist at 'C:\TACG\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'. Chocolatey (choco.exe) is now ready. You can call choco from anywhere, command line or powershell by typing choco. Run choco /? for a list of functions. You may need to shut down and restart powershell and/or consoles first prior to using choco. Ensuring Chocolatey commands are on the path Ensuring chocolatey.nupkg is in the lib folder PS C:\TACG> Run choco --v to check if Chocolatey was installed successfully: PS C:\TACG> choco --v Chocolatey v2.2.2 PS C:\TACG> Installation of Terraform After the successful installation of Chocolatey, you can install Terraform by running this command in the PowerShell session: PS C:\TACG> choco install terraform Chocolatey v2.2.2 Installing the following packages: terraform By installing, you accept licenses for the packages. Progress: Downloading terraform 1.6.4... 
100% terraform v1.7.4 [Approved] terraform package files install completed. Performing other installation steps. The package terraform wants to run 'chocolateyInstall.ps1'. Note: If you don't run this script, the installation will fail. Note: To confirm automatically next time, use '-y' or consider: choco feature enable -n allowGlobalConfirmation Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): A Removing old terraform plug-ins Downloading terraform 64 bit from 'https://releases.hashicorp.com/terraform/1.7.4/terraform_1.7.4_windows_amd64.zip' Progress: 100% - Completed download of C:\TACG\AppData\Local\Temp\chocolatey\terraform\1.7.4\terraform_1.7.4_windows_amd64.zip (25.05 MB). Download of terraform_1.7.4_windows_amd64.zip (25.05 MB) completed. Hashes match. Extracting C:\TACG\AppData\Local\Temp\chocolatey\terraform\1.7.4\terraform_1.7.4_windows_amd64.zip to C:\ProgramData\chocolatey\lib\terraform\tools... C:\ProgramData\chocolatey\lib\terraform\tools ShimGen has successfully created a shim for terraform.exe The install of terraform was successful. Software installed to 'C:\ProgramData\chocolatey\lib\terraform\tools' Chocolatey installed 1/1 packages. See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log). PS C:\TACG> Run terraform -version to check if Terraform installed successfully: PS C:\TACG> terraform -version Terraform v1.7.4 on windows_amd64 PS C:\TACG> The installation of Terraform is now completed. Terraform - Basics and Commands Terraform Block The terraform {} block contains Terraform settings, including the required providers to provision your infrastructure. Terraform installs providers from the Terraform Registry. Providers The provider block configures the specified provider. 
A provider is a plug-in that Terraform uses to create and manage your resources. Declaring multiple provider blocks in the Terraform configuration lets you manage resources from different providers in one configuration.

Resources
Resource blocks define the components of your infrastructure - physical, virtual, or logical. These blocks contain arguments to configure the resource. The provider's reference documentation lists the required and optional arguments for each resource.

The core Terraform workflow consists of three stages:

Write: You define the resources to be deployed, altered, or deleted.
Plan: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies.

Terraform does not only deploy complete configurations, it also lets you change previously deployed ones. For example, changing the DNS servers of a NIC on an Amazon EC2 VM does not require redeploying the whole configuration - Terraform alters only the resources that need it.

Terraform Provider for Citrix
Citrix has developed a custom Terraform provider for automating Citrix product deployments and configurations. Using Terraform with the Citrix provider, you can manage your Citrix products via Infrastructure as Code. Terraform gives you higher efficiency and consistency in infrastructure management and better reusability of infrastructure configuration. The provider defines individual units of infrastructure and currently supports both the Citrix Virtual Apps and Desktops and Citrix DaaS solutions. You can automate the creation of a site setup including host connections, machine catalogs, and delivery groups. You can deploy resources in AWS (Amazon EC2) and GCP in addition to the supported on-premises hypervisors.
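Taken together, a minimal Citrix configuration combines the three block types described above. The following sketch assumes the citrix/citrix provider from the Terraform Registry; the variable names and the zone resource shown here are illustrative, not taken from this guide's modules:

```hcl
# terraform {} block: settings and required providers
terraform {
  required_providers {
    citrix = {
      source  = "citrix/citrix"
      version = ">= 0.5.3"
    }
  }
}

# provider block: how Terraform authenticates against Citrix Cloud
# (variable names are hypothetical)
provider "citrix" {
  customer_id   = var.cc_customer_id
  client_id     = var.cc_client_id
  client_secret = var.cc_client_secret
}

# resource block: one unit of infrastructure managed by the provider
# (resource type and arguments shown for illustration only)
resource "citrix_daas_zone" "example" {
  name = "TACG-Example-Zone"
}
```

Running terraform plan against this file shows the zone to be created; terraform apply creates it.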
Terraform expects to be invoked from a working directory that contains configuration files written in the Terraform language. Terraform uses the configuration content from this directory, and also uses the directory to store settings, cached plug-ins and modules, and state data.

A working directory must be initialized before Terraform can perform any operations in it. Initialize the working directory by using the command:

PS C:\TACG> terraform init

Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding citrix/citrix versions matching ">= 0.5.3"...
- Installing citrix/citrix v0.5.3...
- Installed citrix/citrix v0.5.3 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
PS C:\TACG>

The provider defines how Terraform can interact with the underlying API. Configurations must declare which providers they require so that Terraform can install and use them. Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.

Example: Terraform configuration files - provider.tf

The file provider.tf contains the information about the target site where the configuration is applied. Depending on whether it is a Citrix Cloud site or a Citrix on-premises site, the provider needs to be configured differently:

# Cloud Provider
provider "citrix" {
    customer_id   = "idofthexxxxxxxxx"                    # Citrix Cloud Customer ID
    client_id     = "3edrxxxx-XXXX-XXXX-XXXX-XXXXXXXXXX"  # ID of the Secure API client to use
    client_secret = "*********************"               # Secret of the Secure API client to use
}

A guide to creating a Secure API client can be found in the Citrix Developer Docs and is also shown later in this guide.

# On-Premises Provider
provider "citrix" {
    hostname      = "10.0.0.6"
    client_id     = "foo.local\\admin"
    client_secret = "foo"
}

Example - Schema used for the provider configuration

client_id (String): Client ID for Citrix DaaS service authentication.
For Citrix on-premises customers: use this variable to specify the Domain-Admin username.
For Citrix Cloud customers: use this variable to specify the Cloud API key Client ID.
Can be set via the environment variable CITRIX_CLIENT_ID.

client_secret (String, Sensitive): Client Secret for Citrix DaaS service authentication.
For Citrix on-premises customers: use this variable to specify the Domain-Admin password.
For Citrix Cloud customers: use this variable to specify the Cloud API key Client Secret.
Can be set via the environment variable CITRIX_CLIENT_SECRET.

customer_id (String): Citrix Cloud customer ID.
Only applicable for Citrix Cloud customers.
Can be set via the environment variable CITRIX_CUSTOMER_ID.
disable_ssl_verification (Boolean): Disable SSL verification against the target DDC.
Only applicable to on-premises customers; Citrix Cloud customers do not need this option. Set to true to skip SSL verification only when the target DDC does not have a valid SSL certificate issued by a trusted CA. When set to true, make sure that your provider config is set for a known DDC hostname. It is recommended to configure a valid certificate for the target DDC.
Can be set via the environment variable CITRIX_DISABLE_SSL_VERIFICATION.

environment (String): Citrix Cloud environment of the customer.
Only applicable for Citrix Cloud customers. Available options: Production, Staging, Japan, JapanStaging.
Can be set via the environment variable CITRIX_ENVIRONMENT.

hostname (String): Hostname/base URL of the Citrix DaaS service.
For Citrix on-premises customers (required): use this variable to specify the Delivery Controller hostname.
For Citrix Cloud customers (optional): use this variable to override the Citrix DaaS service hostname.
Can be set via the environment variable CITRIX_HOSTNAME.

Deploying a Citrix Cloud Resource Location on Amazon EC2 using Terraform

Overview
This guide aims to showcase the possibility of creating a complete Citrix Cloud Resource Location on Amazon EC2 using Terraform. We want to reduce manual interventions to the absolute minimum. All Terraform configuration files will be published on GitHub - we will update this guide when the GitHub repository is ready.

In this guide we use an existing domain and do not deploy a new one - for instructions on deploying a new domain, refer to the guide Citrix DaaS and Terraform - Automatic Deployment of a Resource Location on Microsoft Azure. Note that this guide will be reworked soon.

The AD deployment used for this guide follows a hub-and-spoke model - each Resource Location running on a hypervisor/hyperscaler is connected to the main Domain Controller by IPsec-based site-to-site VPNs.
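As noted in the schema above, each credential can also be supplied via an environment variable instead of being written into provider.tf. A sketch in PowerShell (the values are placeholders):

```powershell
# Placeholders only - supply your own Citrix Cloud values.
# With these set, the corresponding arguments can be omitted
# from the provider block in provider.tf.
$env:CITRIX_CUSTOMER_ID   = "idofthexxxxxxxxx"
$env:CITRIX_CLIENT_ID     = "3edrxxxx-XXXX-XXXX-XXXX-XXXXXXXXXX"
$env:CITRIX_CLIENT_SECRET = "*********************"
terraform plan
```

This keeps the secrets out of the configuration files and out of version control.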
Each Resource Location has its own sub-domain.

The Terraform flow is split into different parts:

Part One - this part can be run on any computer where Terraform is installed:
Creating the initially needed resources on Amazon EC2:
- Creating all needed IAM roles on Amazon EC2
- Creating all needed IAM instance profiles on Amazon EC2
- Creating all needed IAM policies on Amazon EC2
- Creating all needed Secrets Manager configurations on Amazon EC2
- Creating all needed DHCP configurations on Amazon EC2
- Creating a Windows Server 2022-based Master Image VM used for deploying the Machine Catalog in step 3
- Creating two Windows Server 2022-based VMs used as Cloud Connector VMs in step 2
- Creating a Windows Server 2022-based VM acting as an Administrative workstation for running the Terraform steps 2 and 3 - this is necessary because WinRM is used for further configuration and deployment in steps 2 and 3
- Creating all necessary scripts for joining the VMs to the existing sub-domain
- Putting the VMs into the existing sub-domain

Part Two - this part can only be run on the previously created Administrative VM, as the deployment in steps 2 and 3 relies heavily on WinRM:
Configuring the three previously created virtual machines on Amazon EC2:
- Installing the needed software on the CCs
- Installing the needed software on the Admin-VM
Creating the necessary resources in Citrix Cloud:
- Creating a Resource Location in Citrix Cloud
- Configuring the 2 CCs as Cloud Connectors
- Registering the 2 CCs in the newly created Resource Location

Part Three:
Creating the Machine Catalog and Delivery Group in Citrix Cloud:
- Retrieving the Site- and Zone-ID of the Resource Location
- Creating a dedicated Hypervisor Connection to Amazon EC2
- Creating a dedicated Hypervisor Resource Pool
- Creating a Machine Catalog (MC) in the newly created Resource Location
- Creating a Delivery Group (DG) based on the MC in the newly created Resource Location

Determine if WinRM
connections/communications are functioning

We strongly recommend a quick check of the communication before starting the Terraform scripts. Open a PowerShell console and type the following command:

PS C:\TACG> Test-WSMan -ComputerName <IP-Address of the computer you want to reach> -Credential <IP-Address of the computer you want to reach>\administrator -Authentication Basic

The response should look like:

wsmid           : http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd
ProtocolVersion : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor   : Microsoft Corporation
ProductVersion  : OS: 0.0.0 SP: 0.0 Stack: 3.0

Another possibility is to open a PowerShell console and type:

Enter-PSSession -ComputerName <IP-Address of the computer you want to reach> -Credential <IP-Address of the computer you want to reach>\administrator

The response should look like:

[172.31.22.104]: PS C:\Users\Administrator\Documents>

A short Terraform script also checks whether the communication via WinRM between the Admin-VM and, in this example, the CC1-VM is working as intended:

locals {
  #### Test the WinRM communication ####
  #### Need to invoke PowerShell as a domain user, as the provisioner does not allow running in a domain user's context
  TerraformTestWinRMScript = <<-EOT
    $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
    $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
    $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
    $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
    Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
      $FileNameForData = 'C:\temp\xdinst\Processes.txt'
      If (Test-Path $FileNameForData) {Remove-Item -Path $FileNameForData -Force}
      Get-Process | Out-File -FilePath 'C:\temp\xdinst\Processes.txt'
    }
  EOT
}

#### Write the script into the local data-directory
resource "local_file" "WriteWinRMTestScriptIntoDataDirectory" {
  filename = "${path.module}/data/Terraform-Test-WinRM.ps1"
  content  = local.TerraformTestWinRMScript
}

resource "null_resource" "CreateTestScriptOnCC1" {
  connection {
    type     = var.Provisioner_Type
    user     = var.Provisioner_Admin-Username
    password = var.Provisioner_Admin-Password
    host     = var.Provisioner_CC1-IP
    timeout  = var.Provisioner_Timeout
  }

  provisioner "file" {
    source      = "${path.module}/data/Terraform-Test-WinRM.ps1"
    destination = "C:/temp/xdinst/Terraform-Test-WinRM.ps1"
  }

  provisioner "remote-exec" {
    inline = [
      "powershell -File 'C:/temp/xdinst/Terraform-Test-WinRM.ps1'"
    ]
  }
}

If you see something like the following in the Terraform console...:

null_resource.CreateTestScriptOnCC1: Creating...
null_resource.CreateTestScriptOnCC1: Provisioning with 'remote-exec'...
null_resource.CreateTestScriptOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CreateTestScriptOnCC1 (remote-exec):   Host: 172.31.22.103
null_resource.CreateTestScriptOnCC1 (remote-exec):   Port: 5985
null_resource.CreateTestScriptOnCC1 (remote-exec):   User: administrator
null_resource.CreateTestScriptOnCC1 (remote-exec):   Password: true
null_resource.CreateTestScriptOnCC1 (remote-exec):   HTTPS: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   Insecure: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   NTLM: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   CACert: false
null_resource.CreateTestScriptOnCC1 (remote-exec): Connected!
#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
null_resource.CreateTestScriptOnCC1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/Terraform-Test-WinRM.ps1
...
null_resource.CreateTestScriptOnCC1: Creation complete after 3s
[id=1571484748961023525]

...then you can be sure that the provisioning using WinRM is working as intended.

Configuration using variables
All needed configuration settings are stored in the corresponding variables, which need to be set. Some configuration settings are propagated throughout the whole Terraform configuration.

You need to start each of the 3 modules manually using the Terraform workflow terraform init, terraform plan, and terraform apply in the corresponding module directory. Terraform then completes the necessary configuration steps of the corresponding module.

File system structure

Root directory

Module 1: _CConAWS-Creation:

Filename - Purpose
_CConAWS-Creation-Create.tf - Resource configuration and primary flow definition
_CConAWS-Creation-Create-variables.tf - Definition of variables
_CConAWS-Creation-Create.auto.tfvars.json - Setting the values of the variables
_CConAWS-Creation-Provider.tf - Provider definition and configuration
_CConAWS-Creation-Provider-variables.tf - Definition of variables
_CConAWS-Creation-Provider.auto.tfvars.json - Setting the values of the variables
Add-EC2InstanceToDomainAdminVM.ps1 - PowerShell script for joining the Admin-VM to the domain
Add-EC2InstanceToDomainCC1.ps1 - PowerShell script for joining the CC1-VM to the domain
Add-EC2InstanceToDomainCC2.ps1 - PowerShell script for joining the CC2-VM to the domain
Add-EC2InstanceToDomainWMI.ps1 - PowerShell script for joining the CC2-VM to the domain
DATA directory - Place to put files to upload using file provisioning (NOT RECOMMENDED - see the later explanation)

Module 2: _CConAWS-Install:

Filename - Purpose
_CCOnAWS-Install-CreatePreReqs.tf - Resource configuration and primary flow definition
_CCOnAWS-Install-CreatePreReqs-variables.tf - Definition of variables
_CCOnAWS-Install-CreatePreReqs.auto.tfvars.json - Setting the values of the variables
_CConAWS-Install-Provider.tf - Provider definition and configuration
_CConAWS-Install-Provider-variables.tf - Definition of variables
_CConAWS-Install-Provider.auto.tfvars.json - Setting the values of the variables
GetSiteID.ps1 - PowerShell script to make a REST API call to determine the CC Site-ID
GetZoneID.ps1 - PowerShell script to make a REST API call to determine the CC Zone-ID
DATA/InstallPreReqsOnAVM1.ps1 - PowerShell script to deploy needed prerequisites on the Admin-VM
DATA/InstallPreReqsOnAVM2.ps1 - PowerShell script to deploy needed prerequisites on the Admin-VM
DATA/InstallPreReqsOnCC.ps1 - PowerShell script to deploy needed prerequisites on the Admin-VM

Module 3: _CConAWS-CCStuff:

Filename - Purpose
_CCOnAWS-CCStuff-CreateCCEntities.tf - Resource configuration and primary flow definition
_CCOnAWS-CCStuff-CreateCCEntities-variables.tf - Definition of variables
_CCOnAWS-CCStuff-CreateCCEntities.auto.tfvars.json - Setting the values of the variables
_CConAWS-CCStuff-Provider.tf - Provider definition and configuration
_CConAWS-CCStuff-Provider-variables.tf - Definition of variables
_CConAWS-CCStuff-Provider.auto.tfvars.json - Setting the values of the variables

Change the settings in the .json files according to your needs. The following prerequisites must be met before setting the corresponding values or running the Terraform workflow, to ensure a smooth and error-free build.

Prerequisites

Installing AWS.Tools for PowerShell and AWS CLI
In this guide, we use the AWS CLI and PowerShell cmdlets to determine further needed information. More information about the AWS CLI and its installation/configuration can be found at AWS Command Line Interface; more information about the PowerShell cmdlets for AWS can be found at Installing the AWS Tools for PowerShell on Windows.
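The *.auto.tfvars.json files use Terraform's standard JSON variable syntax: a flat JSON object mapping variable names to values. The variable names below are invented for illustration - use the names actually defined in each module's -variables.tf files:

```json
{
  "AWS-Region": "eu-central-1",
  "AWS-VPC-ID": "vpc-0f9XXXXXXXXXXXX",
  "AWS-Subnet-ID": "subnet-02eXXXXXXXXXX",
  "AWS-AMI-ID": "ami-0ad8b6fa068e0299a",
  "AWS-Instance-Type": "t3.large"
}
```

Terraform loads any file whose name ends in .auto.tfvars.json automatically during plan and apply, so no -var-file argument is needed.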
Examples:

PS C:\TACG> Install-AWSToolsModule AWS.Tools.EC2 -Force
Installing module AWS.Tools.EC2 version 4.1.533.0
PS C:\TACG> Install-AWSToolsModule AWS.Tools.IdentityManagement -Force
Installing module AWS.Tools.IdentityManagement version 4.1.533.0
PS C:\TACG> Install-AWSToolsModule AWS.Tools.SimpleSystemsManagement -Force
Installing module AWS.Tools.SimpleSystemsManagement version 4.1.533.0
PS C:\TACG>

Existing Amazon EC2 entities
We assume that the following resources already exist and are configured on Amazon EC2:

- A working tenant
- All needed rights for the IAM user on the tenant
- A VPC
- At least one subnet in the VPC
- A security group configured to allow inbound connections from the subnet and partially from the Internet: WinRM-HTTP, WinRM-HTTPS, UDP, DNS (UDP and TCP), ICMP (for testing purposes), HTTP, HTTPS, TCP (for testing purposes), RDP. No blocking rules for outbound connections should be in place.
- An access key with its secret (see the description of how to create the key later on)

We can get the needed information about the VPC by using PowerShell:

PS C:\TACG> Get-EC2VPC

CidrBlock                   : 172.31.0.0/16
CidrBlockAssociationSet     : {vpc-cidr-assoc-0a91XXXXXXXXXXX}
DhcpOptionsId               : dopt-0a71XXXXXXXXXXX
InstanceTenancy             : default
Ipv6CidrBlockAssociationSet : {}
IsDefault                   : True
OwnerId                     : 968XXXXXX
State                       : available
Tags                        : {}
VpcId                       : vpc-0f9XXXXXXXXXXXX

Note the VpcId and put it into the corresponding .auto.tfvars.json file.
We can get the needed information about one or more subnets, also by using PowerShell:

PS C:\TACG> Get-EC2Subnet

AssignIpv6AddressOnCreation   : False
AvailabilityZone              : eu-central-1b
AvailabilityZoneId            : euc1-az3
AvailableIpAddressCount       : 4091
CidrBlock                     : 172.31.32.0/20
CustomerOwnedIpv4Pool         :
DefaultForAz                  : True
EnableDns64                   : False
EnableLniAtDeviceIndex        : 0
Ipv6CidrBlockAssociationSet   : {}
Ipv6Native                    : False
MapCustomerOwnedIpOnLaunch    : False
MapPublicIpOnLaunch           : True
OutpostArn                    :
OwnerId                       : 968XXXXXX
PrivateDnsNameOptionsOnLaunch : Amazon.EC2.Model.PrivateDnsNameOptionsOnLaunch
State                         : available
SubnetArn                     : arn:aws:ec2:eu-central-1:968XXXXXX:subnet/subnet-02e91c49df134f849
SubnetId                      : subnet-02eXXXXXXXXXX
Tags                          : {Name}
VpcId                         : vpc-0f9XXXXXXXXXXXX

AssignIpv6AddressOnCreation   : False
AvailabilityZone              : eu-central-1a
AvailabilityZoneId            : euc1-az2
AvailableIpAddressCount       : 4089
CidrBlock                     : 172.31.16.0/20
CustomerOwnedIpv4Pool         :
DefaultForAz                  : True
EnableDns64                   : False
EnableLniAtDeviceIndex        : 0
Ipv6CidrBlockAssociationSet   : {}
Ipv6Native                    : False
MapCustomerOwnedIpOnLaunch    : False
MapPublicIpOnLaunch           : True
OutpostArn                    :
OwnerId                       : 968XXXXXX
PrivateDnsNameOptionsOnLaunch : Amazon.EC2.Model.PrivateDnsNameOptionsOnLaunch
State                         : available
SubnetArn                     : arn:aws:ec2:eu-central-1:968XXXXXX:subnet/subnet-07eXXXXXXXXXX
SubnetId                      : subnet-07eXXXXXXXXXX
Tags                          : {Name}
VpcId                         : vpc-0f9XXXXXXXXXXXX

AssignIpv6AddressOnCreation   : False
AvailabilityZone              : eu-central-1c
AvailabilityZoneId            : euc1-az1
AvailableIpAddressCount       : 4090
CidrBlock                     : 172.31.0.0/20
CustomerOwnedIpv4Pool         :
DefaultForAz                  : True
EnableDns64                   : False
EnableLniAtDeviceIndex        : 0
Ipv6CidrBlockAssociationSet   : {}
Ipv6Native                    : False
MapCustomerOwnedIpOnLaunch    : False
MapPublicIpOnLaunch           : True
OutpostArn                    :
OwnerId                       : 968XXXXXX
PrivateDnsNameOptionsOnLaunch : Amazon.EC2.Model.PrivateDnsNameOptionsOnLaunch
State                         : available
SubnetArn                     : arn:aws:ec2:eu-central-1:968XXXXXX:subnet/subnet-0359XXXXXXXXXXX
SubnetId                      : subnet-0359XXXXXXXXXXX
Tags                          : {Name}
VpcId                         : vpc-0f9XXXXXXXXXXXX

Note the SubnetId of the Availability Zone that you want to use and put it into the corresponding .auto.tfvars.json file.

Getting the region in Amazon EC2 where the resources will be deployed
The currently configured default region can be found, for example, by using the AWS CLI - open a PowerShell window and type:

PS C:\TACG> aws configure get region
eu-central-1
PS C:\TACG>

Write down the region, as we need to assign it to variables.

Getting the available AMI Image-IDs from Amazon EC2
We want to automatically deploy the virtual machines necessary for the DC and the CCs - so we need detailed configuration settings.

Set the credentials needed to allow PowerShell to access the EC2 tenant:

PS C:\TACG> Set-AWSCredential -AccessKey AKIXXXXXXXXXXXXXXXX -SecretKey RiXXXXXXXXXXXXXXXXXXXXXXXXXX -StoreAs default

Get the available AMI images:

PS C:\TACG> Initialize-AWSDefaultConfiguration -ProfileName default -Region eu-central-1
PS C:\TACG> Get-SSMLatestEC2Image -Path ami-windows-latest -ProfileName default -Region eu-central-1

Name                                                        Value
----                                                        -----
EC2LaunchV2-Windows_Server-2016-English-Full-Base           ami-05da8c5b8c31e1071
Windows_Server-2016-English-Full-SQL_2014_SP3_Enterprise    ami-0792b126a5682d6e8
Windows_Server-2016-German-Full-Base                        ami-04482c384c5f44eba
Windows_Server-2016-Japanese-Full-SQL_2016_SP3_Standard     ami-06bae50c6434d597c
Windows_Server-2016-Japanese-Full-SQL_2017_Web              ami-069867bf028ce1d11
Windows_Server-2019-English-Core-EKS_Optimized-1.25         ami-0dc34920ee17ff0c7
Windows_Server-2019-Italian-Full-Base                       ami-0f6d5ffbe2b4e6daa
Windows_Server-2022-Japanese-Full-SQL_2019_Enterprise       ami-0ce4c5ab9a9ee18e0
Windows_Server-2022-Portuguese_Brazil-Full-Base             ami-0fe6028dce619a01c
amzn2-ami-hvm-2.0.20191217.0-x86_64-gp2-mono                ami-0f7a4c9d36399c73f
Windows_Server-2016-English-Deep-Learning                   ami-0873c2c3320a70d5b
Windows_Server-2016-Japanese-Full-SQL_2016_SP3_Web          ami-08565efb3c4b556ba
Windows_Server-2016-Korean-Full-Base                        ami-08a0270377841480d
Windows_Server-2019-English-STIG-Core                       ami-0b4eb638a465efce5
Windows_Server-2019-French-Full-Base                        ami-0443c855ecad9de50
Windows_Server-2022-English-Full-Base                       ami-0ad8b6fa068e0299a
...

Note the value of the AMI that you want to use - for example ami-0ad8b6fa068e0299a.

Getting the available VM sizes from Amazon EC2
We need to determine the available VM sizes. A PowerShell script helps us list the available instance types on Amazon EC2:

PS C:\TACG> (Get-EC2InstanceType -Region eu-central-1).Count
694

We need to filter the results to narrow down usable instances - we want instances with a maximum of 4 vCPUs and a maximum of
8 GB RAM:

PS C:\TACG> Get-EC2InstanceType -Region eu-central-1 |
    Select-Object -Property InstanceType,
        @{Name="vCPUs"; Expression={$_.VCpuInfo.DefaultVCpus}},
        @{Name="Memory in GB"; Expression={$_.MemoryInfo.SizeInMiB / 1024}} |
    Where-Object {$_.vCPUs -le 4 -and $_."Memory in GB" -le 8} |
    Sort-Object InstanceType |
    Format-Table InstanceType, vCPUs, "Memory in GB"

InstanceType vCPUs Memory in GB
------------ ----- ------------
a1.large         2 4
a1.medium        1 2
a1.xlarge        4 8
c3.large         2 3,75
c3.xlarge        4 7,5
c4.large         2 3,75
c4.xlarge        4 7,5
c5.large         2 4
c5.xlarge        4 8
c5a.large        2 4
c5a.xlarge       4 8
c5ad.large       2 4
c5ad.xlarge      4 8
c5d.large        2 4
c5d.xlarge       4 8
c5n.large        2 5,25
c6a.large        2 4
c6a.xlarge       4 8
c6g.large        2 4
c6g.medium       1 2
c6g.xlarge       4 8
c6gd.large       2 4
c6gd.medium      1 2
c6gd.xlarge      4 8
c6gn.large       2 4
c6gn.medium      1 2
c6gn.xlarge      4 8
c6i.large        2 4
c6i.xlarge       4 8
c6id.large       2 4
c6id.xlarge      4 8
c6in.large       2 4
c6in.xlarge      4 8
c7a.large        2 4
c7a.medium       1 2
c7a.xlarge       4 8
c7g.large        2 4
c7g.medium       1 2
c7g.xlarge       4 8
c7gd.large       2 4
c7gd.medium      1 2
c7gd.xlarge      4 8
c7i.large        2 4
c7i.xlarge       4 8
t2.large         2 8
t2.medium        2 4
t2.micro         1 1
t2.nano          1 0,5
t2.small         1 2
t3.large         2 8
t3.medium        2 4
t3.micro         2 1
t3.nano          2 0,5
t3.small         2 2
t3a.large        2 8
t3a.medium       2 4
t3a.micro        2 1
t3a.nano         2 0,5
t3a.small        2 2
t4g.large        2 8
t4g.medium       2 4
t4g.micro        2 1
t4g.nano         2 0,5
t4g.small        2 2
...
PS C:\TACG>

Note the InstanceType parameter of the instance size that you want to use. We use t3.medium and t3.large.
Be sure to use the Citrix Terraform provider version 0.5.3 or higher; otherwise, you need to pass the instance type to the Terraform provider in a different format (examples):

Amazon EC2 syntax - Needed format:
t3.nano - T3 Nano Instance
t3.small - T3 Small Instance
t3.medium - T3 Medium Instance
t3.large - T3 Large Instance
e2.small - E2 Small Instance
e2.medium - E2 Medium Instance

Example: Checking the current quotas for the instances (T-family) which we plan to use in this guide:

PS C:\TACG> Install-AWSToolsModule AWS.Tools.ServiceQuotas
PS C:\TACG> Get-SQServiceList

ServiceCode     ServiceName
-----------     -----------
AWSCloudMap     AWS Cloud Map
access-analyzer Access Analyzer
acm             AWS Certificate Manager (ACM)
acm-pca         AWS Private Certificate Authority
...
ec2             Amazon Elastic Compute Cloud (Amazon EC2)
ec2-ipam        IPAM
ec2fastlaunch   EC2 Fast Launch
...

PS C:\TACG> Get-SQServiceQuota -ServiceCode ec2 -QuotaCode L-1216C47A

Adjustable          : True
ErrorReason         :
GlobalQuota         : False
Period              :
QuotaAppliedAtLevel : ACCOUNT
QuotaArn            : arn:aws:servicequotas:eu-central-1:9XXXXXXXXX:ec2/L-1216C47A
QuotaCode           : L-1216C47A
QuotaContext        :
QuotaName           : Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances
ServiceCode         : ec2
ServiceName         : Amazon Elastic Compute Cloud (Amazon EC2)
Unit                : None
UsageMetric         : Amazon.ServiceQuotas.Model.MetricInfo
Value               : 256

PS C:\TACG>

The output of the cmdlet shows that we have enough available resources at the instance level. The quotas can be increased using the Amazon EC2 console or PowerShell. More information about increasing vCPU quotas can be found here: Amazon Elastic Compute Cloud (Amazon EC2) quotas.
Further Software Components for Configuration and Deployment
The Terraform deployment needs current versions of the following software components:

- Citrix Cloud Connector installer: cwcconnector.exe. Download the Citrix Cloud Connector installer.
- Citrix Remote PowerShell SDK installer: CitrixPoshSdk.exe. Download the Citrix Remote PowerShell SDK installer.

These components are required during the workflow; the Terraform engine looks for these files. In this guide we assume that the necessary software can be downloaded from a storage repository - we use an Azure Storage blob to which all necessary software is uploaded. The URIs of the storage repository can be set in the corresponding variables:

For the Cloud Connector:
"CC_Install_CWCURI": "https://wmwblob.blob.core.windows.net/tfdata/cwcconnector.exe"

For the Remote PowerShell SDK:
"CC_Install_RPoSHURI": "https://wmwblob.blob.core.windows.net/tfdata/CitrixPoshSdk.exe"

Creating an Access Key with a Secret for Amazon EC2 authentication in the AWS CLI and/or AWS.Tools for PowerShell
Access keys are long-term credentials for an IAM user or the root user, used for authentication and authorization in EC2. They consist of two parts: an Access Key ID and a Secret Access Key. Both are needed for authentication and authorization. Further information about access key management and configuration can be found in Managing access keys for IAM users.

The needed security information for the IAM policies is stored in AWS Secrets Manager. Further information about Secrets Manager and its configuration can be found at AWS Secrets Manager.

Creating a Secure Client in Citrix Cloud
The Secure Client in Citrix Cloud is the counterpart of the Access Key in Amazon EC2. It is used for authentication. API clients in Citrix Cloud are always tied to one administrator and one customer. API clients are not visible to other administrators. If you want to access more than one customer, you must create API clients within each customer.
API clients are automatically restricted to the rights of the administrator who created them. For example, if an administrator is restricted to access only notifications, then the administrator's API clients have the same restrictions:

- Reducing an administrator's access also reduces the access of the API clients owned by that administrator.
- Removing an administrator's access also removes the administrator's API clients.

To create an API client, select the Identity and Access Management option from the menu. If this option does not appear, you do not have adequate permissions to create an API client; contact your administrator to get the required permissions.

Open Identity and Access Management in Web Studio:

Creating an API Client in Citrix Cloud

Click API Access, Secure Clients and enter a name in the textbox next to the Create Client button. After entering a name, click Create Client:

Creating an API Client in Citrix Cloud

After the Secure Client is created, copy and note down the displayed ID and Secret:

Creating an API Client in Citrix Cloud

The Secret is only visible during creation - after closing the window you cannot retrieve it anymore. The client-id and client-secret fields are needed by the Citrix Terraform provider. The customer-id field, which is also needed, can be found in your Citrix Cloud details. Put the values in the corresponding .auto.tfvars.json file:

...
"cc-customerId": "uzXXXXXXXX",
"cc-apikey-clientId": "f4eXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"cc-apikey-clientSecret": "VXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"cc-apikey-type": "client_credentials",
...
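These values are what the Citrix Terraform provider reads at initialization. A minimal provider sketch, assuming the variables are declared with the same names as the keys in the variables file; the attribute names follow the citrix/citrix provider in the 0.5.x line and should be verified against the provider documentation:

```hcl
terraform {
  required_providers {
    citrix = {
      source  = "citrix/citrix"
      version = ">= 0.5.3"
    }
  }
}

# Secure Client credentials created above; never commit the
# variables file containing the secret to version control.
provider "citrix" {
  customer_id   = var.cc-customerId
  client_id     = var.cc-apikey-clientId
  client_secret = var.cc-apikey-clientSecret
}
```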
It is important to set the URI to call and the required parameters correctly. The URI must follow this syntax: https://api-us.cloud.com/cctrustoauth2/{customerid}/tokens/clients, where {customerid} is the Customer ID you obtained from the Account Settings page. If your Customer ID is, for example, 1234567890, the URI is https://api-us.cloud.com/cctrustoauth2/1234567890/tokens/clients.

In this example, we use the Postman application to create a Bearer Token: Paste the correct URI into Postman's address bar, select POST as the method, and verify the settings of the API call.

Creating a Bearer Token using Postman

If everything is set correctly, Postman shows a response containing a JSON-formatted body with the Bearer Token in the field access_token. The token is normally valid for 3600 seconds. Put the values in the corresponding .auto.tfvars.json file:

...
"cc-apikey-type": "client_credentials",
"cc-apikey-bearer": "CWSAuth bearer=eyJhbGciOiJSUzI1NiIsI...0q0IW7SZFVzeBittWnEwTYOZ7Q "
...
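The guide later issues its remaining REST-API calls through the mastercard/restapi provider. A sketch of how the Bearer Token from the variable above can be wired into that provider's default headers - the endpoint region and header set are assumptions based on the calls shown later in this guide:

```hcl
# All restapi_object resources inherit these settings; the token in
# cc-apikey-bearer must already include the "CWSAuth bearer=" prefix.
provider "restapi" {
  uri = "https://api-eu.cloud.com"
  headers = {
    "Authorization"     = var.cc-apikey-bearer
    "Citrix-CustomerId" = var.cc-customerId
    "Accept"            = "application/json"
  }
}
```

Because the token expires after roughly an hour, long-running applies may need a freshly requested token.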
You can also use PowerShell to request a Bearer Token - for this you need a valid Secure Client stored in Citrix Cloud:

asnp Citrix*
$key = "f4eXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
$secret = "VJCXXXXXXXXXXXXXX"
$customer = "uXXXXXXXX"
$XDStoredCredentials = Set-XDCredentials -StoreAs default -ProfileType CloudApi -CustomerId $customer -APIKey $key -SecretKey $secret
$auth = Get-XDAuthentication
$GLOBAL:XDAuthToken | Out-File "<Path where to store the Token>\BT.txt"

Module 1: Create the initially needed Resources on Amazon EC2

This module is split into the following configuration parts:

Creating the initially needed resources on Amazon EC2:
- Creating all needed IAM roles on Amazon EC2
- Creating all needed IAM instance profiles on Amazon EC2
- Creating all needed IAM policies on Amazon EC2
- Creating all needed Secrets Manager configurations on Amazon EC2
- Creating all needed DHCP configurations on Amazon EC2
- Creating a Windows Server 2022-based Master Image VM used for deploying the Machine Catalog in step 3
- Creating two Windows Server 2022-based VMs which will be used as Cloud Connector VMs in step 2
- Creating a Windows Server 2022-based VM acting as an administrative workstation for running Terraform steps 2 and 3 - this is necessary because WinRM is used for further configuration and deployment in steps 2 and 3
- Creating all necessary scripts for joining the VMs to the existing sub-domain
- Putting the VMs into the existing sub-domain
- Fetching and saving a valid Bearer Token

All these steps are done automatically by Terraform. Please make sure you have configured the variables according to your needs.
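Module 1 talks to Amazon EC2 using the Access Key created earlier. A minimal provider sketch - the variable names are hypothetical, and passing keys through variables is shown for brevity only (environment variables or a shared credentials file are preferable):

```hcl
# Authentication for all aws_* resources created in Module 1.
provider "aws" {
  region     = "eu-central-1"            # region used throughout this guide
  access_key = var.aws_access_key_id     # hypothetical variable names
  secret_key = var.aws_secret_access_key
}
```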
The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and, if no errors occur, terraform apply.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> terraform init

Initializing the backend...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding mastercard/restapi versions matching "1.18.2"...
- Finding citrix/citrix versions matching ">= 0.5.0"...
- Finding hashicorp/aws versions matching ">= 5.4.0"...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/template...
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing mastercard/restapi v1.18.2...
- Installed mastercard/restapi v1.18.2 (self-signed, key ID DCB8C431D71C30AB)
- Installing citrix/citrix v0.5.2...
- Installed citrix/citrix v0.5.2 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)
- Installing hashicorp/aws v5.41.0...
- Installed hashicorp/aws v5.41.0 (signed by HashiCorp)

Partner and community providers are signed by their developers. If you'd like to know more about provider signing, you can read about it here: https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure.
All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary. PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> terraform plan data.template_file.Add-EC2InstanceToDomainScriptCC2: Reading... data.template_file.Add-EC2InstanceToDomainScriptWMI: Reading... data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Reading... data.template_file.Add-EC2InstanceToDomainScriptCC1: Reading... data.template_file.Add-EC2InstanceToDomainScriptWMI: Read complete after 0s [id=85de6bac9e35231cbd60a4c1636a554940abb789938916a626a5193f27f22498] data.template_file.Add-EC2InstanceToDomainScriptCC1: Read complete after 0s [id=24ee722eca6982b33be472de4f84edbae000d0bff0a139dec2ce97c8ea14a0ca] data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Read complete after 0s [id=91b3ae99f8d4a2effb377f35dda69a583de194739c8191ee665c96e663ad8615] data.template_file.Add-EC2InstanceToDomainScriptCC2: Read complete after 0s [id=15075f0d18ca3e200ab603e397339245a8ff055fd688facfc5165dd5e455d151] data.aws_iam_policy_document.ec2_assume_role: Reading... data.aws_iam_policy_document.ec2_assume_role: Read complete after 0s [id=2851119427] Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols: + create <= read (data resources) Terraform will perform the following actions: # data.local_file.Retrieve_BT will be read during apply # (depends on a resource or a module with changes pending) <= data "local_file" "Retrieve_BT" { + content = (known after apply) + content_base64 = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + filename = "./GetBT.txt" + id = (known after apply) } # aws_iam_instance_profile.ec2_profile will be created + resource "aws_iam_instance_profile" "ec2_profile" { + arn = (known after apply) + create_date = (known after apply) + id = (known after apply) + name = "ec2-profile" + name_prefix = (known after apply) + path = "/" + role = "ec2-iam-role" + tags_all = (known after apply) + unique_id = (known after apply) } # aws_iam_policy.secret_manager_ec2_policy will be created + resource "aws_iam_policy" "secret_manager_ec2_policy" { + arn = (known after apply) + description = "Secret Manager EC2 policy" + id = (known after apply) + name = "secret-manager-ec2-policy" + name_prefix = (known after apply) + path = "/" + policy = jsonencode( { + Statement = [ + { + Action = [ + "secretsmanager:*", ] + Effect = "Allow" + Resource = "*" }, ] + Version = "2012-10-17" } ) + policy_id = (known after apply) + tags_all = (known after apply) } # aws_iam_policy_attachment.api_secret_manager_ec2_attach will be created + resource "aws_iam_policy_attachment" "api_secret_manager_ec2_attach" { + id = (known after apply) + name = "secret-manager-ec2-attachment" + policy_arn = (known after apply) + roles = (known after apply) } # aws_iam_policy_attachment.ec2_attach1 will be created + resource "aws_iam_policy_attachment" "ec2_attach1" { + id = (known after apply) + name = "ec2-iam-attachment" + 
policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" + roles = (known after apply) } # aws_iam_policy_attachment.ec2_attach2 will be created + resource "aws_iam_policy_attachment" "ec2_attach2" { + id = (known after apply) + name = "ec2-iam-attachment" + policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM" + roles = (known after apply) } # aws_iam_role.ec2_iam_role will be created + resource "aws_iam_role" "ec2_iam_role" { + arn = (known after apply) + assume_role_policy = jsonencode( { + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "ec2.amazonaws.com" } }, ] + Version = "2012-10-17" } ) + create_date = (known after apply) + force_detach_policies = false + id = (known after apply) + managed_policy_arns = (known after apply) + max_session_duration = 3600 + name = "ec2-iam-role" + name_prefix = (known after apply) + path = "/" + tags_all = (known after apply) + unique_id = (known after apply) } # aws_instance.AdminVM will be created + resource "aws_instance" "AdminVM" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.large" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after 
apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.107" + public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after apply) + subnet_id = "subnet-07e1XXXXXXXXXX" + tags = { + "Name" = "TACG-AWS-AVM" } + tags_all = { + "Name" = "TACG-AWS-AVM" } + tenancy = (known after apply) + user_data = "975296c878XXXXXXXXXX" + user_data_base64 = (known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_instance.CC1 will be created + resource "aws_instance" "CC1" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.medium" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.104" + 
public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after apply) + subnet_id = "subnet-07e1XXXXXXXXXX" + tags = { + "Name" = "TACG-AWS-CC1" } + tags_all = { + "Name" = "TACG-AWS-CC1" } + tenancy = (known after apply) + user_data = "5daf6ab616e8eXXXXXXXXXX" + user_data_base64 = (known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_instance.CC2 will be created + resource "aws_instance" "CC2" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.medium" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.105" + public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after 
apply) + subnet_id = "subnet-07e1XXXXXXXXXX" + tags = { + "Name" = "TACG-AWS-CC2" } + tags_all = { + "Name" = "TACG-AWS-CC2" } + tenancy = (known after apply) + user_data = "71b8c58dcf57XXXXXXXXXX" + user_data_base64 = (known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_instance.WMI will be created + resource "aws_instance" "WMI" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.medium" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.106" + public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after apply) + subnet_id = "subnet-07e168f0c2a28edf3" + tags = { + "Name" = "TACG-AWS-WMI" } + tags_all = { + "Name" = "TACG-AWS-WMI" } + tenancy = (known after apply) + user_data = "bd599dcdaa3dXXXXXXXXXXX" + user_data_base64 = 
(known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_vpc_dhcp_options.vpc-dhcp-options will be created + resource "aws_vpc_dhcp_options" "vpc-dhcp-options" { + arn = (known after apply) + domain_name_servers = [ + "172.31.22.103", ] + id = (known after apply) + owner_id = (known after apply) + tags_all = (known after apply) } # aws_vpc_dhcp_options_association.dns_resolver will be created + resource "aws_vpc_dhcp_options_association" "dns_resolver" { + dhcp_options_id = (known after apply) + id = (known after apply) + vpc_id = "vpc-0f9aXXXXXXXXXX" } # local_file.GetBearerToken will be created + resource "local_file" "GetBearerToken" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "./GetBT.ps1" + id = (known after apply) } # terraform_data.GetBT will be created + resource "terraform_data" "GetBT" { + id = (known after apply) } Plan: 14 to add, 0 to change, 0 to destroy. PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> terraform apply data.template_file.Add-EC2InstanceToDomainScriptCC2: Reading... data.template_file.Add-EC2InstanceToDomainScriptWMI: Reading... data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Reading... data.template_file.Add-EC2InstanceToDomainScriptCC1: Reading... 
data.template_file.Add-EC2InstanceToDomainScriptWMI: Read complete after 0s [id=85de6bac9e35231cbd60a4c1636a554940abb789938916a626a5193f27f22498] data.template_file.Add-EC2InstanceToDomainScriptCC1: Read complete after 0s [id=24ee722eca6982b33be472de4f84edbae000d0bff0a139dec2ce97c8ea14a0ca] data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Read complete after 0s [id=91b3ae99f8d4a2effb377f35dda69a583de194739c8191ee665c96e663ad8615] data.template_file.Add-EC2InstanceToDomainScriptCC2: Read complete after 0s [id=15075f0d18ca3e200ab603e397339245a8ff055fd688facfc5165dd5e455d151] data.aws_iam_policy_document.ec2_assume_role: Reading... data.aws_iam_policy_document.ec2_assume_role: Read complete after 0s [id=2851119427] Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: + create <= read (data resources) Terraform will perform the following actions: # data.local_file.Retrieve_BT will be read during apply # (depends on a resource or a module with changes pending) <= data "local_file" "Retrieve_BT" { + content = (known after apply) + content_base64 = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + filename = "./GetBT.txt" + id = (known after apply) } ... ** Output shortened **... # terraform_data.GetBT will be created + resource "terraform_data" "GetBT" { + id = (known after apply) } Plan: 14 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes aws_instance.AdminVM: Creating... aws_instance.AdminVM: Still creating... [10s elapsed] aws_instance.AdminVM: Still creating... 
[20s elapsed]
aws_instance.AdminVM: Creation complete after 22s [id=i-0ad3352d673db8068]
... ** Output shortened ** ...
aws_instance.WMI: Creating...
aws_instance.WMI: Still creating... [10s elapsed]
aws_instance.WMI: Still creating... [20s elapsed]
aws_instance.WMI: Still creating... [30s elapsed]
aws_instance.WMI: Creation complete after 34s [id=i-0ad3352d673db8068]

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation>

As no errors occurred, Terraform has completed the creation and partial configuration of the relevant prerequisites on Amazon EC2.

Example of successful creation:

All VMs were successfully created and registered on the Domain Controller:

Now the next step can be started.

Module 2: Install and Configure all Resources in Amazon EC2

This module is split into the following configuration parts:

Configuring the three previously created virtual machines on Amazon EC2:
- Installing the needed software on the CCs
- Installing the needed software on the Admin-VM

Creating the necessary resources in Citrix Cloud:
- Creating a Resource Location in Citrix Cloud
- Configuring the two CCs as Cloud Connectors
- Registering the two CCs in the newly created Resource Location

The Citrix Terraform provider currently does not support creating a Resource Location in Citrix Cloud. Therefore we use a PowerShell script to create it by means of a REST-API call. Please make sure you have configured the variables according to your needs by using the corresponding .auto.tfvars.json file.

Terraform runs various scripts before creating the configuration of the CCs to determine needed information such as the Site ID, the Zone ID, and the Resource Location ID. These IDs are used in other scripts or files - for example, the parameter file for deploying the Cloud Connector needs the Resource Location ID of the Resource Location which Terraform creates automatically.
Unfortunately the REST-API provider does not return the ID of the newly created Resource Location, so we need to run a PowerShell script after the creation of the Resource Location. Examples of the necessary scripts:

First, Terraform writes the configuration file without the Resource Location ID:

#### Create CWC-Installer configuration file based on variables and save it into Transfer directory
resource "local_file" "CWC-Configuration" {
  depends_on = [restapi_object.CreateRL]
  content = jsonencode(
    {
      "customerName"         = "${var.CC_CustomerID}",
      "clientId"             = "${var.CC_APIKey-ClientID}",
      "clientSecret"         = "${var.CC_APIKey-ClientSecret}",
      "resourceLocationId"   = "XXXXXXXXXX",
      "acceptTermsOfService" = true
    }
  )
  filename = "${var.CC_Install_LogPath}/DATA/cwc.json"
}

After installing further prerequisites, Terraform runs a PowerShell script to determine the needed ID and update the configuration file for the CWC installer:

#### Create PowerShell file for determining the correct RL-ID
resource "local_file" "InstallPreReqsOnAVM2-ps1" {
  content = <<-EOT
  #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context
  $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
  $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
  $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
  $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
  Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
    $path = "${var.CC_Install_LogPath}"
    # Correct the Resource Location ID in the cwc.json file
    $requestUri = "https://api-eu.cloud.com/resourcelocations"
    $headers = @{ "Accept" = "application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}" }
    $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | ConvertTo-Json
    $RLs = ConvertFrom-Json $response
    $RLFiltered = $RLs.items | Where-Object name -in "${var.CC_RestRLName}"
    Add-Content ${var.CC_Install_LogPath}/log.txt $RLFiltered
    $RLID = $RLFiltered.id
    $OrigContent = Get-Content ${var.CC_Install_LogPath}/DATA/cwc.json
    Add-Content ${var.CC_Install_LogPath}/log.txt $RLID
    Add-Content ${var.CC_Install_LogPath}/log.txt $OrigContent
    $CorrContent = $OrigContent.Replace('XXXXXXXXXX', $RLID) | Out-File -FilePath ${var.CC_Install_LogPath}/DATA/cwc.json
    Add-Content ${var.CC_Install_LogPath}/DATA/GetRLID.txt $RLID
    Add-Content ${var.CC_Install_LogPath}/log.txt "`ncwc.json corrected."
    Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript completed."
  }
  EOT
  filename = "${path.module}/DATA/InstallPreReqsOnAVM2.ps1"
}

The Terraform configuration contains some idle time slots to make sure that background operations on Amazon EC2 or on the VMs can complete before the next configuration step starts. Elapsed configuration times vary with the current load on the Amazon EC2 systems.

Before running Terraform, we cannot see the Resource Location:

The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and, if no errors occur, terraform apply.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> terraform init

Initializing the backend...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding latest version of hashicorp/random...
- Finding latest version of hashicorp/time...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/null...
- Finding hashicorp/aws versions matching ">= 5.4.0"...
- Finding mastercard/restapi versions matching "1.18.2"...
- Finding citrix/citrix versions matching ">= 0.5.2"...
- Installing citrix/citrix v0.5.2...
- Installed citrix/citrix v0.5.2 (signed by a HashiCorp partner, key ID 25D62DD8407EA386) - Installing hashicorp/random v3.6.0... - Installed hashicorp/random v3.6.0 (signed by HashiCorp) - Installing hashicorp/time v0.11.1... - Installed hashicorp/time v0.11.1 (signed by HashiCorp) - Installing hashicorp/local v2.5.1... - Installed hashicorp/local v2.5.1 (signed by HashiCorp) - Installing hashicorp/null v3.2.2... - Installed hashicorp/null v3.2.2 (signed by HashiCorp) - Installing hashicorp/aws v5.41.0... - Installed hashicorp/aws v5.41.0 (signed by HashiCorp) - Installing mastercard/restapi v1.18.2... - Installed mastercard/restapi v1.18.2 (self-signed, key ID DCB8C431D71C30AB) Partner and community providers are signed by their developers. If you'd like to know more about provider signing, you can read about it here: https://www.terraform.io/docs/cli/plugins/signing.html Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future. Terraform has been successfully initialized! You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary. PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> terraform plan Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols: + create <= read (data resources) Terraform will perform the following actions: # data.local_file.input_site will be read during apply # (depends on a resource or a module with changes pending) <= data "local_file" "input_site" { + content = (known after apply) + content_base64 = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + filename = "c:/temp/xdinst/DATA/GetSiteID.txt" + id = (known after apply) } ... ** Output shortened **... # terraform_data.SiteID will be created + resource "terraform_data" "SiteID" { + id = (known after apply) } # terraform_data.ZoneID will be created + resource "terraform_data" "ZoneID" { + id = (known after apply) } # time_sleep.wait_300_seconds will be created + resource "time_sleep" "wait_300_seconds" { + create_duration = "300s" + id = (known after apply) } Plan: 18 to add, 0 to change, 0 to destroy. ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now. PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> terraform apply Terraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols: + create <= read (data resources) Terraform will perform the following actions: # data.local_file.input_site will be read during apply # (depends on a resource or a module with changes pending) <= data "local_file" "input_site" { + content = (known after apply) + content_base64 = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + filename = "c:/temp/xdinst/DATA/GetSiteID.txt" + id = (known after apply) } # local_file.CWC-Configuration will be created + resource "local_file" "CWC-Configuration" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "c:/temp/xdinst/DATA/cwc.json" + id = (known after apply) } ... ** Output shortened **... # terraform_data.SiteID will be created + resource "terraform_data" "SiteID" { + id = (known after apply) } # terraform_data.ZoneID will be created + resource "terraform_data" "ZoneID" { + id = (known after apply) } # time_sleep.wait_300_seconds will be created + resource "time_sleep" "wait_300_seconds" { + create_duration = "300s" + id = (known after apply) } Plan: 18 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes time_sleep.wait_300_seconds: Creating... random_uuid.IDforCCRL: Creating... random_uuid.IDforCCRL: Creation complete after 0s [id=76ef126a-4b14-cc24-e067-62dbf8c98d5c] local_file.InstallPreReqsOnCC-ps1: Creating... 
local_file.InstallPreReqsOnAVM1-ps1: Creating...
local_file.InstallPreReqsOnAVM2-ps1: Creating...
local_file.InstallPreReqsOnAVM1-ps1: Creation complete after 0s [id=c4e0ae63ee7a4c5ce89827fcf741942c19ae7aa0]
local_file.InstallPreReqsOnCC-ps1: Creation complete after 0s [id=1927ce7666e1be3a10049619e515d97c2f1e031d]
restapi_object.CreateRL: Creating...
local_file.InstallPreReqsOnAVM2-ps1: Creation complete after 0s [id=dccb6ef781887e459e04d6c3866871fbc11d9868]
restapi_object.CreateRL: Creation complete after 1s [id=76ef126a-4b14-cc24-e067-62dbf8c98d5c]
local_file.CWC-Configuration: Creating...
local_file.CWC-Configuration: Creation complete after 0s [id=0257d2b92f197fa4d043b6cd4c4959be284dddc2]
time_sleep.wait_300_seconds: Still creating... [10s elapsed]
time_sleep.wait_300_seconds: Still creating... [20s elapsed]
time_sleep.wait_300_seconds: Still creating... [30s elapsed]
time_sleep.wait_300_seconds: Still creating... [40s elapsed]
time_sleep.wait_300_seconds: Still creating... [50s elapsed]
... ** Output shortened **...
time_sleep.wait_300_seconds: Still creating... [4m31s elapsed]
time_sleep.wait_300_seconds: Still creating... [4m41s elapsed]
time_sleep.wait_300_seconds: Still creating... [4m51s elapsed]
time_sleep.wait_300_seconds: Creation complete after 5m0s [id=2024-03-15T09:29:54Z]
local_file.GetSiteIDScript: Creating...
local_file.GetSiteIDScript: Creation complete after 0s [id=11e3d863bb343b8e10beca1d1b8d2e1918aff757]
terraform_data.SiteID: Creating...
terraform_data.SiteID: Provisioning with 'local-exec'...
terraform_data.SiteID (local-exec): Executing: ["PowerShell" "-File" "GetSiteID.ps1"]
null_resource.UploadRequiredComponentsToAVM: Creating...
null_resource.UploadRequiredComponentsToAVM: Provisioning with 'file'...
#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>
terraform_data.SiteID: Creation complete after 2s [id=d8069268-d023-dc1f-b6e3-5eb4a96d7e34]
data.local_file.input_site: Reading...
data.local_file.input_site: Read complete after 0s [id=f684114a2b93cc095c4ac5f81999ee1a111d53b9]
local_file.GetZoneIDScript: Creating...
local_file.GetZoneIDScript: Creation complete after 0s [id=acf533cb047cc8f963f8bf53b792f236eb8d9cd3]
terraform_data.ZoneID: Creating...
terraform_data.ZoneID: Provisioning with 'local-exec'...
terraform_data.ZoneID (local-exec): Executing: ["PowerShell" "-File" "GetZoneID.ps1"]
... ** Output shortened **...
null_resource.UploadRequiredComponentsToAVM: Provisioning with 'file'...
terraform_data.ZoneID: Creation complete after 0s [id=560905f8-976a-56ff-a73a-95f287a69761]
... ** Output shortened **...
null_resource.UploadRequiredComponentsToAVM: Creation complete after 3s [id=1331150174844471895]
null_resource.CallRequiredScriptsOnAVM1: Creating...
null_resource.CallRequiredScriptsOnAVM1: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnAVM1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec): Connected!
... ** Output shortened **...
null_resource.CallRequiredScriptsOnAVM1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/InstallPreReqsOnAVM1.ps1
null_resource.CallRequiredScriptsOnAVM1: Still creating... [10s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [20s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [30s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [40s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [50s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m0s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m11s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m21s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m31s elapsed]
... ** Output shortened **...
null_resource.CallRequiredScriptsOnAVM1: Creation complete after 1m32s [id=1765564400293269918]
null_resource.CallRequiredScriptsOnAVM2: Creating...
null_resource.CallRequiredScriptsOnAVM2: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnAVM2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec): Connected!
... ** Output shortened **...
null_resource.CallRequiredScriptsOnAVM2 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/InstallPreReqsOnAVM2.ps1
... ** Output shortened **...
null_resource.CallRequiredScriptsOnAVM2: Creation complete after 3s [id=1571484748961023525]
null_resource.UploadRequiredComponentsToCC1: Creating...
null_resource.UploadRequiredComponentsToCC2: Creating...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC2: Creation complete after 8s [id=2593060114553604983]
null_resource.CallRequiredScriptsOnCC2: Creating...
null_resource.CallRequiredScriptsOnCC2: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connected!
... ** Output shortened **...
null_resource.CallRequiredScriptsOnCC2 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/InstallPreReqsOnCC.ps1}
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Windows PowerShell
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Processing -File 'c:/temp/xdinst/InstallPreReqsOnCC.ps1}'
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Copyright (C) Microsoft Corporation. All rights reserved.
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
null_resource.UploadRequiredComponentsToCC1: Still creating... [10s elapsed]
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Creation complete after 15s [id=2911410724639395293]
null_resource.CallRequiredScriptsOnCC1: Creating...
null_resource.CallRequiredScriptsOnCC1: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connected!
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Creation complete after 7s [id=8529594446213212779]
null_resource.CallRequiredScriptsOnCC1: Creating...
null_resource.CallRequiredScriptsOnCC1: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   CACert: false
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC2: Creation complete after 8s [id=5071991036813727940]
null_resource.CallRequiredScriptsOnCC2: Creating...
null_resource.CallRequiredScriptsOnCC2: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connected!
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connected!
... ** Output shortened **...
null_resource.CallRequiredScriptsOnCC1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/InstallPreReqsOnCC.ps1
... ** Output shortened **...

Apply complete! Resources: 18 added, 0 changed, 0 destroyed.
PS C:\TACG\_CCOnAWS\_CCOnAWS-Install>

This configuration completes the creation and configuration of all initial resources:

Installing the needed software on the CCs
Creating a Resource Location in Citrix Cloud
Configuring the 2 CCs as Cloud Connectors
Registering the 2 CCs in the newly created Resource Location

After all scripts have run successfully, the new Resource Location and the 2 Cloud Connectors bound to it are visible in Citrix Cloud.

The environment is now ready to deploy a Machine Catalog and a Delivery Group using Module 3.

Module 3: Create all Resources in Amazon EC2 and Citrix Cloud

This module is split into the following configuration parts:

Creating a Hypervisor Connection to Amazon EC2 and a Hypervisor Pool
Creating a Machine Catalog (MC) in the newly created Resource Location
Creating a Delivery Group (DG) based on the MC in the newly created Resource Location
Deploying some example policies using Terraform

The Terraform configuration contains some idle time slots to make sure that background operations on Amazon EC2 or the VMs can complete before the next configuration steps run. Elapsed configuration times vary noticeably with the load on the Amazon EC2 systems.

Before Terraform can create the Hypervisor Connection and the Hypervisor Pool, it needs to retrieve the Site ID and Zone ID of the newly created Resource Location. As the Citrix Terraform Provider currently has no Cloud-level functionality implemented, Terraform relies on PowerShell scripts to retrieve these IDs. Terraform created the necessary scripts with all needed variables, saved them, and ran them in Module 2. After retrieving the IDs, Terraform configures a Hypervisor Connection to Amazon EC2 and a Hypervisor Resource Pool associated with the Hypervisor Connection. When these prerequisites are completed, the Machine Catalog is created.
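The ID-retrieval pattern described above can be sketched in Terraform as follows. This is a simplified sketch: the resource names and the output file path mirror the plan output shown earlier, while the template file and variable names are illustrative assumptions.

```hcl
# Render the PowerShell script that queries Citrix Cloud for the Site ID.
# The template file and variable names below are assumptions for illustration.
resource "local_file" "GetSiteIDScript" {
  filename = "GetSiteID.ps1"
  content = templatefile("${path.module}/GetSiteID.ps1.tftpl", {
    customer_id = var.CC_CustomerID # assumed variable
    client_id   = var.CC_ClientID   # assumed variable
  })
}

# Run the script after the Cloud Connectors have had time to register.
resource "terraform_data" "SiteID" {
  depends_on = [local_file.GetSiteIDScript, time_sleep.wait_300_seconds]

  provisioner "local-exec" {
    interpreter = ["PowerShell", "-File"]
    command     = "GetSiteID.ps1"
  }
}

# The script writes the Site ID to a text file; read it back into Terraform.
data "local_file" "input_site" {
  filename   = "c:/temp/xdinst/DATA/GetSiteID.txt"
  depends_on = [terraform_data.SiteID]
}

# The Site ID is then available as trimspace(data.local_file.input_site.content)
# and can be passed to the resources created in Module 3.
```

Because the file is read through a data source that depends on a resource with pending changes, Terraform marks it as "read during apply" — exactly what the plan output above shows.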
After the successful creation of the Hypervisor Connection, the Hypervisor Resource Pool, and the Machine Catalog, the last step of the deployment process starts: the creation of the Delivery Group. The Terraform configuration assumes that all machines in the created Machine Catalog are used in the Delivery Group and that Autoscale is configured for this Delivery Group. More information about Autoscale can be found here: https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/autoscale.html

The deployment of Citrix Policies is a new feature introduced in version 0.5.2 of the Citrix Terraform provider. We need to know the internal policy names, as the localized policy names and descriptions are not usable. Therefore we use a PowerShell script to determine all internal names; some prerequisites are necessary for the script to work. You can use any machine except the Cloud Connectors:
Install the Citrix Supportability Pack
Install the Citrix Group Policy Management module (scroll down to Group Policy)

Import-Module "C:\TACG\Supportability Pack\Tools\Scout\Current\Utilities\Citrix.GroupPolicy.commands.psm1" -Force
New-PSDrive -Name LocalFarmGpo -PSProvider CitrixGroupPolicy -Controller localhost \
Get-PSDrive
cd LocalFarmGpo:
Get-CtxGroupPolicyConfiguration

Type: User
ProfileLoadTimeMonitoring_Threshold ICALatencyMonitoring_Enable ICALatencyMonitoring_Period ICALatencyMonitoring_Threshold EnableLossless ProGraphics FRVideos_Part FRVideosPath_Part FRStartMenu_Part FRStartMenuPath_Part FRSearches_Part FRSearchesPath_Part FRSavedGames_Part FRSavedGamesPath_Part FRPictures_Part FRPicturesPath_Part FRMusic_Part FRMusicPath_Part FRLinks_Part FRLinksPath_Part FRFavorites_Part FRFavoritesPath_Part FRDownloads_Part FRDownloadsPath_Part FRDocuments_Part FRDocumentsPath_Part
FRDesktop_Part FRDesktopPath_Part FRContacts_Part FRContactsPath_Part FRAdminAccess_Part FRIncDomainName_Part FRAppData_Part FRAppDataPath_Part StorefrontAccountsList AllowFidoRedirection AllowWIARedirection ClientClipboardWriteAllowedFormats ClipboardRedirection ClipboardSelectionUpdateMode DesktopLaunchForNonAdmins DragDrop LimitClipboardTransferC2H LimitClipboardTransferH2C LossTolerantModeAvailable NonPublishedProgramLaunching PrimarySelectionUpdateMode ReadonlyClipboard RestrictClientClipboardWrite RestrictSessionClipboardWrite SessionClipboardWriteAllowedFormats FramesPerSecond PreferredColorDepthForSimpleGraphics VisualQuality ExtraColorCompression ExtraColorCompressionThreshold LossyCompressionLevel LossyCompressionThreshold ProgressiveHeavyweightCompression MinimumAdaptiveDisplayJpegQuality MovingImageCompressionConfiguration ProgressiveCompressionLevel ProgressiveCompressionThreshold TargetedMinimumFramesPerSecond ClientUsbDeviceOptimizationRules UsbConnectExistingDevices UsbConnectNewDevices UsbDeviceRedirection UsbDeviceRedirectionRules USBDeviceRulesV2 UsbPlugAndPlayRedirection TwainCompressionLevel TwainRedirection LocalTimeEstimation RestoreServerTime SessionTimeZone EnableSessionWatermark WatermarkStyle WatermarkTransparency WatermarkCustomText WatermarkIncludeClientIPAddress WatermarkIncludeConnectTime WatermarkIncludeLogonUsername WatermarkIncludeVDAHostName WatermarkIncludeVDAIPAddress EnableRemotePCDisconnectTimer SessionConnectionTimer SessionConnectionTimerInterval SessionDisconnectTimer SessionDisconnectTimerInterval SessionIdleTimer SessionIdleTimerInterval LossTolerantThresholds EnableServerConnectionTimer EnableServerDisconnectionTimer EnableServerIdleTimer ServerConnectionTimerInterval ServerDisconnectionTimerInterval ServerIdleTimerInterval MinimumEncryptionLevel AutoCreationEventLogPreference ClientPrinterRedirection DefaultClientPrinter PrinterAssignments SessionPrinters WaitForPrintersToBeCreated UpsPrintStreamInputBandwidthLimit 
DPILimit EMFProcessingMode ImageCompressionLimit UniversalPrintingPreviewPreference UPDCompressionDefaults InboxDriverAutoInstallation UniversalDriverPriority UniversalPrintDriverUsage AutoCreatePDFPrinter ClientPrinterAutoCreation ClientPrinterNames DirectConnectionsToPrintServers GenericUniversalPrinterAutoCreation PrinterDriverMappings PrinterPropertiesRetention ClientComPortRedirection ClientComPortsAutoConnection ClientLptPortRedirection ClientLptPortsAutoConnection MaxSpeexQuality MSTeamsRedirection MultimediaOptimization UseGPUForMultimediaOptimization VideoLoadManagement VideoQuality WebBrowserRedirectionAcl WebBrowserRedirectionAuthenticationSites WebBrowserRedirectionBlacklist WebBrowserRedirectionIwaSupport WebBrowserRedirectionProxy WebBrowserRedirectionProxyAuth MultiStream AutoKeyboardPopUp ComboboxRemoting MobileDesktop TabletModeToggle ClientKeyboardLayoutSyncAndIME EnableUnicodeKeyboardLayoutMapping HideKeyboardLayoutSwitchPopupMessageBox AllowVisuallyLosslessCompression DisplayLosslessIndicator OptimizeFor3dWorkload ScreenSharing UseHardwareEncodingForVideoCodec UseVideoCodecForCompression EnableFramehawkDisplayChannel AllowFileDownload AllowFileTransfer AllowFileUpload AsynchronousWrites AutoConnectDrives ClientDriveLetterPreservation ClientDriveRedirection ClientFixedDrives ClientFloppyDrives ClientNetworkDrives ClientOpticalDrives ClientRemoveableDrives HostToClientRedirection ReadOnlyMappedDrive SpecialFolderRedirection AeroRedirection DesktopWallpaper GraphicsQuality MenuAnimation WindowContentsVisibleWhileDragging AllowLocationServices AllowBidirectionalContentRedirection BidirectionalRedirectionConfig ClientURLs VDAURLs AudioBandwidthLimit AudioBandwidthPercent ClipboardBandwidthLimit ClipboardBandwidthPercent ComPortBandwidthLimit ComPortBandwidthPercent FileRedirectionBandwidthLimit FileRedirectionBandwidthPercent HDXMultimediaBandwidthLimit HDXMultimediaBandwidthPercent LptBandwidthLimit LptBandwidthLimitPercent OverallBandwidthLimit 
PrinterBandwidthLimit PrinterBandwidthPercent TwainBandwidthLimit TwainBandwidthPercent USBBandwidthLimit USBBandwidthPercent AllowRtpAudio AudioPlugNPlay AudioQuality ClientAudioRedirection EnableAdaptiveAudio MicrophoneRedirection FlashAcceleration FlashBackwardsCompatibility FlashDefaultBehavior FlashEventLogging FlashIntelligentFallback FlashLatencyThreshold FlashServerSideContentFetchingWhitelist FlashUrlColorList FlashUrlCompatibilityList HDXFlashLoadManagement HDXFlashLoadManagementErrorSwf Type: Computer WemCloudConnectorList VirtualLoopbackPrograms VirtualLoopbackSupport EnableAutoUpdateOfControllers AppFailureExclusionList EnableProcessMonitoring EnableResourceMonitoring EnableWorkstationVDAFaultMonitoring SelectedFailureLevel CPUUsageMonitoring_Enable CPUUsageMonitoring_Period CPUUsageMonitoring_Threshold VdcPolicyEnable EnableClipboardMetadataCollection EnableVdaDiagnosticsCollection XenAppOptimizationDefinitionPathData XenAppOptimizationEnabled ExclusionList_Part IncludeListRegistry_Part LastKnownGoodRegistry DefaultExclusionList ExclusionDefaultReg01 ExclusionDefaultReg02 ExclusionDefaultReg03 PSAlwaysCache PSAlwaysCache_Part PSEnabled PSForFoldersEnabled PSForPendingAreaEnabled PSPendingLockTimeout PSUserGroups_Part StreamingExclusionList_Part ApplicationProfilesAutoMigration DeleteCachedProfilesOnLogoff LocalProfileConflictHandling_Part MigrateWindowsProfilesToUserStore_Part ProfileDeleteDelay_Part TemplateProfileIsMandatory TemplateProfileOverridesLocalProfile TemplateProfileOverridesRoamingProfile TemplateProfilePath DisableConcurrentAccessToOneDriveContainer DisableConcurrentAccessToProfileContainer EnableVHDAutoExtend EnableVHDDiskCompaction GroupsToAccessProfileContainer_Part PreventLoginWhenMountFailed_Part ProfileContainerExclusionListDir_Part ProfileContainerExclusionListFile_Part ProfileContainerInclusionListDir_Part ProfileContainerInclusionListFile_Part ProfileContainerLocalCache DebugFilePath_Part DebugMode 
LogLevel_ActiveDirectoryActions LogLevel_FileSystemActions LogLevel_FileSystemNotification LogLevel_Information LogLevel_Logoff LogLevel_Logon LogLevel_PolicyUserLogon LogLevel_RegistryActions LogLevel_RegistryDifference LogLevel_UserName LogLevel_Warnings MaxLogSize_Part LargeFileHandlingList_Part LogonExclusionCheck_Part AccelerateFolderMirroring MirrorFoldersList_Part ProfileContainer_Part SyncDirList_Part SyncFileList_Part ExclusionListSyncDir_Part ExclusionListSyncFiles_Part DefaultExclusionListSyncDir ExclusionDefaultDir01 ExclusionDefaultDir02 ExclusionDefaultDir03 ExclusionDefaultDir04 ExclusionDefaultDir05 ExclusionDefaultDir06 ExclusionDefaultDir07 ExclusionDefaultDir08 ExclusionDefaultDir09 ExclusionDefaultDir10 ExclusionDefaultDir11 ExclusionDefaultDir12 ExclusionDefaultDir13 ExclusionDefaultDir14 ExclusionDefaultDir15 ExclusionDefaultDir16 ExclusionDefaultDir17 ExclusionDefaultDir18 ExclusionDefaultDir19 ExclusionDefaultDir20 ExclusionDefaultDir21 ExclusionDefaultDir22 ExclusionDefaultDir23 ExclusionDefaultDir24 ExclusionDefaultDir25 ExclusionDefaultDir26 ExclusionDefaultDir27 ExclusionDefaultDir28 ExclusionDefaultDir29 ExclusionDefaultDir30 SharedStoreFileExclusionList_Part SharedStoreFileInclusionList_Part SharedStoreProfileContainerFileSizeLimit_Part CPEnable CPMigrationFromBaseProfileToCPStore CPPathData CPSchemaPathData CPUserGroups_Part DATPath_Part ExcludedGroups_Part MigrateUserStore_Part OfflineSupport ProcessAdmins ProcessedGroups_Part PSMidSessionWriteBack PSMidSessionWriteBackReg PSMidSessionWriteBackSessionLock ServiceActive AppAccessControl_Part CEIPEnabled CredBasedAccessEnabled DisableDynamicConfig EnableVolumeReattach FreeRatio4Compaction_Part FSLogixProfileContainerSupport LoadRetries_Part LogoffRatherThanTempProfile MultiSiteReplication_Part NDefrag4Compaction NLogoffs4Compaction_Part OneDriveContainer_Part OrderedGroups_Part OutlookEdbBackupEnabled OutlookSearchRoamingConcurrentSession OutlookSearchRoamingConcurrentSession_Part 
OutlookSearchRoamingEnabled ProcessCookieFiles SyncGpoStateEnabled UserGroupLevelConfigEnabled UserStoreSelection_Part UwpAppsRoaming VhdAutoExpansionIncrement_Part VhdAutoExpansionLimit_Part VhdAutoExpansionThreshold_Part VhdContainerCapacity_Part VhdStorePath_Part UplCustomizedUserLayerSizeInGb UplGroupsUsingCustomizedUserLayerSize UplRepositoryPath UplUserExclusions UplUserLayerSizeInGb ConcurrentLogonsTolerance CPUUsage CPUUsageExcludedProcessPriority DiskUsage MaximumNumberOfSessions MemoryUsage MemoryUsageBaseLoad ApplicationLaunchWaitTimeout HDXAdaptiveTransport HDXDirect HDXDirectMode HDXDirectPortRange IcaListenerPortNumber IcaListenerTimeout LogoffCheckerStartupDelay RemoteCredentialGuard RendezvousProtocol RendezvousProxy SecureHDX VdaUpgradeProxy VirtualChannelWhiteList VirtualChannelWhiteListLogging VirtualChannelWhiteListLogThrottling AcceptWebSocketsConnections WebSocketsPort WSTrustedOriginServerList SessionReliabilityConnections SessionReliabilityPort SessionReliabilityTimeout IdleTimerInterval LoadBalancedPrintServers PrintServersOutOfServiceThreshold UpcHttpConnectTimeout UpcHttpReceiveTimeout UpcHttpSendTimeout UpcSslCgpPort UpcSslCipherSuite UpcSslComplianceMode UpcSslEnable UpcSslFips UpcSslHttpsPort UpcSslProtocolVersion UpsCgpPort UpsEnable UpsHttpPort HTML5VideoRedirection MultimediaAcceleration MultimediaAccelerationDefaultBufferSize MultimediaAccelerationEnableCSF MultimediaAccelerationUseDefaultBufferSize MultimediaConferencing WebBrowserRedirection MultiPortPolicy MultiStreamAssignment MultiStreamPolicy RtpAudioPortRange UDPAudioOnServer AllowLocalAppAccess URLRedirectionBlackList URLRedirectionWhiteList IcaKeepAlives IcaKeepAliveTimeout DisplayDegradePreference DisplayDegradeUserNotification DisplayMemoryLimit DynamicPreview ImageCaching LegacyGraphicsMode MaximumColorDepth QueueingAndTossing FramehawkDisplayChannelPortRange PersistentCache EnhancedDesktopExperience IcaRoundTripCalculation IcaRoundTripCalculationInterval 
IcaRoundTripCalculationWhenIdle ACRTimeout AutoClientReconnect AutoClientReconnectAuthenticationRequired AutoClientReconnectLogging ReconnectionUiTransparencyLevel AppProtectionPostureCheck AdvanceWarningFrequency AdvanceWarningMessageTitle AdvanceWarningPeriod AgentTaskInterval FinalForceLogoffMessageBody FinalForceLogoffMessageTitle ForceLogoffGracePeriod ForceLogoffMessageTitle ImageProviderIntegrationEnabled RebootMessageBody

Using these internal names allows the deployment of policies with the Terraform provider.

Caution: Before running Terraform, none of the Terraform-managed entities exist yet:

The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and, if no errors occur, terraform apply.

PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> terraform init

Initializing the backend...
Initializing provider plugins...
- Finding citrix/citrix versions matching ">= 0.5.2"...
- Finding hashicorp/aws versions matching ">= 5.4.0"...
- Finding mastercard/restapi versions matching "1.18.2"...
- Finding latest version of hashicorp/local...
- Installing hashicorp/aws v5.41.0...
- Installed hashicorp/aws v5.41.0 (signed by HashiCorp)
- Installing mastercard/restapi v1.18.2...
- Installed mastercard/restapi v1.18.2 (self-signed, key ID DCB8C431D71C30AB)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing citrix/citrix v0.5.2...
- Installed citrix/citrix v0.5.2 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here: https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure.

All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff>
PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> terraform plan

data.local_file.LoadZoneID: Reading...
data.local_file.LoadZoneID: Read complete after 0s [id=a7592ebe91057eab80084fc014fa06ca52453732]
data.aws_vpc.AWSAZ: Reading...
data.aws_vpc.AWSVPC: Reading...
data.aws_subnet.AWSSubnet: Reading...
data.aws_subnet.AWSSubnet: Read complete after 0s [id=subnet-07e168f0c2a28edf3]
data.aws_vpc.AWSVPC: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a]
data.aws_vpc.AWSAZ: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_ami_from_instance.CreateAMIFromWMI will be created
  + resource "aws_ami_from_instance" "CreateAMIFromWMI" {
      + architecture = (known after apply)
      + arn = (known after apply)
      + boot_mode = (known after apply)
      + ena_support = (known after apply)
      + hypervisor = (known after apply)
      + id = (known after apply)
      + image_location = (known after apply)
      + image_owner_alias = (known after apply)
      + image_type = (known after apply)
      + imds_support = (known after apply)
      + kernel_id = (known after apply)
      + manage_ebs_snapshots = (known after apply)
      + name = "TACG-AWS-TF-AMIFromWMI"
      + owner_id = (known after apply)
      + platform = (known after apply)
      + platform_details = (known after apply)
      + public = (known after apply)
      + ramdisk_id = (known after apply)
      + root_device_name = (known after apply)
      + root_snapshot_id = (known after apply)
      + source_instance_id = "i-024f77470f3f63c08"
      + sriov_net_support = (known after apply)
      + tags_all = (known after apply)
      + tpm_support = (known after apply)
      + usage_operation = (known after apply)
      + virtualization_type = (known after apply)

      + timeouts {
          + create = "45m"
        }
    }

  # citrix_aws_hypervisor.CreateHypervisorConnection will be created
  + resource "citrix_aws_hypervisor" "CreateHypervisorConnection" {
      + api_key = (sensitive value)
      + id = (known after apply)
      + name = "TACG-AWS-TF-HypConn"
      + region = (sensitive value)
      + secret_key = (sensitive value)
      + zone = "8d5dXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
    }

  # citrix_aws_hypervisor_resource_pool.CreateHypervisorPool will be created
  + resource "citrix_aws_hypervisor_resource_pool" "CreateHypervisorPool" {
      + availability_zone = "eu-central-1a"
      + hypervisor = (known after apply)
      + id = (known after apply)
      + name = "TACG-AWS-TF-HypConnPool"
      + subnets = [
          + "172.31.16.0/20",
        ]
      + vpc = "TACG-VPC"
    }

  # citrix_machine_catalog.CreateMCSCatalog will be created
  + resource "citrix_machine_catalog" "CreateMCSCatalog" {
      + allocation_type = "Random"
      + description = "Terraform-based Machine Catalog"
      + id = (known after apply)
      + is_power_managed = true
      + is_remote_pc = false
      + name = "MC-TACG-AWS-TF"
      + provisioning_scheme = {
          + aws_machine_config = {
              + image_ami = (known after apply)
              + master_image = "TACG-AWS-TF-AMIFromWMI"
              + service_offering = "T2 Large Instance"
            }
          + hypervisor = (known after apply)
          + hypervisor_resource_pool = (known after apply)
          + identity_type = "ActiveDirectory"
          + machine_account_creation_rules = {
              + naming_scheme = "TACG-AWS-WM-#"
              + naming_scheme_type = "Numeric"
            }
          + machine_domain_identity = {
              + domain = "aws.the-austrian-citrix-guy.at"
              + domain_ou = "CN=Computers,DC=aws,DC=the-austrian-citrix-guy,DC=at"
              + service_account = (sensitive value)
              + service_account_password = (sensitive value)
            }
          + network_mapping = {
              + network = "172.31.16.0/20"
              + network_device = "0"
            }
          + number_of_total_machines = 1
        }
      + provisioning_type = "MCS"
      + session_support = "MultiSession"
      + zone = "8d5dXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
      + minimum_functional_level = "L7_20"
    }

  # citrix_delivery_group.CreateDG will be created
  + resource "citrix_delivery_group" "CreateDG" {
      + associated_machine_catalogs = [
          + {
              + machine_catalog = "f4e34a11-6e31-421f-8cb4-060bc4a13fef"
              + machine_count = 1
            },
        ]
      + autoscale_settings = {
          + autoscale_enabled = true
          + disconnect_off_peak_idle_session_after_seconds = 0
          + disconnect_peak_idle_session_after_seconds = 300
          + log_off_off_peak_disconnected_session_after_seconds = 0
          + log_off_peak_disconnected_session_after_seconds = 300
          + off_peak_buffer_size_percent = 0
          + off_peak_disconnect_action = "Nothing"
          + off_peak_disconnect_timeout_minutes = 0
          + off_peak_extended_disconnect_action = "Nothing"
          + off_peak_extended_disconnect_timeout_minutes = 0
          + off_peak_log_off_action = "Nothing"
          + peak_buffer_size_percent = 0
          + peak_disconnect_action = "Nothing"
          + peak_disconnect_timeout_minutes = 0
          + peak_extended_disconnect_action = "Nothing"
          + peak_extended_disconnect_timeout_minutes = 0
          + peak_log_off_action = "Nothing"
          + power_off_delay_minutes = 30
          + power_time_schemes = [
              + {
                  + days_of_week = [
                      + "Monday",
                      + "Tuesday",
                      + "Wednesday",
                      + "Thursday",
                      + "Friday",
                    ]
                  + display_name = "TACG-AWS-TF-AS-Weekdays"
                  + peak_time_ranges = [
                      + "09:00-17:00",
                    ]
                  + pool_size_schedules = [
                      + {
                          + pool_size = 1
                          + time_range = "09:00-17:00"
                        },
                    ]
                  + pool_using_percentage = false
                },
            ]
        }
      + desktops = [
          + {
              + description = "Terraform-based Delivery Group running on AWS EC2"
              + enable_session_roaming = true
              + enabled = true
              + published_name = "DG-TF-TACG-AWS"
              + restricted_access_users = {
                  + allow_list = [
                      + "TACG-AWS\\vdaallowed",
                    ]
                }
            },
        ]
      + id = (known after apply)
      + name = "DG-TF-TACG-AWS"
      + reboot_schedules = [
          + {
              + days_in_week = [
                  + "Sunday",
                ]
              + frequency = "Weekly"
              + frequency_factor = 1
              + ignore_maintenance_mode = true
              + name = "TACG-AWS-Reboot Schedule"
              + natural_reboot_schedule = false
              + reboot_duration_minutes = 0
              + reboot_schedule_enabled = true
              + start_date = "2024-01-01"
              + start_time = "02:00"
            },
        ]
      + restricted_access_users = {
          + allow_list = [
              + "TACG-AWS\\vdaallowed",
            ]
        }
      + total_machines = (known after apply)
    }

  # time_sleep.Wait_60_Seconds_2 will be created
  + resource "time_sleep" "Wait_60_Seconds_2" {
      + create_duration = "60s"
      + id = (known after apply)
    }

  # time_sleep.wait_60_seconds_1 will be created
  + resource "time_sleep" "wait_60_seconds_1" {
      + create_duration = "60s"
      + id = (known after apply)
    }

Plan: 8 to add, 0 to change, 0 to destroy.

──────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
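The note above can be avoided by saving the plan to a file and applying exactly that plan, which is standard Terraform practice and guarantees that apply performs the actions shown by plan:

```
terraform plan -out=tfplan
terraform apply tfplan
```

In this guide, plan and apply are run back to back without -out, which is fine for a lab but worth tightening up for production pipelines.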
PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff>
PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> terraform apply

data.local_file.LoadZoneID: Reading...
data.local_file.LoadZoneID: Read complete after 0s [id=a7592ebe91057eab80084fc014fa06ca52453732]
data.aws_subnet.AWSSubnet: Reading...
data.aws_vpc.AWSVPC: Reading...
data.aws_vpc.AWSAZ: Reading...
data.aws_subnet.AWSSubnet: Read complete after 0s [id=subnet-07e168f0c2a28edf3]
data.aws_vpc.AWSAZ: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a]
data.aws_vpc.AWSVPC: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

... ** Output shortened - the execution plan matches the plan shown above ** ...

Plan: 8 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_ami_from_instance.CreateAMIFromWMI: Creating...
citrix_aws_hypervisor.CreateHypervisorConnection: Creating...
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [10s elapsed]
citrix_aws_hypervisor.CreateHypervisorConnection: Still creating... [10s elapsed]
citrix_aws_hypervisor.CreateHypervisorConnection: Creation complete after 10s [id=706c408a-6eed-42b1-8102-93888db8a0eb]
citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Creating...
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [20s elapsed]
citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Still creating... [10s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [30s elapsed]
citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Still creating... [20s elapsed]
citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Creation complete after 22s [id=b460dfc2-f760-4b52-bc94-fad9c85a0b8f]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [40s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [50s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m0s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m10s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m20s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m30s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m40s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m50s elapsed]
... ** Output shortened ** ...
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m10s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m20s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating...
[6m30s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m40s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m50s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Still creating... [7m0s elapsed]
aws_ami_from_instance.CreateAMIFromWMI: Creation complete after 7m7s [id=ami-0e01b8c8d09a5fbe5]
time_sleep.wait_30_seconds: Creating...
time_sleep.wait_30_seconds: Still creating... [10s elapsed]
time_sleep.wait_30_seconds: Still creating... [20s elapsed]
time_sleep.wait_30_seconds: Creation complete after 30s [id=2024-03-15T17:40:18Z]
citrix_machine_catalog.CreateMCSCatalog: Creating...
citrix_machine_catalog.CreateMCSCatalog: Still creating... [10s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [20s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [30s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [40s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [50s elapsed]
... ** Output shortened ** ...
citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m1s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m11s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m21s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m31s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m41s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m51s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Still creating... [17m1s elapsed]
citrix_machine_catalog.CreateMCSCatalog: Creation complete after 17m4s [id=f4e34a11-6e31-421f-8cb4-060bc4a13fef]
time_sleep.wait_60_seconds_1: Creating...
time_sleep.wait_60_seconds_1: Still creating... [10s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [20s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [30s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [40s elapsed]
time_sleep.wait_60_seconds_1: Still creating... [50s elapsed]
time_sleep.wait_60_seconds_1: Creation complete after 1m0s [id=2024-03-20T08:06:45Z]
time_sleep.Wait_60_Seconds_2: Creating...
time_sleep.Wait_60_Seconds_2: Still creating... [10s elapsed]
time_sleep.Wait_60_Seconds_2: Still creating... [20s elapsed]
time_sleep.Wait_60_Seconds_2: Still creating... [30s elapsed]
time_sleep.Wait_60_Seconds_2: Still creating... [40s elapsed]
time_sleep.Wait_60_Seconds_2: Still creating... [50s elapsed]
time_sleep.Wait_60_Seconds_2: Creation complete after 1m0s [id=2024-03-20T08:07:45Z]
citrix_delivery_group.CreateDG: Creating...
citrix_delivery_group.CreateDG: Still creating... [10s elapsed]
citrix_delivery_group.CreateDG: Creation complete after 12s [id=7e2c73bf-f8b1-4e37-8cd8-efa1338304dc]

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff>

This configuration completes the full deployment of a Citrix Cloud Resource Location in Amazon EC2. The environment created by Terraform is now ready for usage.
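As a follow-up to the policy-name discussion above, the internal setting names gathered with Get-CtxGroupPolicyConfiguration can be used to deploy policies through Terraform. The exact schema depends on the provider version; the following is an illustrative fragment only, assuming the citrix_policy_set resource introduced around provider version 0.5.2 (the resource attributes, policy name, and setting value shown here are examples, not taken from the guide's configuration):

```hcl
# Illustrative only - attribute names assume the citrix_policy_set schema;
# verify against the provider documentation for your version
resource "citrix_policy_set" "ExamplePolicies" {
  name        = "TF-Example-Policies"
  description = "Policies deployed via Terraform"
  type        = "DeliveryGroupPolicies"
  scopes      = []
  policies = [
    {
      name        = "Disable-Client-Clipboard"
      description = "Blocks clipboard redirection"
      enabled     = true
      # "ClipboardRedirection" is one of the internal names listed earlier
      policy_settings = [
        {
          name        = "ClipboardRedirection"
          value       = "Disabled"
          use_default = false
        }
      ]
      delivery_groups = [citrix_delivery_group.CreateDG.id]
    }
  ]
}
```

Note that the internal name (ClipboardRedirection), not the localized display name, is what the provider expects.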
All entities are in place: The Resource Location: The Hypervisor Connection and the Hypervisor Pool: The Machine Catalog: The Worker VM in the Machine Catalog: The Delivery Group: The AutoScale settings of the Delivery Group: The Desktop in the Library: Connection to the Worker VM's Desktop: Appendix Examples of the Terraform scripts Module 1: CConAWS-Creation These are the Terraform configuration files for Module 1 (excerpts): _CCOnAWS-Creation-Provider.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Definition of all required Terraform providers terraform { required_version = ">= 1.4.0" required_providers { aws = { source = "hashicorp/aws" version = ">= 5.4.0" } restapi = { source = "Mastercard/restapi" version = "1.18.2" } citrix = { source = "citrix/citrix" version = ">=0.5.3" } } } # Configure the AWS Provider provider "aws" { region = "${var.AWSEC2_Region}" access_key = "${var.AWSEC2_AccessKey}" secret_key = "${var.AWSEC2_AccessKeySecret}" } # Configure the Citrix Provider provider "citrix" { customer_id = "${var.CC_CustomerID}" client_id = "${var.CC_APIKey-ClientID}" client_secret = "${var.CC_APIKey-ClientSecret}" } # Configure the REST-API provider provider "restapi" { alias = "restapi_rl" uri = "${var.CC_RestAPIURI}" create_method = "POST" write_returns_object = true debug = true headers = { "Content-Type" = "application/json", "Citrix-CustomerId" = "${var.CC_CustomerID}", "Accept" = "application/json", "Authorization" = "${var.CC_APIKey-Bearer}" } } _CCOnAWS-Creation-Create.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Creation of all required VMs - two Cloud Connectors and one Worker Master image locals { } ### Create needed IAM roles #### IAM EC2 Policy with Assume Role data "aws_iam_policy_document" "ec2_assume_role" { statement { actions = ["sts:AssumeRole"] principals { type = "Service" identifiers = ["ec2.amazonaws.com"] } } } #### Create EC2 IAM Role resource "aws_iam_role" "ec2_iam_role" {
name = "ec2-iam-role" path = "/" assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json } #### Create EC2 IAM Instance Profile resource "aws_iam_instance_profile" "ec2_profile" { name = "ec2-profile" role = aws_iam_role.ec2_iam_role.name } #### Attach Policies to Instance Role resource "aws_iam_policy_attachment" "ec2_attach1" { name = "ec2-iam-attachment" roles = [aws_iam_role.ec2_iam_role.id] policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" } resource "aws_iam_policy_attachment" "ec2_attach2" { name = "ec2-iam-attachment" roles = [aws_iam_role.ec2_iam_role.id] policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM" } #### Create Secret Manager IAM Policy resource "aws_iam_policy" "secret_manager_ec2_policy" { name = "secret-manager-ec2-policy" description = "Secret Manager EC2 policy" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Action = [ "secretsmanager:*" ] Effect = "Allow" Resource = "*" }, ] }) } #### Attach Secret Manager Policies to Instance Role resource "aws_iam_policy_attachment" "api_secret_manager_ec2_attach" { name = "secret-manager-ec2-attachment" roles = [aws_iam_role.ec2_iam_role.id] policy_arn = aws_iam_policy.secret_manager_ec2_policy.arn } data "template_file" "Add-EC2InstanceToDomainScriptCC1" { template = file("${path.module}/Add-EC2InstanceToDomainCC1.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } data "template_file" "Add-EC2InstanceToDomainScriptCC2" { template = file("${path.module}/Add-EC2InstanceToDomainCC2.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } data "template_file" "Add-EC2InstanceToDomainScriptWMI" { template = file("${path.module}/Add-EC2InstanceToDomainWMI.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } data "template_file" "Add-EC2InstanceToDomainScriptAdminVM" { template = 
file("${path.module}/Add-EC2InstanceToDomainAdminVM.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } #### Create DHCP settings resource "aws_vpc_dhcp_options" "vpc-dhcp-options" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptCC1 ] domain_name_servers = [ var.AWSEC2_DC-IP ] } resource "aws_vpc_dhcp_options_association" "dns_resolver" { vpc_id = var.AWSEC2_VPC-ID dhcp_options_id = aws_vpc_dhcp_options.vpc-dhcp-options.id } ### Create CC1-VM resource "aws_instance" "CC1" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptCC1 ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-CC1 instance_type = var.AWSEC2_Instance-Type key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-CC1 } user_data = data.template_file.Add-EC2InstanceToDomainScriptCC1.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } ### Create CC2-VM resource "aws_instance" "CC2" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptCC2 ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-CC2 instance_type = var.AWSEC2_Instance-Type key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-CC2 } user_data = data.template_file.Add-EC2InstanceToDomainScriptCC2.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } ### Create Admin-VM resource "aws_instance" "AdminVM" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptAdminVM ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-AdminVM instance_type = var.AWSEC2_Instance-Type-Worker key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-AdminVM } user_data = 
data.template_file.Add-EC2InstanceToDomainScriptAdminVM.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } ### Create WMI-VM resource "aws_instance" "WMI" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptWMI ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-WMI instance_type = var.AWSEC2_Instance-Type key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-WMI } user_data = data.template_file.Add-EC2InstanceToDomainScriptWMI.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } _CCOnAWS-Creation-GetBearerToken.tf ### Create PowerShell file for retrieving the Bearer Token resource "local_file" "GetBearerToken" { content = <<-EOT asnp Citrix* $key= "${var.CC_APIKey-ClientID}" $secret= "${var.CC_APIKey-ClientSecret}" $customer= "${var.CC_CustomerID}" $XDStoredCredentials = Set-XDCredentials -StoreAs default -ProfileType CloudApi -CustomerId $customer -APIKey $key -SecretKey $secret $auth = Get-XDAuthentication $BT = $GLOBAL:XDAuthToken | Out-File "${path.module}/GetBT.txt" EOT filename = "${path.module}/GetBT.ps1" } ### Running GetBearertoken-Script to retrieve the Bearer Token resource "terraform_data" "GetBT" { depends_on = [ local_file.GetBearerToken ] provisioner "local-exec" { command = "${path.module}/GetBT.ps1" interpreter = ["PowerShell", "-File"] } } ### Retrieving the Bearer Token data "local_file" "Retrieve_BT" { depends_on = [ terraform_data.GetBT ] filename = "${path.module}/GetBT.txt" } output "terraform_data_BR_Read" { value = data.local_file.Retrieve_BT.content } Module 2: CConAWS-Install These are the Terraform configuration files for Module 2 (excerpts): _CCOnAWS-Install-CreatePreReqs.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Creating a dedicated Resource Location on Citrix Cloud resource "random_uuid" "IDforCCRL" { } ### Create local directory resource 
"local_file" "Log" { content = "Directory created." filename = "${var.CC_Install_LogPath}/log.txt" } resource "local_file" "LogData" { depends_on = [ local_file.Log ] content = "Directory created." filename = "${var.CC_Install_LogPath}/DATA/log.txt" } ### Create PowerShell command to be run on the CC machines locals { randomuuid = random_uuid.IDforCCRL.result } ### Create a dedicated Resource Location in Citrix Cloud resource "restapi_object" "CreateRL" { depends_on = [ local_file.log ] provider = restapi.restapi_rl path="/resourcelocations" data = jsonencode( { "id" = "${local.randomuuid}", "name" = "${var.CC_RestRLName}", "internalOnly" = false, "timeZone" = "GMT Standard Time", "readOnly" = false } ) } ### Create PowerShell files with configuration and next steps and save it into Transfer directory #### Create CWC-Installer configuration file based on variables and save it into Transfer directory resource "local_file" "CWC-Configuration" { depends_on = [restapi_object.CreateRL] content = jsonencode( { "customerName" = "${var.CC_CustomerID}", "clientId" = "${var.CC_APIKey-ClientID}", "clientSecret" = "${var.CC_APIKey-ClientSecret}", "resourceLocationId" = "XXXXXXXXXX", "acceptTermsOfService" = true } ) filename = "${var.CC_Install_LogPath}/DATA/cwc.json" } #### Wait 5 mins after RL creation to settle Zone creation resource "time_sleep" "wait_300_seconds" { create_duration = "300s" } ### Create PowerShell file for determining the SiteID resource "local_file" "GetSiteIDScript" { depends_on = [restapi_object.CreateRL, time_sleep.wait_300_seconds] content = <<-EOT $requestUri = "https://api-eu.cloud.com/cvad/manage/me" $headers = @{ "Accept"="application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}" } $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | Select-Object Customers $responsetojson = $response | Convertto-Json -Depth 3 $responsekorr = $responsetojson -replace("null","""empty""") 
$responsefromjson = $responsekorr | Convertfrom-json $SitesObj=$responsefromjson.Customers[0].Sites[0] $Export1 = $SitesObj -replace("@{Id=","") $SplittedString = $Export1.Split(";") $SiteID= $SplittedString[0] $PathCompl = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" Set-Content -Path $PathCompl -Value $SiteID EOT filename = "${path.module}/GetSiteID.ps1" } ### Running the SiteID-Script to generate the SiteID resource "terraform_data" "SiteID" { depends_on = [ local_file.GetSiteIDScript ] provisioner "local-exec" { command = "GetSiteID.ps1" interpreter = ["PowerShell", "-File"] } } ### Retrieving the SiteID data "local_file" "input_site" { depends_on = [ terraform_data.SiteID ] filename = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ### Create PowerShell file for determining the ZoneID resource "local_file" "GetZoneIDScript" { depends_on = [ data.local_file.input_site ] content = <<-EOT $requestUri = "https://api-eu.cloud.com/cvad/manage/Zones" $headers = @{ "Accept"="application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}"; "Citrix-InstanceId" = "${data.local_file.input_site.content}" } $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | Convertto-Json $responsedejson = $response | ConvertFrom-Json $ZoneId = $responsedejson.Items | Where-Object { $_.Name -eq "${var.CC_RestRLName}" } | Select-Object id $Export1 = $ZoneId -replace("@{Id=","") $ZoneID = $Export1 -replace("}","") $PathCompl = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" Set-Content -Path $PathCompl -Value $ZoneID EOT filename = "${path.module}/GetZoneID.ps1" } ### Running the ZoneID-Script to generate the ZoneID resource "terraform_data" "ZoneID" { depends_on = [ local_file.GetZoneIDScript ] provisioner "local-exec" { command = "GetZoneID.ps1" interpreter = ["PowerShell", "-File"] } } #### Create PowerShell file for installing the Citrix Cloud Connector - we need to determine the correct RL-ID resource "local_file" 
"InstallPreReqsOnCC-ps1" { content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { $path = "${var.CC_Install_LogPath}" If(!(test-path -PathType container $path)) { New-Item -ItemType Directory -Path $path } Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript started." # Download the Citrix Cloud Connector-Software to CC Invoke-WebRequest ${var.CC_Install_CWCURI} -OutFile '${var.CC_Install_LogPath}/DATA/CWCConnector.exe' # Install Citrix Cloud Controller based on the cwc.json configuration file Add-Content ${var.CC_Install_LogPath}/log.txt "`nInstalling Cloud Connector." Start-Process -Filepath "${var.CC_Install_LogPath}/DATA/CWCConnector.exe" -ArgumentList "/q /ParametersFilePath:${var.CC_Install_LogPath}/DATA/cwc.json" Add-Content ${var.CC_Install_LogPath}/log.txt "`nInstalled Cloud Connector." 
Restart-Computer -Force -Timeout 1800 } EOT filename = "${path.module}/DATA/InstallPreReqsOnCC.ps1" } #### Create PowerShell file for installing the Citrix Remote PoSH SDK on AVM - we need to determine the correct RL-ID resource "local_file" "InstallPreReqsOnAVM1-ps1" { content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { $path = "${var.CC_Install_LogPath}" If(!(test-path -PathType container $path)) { New-Item -ItemType Directory -Path $path } Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript started." # Download Citrix Remote PowerShell SDK Invoke-WebRequest '${var.CC_Install_RPoSHURI}' -OutFile '${var.CC_Install_LogPath}/DATA/CitrixPoshSdk.exe' Add-Content ${var.CC_Install_LogPath}/log.txt "`nPowerShell SDK downloaded." # Install Citrix Remote PowerShell SDK Start-Process -Filepath "${var.CC_Install_LogPath}/DATA/CitrixPoshSdk.exe" -ArgumentList "-quiet" Add-Content ${var.CC_Install_LogPath}/log.txt "`nPowerShell SDK installed." # Timeout to settle all processes Start-Sleep -Seconds 60 Add-Content ${var.CC_Install_LogPath}/log.txt "`nTimeout elapsed." 
} EOT filename = "${path.module}/DATA/InstallPreReqsOnAVM1.ps1" } #### Create PowerShell file for installing the Citrix Remote PoSH SDK on AVM - we need to determine the correct RL-ID resource "local_file" "InstallPreReqsOnAVM2-ps1" { content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { $path = "${var.CC_Install_LogPath}" # Correct the Resource Location ID in cwc.json file $requestUri = "https://api-eu.cloud.com/resourcelocations" $headers = @{ "Accept"="application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}"} $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | Convertto-Json $RLs = ConvertFrom-Json $response $RLFiltered = $RLs.items | Where-Object name -in "${var.CC_RestRLName}" Add-Content ${var.CC_Install_LogPath}/log.txt $RLFiltered $RLID = $RLFiltered.id $OrigContent = Get-Content ${var.CC_Install_LogPath}/DATA/cwc.json Add-Content ${var.CC_Install_LogPath}/log.txt $RLID Add-Content ${var.CC_Install_LogPath}/log.txt $OrigContent $CorrContent = $OrigCOntent.Replace('XXXXXXXXXX', $RLID) | Out-File -FilePath ${var.CC_Install_LogPath}/DATA/cwc.json Add-Content ${var.CC_Install_LogPath}/DATA/GetRLID.txt $RLID Add-Content ${var.CC_Install_LogPath}/log.txt "`ncwc.json corrected." Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript completed." 
} EOT filename = "${path.module}/DATA/InstallPreReqsOnAVM2.ps1" } ### Upload required components to AVM #### Set the Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToAVM" { depends_on = [ local_file.InstallPreReqsOnAVM1-ps1, local_file.GetSiteIDScript ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } ###### Upload PreReqs script to AVM provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnAVM1.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM1.ps1" } provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnAVM2.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM2.ps1" } } ### Call the required scripts on AVM #### Set the Provisioner-Connection resource "null_resource" "CallRequiredScriptsOnAVM1" { depends_on = [ null_resource.UploadRequiredComponentsToAVM ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } ###### Execute the PreReqs script on AVM provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM1.ps1" ] } } resource "null_resource" "CallRequiredScriptsOnAVM2" { depends_on = [ null_resource.CallRequiredScriptsOnAVM1 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } ###### Execute the PreReqs script on AVM provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM2.ps1" ] } } ############################################################################################################################## ### Upload required components to CC1 #### Set the 
Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToCC1" { depends_on = [ local_file.InstallPreReqsOnCC-ps1, null_resource.CallRequiredScriptsOnAVM2 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_DDC1-IP timeout = var.Provisioner_Timeout } ###### Upload Cloud Connector configuration file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/cwc.json" destination = "${var.CC_Install_LogPath}/DATA/cwc.json" } ###### Upload SiteID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ###### Upload ZoneID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } ###### Upload RLID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" } ###### Upload PreReqs script to CC1 provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" } } ###### Execute the PreReqs script on CC1 resource "null_resource" "CallRequiredScriptsOnCC1" { depends_on = [ null_resource.UploadRequiredComponentsToCC1 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_DDC1-IP timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" ] } } ### Upload required components to CC2 #### Set the Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToCC2" { depends_on = [ local_file.InstallPreReqsOnCC-ps1, null_resource.CallRequiredScriptsOnAVM2 ] connection { type = var.Provisioner_Type user =
var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_DDC2-IP timeout = var.Provisioner_Timeout } ###### Upload Cloud Connector configuration file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/cwc.json" destination = "${var.CC_Install_LogPath}/DATA/cwc.json" } ###### Upload SiteID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ###### Upload ZoneID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } ###### Upload RLID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" } ###### Upload PreReqs script to CC2 provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" } } ###### Execute the PreReqs script on CC2 resource "null_resource" "CallRequiredScriptsOnCC2" { depends_on = [ null_resource.UploadRequiredComponentsToCC2 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_DDC2-IP timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" ] } } Module 3: CConAWS-CCStuff These are the Terraform configuration files for Module 3 (excerpts): _CCOnAWS-CCStuff-CreateCCEntities.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Creating all Citrix Cloud-related entities ### Creating a Hypervisor Connection #### Retrieving the ZoneID data "local_file" "LoadZoneID" { filename = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } #### Creating the Hypervisor Connection resource "citrix_aws_hypervisor" "CreateHypervisorConnection" { depends_on = [
data.local_file.LoadZoneID ] name = "${var.CC_AWSEC2-HypConn-Name}" zone = data.local_file.LoadZoneID.content api_key = "${var.AWSEC2_AccessKey}" secret_key = "${var.AWSEC2_AccessKeySecret}" region = "${var.AWSEC2_Region}" } ### Creating a Hypervisor Resource Pool #### Retrieving the VPC name based on the VPC ID data "aws_vpc" "AWSVPC" { id = "${var.AWSEC2_VPC-ID}" } #### Retrieving the Availability Zone data "aws_vpc" "AWSAZ" { id = "${var.AWSEC2_VPC-ID}" } #### Retrieving the Subnet Mask based on the Subnet ID data "aws_subnet" "AWSSubnet" { id = "${var.AWSEC2_Subnet-ID}" } #### Create the Hypervisor Resource Pool resource "citrix_aws_hypervisor_resource_pool" "CreateHypervisorPool" { depends_on = [ citrix_aws_hypervisor.CreateHypervisorConnection ] name = "${var.CC_AWSEC2-HypConnPool-Name}" hypervisor = citrix_aws_hypervisor.CreateHypervisorConnection.id subnets = [ "${data.aws_subnet.AWSSubnet.cidr_block}", ] vpc = "${var.AWSEC2_VPC-Name}" availability_zone = data.aws_subnet.AWSSubnet.availability_zone } #### Create AMI from WMI instance resource "aws_ami_from_instance" "CreateAMIFromWMI" { name = "TACG-AWS-TF-AMIFromWMI" source_instance_id = "${var.AWSEC2_AMI-ID}" timeouts { create = "45m" } } #### Sleep 60s to let AWS Background processes settle resource "time_sleep" "wait_60_seconds" { depends_on = [ citrix_aws_hypervisor_resource_pool.CreateHypervisorPool, aws_ami_from_instance.CreateAMIFromWMI ] create_duration = "60s" } #### Create the Machine Catalog resource "citrix_machine_catalog" "CreateMCSCatalog" { depends_on = [ time_sleep.wait_60_seconds ] name = "${var.CC_AWSEC2-MC-Name}" description = "${var.CC_AWSEC2-MC-Description}" allocation_type = "${var.CC_AWSEC2-MC-AllocationType}" session_support = "${var.CC_AWSEC2-MC-SessionType}" is_power_managed = true is_remote_pc = false provisioning_type = "MCS" zone = data.local_file.LoadZoneID.content provisioning_scheme = { hypervisor = citrix_aws_hypervisor.CreateHypervisorConnection.id 
hypervisor_resource_pool = citrix_aws_hypervisor_resource_pool.CreateHypervisorPool.id identity_type = "${var.CC_AWSEC2-MC-IDPType}" machine_domain_identity = { domain = "${var.CC_AWSEC2-MC-Domain}" #domain_ou = "${var.CC_AWSEC2-MC-DomainOU}" service_account = "${var.Provisioner_DomainAdmin-Username-UPN}" service_account_password = "${var.Provisioner_DomainAdmin-Password}" } aws_machine_config = { image_ami = aws_ami_from_instance.CreateAMIFromWMI.id master_image = aws_ami_from_instance.CreateAMIFromWMI.name service_offering = "${var.CC_AWSEC2-MC-Service_Offering}" } number_of_total_machines = "${var.CC_AWSEC2-MC-Machine_Count}" network_mapping = { network_device = "0" network = "${data.aws_subnet.AWSSubnet.cidr_block}" } machine_account_creation_rules = { naming_scheme = "${var.CC_AWSEC2-MC-Naming_Scheme_Name}" naming_scheme_type = "${var.CC_AWSEC2-MC-Naming_Scheme_Type}" } } } #### Sleep 60s to let CC Background processes settle resource "time_sleep" "wait_60_seconds_1" { #depends_on = [ citrix_machine_catalog.CreateMCSCatalog ] create_duration = "60s" } #### Create an Example-Policy Set resource "citrix_policy_set" "SetPolicies" { count = var.CC_AWSEC2-Policy-IsNotDaaS ? 
1 : 0 depends_on = [ time_sleep.wait_60_seconds_1 ] name = "${var.CC_AWSEC2-Policy-Name}" description = "${var.CC_AWSEC2-Policy-Description}" type = "DeliveryGroupPolicies" scopes = [ "All" ] policies = [ { name = "TACG-AWS-TF-Pol1" description = "Policy to enable use of Universal Printer" is_enabled = true policy_settings = [ { name = "UniversalPrintDriverUsage" value = "Use universal printing only" use_default = false }, ] policy_filters = [ { type = "DesktopGroup" is_enabled = true is_allowed = true }, ] }, { name = "TACG-AWS-TF-Pol2" description = "Policy to enable Client Drive Redirection" is_enabled = true policy_settings = [ { name = "UniversalPrintDriverUsage" value = "Prohibited" use_default = false }, ] policy_filters = [ { type = "DesktopGroup" is_enabled = true is_allowed = true }, ] } ] } #### Sleep 60s to let CC Background processes settle resource "time_sleep" "Wait_60_Seconds_2" { depends_on = [ citrix_policy_set.SetPolicies ] create_duration = "60s" } #### Create the Delivery Group based on the Machine Catalog resource "citrix_delivery_group" "CreateDG" { depends_on = [ time_sleep.Wait_60_Seconds_2] name = "${var.CC_AWSEC2-DG-Name}" associated_machine_catalogs = [ { #machine_catalog = citrix_machine_catalog.CreateMCSCatalog.id machine_catalog ="f4e34a11-6e31-421f-8cb4-060bc4a13fef" machine_count = "${var.CC_AWSEC2-MC-Machine_Count}" } ] desktops = [ { published_name = "${var.CC_AWSEC2-DG-PublishedDesktopName}" description = "${var.CC_AWSEC2-DG-Description}" restricted_access_users = { allow_list = [ "TACG-AWS\\vdaallowed" ] } enabled = true enable_session_roaming = var.CC_AWSEC2-DG-SessionRoaming } ] autoscale_settings = { autoscale_enabled = true disconnect_peak_idle_session_after_seconds = 300 log_off_peak_disconnected_session_after_seconds = 300 peak_log_off_action = "Nothing" power_time_schemes = [ { days_of_week = [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" ] name = "${var.CC_AWSEC2-DG-AS-Name}" display_name = 
"${var.CC_AWSEC2-DG-AS-Name}" peak_time_ranges = [ "09:00-17:00" ] pool_size_schedules = [ { time_range = "09:00-17:00", pool_size = 1 } ] pool_using_percentage = false }, ] } restricted_access_users = { allow_list = [ "TACG-AWS\\vdaallowed" ] } reboot_schedules = [ { name = "TACG-AWS-Reboot Schedule" reboot_schedule_enabled = true frequency = "Weekly" frequency_factor = 1 days_in_week = [ "Sunday", ] start_time = "02:00" start_date = "2024-01-01" reboot_duration_minutes = 0 ignore_maintenance_mode = true natural_reboot_schedule = false } ] policy_set_id = citrix_policy_set.SetPolicies.id } .table_component { overflow: auto; width: 100%; } .table_component table { border: 1px solid #0000ff; border-radius: 3px; height: 100%; width: 90%; table-layout: fixed; border-collapse: collapse; border-spacing: 1px; text-align: left; } .table_component caption { caption-side: top; text-align: left; } .table_component th { border: 1px solid #0000ff; background-color: #0000c8; color: #ffffff; padding: 10px; } .table_component td { border: 1px solid #0000ff; background-color: #0000ff; color: #ffffff; padding: 10px; font-family: Consolas,Courier New; font-size:12px; } .table_component { overflow: auto; width: 100%; } .table_component table { border: 1px solid #323232; border-radius: 3px; height: 100%; width: 90%; table-layout: fixed; border-collapse: collapse; border-spacing: 1px; text-align: left; } .table_component caption { caption-side: top; text-align: left; } .table_component th { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; } .table_component td { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; font-family: Consolas,Courier New; font-size:12px; } .table_component { overflow: auto; width: 100%; } .table_component table { border: 1px solid #323232; border-radius: 3px; height: 100%; width: 90%; table-layout: fixed; border-collapse: collapse; border-spacing: 1px; text-align: left; } .table_component 
caption { caption-side: top; text-align: left; } .table_component th { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; } .table_component td { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; font-family: Consolas,Courier New; font-size:12px; } .table_component { overflow: auto; width: 100%; } .table_component table { border: 1px solid #323232; border-radius: 3px; height: 100%; width: 90%; table-layout: fixed; border-collapse: collapse; border-spacing: 1px; text-align: left; } .table_component caption { caption-side: top; text-align: left; } .table_component th { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; } .table_component td { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; font-family: Consolas,Courier New; font-size:12px; } .table_component { overflow: auto; width: 100%; } .table_component table { border: 1px solid #323232; border-radius: 3px; height: 100%; width: 90%; table-layout: fixed; border-collapse: collapse; border-spacing: 1px; text-align: left; } .table_component caption { caption-side: top; text-align: left; } .table_component th { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; } .table_component td { border: 1px solid #323232; background-color: #323232; color: #ffffff; padding: 10px; font-family: Consolas,Courier New; font-size:12px; } .tg {border-collapse:collapse;border-spacing:0;} .tg td{border-color:black;border-style:solid;border-width:1px; overflow:hidden;padding:10px 5px;word-break:normal;} .tg .tg-u3wl{background-color:#1f1f1f;color:#ffffff;font-family:Consolas, "Courier New", monospace !important;font-size:12px; text-align:left;vertical-align:top}
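Note that the Delivery Group excerpt in Module 3 pins the Machine Catalog by a hardcoded GUID and leaves the resource reference commented out. A sketch of the reference-based form (assuming the catalog is managed in the same Terraform state) would be:

```hcl
# Sketch: reference the catalog resource instead of a hardcoded GUID, so the
# Delivery Group automatically follows the catalog if it is ever re-created.
associated_machine_catalogs = [
  {
    machine_catalog = citrix_machine_catalog.CreateMCSCatalog.id
    machine_count   = var.CC_AWSEC2-MC-Machine_Count
  }
]
```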
Overview Citrix DaaS supports zone selection on Google Cloud Platform (GCP) to enable sole-tenant node functionality. You specify the zones where you want to create VMs in Citrix Studio. Sole-tenant nodes allow you to group your VMs together on the same hardware or separate them from the VMs of other projects. Sole-tenant nodes also enable you to comply with network access control policy, security, and privacy requirements such as HIPAA. This document covers: Configuring a Google Cloud environment to support zone selection on the Google Cloud Platform in Citrix DaaS environments. Provisioning Virtual Machines on Sole Tenant nodes. Common error conditions and how to resolve them. Note: The GCP console screenshots in this article may not be up to date, but there is no difference in functionality. Prerequisites You must have existing knowledge of Google Cloud and Citrix DaaS for provisioning machine catalogs in a Google Cloud Project. To set up a GCP project for Citrix DaaS, follow the instructions here. Google Cloud sole tenant Sole tenancy provides exclusive access to a sole tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs. Sole Tenant nodes allow you to group your VMs together on the same hardware or separate your VMs from other workloads. These nodes can help you meet dedicated hardware requirements for Bring Your Own License (BYOL) scenarios. Sole Tenant nodes enable customers to comply with network access control policy, security, and privacy requirements such as HIPAA. Customers can create VMs in desired locations where Sole Tenant nodes are allocated. This functionality supports Windows 10-based VDI deployments. A detailed description of Sole Tenancy can be found on the Google documentation site.
Reserving a Google Cloud sole tenant node 1. To reserve a Sole Tenant Node, access the Google Cloud Console menu, select Compute Engine, and then select Sole-tenant-nodes: 2. Sole tenants in Google Cloud are captured in Node Groups. The first step in reserving a sole tenant platform is to create a node group. In the GCP Console, select Create Node Group: 3. Start by configuring the new node group. Citrix recommends that the Region and Zone selected for your new node group allow access to your domain controller and the subnets utilized for provisioning catalogs. Consider the following: Fill in a name for the node group. In this example, we used mh-sole-tenant-node-group-1. Select a Region. For example, us-east1. Select a Zone where the reserved system resides. For example, us-east1-b. All node groups are associated with a node template, which indicates the performance characteristics of the systems reserved in the node group. These characteristics include the number of virtual CPUs, the quantity of memory dedicated to the node, and the machine type used for machines created on the node. Select the drop-down menu for the Node template. Then select Create node template: 4. Enter a name for the new template. For example: mh-sole-tenant-node-group-1-template-n1. 5. The next step is to select a Node Type. Select the Node type most applicable to your needs in the drop-down menu. Note: You can refer to this Google documentation page for more information on different node types. 6. Once you have chosen a node type, click Create: 7. The Create node group screen reappears after creating the node template. Click Create: Creating the VDA master image To deploy machines on the sole-tenant node, the catalog creation process requires extra steps when creating and preparing the machine image for the provisioned catalog. Machine Instances in Google Cloud have a property called Node affinity labels. 
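The console steps above can also be performed from the gcloud CLI. The following is a sketch only, using the example names from this section; the node type shown (n1-node-96-624) is an assumption and should be replaced with the node type you selected:

```shell
# Sketch: reserve a sole-tenant node group from the CLI.
# Names match the examples above; the node type is an assumed example.
gcloud compute sole-tenancy node-templates create \
    mh-sole-tenant-node-group-1-template-n1 \
    --node-type=n1-node-96-624 \
    --region=us-east1

gcloud compute sole-tenancy node-groups create mh-sole-tenant-node-group-1 \
    --node-template=mh-sole-tenant-node-group-1-template-n1 \
    --target-size=1 \
    --zone=us-east1-b
```

These commands require an authenticated gcloud session in the target project; the console walkthrough above achieves the same result.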
Instances used as master images for catalogs deployed to sole-tenant environments need to have a Node affinity label that matches the name of the target node group. There are two ways to apply the affinity label: Set the label in the Google Cloud Console when creating an Instance. Use the gcloud command line to set the label on existing instances. An example of both approaches follows. Set the node affinity label at instance creation This section does not cover all the steps necessary for creating a GCP Instance. It provides sufficient information and context to understand the process of setting the Node affinity label. Recall that in the examples above, the node group was named mh-sole-tenant-node-group-1. This is the node group name we need to apply as the Node affinity label on the Instance. New instance screen The new instance screen appears. At the bottom of the screen, a section for managing settings related to management, security, disks, networking, and sole tenancy appears. To configure a new Instance: Click the section once to open the Management settings panel. Then click Sole Tenancy to see the related settings panel. 3. The panel for setting the Node affinity label appears. Click Browse to see the available Node Groups in the currently selected Google Cloud project: 4. The Google Cloud Project used for these examples contains one node group, the one that was created in the earlier example. To select the node group: Click the desired node group from the list. Then click Select at the bottom of the panel. 5. After clicking Select in the previous step, you will be returned to the Instance creation screen. The Node affinity labels field contains the needed value to ensure catalogs created from this master image are deployed to the indicated node group: Set the node affinity label for an existing instance 1. To set the Node affinity label for an existing Instance, access the Google Cloud Shell and use the gcloud compute instances command. 
More information about the gcloud compute instances command can be found on the Google Developer Tools page. Include three pieces of information with the gcloud command: Name of the VM. This example uses an existing VM named s2019-vda-base. Name of the Node group. The node group name, previously defined, is mh-sole-tenant-node-group-1. The Zone where the Instance resides. In this example, the VM resides in the us-east1-b zone. 2. The Cloud Shell button is at the top right of the Google Cloud Console window. Click the Cloud Shell button: 3. When the Cloud Shell first opens, it looks similar to the following: 4. Run this command in the Cloud Shell window: gcloud compute instances set-scheduling "s2019-vda-base" --node-group="mh-sole-tenant-node-group-1" --zone="us-east1-b" 5. Finally, verify the details for the s2019-vda-base instance: Google shared VPCs If you intend to use Google Sole-tenants with a Shared VPC, refer to the GCP Shared VPC Support with Citrix DaaS document. Shared VPC support requires extra configuration steps for Google Cloud permissions and service accounts. Create a Machine Catalog You can create a machine catalog after performing the previous steps in this document. Use the following steps to access Citrix Cloud and navigate to the Citrix Studio Console. 1. In Citrix Studio, select Machine Catalogs: 2. Select Create Machine Catalog: 3. Click Next to begin the configuration process: 4. Select an operating system type for the machines in the catalog. Click Next: 5. Accept the default setting that the catalog utilizes power-managed machines. Then, select MCS resources. In this example case, we are using the Resources named GCP1-useast1(Zone:My Resource Location). Click Next: Note: These resources come from a previously created host connection, representing the network and other resources like the domain controller and reserved sole tenants. These elements are used when deploying the catalog. 
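As an aside, the verification mentioned in the gcloud walkthrough above can be done from the same Cloud Shell. A sketch, using the example instance and zone from that section:

```shell
# Sketch: confirm the node affinity attached by the set-scheduling command.
gcloud compute instances describe "s2019-vda-base" \
    --zone="us-east1-b" \
    --format="value(scheduling.nodeAffinities)"
```

If the label was applied correctly, the output should include the node group name set earlier.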
The process of creating the host connection is not covered in this document. More information can be found on the Connections and resources page. 6. The next step is to select the master image for the catalog. Recall that to utilize the reserved Node Group, we must select an image with the Node affinity value set accordingly. For this example, we use the image from the previous example, s2019-vda-base. 7. Click Next: 8. This screen indicates the storage type used for the virtual machines in the machine catalog. For this example, we use the Standard Persistent Disk. Click Next: 9. This screen indicates the number of virtual machines and the zones to which the machines are deployed. In this example, we have specified three machines in the catalog. When using Sole-tenant node groups for machine catalogs, you must only select Zones containing reserved Node Groups. In our example, we have a single node group that resides in Zone us-east1-b, so that is the only zone selected. Click Next: 10. This screen provides the option to enable Write-back cache. For this example, we are not enabling this setting. Click Next: 11. During the provisioning process, MCS communicates with the domain controller to create hostnames for all the machines being created: Select the Domain into which the machines are created. Specify the Account naming scheme used when generating the machine names. Since the catalog in this example has three machines, we have specified a naming scheme of MySTVms-##, which generates the machine names: MySTVms-01 MySTVms-02 MySTVms-03 12. Click Next: 13. Specify the credentials used to communicate with the domain controller, as mentioned in the previous step: Select Enter Credentials. Supply the credentials, then click OK. 14. This screen displays a summary of key information during the catalog creation process. The final step is to enter a catalog name and an optional description. In this example, the catalog name is My Sole Tenant Catalog. 
Enter the catalog name and click Finish: 15. When the catalog creation process finishes, the Citrix Studio Console resembles the following: 16. Use the Google Console to verify that the machines were created on the node group as expected: Note that migrating machine catalogs from Google Cloud general/shared space to sole tenant nodes is not supported. Commonly encountered issues and errors Working with any complex system containing interdependencies can result in unexpected situations. This section shows a few common issues and errors encountered when setting up and configuring CVAD and GCP Sole-tenants. The catalog was created successfully, but machines are not provisioned to the reserved node group If the catalog was created successfully but the machines did not land on the reserved node group, the most likely reasons are: The node affinity label was not set on the master image. The node affinity label value does not match the name of the Node group. Incorrect zones were selected in the Virtual Machines screen during the catalog creation. Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone.’ This situation presents itself with this error when View details is selected in the Citrix Studio dialog window: System.Reflection.TargetInvocationException: One or more errors occurred. Citrix.MachineCreationAPI.MachineCreationException: One or more errors occurred. System.AggregateException: One or more errors occurred. Citrix.Provisioning.Gcp.Client.Exceptions.OperationException: Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone. One or more of the following are the likely causes of receiving this message: You are attempting to provision a new catalog to a zone without a reserved Sole Tenant Node. Ensure the zone selection is correct on the Virtual Machines screen. You have a Sole Tenant Node reserved, but the value of the master image's Node affinity label does not match the name of the reserved Node group. 
Refer to these two sections: Set the node affinity label at instance creation and Set the node affinity label for an existing instance. Upgrading an existing catalog fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’ There are two cases in which this occurs: You are upgrading an existing sole tenant catalog that has already been provisioned using Sole Tenancy and Zone Selection. The causes of this are the same as those found in the earlier entry Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’. You are upgrading an existing non-sole tenant catalog and do not have a sole tenant node reserved in each zone that is already provisioned with machines for the catalog. This case is considered a migration, intending to migrate machines from Google Cloud Common/Shared runtime space to a Sole Tenant Group. As noted earlier, this is not supported. Unknown errors during catalog provisioning If you encounter a dialog like this when creating the catalog: Selecting View details produces a screen resembling the following. There are a few things you can check: Ensure that the Machine Type specified in the Node Group Template matches the Machine Type for the master image Instance. Ensure that the Machine Type for the master image has 2 or more CPUs. Test plan This section contains some exercises to help you get a feel for CVAD support of Google Cloud Sole-tenants. Single tenant catalog Reserve a node group in a single zone and provision both a persistent and a non-persistent catalog. During the steps below, monitor the node group using the Google Cloud Console to ensure proper behavior: Power off the machines. Add machines. Power all machines on. Power all machines off. Delete some machines. Delete the machine catalog. 
Update the catalog. Update the catalog from a non-sole tenant template to a sole tenant template. Update the catalog from a sole tenant template to a non-sole tenant template. Two zone catalog Like the exercise above, reserve two node groups and provision a persistent catalog in one zone and a non-persistent catalog in another. During the steps below, monitor the Node Groups using the Google Cloud Console to ensure proper behavior: Power off the machines. Add machines. Power all machines on. Power all machines off. Delete some machines. Delete the machine catalog. Update the catalog. Update the catalog from the non-sole tenant template to the sole tenant template. Update the catalog from the sole tenant template to the non-sole tenant template.
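For the monitoring and power exercises in both test plans, the gcloud CLI can supplement the console. A sketch with hypothetical machine names; note that power actions for MCS-managed catalogs are normally driven from Citrix Studio, so treat the instance commands as a way to observe behavior, not the standard workflow:

```shell
# Sketch: watch node-group utilization while exercising the catalog.
gcloud compute sole-tenancy node-groups describe mh-sole-tenant-node-group-1 \
    --zone=us-east1-b

# Hypothetical machine names; MCS power actions are normally issued from Studio.
gcloud compute instances stop mystvms-01 mystvms-02 mystvms-03 --zone=us-east1-b
gcloud compute instances start mystvms-01 mystvms-02 mystvms-03 --zone=us-east1-b
```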
  23. Overview This document describes the steps required to create an MCS Machine Catalog by using a Windows 10 VDA, Google Cloud Shared VPC, and Google Cloud Sole Tenant Nodes. Prerequisites Citrix DaaS and Google Cloud. For details, see the product documentation. GCP Zone Selection Support with Citrix DaaS. GCP Windows 10 VDA with Citrix DaaS. The following prerequisite is for users who want to use a Shared VPC in addition to using Sole Tenancy: GCP Shared VPC Support with Citrix DaaS. Once you meet all prerequisites, you must set up and configure the following environment and technical items: Google Cloud Service Project with permissions to use the Shared VPC Sole Tenant Node Group Reservation that resides in the Service Project Windows 10 VDA (Optional) Google Cloud Host Project with a Shared VPC and required firewall rules Example environment Creating the desired Windows 10-based MCS Machine Catalog in Google Cloud is similar to creating other catalogs: once you have completed the prerequisites described in the preceding section, you select the proper VDA and network resources. For this example, the following elements are in place: Host Connection The Host Connection in this example uses Google Cloud Shared VPC resources. This is not mandatory when using Zone Selection; a standard Local VPC-based Host Connection can be used. Connection Name Shared VPC Resources Connection Resources SharedVPCSubnet Virtual Shared VPC Network gcp-test-vpc Shared VPC Subnet subnet-good Sole Tenant Node Reservation A Sole Tenant Node Group named mh-windows10-node-group located in Zone us-east1-b. Windows 10 VDA Image A Windows 10-based VDA named ‘windows10-1909-vda-base’ that resides in a local project, also in zone us-east1-b. 
Catalog Creation The following steps cover creation of the Windows 10-based Machine Catalog that uses a Google Cloud Shared VPC and Zone Selection. The final steps describe how to validate that the resulting machines are using the desired resources. Start with Full Configuration, and select Machine Catalogs. The Machine Catalogs screen opens. Click Create Machine Catalog. The standard Catalog Creation Introduction screen may appear. Click Next. On the screen that appears, you specify the type of operating system the catalog will be based upon: Multi-Session OS, which indicates a Windows Server-based catalog Single-Session OS, which indicates a Windows Client-based catalog Remote PC Access, which indicates a catalog that includes physical machines This will be a Windows 10-based catalog, in which a Single-Session OS is used. Select Single-Session OS and then click Next. The next screen is used to indicate whether the machines are power managed. The machines are power managed in this example. The screen also indicates the technology used to deploy the machines. Because MCS is being used, you must indicate the network resources to be used when deploying the machines. Note that in the following case, the Shared VPC SharedVPCSubnet noted in Example Environment has been selected for the resources to be used. Select the resources associated with your Shared VPC on the following screen and then click Next. Consider whether users connect to a random desktop each time they log in or to the same (static) desktop. Here we choose the Random desktop type. This option means that all changes that users make to the machine are discarded. Click Next. Select the image to be used as the base disk in the catalog. Here, we select windows10-1909-vda-base as noted in the Example Environment. Click Next. Leave the defaults selected for Storage. Click Next. The Virtual Machines screen is another critical screen. 
Zone Selection is what enables MCS to use the reserved Sole Tenant Node for placement of the provisioned Windows 10 virtual machines. The Example Environment section noted that both the Sole Tenant Node and the VDA image reside in Zone us-east1-b. Because we have a single Sole Tenant Node reserved, this is the only zone that should be selected. To distribute your machines across zones, reserve a Sole Tenant in each zone to be used. Click Next. The key thing to ensure on the Active Directory Computer Accounts screen is that the AD Domain you select is the correct domain for provisioning machines in the Shared VPC network. Select the desired AD Domain, enter an Account naming scheme, and then click Next. On the Domain Credentials screen, enter credentials with sufficient privileges to create and delete computer accounts in the domain. Enter the credentials and then click Next. The Catalog Summary and Name screen shows a summary of the catalog to be created. You can also provide a name for the catalog. In this case, the catalog name is Windows 10 Shared VPC and Sole Tenant. Click Finish. It may take a few minutes for the catalog creation to complete. Then, you can view machines in the catalog through the Search node on the tree. Validate Resource Utilization To validate resource utilization and ensure that the newly provisioned machines are using the expected resources, check the following: Are the machines running on the reserved Sole Tenant Node? Are the machines on the desired Shared VPC subnet? Remember that use of a Shared VPC is optional, so this validation step may not be applicable to your configuration. Machines Running on Sole Tenant Node The following figure shows that the three newly provisioned machines are running on the reserved Sole Tenant Node. Instance Details The details for the first Instance confirm the following: The proper Node Affinity Label tag is in place. The correct network gcp-test-vpc is being used. The correct subnet subnet-good is being used.
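The two validation checks above can also be performed programmatically against the instance resource returned by the Compute Engine API. A minimal sketch, assuming the standard instance resource fields and the example names from this article; the sample data below is hypothetical:

```python
# Sketch: programmatic version of the manual validation checklist above.
# Field names mirror the GCP instance resource; sample data is hypothetical.

def validate_instance(instance, node_group, subnet):
    """Check that a provisioned VM landed on the reserved sole-tenant
    node group and on the expected Shared VPC subnet."""
    affinities = instance.get("scheduling", {}).get("nodeAffinities", [])
    on_node_group = any(
        a.get("key") == "compute.googleapis.com/node-group-name"
        and node_group in a.get("values", [])
        for a in affinities
    )
    on_subnet = any(
        nic.get("subnetwork", "").endswith("/" + subnet)
        for nic in instance.get("networkInterfaces", [])
    )
    return on_node_group, on_subnet

# Hypothetical instance resource, shaped like a `gcloud ... describe` result.
sample = {
    "scheduling": {"nodeAffinities": [{
        "key": "compute.googleapis.com/node-group-name",
        "operator": "IN",
        "values": ["mh-windows10-node-group"],
    }]},
    "networkInterfaces": [{
        "subnetwork": "projects/host-proj/regions/us-east1/subnetworks/subnet-good",
    }],
}
print(validate_instance(sample, "mh-windows10-node-group", "subnet-good"))  # → (True, True)
```

A result other than (True, True) points to the same root causes listed in the troubleshooting guidance: a missing or mismatched node affinity label, or the wrong subnet selected for the catalog.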
  24. Overview Citrix DaaS supports Google Cloud Platform (GCP) Shared VPC. This document covers: An overview of Citrix support for Google Cloud Shared VPCs. An overview of terminology related to Google Cloud Shared VPCs. Configuring a Google Cloud environment to support use of Shared VPCs. Use of Google Shared VPCs for host connections and machine catalog provisioning. Common error conditions and how to resolve them. Prerequisites This document assumes knowledge of Google Cloud and the use of Citrix DaaS for provisioning machine catalogs in a Google Cloud Project. To set up a GCP project for Citrix DaaS, see the product documentation. Summary Citrix MCS support for provisioning and managing machine catalogs deployed to Shared VPCs is functionally equivalent to what is supported in Local VPCs today. There are two ways in which they differ: A few more permissions must be granted to the Service Account used to create the Host Connection to allow MCS to access and use the Shared VPC resources. The site administrator must create two firewall rules, one each for ingress and egress, to be used during the image mastering process. Both of these are discussed in greater detail later in this document. Note: The GCP console screenshots in this article may not be current, but the functionality is unchanged. Google Cloud Shared VPCs GCP Shared VPCs comprise a Host Project, from which the shared subnets are made available, and one or more service projects that use the resources. Use of Shared VPCs is a good option for larger installations because they provide more centralized control, usage, and administration of shared corporate Google Cloud resources. 
Google Cloud describes it this way: "Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network." This paragraph was taken from the Google documentation site. New Permissions Required When working with Citrix DaaS and Google Cloud, a GCP Service Account with specific permissions must be provided when creating the Host Connection. As noted earlier, to use GCP Shared VPCs some additional permissions must be granted to any service account used to create Shared VPC-based Host Connections. Technically speaking, the permissions required are not "new", since they are already necessary to use Citrix DaaS with GCP and Local VPCs. The change is that the permissions must be granted to allow access to the Shared VPC resources. This is accomplished by adding the Service Account to the IAM Roles for the Host Project, and is covered in detail in the "How To" section of this document. Note: To review the permissions required for the currently shipping Citrix DaaS product, see the Citrix documentation site describing resource locations. In total, a maximum of four additional permissions must be granted to the Service Account associated with the Host Connection: compute.firewalls.list - Mandatory This permission is necessary to allow Citrix MCS to retrieve the list of firewall rules present on the Shared VPC (discussed in detail below). compute.networks.list - Mandatory This permission is necessary to allow Citrix MCS to identify the Shared VPC networks available to the Service Account. 
compute.subnetworks.list – May be Mandatory (see below) This permission is necessary to allow MCS to identify the subnets within the visible Shared VPCs. Note: This permission is already required for using Local VPCs but must also be assigned in the Shared VPC Host Project. compute.subnetworks.use - May be Mandatory (see below) This permission is necessary to use the subnet resources in the provisioned machine catalogs. Note: This permission is already required for using Local VPCs but must also be assigned in the Shared VPC Host Project. The last two items are noted as "May be Mandatory" because there are two different approaches to be considered when dealing with these permissions: Project-level permissions Allow access to all Shared VPCs within the host project. Require that permissions #3 and #4 be assigned to the Service Account. Subnet-level permissions Allow access to specific subnets within the Shared VPC. Permissions #3 and #4 are intrinsic to the subnet-level assignment and therefore do not need to be assigned directly to the Service Account. Examples of both approaches are provided below in the "How To" section of this document. Either approach works equally well. Select the model more in tune with your organizational needs and security standards. More detailed information regarding the difference between Project-level and Subnet-level permissions can be obtained from the Google Cloud documentation. Host Project To use Shared VPCs in Google Cloud, first choose and enable one Google Cloud Project to be the Host Project. This Host Project contains one or more Shared VPC Networks used by other Google Cloud Projects within the organization. Configuring the Shared VPC Host Project, creating subnets, and sharing either the entire project or specific subnets with other Google Cloud Projects are purely Google Cloud related activities and not included within the scope of this document. 
The Google Cloud documentation related to creating and working with Shared VPCs can be found here. Firewall Rules A key step in the behind-the-scenes processing that occurs when provisioning or updating a machine catalog is called mastering. This is when the selected machine image is copied and prepared to be the master image system disk for the catalog. During mastering, this disk is attached to a temporary virtual machine, the prep machine, and started up to allow preparation scripts to run. This virtual machine needs to run in an isolated environment that prevents all inbound and outbound network traffic. This is accomplished through a pair of Deny-All firewall rules; one for ingress and one for egress. When using GCP Local VPCs, MCS creates this firewall rule pair on the fly in the local network, applies them to the machine for mastering, and removes them when mastering has completed. Citrix recommends keeping the number of new permissions required to use Shared VPCs to a minimum, because Shared VPCs are higher-level corporate resources and typically have more rigid security protocols in place. For this reason, the site administrator must create a pair of firewall rules (one ingress, and one egress) on each Shared VPC with the highest priority and apply a new Target Tag to each of the rules. The Target Tag value is: citrix-provisioning-quarantine-firewall When MCS is creating or updating a machine catalog, it searches for firewall rules containing this Target Tag, examines the rules for correctness, and applies them to the prep machine. If the firewall rules are not found, or the rules are found but the rules or their priority are incorrect, a message of this form is returned: Unable to find valid INGRESS and EGRESS quarantine firewall rules for VPC \<name\> in project \<project\>. Please ensure you have created deny all firewall rules with the network tag citrix-provisioning-quarantine-firewall and proper priority. Refer to Citrix Documentation for details. 
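A CLI equivalent of the rule pair described above might look like the following sketch, run with the Host Project selected; gcp-test-vpc and the rule names are the example values used elsewhere in this document:

```shell
# Sketch: create the deny-all quarantine rule pair on the Shared VPC.
gcloud compute firewall-rules create citrix-deny-all-ingress-rule \
    --network=gcp-test-vpc --direction=INGRESS --action=DENY --rules=all \
    --priority=10 --source-ranges=0.0.0.0/0 \
    --target-tags=citrix-provisioning-quarantine-firewall

gcloud compute firewall-rules create citrix-deny-all-egress-rule \
    --network=gcp-test-vpc --direction=EGRESS --action=DENY --rules=all \
    --priority=10 --destination-ranges=0.0.0.0/0 \
    --target-tags=citrix-provisioning-quarantine-firewall
```

The console-based walkthrough for the same two rules appears in the How To: Firewall Rules section below.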
Cloud Connectors When using a Shared VPC for Citrix DaaS machine catalogs, you create two or more Cloud Connectors to access the Domain Controller that resides within the Shared VPC. The recommendation in this case is to create a GCP machine instance in your local project and add an additional network interface to the instance when creating it. The first interface would be connected to a subnet in the Shared VPC. The second network interface would connect to a subnet in your Local VPC to allow access for administrative control and maintenance via your Local VPC Bastion Server. Note that you cannot add a network interface to a GCP instance after it has been created, so the second interface must be configured at creation time. It is a simple process and is covered below in one of the How To entries. How To Section The following section contains instructional examples to help you understand the steps to perform the configuration changes necessary to use Google Shared VPCs with Citrix DaaS. The examples presented in Google Console screenshots all occur in a hypothetical Google Project named Shared VPC Project 1. How To: Create a New IAM Role Some additional permissions need to be granted to the Service Account used when creating the Host Connection. Since the intent of deploying to Shared VPCs is to allow multiple projects to deploy to the same Shared VPC, the most efficient approach is to create a role in the Host Project with the desired permissions and then assign that role to any Service Account that requires access to the Shared VPC. Below, we create the Project-Level role named Citrix-ProjectLevel-SharedVpcRole. The Subnet-Level role follows the same steps for the first two sets of permissions assigned. 1. Access the IAM & Admin configuration option in the Google Cloud Console: 2. Select Create Role: 3. A screen resembling the following appears: 4. Specify the Role Name. Click ADD PERMISSIONS to apply the update: 5. After clicking ADD PERMISSIONS, a screen resembling the one below appears. 
In this image, the "Filter Table" text entry field has been highlighted: 6. Clicking the Filter Table text entry field displays a contextual menu: 7. Copy and paste (or type) the string compute.firewalls.list into the text field, as shown below: 8. Selecting the compute.firewalls.list entry that has been filtered out of the table of permissions results in this dialog: 9. Click the toggle box to enable the permission: 10. Click ADD. The Create Role screen reappears. The compute.firewalls.list permission has been added to the role: 11. Add the compute.networks.list permission using the same steps as above. However, make sure to select the proper rule. As you can see below, two permissions are listed when the permission text is entered into the filter table field. Choose the compute.networks.list entry: 12. Click ADD. 13. The two Mandatory permissions have been added to our role: 14. Determine what level of access the Role has, such as Project-Level access or a more restricted model using Subnet-Level access. For the purposes of this document, we are creating the role named Citrix-ProjectLevel-SharedVpcRole, so we add the compute.subnetworks.list and compute.subnetworks.use permissions using the same steps used above. The resulting screen looks like this, with the four permissions granted, just before clicking Create: 15. Click CREATE. Note: If the Subnet-Level Role were being created here, we would have clicked CREATE without adding the compute.subnetworks.list and compute.subnetworks.use permissions. How To: Add Service Account to Host Project IAM Role Now that we have created the Citrix-ProjectLevel-SharedVpcRole, we need to add a Service Account within the Host Project. For this example, we use a Service Account named citrix-shared-vpc-service-account. 1. The first step is to navigate to the IAM & Roles screen for the project. In the console, select IAM and Admin. Select IAM: 2. Add members with the specified permissions. 
Click ADD to display the list of members: 3. Clicking ADD displays a small panel as shown in the image below. Data is entered in the following step. 4. Start typing the name of your Service Account into the field. As you type, Google Cloud searches the projects you have permission to access and presents a narrowed list of possible matches. In this case, we have one match (displayed directly below the fill-in), so we select that entry: 5. After specifying the Member Name (in our case, the Service Account), select a role for the Service Account to function as in the Shared VPC Project. Start this process by clicking the indicated list: 6. The Select a Role process is similar to those used in the previous How To - Create a New IAM Role. In this case, several more options are displayed, including the fill-in. 7. Since we know the Role we want to apply, we can start typing. Once the intended Role appears, select the role: 8. After selecting the Role, click Save: We have now successfully added the Service Account to the Host Project. How To: Subnet-Level Permissions If you have chosen to use Subnet-Level access rather than Project-Level access, you must add the Service Accounts to be used with the Shared VPC as members for each subnet representing the resources to be accessed. For this How To section, we are going to provide the Service Account named sharedvpc-sa\@citrix-mcs-documentation.iam.gserviceaccount.com with access to a single subnet in our Shared VPC. 1. The first step is to navigate to the Shared VPC screen in Google Console: 2. This is the landing page for the Google Cloud Console Shared VPC screen. This project shows five subnets. The Service Account in this example requires access to the second subnet-good subnet (the last subnet on the list below). Select the checkbox next to the second subnet-good subnet: 3. Now that the checkbox for the last subnet has been selected, note that the ADD MEMBER option appears on the upper right of the screen. 
It is also useful for this exercise to take note of the number of users this subnet has been shared with. As indicated, one user has access to this subnet. 4. Click ADD MEMBER: 5. Similar to the steps required to add the Service Account to the Host Project in the preceding How To section, the New member name must also be provided here. After filling in the name, Google Cloud lists all related items (as before) so that we can select the relevant Service Account. In this case, it is a single entry. Double-click the Service Account to select it: 6. After a Service Account has been selected, a Role for the new Member must also be chosen. In the list, click Select a role, then double-click the Compute Network User Role. 7. The image shows that the Service Account and Role have been specified. The only remaining step is to click SAVE to commit the changes: 8. After the changes have been saved, the main Shared VPC screen appears. Observe that the number of users who have access to the last subnet has, as expected, increased to two: How To: Add Project CloudBuild Service Account to the Shared VPC Every Google Cloud project has a service account named after the project ID number followed by cloudbuild.gserviceaccount.com. A full name example (using a made-up project ID) is: 705794712345\@cloudbuild.gserviceaccount.com. This cloudbuild Service Account also needs to be added as a member of the Shared VPC, just as the Service Account you use for creating Host Connections was in Step 3 of How To: Add Service Account to Host Project IAM Role. 1. You can determine what the Project ID number is for your project by selecting Home and Dashboard in the Google Cloud Console menu: 2. Find the Project Number under the Project Info area of the screen. 3. Enter the project number/cloudbuild.gserviceaccount.com combination into the Add Member field. Assign a Role of Compute Network User: 4. Select Save. 
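The lookup-and-grant steps above can be condensed into two CLI commands. A sketch with hypothetical project IDs; the role shown is the Compute Network User role referenced above:

```shell
# Sketch: look up the project number, then grant the Cloud Build service
# account Compute Network User on the host project. Project IDs are examples.
PROJECT_NUMBER=$(gcloud projects describe my-service-project \
    --format="value(projectNumber)")
gcloud projects add-iam-policy-binding citrix-shared-vpc-project-1 \
    --member="serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
    --role="roles/compute.networkUser"
```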
How To: Firewall Rules Creating the needed firewall rules is a bit easier than creating the Roles. As noted in this document, the two firewall rules must be created in the Host Project. 1. Make certain you have selected the Host Project. 2. From the Google Console menu, navigate to VPC > Firewall, as shown below: 3. The top of the Firewall screen in Google Console includes a button to create a rule. Click CREATE FIREWALL RULE: 4. The screen used to create a firewall rule is shown below: 5. First, create the needed Deny-All ingress rule by adding or changing values in the following fields: Name Give your Deny-All ingress firewall rule a name. For example, citrix-deny-all-ingress-rule. Network Select the Shared VPC network to which this ingress firewall rule is applied. For example, gcp-test-vpc. Priority The value in this field is critical. In the world of firewall rules, the lower the priority value, the higher the rule's priority. This is why all the default rules have a value of 65535, so that any custom rule with a lower value takes priority over any of the default rules. We need these two rules to be the highest priority rules on the network, so we use a value of 10. Direction of traffic The default for creating a rule is Ingress, which should already be selected. Action on match This value defaults to Allow. We must change it to Deny. Targets This is the other critical field. The default target type is Specified target tags, precisely what we want. In the text box labeled Target tags, enter the value citrix-provisioning-quarantine-firewall. Source filter For the source filter, we retain the default IP range filter type and enter a range that matches all traffic. We use a value of 0.0.0.0/0. Protocols and ports Under Protocols and ports, select Deny all. 6. The completed screen should look like this: 7. Click CREATE to generate the new rule. 8. The egress rule is almost identical to the previously created ingress rule. 
Click CREATE FIREWALL RULE again, as was done above, and fill in the fields as detailed below: Name Give your Deny-All egress firewall rule a name. Here, we call it citrix-deny-all-egress-rule. Network Select the Shared VPC network used when creating the ingress firewall rule above. For example, gcp-test-vpc. Priority As noted above, we use a value of 10. Direction of traffic For this rule, we must change from the default and select Egress. Action on match This value defaults to Allow. We must change it to Deny. Targets Enter the value citrix-provisioning-quarantine-firewall into the Target tags field. Destination filter For the destination filter, we retain the default IP range filter type and enter a range that matches all traffic. We use 0.0.0.0/0 for that. Protocols and ports Under Protocols and ports, select Deny all. 9. The completed screen should look like this: 10. Click CREATE to generate the new rule. Both of the necessary firewall rules have been created. If multiple Shared VPCs are used when deploying machine catalogs, repeat the above steps. Create two rules for each identified Shared VPC in its respective Host Project. How To: Add Network Interface to Cloud Connector Instances When creating Cloud Connectors for use with the Shared VPC, an additional network interface must be added to the instance when it is being created. Additional network interfaces cannot be added once the instance exists. To add the second network interface: 1. This is the initial panel for network settings presented when first creating an instance. 2. Since we want to use the first network interface for the Shared VPC, click the Pencil icon to enter Edit mode. The expanded network settings screen is below. A key item to note is that we can now see the option for Networks shared with me (from host project: citrix-shared-vpc-project-1) directly beneath the Network Interface banner: 3. 
The Network Settings panel with the Shared VPC selected shows: The Shared VPC network is selected. The subnet-good subnet is selected. The External IP address setting is changed to None. 4. Click Done to save the changes. Click Add Network Interface. 5. Our first interface is connected to the Shared VPC (as indicated). We can configure the second interface using the same steps normally used when creating a Cloud Connector. Select a network subnet, decide on an external IP address, and then click Done: How To: Creating Host Connection and Hosting Unit Creating a Host Connection for use with Shared VPCs is not much different from creating one for use with a Local VPC. The difference is in selecting the resources to be associated with the Host Connection. When creating a Host Connection to access Shared VPC resources, use Service Account JSON files related to the project where the provisioned machines reside. 1. Creating a Host Connection for using Shared VPC resources is similar to creating any other GCP-related Host Connection: 2. Once your project, shown as Developer Project in the following figure, has been added to the list that can access the Shared VPC, you may see both your project and the Shared VPC project in Studio. It is important to ensure you select the project where the deployed machine catalog should reside and not the Shared VPC: 3. Select the resources associated with the Host Connection. Consider the following: The Name given for the resources is SharedVPCResources. The list of virtual networks to choose from includes those from the local project and those from the Shared VPC, as indicated by (Shared) appended to the network names. Note: If you do not see any networks with (Shared) appended to the name, click the Back button and verify you have chosen the correct Project. If you verify the project chosen is correct and still do not see any shared VPCs, something is misconfigured in the Google Cloud Console. 
See the Commonly Encountered Issues and Errors section later in this document. 4. The following figure shows that the gcp-test-vpc (Shared) virtual network was selected in the previous step. It also shows that the subnet named subnet-good has been selected. Click Next: 5. After clicking Next, the Summary screen appears. In this screen, consider: The Project is Developer Project. The virtual network is gcp-test-vpc, one from the Shared VPC. The subnet is subnet-good. How To: Creating a Catalog Everything from this point forward, such as catalog creation, starting/stopping machines, updating machines, and so on, is performed exactly the same way as when using Local VPCs. Commonly Encountered Issues and Errors Working with any complex system with interdependencies may result in unexpected situations. Below are a few common issues and errors that may be encountered when performing the setup and configuration to use Citrix DaaS and GCP Shared VPCs. Missing or Incorrect Firewall Rules If the firewall rules are not found, or the rules are found but the rules or priority are incorrect, a message of this form is returned: "Unable to find valid INGRESS and EGRESS quarantine firewall rules for VPC \ in project \. Please ensure you have created 'deny all' firewall rules with the network tag 'citrix-provisioning-quarantine-firewall' and proper priority. Refer to Citrix Documentation for details." If you encounter this message, you must review the firewall rules, their priorities, and which networks they apply to. For details, refer to the section How To: Firewall Rules. Missing Shared Resources When Creating Host Connection There are a few reasons this situation may occur: The incorrect project was selected when creating the Host Connection For example, if the Shared VPC Host Project was selected when creating the Host Connection instead of your project, you will still see the network resources from the Shared VPC, but they will not have (Shared) appended to them. 
If you see the shared subnets without that extra information, the Host Connection was likely made with the wrong Service Account. See How To: Creating Host Connection and Hosting Unit. Wrong Role was assigned to the Service Account If the wrong role was assigned to the Service Account, you may not be able to access the desired resources in the Shared VPC. See How To: Add Service Account to Host Project IAM Role. Incomplete or incorrect permissions granted to Role The correct Role may be assigned to the Service Account, but the Role itself may be incomplete. See How To: Create a New IAM Role. Service Account not added as subnet Member If you use Subnet-Level access, ensure the Service Account was properly added as a Member (User) of the desired subnet resources. See How To: Subnet-Level Permissions. Cannot find path error in Studio If you receive an error while creating a catalog in Studio of the form: Cannot find path "XDHyp:\\Connections …" because it does not exist it is most likely that a new Cloud Connector was not created to facilitate the use of the Shared VPC resources. It is a simple thing to overlook after going through all the above steps to configure everything. Refer to Cloud Connectors for important points on creating them.
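If the firewall-rule check fails as described above, the two deny-all quarantine rules can also be recreated from the command line rather than the console. Below is a dry-run sketch using the gcloud CLI with this guide's example network and tag; the Host Project ID is a hypothetical placeholder, and the commands are echoed rather than executed.

```shell
# Dry-run sketch of the two quarantine rules from How To: Firewall Rules.
# HOST_PROJECT is a hypothetical placeholder; network and tag are the guide's examples.
gen_fw_cmds() {
  HOST_PROJECT="citrix-shared-vpc-project-1"
  NETWORK="gcp-test-vpc"
  TAG="citrix-provisioning-quarantine-firewall"
  # Ingress: deny all traffic from any source to tagged instances.
  echo gcloud compute firewall-rules create citrix-deny-all-ingress-rule \
    --project="$HOST_PROJECT" --network="$NETWORK" --direction=INGRESS \
    --action=DENY --rules=all --priority=10 \
    --target-tags="$TAG" --source-ranges=0.0.0.0/0
  # Egress: deny all traffic from tagged instances to any destination.
  echo gcloud compute firewall-rules create citrix-deny-all-egress-rule \
    --project="$HOST_PROJECT" --network="$NETWORK" --direction=EGRESS \
    --action=DENY --rules=all --priority=10 \
    --target-tags="$TAG" --destination-ranges=0.0.0.0/0
}
gen_fw_cmds
```

Remove the echo prefixes to create the rules for real, and repeat per Host Project when multiple Shared VPCs are in use.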
Audience This document is intended for architects, network designers, technical professionals, partners, and consultants interested in implementing the Citrix Secure Private Access On-Premises solution. It is also designed for network administrators, Citrix administrators, managed service providers, or anyone looking to deploy this solution. Solution Overview Citrix Secure Private Access On-Premises is a customer-managed Zero Trust Network Access (ZTNA) solution that provides VPN-less access to internal web and SaaS applications following the least-privilege principle, with single sign-on (SSO), multifactor authentication, device posture assessment, application-level security controls, and app protection features, along with a seamless end-user experience. The solution uses StoreFront on-premises and the Citrix Workspace app to enable a seamless and secure access experience to web and SaaS apps within Citrix Enterprise Browser. This solution also uses NetScaler Gateway to enforce authentication and authorization controls. The Citrix Secure Private Access On-Premises solution enhances an organization’s overall security and compliance posture by delivering Zero Trust access to browser-based apps (internal web apps and SaaS apps) using the StoreFront on-premises portal as a unified access portal to web and SaaS apps, along with virtual apps and desktops, as an integrated part of Citrix Workspace. Citrix Secure Private Access combines elements of NetScaler Gateway and StoreFront to deliver an integrated experience for end users and administrators. 
Functionality and the service/component providing it:
Consistent UI to access apps: StoreFront on-premises/Citrix Workspace app
SSO to SaaS and web apps: NetScaler Gateway
Multifactor authentication (MFA) and device posture (aka End-Point Analysis): NetScaler Gateway
Security controls and app protection controls for web and SaaS apps: Citrix Enterprise Browser
Authorization policies: Secure Private Access
Configuration and management: Citrix Secure Private Access UI, NetScaler UI
Visibility, monitoring, and troubleshooting: Citrix Secure Private Access UI and Citrix Director
Use Cases The Citrix Secure Private Access (SPA) On-Premises solution with Citrix Virtual Apps and Desktops On-Premises provides a unified and secure end-user experience for virtualized and browser-based apps (web apps and SaaS apps) with consistent security. The SPA On-Premises solution is designed to address the following use cases via a customer-managed solution. Use case #1: Secure access for employees and contractors to internal web and SaaS apps from managed or unmanaged devices without publishing a browser or using a VPN. Use case #2: Provide comprehensive last-mile Zero Trust enforcement with admin-configurable browser security controls for internal web and SaaS apps from managed or unmanaged devices without publishing a browser or using a VPN. Use case #3: Accelerate mergers and acquisitions (M&A) user access across multiple identity providers, ensure consistent security, and provide seamless end-user access across multiple user groups. System Requirements This article guides deploying Secure Private Access with StoreFront and NetScaler Gateway. Citrix Enterprise Browser (included in Citrix Workspace app) is the client software used to interact with your SaaS or internal web apps securely. Global App Configuration service (GACS) is a requirement for browser management of Citrix Enterprise Browser. Note: This article does not include guidance on deploying Citrix Virtual Apps and Desktops. 
This guide assumes that the reader has a basic understanding of the following Citrix and NetScaler offerings and general Windows administrative experience: Citrix Workspace app, StoreFront, NetScaler Gateway, Global App Configuration service, Windows Server, and SQL Express or Server. Product communication matrix: Secure Private Access for on-premises (Secure Private Access plug-in). Versions:
Citrix Workspace app: Windows – 2311 and above; macOS – 2311 and above
Citrix Virtual Apps and Desktops: supported LTSR and current versions
StoreFront: LTSR 2203 or CR 2212 and above
NetScaler Gateway: 13.0 and above. We recommend using the latest build of NetScaler 13.1 or 14.1 for optimized performance.
Windows Server: 2019 and above (.NET 6.x and above runtime must be supported)
SQL Express or Server: 2019 and above
Note: Citrix Secure Private Access On-Premises is not supported on Citrix Workspace app for iOS and Android. Refer to the following documentation for more details as needed: Citrix Workspace app (Windows, macOS); StoreFront (System requirements, Plan your StoreFront deployment); NetScaler Gateway (Before Getting Started, Common Citrix Gateway deployments); Global App Configuration service (GACS) (Manage Citrix Enterprise Browser through Global App Configuration service, Manage single sign‑on for Web and SaaS apps through the Global App Configuration service). Technical Overview Access to internal web apps is possible from any location, with any device, at any time, through NetScaler Gateway with Citrix Enterprise Browser (included in Citrix Workspace app) installed. The same applies to SaaS apps, with the difference that the access can be direct or indirect through NetScaler Gateway. Citrix Enterprise Browser and Citrix Workspace app connect to NetScaler Gateway using a TLS-encrypted connection. NetScaler Gateway provides zero trust-based access by assessing the user’s device, strong nFactor user authentication, app authorization, and single sign-on (SSO). 
StoreFront enumerates virtual and non-virtual apps through the Citrix Desktop Delivery Controller and the Secure Private Access (SPA) plug-in. Citrix Enterprise Browser tunnels internal traffic (for example, https://website.company.local) to NetScaler Gateway to allow access without needing a public-facing DNS entry. SaaS application access can be direct or, for special use cases, indirect through NetScaler Gateway. Citrix Secure Private Access with Citrix Enterprise Browser allows the configuration of additional security controls for web and SaaS apps, such as watermarking and copy/paste, upload/download, and print restrictions. These restrictions are dynamically applied on a per-app basis. Scenarios Citrix Secure Private Access On-Premises can be deployed in any environment with one or more StoreFront servers and NetScaler Gateways. This section describes a few different scenarios that have been successfully implemented and validated. Scenario 1 – Single server deployment Scenario 1 is for testing purposes only and should not be considered for production environments because of its limited redundancy. Scenario 2 – Scalable deployment Scenario 2 is designed for performance and redundancy. This is a recommended production deployment. Scenario 3 – Geo deployment (Coming Soon) Scenario 3 is for large enterprises with geographical data center redundancy. Scenario 1 - Simple deployment Scenario 1 is a straightforward deployment that uses the least infrastructure resources. Because of its limited redundancy, this scenario is not recommended for use in production. Note: We assume that a working Citrix Virtual Apps and Desktops infrastructure is installed and a NetScaler is deployed in a DMZ. On-premises infrastructure environment Active Directory NetScaler VPX/MPX (Gateway) Combined StoreFront and SPA plug-in server Webserver containing websites Webserver certificate Note: This is a simplified architectural overview of scenario 1. 
For more detailed communication information, please see Secure Private Access for on-premises (Secure Private Access plug-in). Installation (Scenario 1) StoreFront 1. Install a web server certificate on the StoreFront and Secure Private Access machine. 2. Download the Citrix Virtual Apps and Desktops ISO file from the Citrix Download Center. 3. Run the ISO installer AutoSelect.exe. 4. Select Start from Virtual Apps and Desktops. 5. Because we want a combined StoreFront and SPA plug-in server, we first install Citrix StoreFront. 6. In the Citrix StoreFront installer, accept the license agreement and click Next. 7. On the Review prerequisites page, click Next. 8. On the Ready to Install page, click Install. 9. When the installation has finished successfully, click Finish. 10. Click Yes in the reboot dialog to restart the server. Secure Private Access 1. After the reboot, run the ISO installer again. 2. Now that Citrix StoreFront is installed, let’s continue by installing Secure Private Access. 3. Accept the license agreement in the Secure Private Access installer and click Next. 4. On the Core Components page, click Next. 5. On the Additional Components page, select Use SQL Express on the same machine and click Next. Note: In a production environment, it is recommended to use a dedicated database server. 6. On the Firewall page, click Next to create local Windows Firewall rules automatically. 7. On the Summary page, review your installation settings and click Install. 8. On the Finish Installation page, click Finish. Note: The SPA admin console opens automatically in a browser window. Before we start configuring SPA, we need to configure a StoreFront store. Configuration (Scenario 1) StoreFront 1. Open the Internet Information Services (IIS) Manager console and verify that the correct web server certificate is assigned. 2. Open the Citrix StoreFront console and create a new deployment. 3. Enter the base URL and click Next. 
Note: In a production environment, multiple StoreFront servers are load-balanced for redundancy and scalability. Therefore, the base URL will be the FQDN of the load balancer virtual server IP. 4. On the getting started page, click Next. 5. On the store name and access page, enter a store name, for example, Store, and click Next. 6. On the Delivery Controllers page, enter your Citrix Delivery Controller and click Next. 7. On the Remote Access page, enable Remote Access, select No VPN tunnel, add your NetScaler Gateway appliance, and click Next. 8. On the Configure Authentication Methods page, verify that User name and password and Pass-through from Citrix Gateway are correct, and click Next. 9. On the XenApp Services URL page, click Create. 10. Verify that the store was successfully created on the Summary page and click Finish. Secure Private Access – Initial configuration wizard Note: Please create a StoreFront store before running the Secure Private Access initial configuration wizard! It is recommended that you configure Kerberos authentication for the browser that you use for the Secure Private Access admin console. This is because Secure Private Access uses Integrated Windows Authentication (IWA) for its admin authentication. If Kerberos authentication isn’t set, the browser prompts you for your credentials when accessing the Secure Private Access admin console. Please refer to our SSO to admin console documentation. 1. From the Start menu, open Citrix Secure Private Access. 2. Click Continue to start the initial configuration wizard on the SPA admin console page. 3. On the Step 1 page, select Create a new Secure Private Access site and click Next. 4. On the Step 2 page, enter your SQL server host and Site name and click Test connection. The resulting database name is a combination of "CitrixAccessSecurity" and the site name. 5. Select the type of deployment, Automatically or Manually. In this scenario, select Automatically and click Next. 
Note: For more information on a manual database setup, follow the instructions documented at Step 2: Configure databases - Manual configuration. 6. On the Step 3 page, enter the Secure Private Access address, StoreFront Store URL, Public NetScaler Gateway address, NetScaler Gateway virtual IP address, and callback URL. When all URLs are successfully verified, click Next. 7. On the Step 4 page, click Save to start the configuration process. Note: Because the SPA plug-in is installed on the StoreFront machine, we do not need to run the StoreFront script manually on the StoreFront server. This is done automatically by the setup routine. 8. After the configuration process is completed, click Close. Secure Private Access – App creation 1. In the menu on the left, click Applications. 2. On the right side, click Add an app. 3. In the Add an app dialog, fill in the required fields marked with a red star and click Save. Note: For details on application parameters, see Configure applications. 4. In the menu on the left, click Access Policies. 5. On the right side, click Create policy. 6. In the Create policy dialog, fill in the required fields marked with a red star and click Save. Note: For details on application access policies, see Configure access policies for the applications. NetScaler Gateway 1. Open a new browser tab and navigate to https://www.citrix.com/downloads/citrix-secure-private-access/Shell-Script/Shell-Script-for-Gateway-Configuration.html. 2. When prompted, log on with your Citrix Cloud account. 3. Download the Shell Script for Gateway Configuration file archive and extract it to your local computer. Note: To create a new NetScaler Gateway configuration, use ns_gateway_secure_access.sh. To update an existing NetScaler Gateway configuration, use ns_gateway_secure_access_update.sh. 4. In this scenario, we have a working NetScaler Gateway configuration and must update it for Secure Private Access on-premises. 
Use a tool of your choice to upload the script ns_gateway_secure_access_update.sh to the NetScaler /var/tmp folder. 5. Connect to the NetScaler CLI using an SSH client and log on. 6. Enter shell, press the return key, and change the directory to /var/tmp. 7. Change the file permissions using the command chmod +x /var/tmp/ns_gateway_secure_access_update.sh to make the script executable. 8. Run the script /var/tmp/ns_gateway_secure_access_update.sh. Note: If you see the error -bash: ./ns_gateway_secure_access_update.sh: /bin/sh^M: bad interpreter: No such file or directory, run the command tr -d '\r' < /var/tmp/ns_gateway_secure_access_update.sh > /var/tmp/ns_gateway_secure_access_update_unix.sh to convert the Windows line endings to Unix. Change the file permissions using the command chmod +x /var/tmp/ns_gateway_secure_access_update_unix.sh to make the converted script executable. Run the converted script and insert the required parameters. Support for smart access tags Starting with the following versions, NetScaler Gateway sends the smart access tags automatically. This enhancement removes the required gateway callback from the SPA plug-in to NetScaler Gateway. 13.1 - 48.47 and later 14.1 - 4.42 and later The above script automatically enables the enhancement flags ns_vpn_enable_spa_onprem and toggle_vpn_enable_securebrowse_client_mode. To make the changes persistent, run the following commands in the NetScaler shell. root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=ns_vpn_enable_spa_onprem" >> /nsconfig/rc.netscaler root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=toggle_vpn_enable_securebrowse_client_mode" >> /nsconfig/rc.netscaler For more details, see Support for smart access tags. 1. A new NetScaler command script (the default is /var/tmp/ns_gateway_secure_access) is generated. 2. Switch back to the NetScaler CLI using the command exit. 3. 
Before executing the new NetScaler command script, let us verify the current NetScaler Gateway configuration and update it for Secure Private Access on-premises. 4. On the Gateway virtual server, verify the following:
ICA Only is set to false (OFF)
TCP Profile is set to nstcp_default_XA_XD_profile
Deployment Type is set to ICA_STOREFRONT
On the Gateway session action for the Workspace app, verify the following:
transparentInterception is set to OFF
SSO is set to ON
ssoCredential is set to PRIMARY
useMIP is set to NS
useIIP is set to OFF
icaProxy is set to OFF
wihome is set to "https://xa04-spa.training.local/Citrix/StoreWeb" - replace with the real store URL
ClientChoices is set to OFF
ntDomain is set to "training.local" - used for SSO
defaultAuthorizationAction is set to ALLOW
authorizationGroup is set to SecureAccessGroup (make sure that this group is created in NetScaler, not Active Directory; it is used to bind Secure Private Access specific authorization policies)
clientlessVpnMode is set to ON
clientlessModeUrlEncoding is set to TRANSPARENT
SecureBrowse is set to ENABLED
storefronturl is set to "https://xa04-spa.training.local" - replace with the StoreFront FQDN
sfGatewayAuthType is set to domain
Note: For details on session action parameters, see the Command line reference for vpn-sessionAction. 
Based on the above example, the default session action before adding SPA looks like:
add vpn sessionAction AC_OS_172.16.1.106 -transparentInterception OFF -defaultAuthorizationAction ALLOW -SSO ON -ssoCredential PRIMARY -icaProxy ON -wihome "https://xa04-spa.training.local/Citrix/StoreWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode OFF -storefronturl "https://xa04-spa.training.local" -sfGatewayAuthType domain
Let’s create the authorization group and a new session action and modify it for Secure Private Access on-premises:
add aaa group SecureAccessGroup
add vpn sessionAction AC_OS_172.16.1.106_SPAOP -transparentInterception OFF -defaultAuthorizationAction ALLOW -authorizationGroup SecureAccessGroup -SSO ON -ssoCredential PRIMARY -useMIP NS -useIIP OFF -icaProxy OFF -wihome "https://xa04-spa.training.local/Citrix/StoreWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode ON -clientlessModeUrlEncoding TRANSPARENT -SecureBrowse ENABLED -storefronturl "https://xa04-spa.training.local" -sfGatewayAuthType domain
Switch the session policy for the Workspace app to the new session action:
set vpn sessionPolicy PL_OS_172.16.1.106 -action AC_OS_172.16.1.106_SPAOP
1. Run the new NetScaler command script with the batch command: batch -fileName /var/tmp/ns_gateway_secure_access_update -outfile /var/tmp/ns_gateway_secure_access_update_output.log -ntimes 1 2. Verify in the log file that there are no errors. For example: shell cat /var/tmp/ns_gateway_secure_access_update_output.log Note: In this scenario, one error is shown in the log file because StoreFront and the SPA plug-in are installed on the same machine. ERROR: Specified pattern or range is already bound to dataset/patset 3. On the StoreFront and SPA plug-in machine, open Citrix Secure Private Access from the Start menu. 4. On the SPA admin console page, click Mark as done in the Configure Gateway section. 
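The log verification in step 2 above can be scripted so that the one expected patset error is tolerated while anything else still fails the check. A minimal sketch, assuming the log path from this scenario; a fabricated sample log stands in for the real batch output here:

```shell
# Sketch: scan the batch output log, tolerating the known benign error.
LOG=/tmp/ns_gateway_secure_access_update_output.log
# Fabricated sample content standing in for the real NetScaler output.
printf 'Done\nERROR: Specified pattern or range is already bound to dataset/patset\n' > "$LOG"

check_log() {
  # Fails if any error line other than the expected patset one is present.
  ! grep -i 'error' "$LOG" | grep -v 'already bound to dataset/patset' | grep -q .
}

if check_log; then
  echo "log OK"
else
  echo "unexpected errors found"
fi
```

With only the benign patset message in the log, the check succeeds; any other line containing "error" makes it fail.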
Scenario 2 – Scalable deployment In Scenario 2, the NetScaler Gateway, StoreFront, SPA plug-in, and SQL server are deployed in Microsoft Azure, whereas all other services are deployed on-premises. Note: NetScaler Gateway, StoreFront, SPA plug-in, and SQL server can also be deployed in the local data center. This scenario simply showcases that deploying in a cloud is possible too. We assume that a working Citrix Virtual Apps and Desktops infrastructure is installed and a NetScaler is deployed in Azure. Cloud Infrastructure environment Azure Load Balancer for NetScaler with static public IP 2x NetScaler VPX (Gateway) on Azure 2x StoreFront server 2x SPA plug-in server 1x Database server 2x Active Directory server Webserver containing websites Webserver certificates Note: This is a simplified architectural overview of scenario 2. For more detailed communication information, see Secure Private Access for on-premises (Secure Private Access plug-in). Installation (Scenario 2) StoreFront 1. On the StoreFront machine, install a web server certificate containing the load balancing FQDN and StoreFront server FQDNs. For more information about certificates, see StoreFront certificate requirements. 2. Download the Citrix Virtual Apps and Desktops ISO file from the Citrix Download Center. 3. Run the ISO installer AutoSelect.exe. 4. Select Start from Virtual Apps and Desktops. 5. We first install Citrix StoreFront. 6. In the Citrix StoreFront installer, accept the license agreement and click Next. 7. On the Review prerequisites page, click Next. 8. On the Ready to install page, click Install. 9. When the installation has finished successfully, click Finish. 10. Click Yes in the reboot dialog to restart the server. 11. For redundancy, install a second StoreFront server following the same steps. Secure Private Access 1. 
On the Secure Private Access machine, install a web server certificate matching the load balancer FQDN. The same certificate must be installed on the other SPA plug-in nodes. If the load balancing protocol used is SSL, the same certificate must be used on the load balancer. 2. Mount the downloaded Citrix Virtual Apps and Desktops ISO file and run the installer AutoSelect.exe. 3. Select Start from Virtual Apps and Desktops. 4. Click Secure Private Access to start the installation. 5. Accept the license agreement in the Secure Private Access installer and click Next. 6. On the Core Components page, click Next. 7. On the Additional Components page, deselect Use SQL Express on the same machine and click Next. Note: A dedicated database server is recommended for production deployments. 8. On the Firewall page, click Next to automatically create local Windows Firewall rules. 9. On the Summary page, review your installation settings and click Install. 10. On the Finish Installation page, click Finish. 11. For redundancy, install a second SPA plug-in server following the same steps. Note: The SPA admin console opens automatically in a browser window. Before we start configuring SPA, we need to configure a StoreFront store. Configuration (Scenario 2) StoreFront 1. Open the Internet Information Services (IIS) Manager console and verify that the correct web server certificate is assigned. 2. Open the Citrix StoreFront console and create a new deployment. 3. Enter the base URL using the load balancer FQDN and click Next. For example, https://stf-lb.training.local/. Note: The load balancing configuration is done later. 4. On the getting started page, click Next. 5. On the store name and access page, enter a store name, for example, StoreLB, and click Next. 6. On the Delivery Controllers page, enter your Citrix Delivery Controller and click Next. 7. On the Remote Access page, enable Remote Access, select No VPN tunnel, add your NetScaler Gateway appliance, and click Next. 8. 
On the Configure Authentication Methods page, verify that User name and password and Pass-through from Citrix Gateway are selected, and click Next. 9. On the XenApp Services URL page, click Create. 10. On the Summary page, verify that the store was successfully created and click Finish. 11. Open Windows PowerShell to update the StoreFront monitoring service URL and run the following commands: $ServiceUrl = "https://localhost:443/StorefrontMonitor" Set-STFServiceMonitor -ServiceUrl $ServiceUrl Get-STFServiceMonitor If you want to revert the service URL change, run the above commands again with $ServiceUrl = "http://localhost:8000/StorefrontMonitor" (the default StoreFront monitoring service URL). 1. Verify that the Receiver for Web Sites loopback communication is set to On. Get-STFWebReceiverService -VirtualPath "/Citrix/StoreLBWeb" | Get-STFWebReceiverCommunication | Format-Table Loopback Loopback -------- On 2. Join the second StoreFront server to the server group. Please follow the documented instructions for Join an existing server group. Secure Private Access – Initial configuration wizard Note: Please create a StoreFront store before running the Secure Private Access initial configuration wizard! Information It is recommended that you configure Kerberos authentication for the browser that you use for the Secure Private Access admin console. This is because Secure Private Access uses Integrated Windows Authentication (IWA) for its admin authentication. If Kerberos authentication isn’t set, the browser prompts you for your credentials when accessing the Secure Private Access admin console. Refer to our SSO to admin console documentation. 1. From the Start menu, open Citrix Secure Private Access. Important Within the web browser, verify the web server certificate that protects the SPA admin console. The certificate must be installed before the Secure Private Access installation. 2. 
On the SPA admin console page, click Continue to start the initial configuration wizard.
3. On the Step 1 page, select Create a new Secure Private Access site and click Next.
4. On the Step 2 page, enter your SQL server host and Site name and click Test connection. The resulting database name is a combination of "CitrixAccessSecurity" and the site name.
5. Select the type of deployment, Automatically or Manually. In this scenario, select Manually and click Download script. Note: The displayed error is expected because the database does not exist.
Secure Private Access – manual database setup
1. Open SQL Server Management Studio and connect to the database engine using a database administrator account.
2. In SQL Server Management Studio, click File, select Open, and select File.
3. In the Open File dialog, search for the downloaded SQL script and click Open.
4. Verify the script content and click Execute. The script creates the database and a login for the Windows server training\xa05-spa.
5. Switch back to the SPA admin console and click Test connection. The connection is now successful and the server has write permissions to the database.
6. Click Next.
7. On the Step 3 page, enter the Secure Private Access address, StoreFront Store URL, public NetScaler Gateway address, NetScaler Gateway virtual IP address, and callback URL. When all URLs are successfully verified, click Next.
8. On the Step 4 page, click Save to start the configuration process. Note: Because StoreFront is installed on a different server, the SPA plug-in PowerShell script must be executed manually on the StoreFront server. The StoreFront server group replication mechanism propagates the changes to all members.
9. After the configuration process is completed, click Close.
10. Join the second SPA plug-in server to the cluster. Open another browser window, open the second SPA plug-in admin console, and click Continue.
11. On the Step 1 page, select Join an existing Secure Private Access site and click Next.
12. On the Step 2 page, enter your SQL server host and Site name, click Test connection, select Manually, and click Download script.
Secure Private Access – manual database setup
1. Open SQL Server Management Studio and connect to the database engine using a database administrator account.
2. In SQL Server Management Studio, click File, select Open, and select File.
3. In the Open File dialog, search for the downloaded SQL script and click Open.
4. Verify the script content and click Execute. The script verifies that the database exists and creates the login for the Windows server training\xa04-spa.
5. Switch back to the SPA admin console. The server now has write permissions to the database. Click Next.
6. On the Step 4 page, click Save to start the configuration process.
7. After the configuration process is completed, click Close.
8. The SPA plug-in cluster can be managed from any node.
Secure Private Access – App creation
1. In the menu on the left, click Applications.
2. On the right side, click Add an app.
3. In the Add an app dialog, fill in the required fields marked with a red star and click Save. Note: For details on application parameters, see Configure applications.
4. In the menu on the left, click Access Policies.
5. On the right side, click Create policy.
6. In the Create policy dialog, fill in the required fields marked with a red star and click Save. Note: For details on application access policies, see Configure access policies for the applications.
Secure Private Access – StoreFront configuration
1. On the Secure Private Access server, open the Start menu and open Citrix Secure Private Access.
2. In the menu on the left, click Settings.
3. Select the Integrations tab.
4. In the StoreFront Store URL section, click Download script.
5. Copy the downloaded file StoreFrontScripts.zip to a StoreFront server and extract the files to any folder.
6.
Open a 64-bit PowerShell window with administrative privileges and run the PowerShell script ConfigureStorefront.ps1. The script modifies the StoreFront store (in this scenario, StoreLB) to support Secure Private Access applications.
NetScaler StoreFront and SPA Plug-in Load Balancing
Note: The example below assumes that SSL Default Profiles are not enabled. If your NetScaler configuration has them enabled, add the cipher directly to the SSL profile and skip the virtual server cipher configuration.
The following servers are used:
- xa04-stf.training.local
- xa05-stf.training.local
- xa04-spa.training.local
- xa05-spa.training.local
IP addresses:
- 172.16.1.107 (StoreFront load balancing VIP)
- 172.16.1.108 (SPA plug-in load balancing VIP)
Certificates:
- dh5-2048.key (Diffie-Hellman key, group 5, 2048 bit)
- stf-lb.training.local
- spa-lb.training.local
Make sure to create the Diffie-Hellman key and replace the server names, IP addresses, and certificates before running the commands in the NetScaler CLI. Connect to the NetScaler CLI using an SSH client and run the following commands:
## SSL Profile ##
## Do not forget to replace the Diffie-Hellman key name ##
add ssl profile SECURE_ssl_profile_frontend -dhCount 1000 -dh ENABLED -dhFile "/nsconfig/ssl/dh5-2048.key" -eRSA ENABLED -eRSACount 1000 -sessReuse ENABLED -sessTimeout 120 -tls1 DISABLED -tls11 DISABLED
## Monitors ##
add lb monitor mon-StoreFront STOREFRONT -scriptName nssf.pl -dispatcherIP 127.0.0.1 -dispatcherPort 3013 -LRTM DISABLED -secure YES -storefrontcheckbackendservices YES
add lb monitor mon-SPA-Plugin HTTP -respCode 200 -httpRequest "GET /secureAccess/health" -LRTM DISABLED -secure YES
add lb monitor mon-SPA-Admin-console HTTP -respCode 200 -httpRequest "GET /accessSecurity/health" -LRTM DISABLED -secure YES
## Server ##
## Do not forget to replace server names ##
add server xa04-stf.training.local xa04-stf.training.local
add server xa05-stf.training.local xa05-stf.training.local
add server xa04-spa.training.local xa04-spa.training.local
add server xa05-spa.training.local xa05-spa.training.local
## Services ##
## Do not forget to replace service names ##
add service xa04-stf.training.local_443 xa04-stf.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip ENABLED X-Forwarded-For -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO -state DISABLED
bind service xa04-stf.training.local_443 -monitorName mon-StoreFront
add service xa05-stf.training.local_443 xa05-stf.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip ENABLED X-Forwarded-For -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO
bind service xa05-stf.training.local_443 -monitorName mon-StoreFront
add service xa04-spa.training.local_443 xa04-spa.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO -state DISABLED
bind service xa04-spa.training.local_443 -monitorName mon-SPA-Plugin
add service xa05-spa.training.local_443 xa05-spa.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO
bind service xa05-spa.training.local_443 -monitorName mon-SPA-Plugin
add service xa04-spa.training.local_4443 xa04-spa.training.local SSL 4443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO -state DISABLED
bind service xa04-spa.training.local_4443 -monitorName mon-SPA-Admin-console
add service xa05-spa.training.local_4443 xa05-spa.training.local SSL 4443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO
bind service xa05-spa.training.local_4443 -monitorName mon-SPA-Admin-console
## LB vServer ##
## Do not forget to replace vServer names and IP addresses ##
add lb vserver lbvs-stf-lb.training.local_443 SSL 172.16.1.107 443 -persistenceType COOKIEINSERT -persistenceBackup SOURCEIP -cookieName STFPersistence -cltTimeout 180
add lb vserver lbvs-spa-lb.training.local_443 SSL 172.16.1.108 443 -persistenceType NONE -cltTimeout 180
add lb vserver lbvs-spa-lb.training.local_4443 SSL 172.16.1.108 4443 -persistenceType NONE -cltTimeout 180
## Do not forget to replace vServer names and service bindings ##
bind lb vserver lbvs-stf-lb.training.local_443 xa04-stf.training.local_443
bind lb vserver lbvs-stf-lb.training.local_443 xa05-stf.training.local_443
bind lb vserver lbvs-spa-lb.training.local_443 xa04-spa.training.local_443
bind lb vserver lbvs-spa-lb.training.local_443 xa05-spa.training.local_443
bind lb vserver lbvs-spa-lb.training.local_4443 xa04-spa.training.local_4443
bind lb vserver lbvs-spa-lb.training.local_4443 xa05-spa.training.local_4443
## Do not forget to replace vServer names ##
set ssl vserver lbvs-stf-lb.training.local_443 -sslProfile SECURE_ssl_profile_frontend
set ssl vserver lbvs-spa-lb.training.local_443 -sslProfile SECURE_ssl_profile_frontend
set ssl vserver lbvs-spa-lb.training.local_4443 -sslProfile SECURE_ssl_profile_frontend
## Do not forget to replace vServer names ##
bind ssl vserver lbvs-stf-lb.training.local_443 -cipherName SECURE
bind ssl vserver lbvs-spa-lb.training.local_443 -cipherName SECURE
bind ssl vserver lbvs-spa-lb.training.local_4443 -cipherName SECURE
## Do not forget to replace vServer names and certificates ##
bind ssl vserver lbvs-stf-lb.training.local_443 -certkeyName stf-lb.training.local
bind ssl vserver lbvs-spa-lb.training.local_443 -certkeyName spa-lb.training.local
bind ssl vserver lbvs-spa-lb.training.local_4443 -certkeyName spa-lb.training.local
NetScaler Gateway
Note: To create a new NetScaler Gateway configuration, use ns_gateway_secure_access.sh.
To update an existing NetScaler Gateway configuration, use ns_gateway_secure_access_update.sh.
1. Open a new browser tab and navigate to https://www.citrix.com/downloads/citrix-secure-private-access/Shell-Script/Shell-Script-for-Gateway-Configuration.html.
2. When prompted, log on with your Citrix Cloud account.
3. Download the Shell Script for Gateway Configuration file archive and extract it to your local computer.
4. In this scenario, we have a working NetScaler Gateway configuration and must update it for Secure Private Access on-premises.
5. Use a tool of your choice to upload the script ns_gateway_secure_access_update.sh to the NetScaler /var/tmp folder.
6. Connect to the NetScaler CLI using an SSH client and log on.
7. Enter shell, press the return key, and change the directory to /var/tmp.
8. Change the file permissions using the command chmod +x /var/tmp/ns_gateway_secure_access_update.sh to make the script executable.
9. Run the script /var/tmp/ns_gateway_secure_access_update.sh.
Note: If you see the error -bash: ./ns_gateway_secure_access_update.sh: /bin/sh^M: bad interpreter: No such file or directory, run the command tr -d '\r' < /var/tmp/ns_gateway_secure_access_update.sh > /var/tmp/ns_gateway_secure_access_update_unix.sh to convert the Windows line endings to Unix. Then change the file permissions using chmod +x /var/tmp/ns_gateway_secure_access_update_unix.sh to make the converted script executable, and run the converted script, inserting the required parameters.
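The line-ending conversion from the note above can be rehearsed on any Unix-like shell before touching the NetScaler. A minimal sketch, using a throwaway file (demo_script.sh is illustrative, not part of the Citrix download):

```shell
# Simulate a script saved with Windows (CRLF) line endings.
printf 'echo hello\r\necho world\r\n' > demo_script.sh

# Strip the carriage returns, exactly as the note above does with tr.
tr -d '\r' < demo_script.sh > demo_script_unix.sh

# Make the converted script executable before running it.
chmod +x demo_script_unix.sh

# The converted copy runs cleanly; a CRLF-terminated shebang line in the
# original would instead fail with the "bad interpreter" error.
./demo_script_unix.sh
```

The same pattern applies verbatim to ns_gateway_secure_access_update.sh in /var/tmp.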
Support for smart access tags
Starting with the following versions, NetScaler Gateway sends the smart access tags automatically. This enhancement removes the required gateway callback from the SPA plug-in to NetScaler Gateway.
- 13.1 - 48.47 and later
- 14.1 - 4.42 and later
The above script automatically enables the enhancement flags ns_vpn_enable_spa_onprem and toggle_vpn_enable_securebrowse_client_mode. To make the changes persistent, run the following commands in the NetScaler shell.
root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=ns_vpn_enable_spa_onprem">> /nsconfig/rc.netscaler
root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=toggle_vpn_enable_securebrowse_client_mode">> /nsconfig/rc.netscaler
For more details, see Support for smart access tags.
1. A new NetScaler command script (the default is /var/tmp/ns_gateway_secure_access) is generated.
2. Switch back to the NetScaler CLI using the command exit.
3. Before executing the new NetScaler command script, let us verify the current NetScaler Gateway configuration and update it for Secure Private Access on-premises.
4. On the Gateway virtual server, verify the following:
- ICA only is set to false (OFF)
- TCP Profile is set to nstcp_default_XA_XD_profile
- Deployment Type is set to ICA_STOREFRONT
On the Gateway session action for the Workspace app, verify the following:
- transparentInterception is set to OFF
- SSO is set to ON
- ssoCredential is set to PRIMARY
- useMIP is set to NS
- useIIP is set to OFF
- icaProxy is set to OFF
- wihome is set to "https://stf-lb.training.local/Citrix/StoreLBWeb" - replace with the real store URL
- ClientChoices is set to OFF
- ntDomain is set to "training.local" - used for SSO
- defaultAuthorizationAction is set to ALLOW
- authorizationGroup is set to SecureAccessGroup (make sure that this group is created in NetScaler, not Active Directory; it is used to bind Secure Private Access specific authorization policies)
- clientlessVpnMode is set to ON
- clientlessModeUrlEncoding is set to TRANSPARENT
- SecureBrowse is set to ENABLED
- storefronturl is set to "https://stf-lb.training.local" - replace with the StoreFront FQDN
- sfGatewayAuthType is set to domain
Note: For details on session action parameters, see the Command line reference for vpn-sessionAction.
Based on the above example, the default session action before adding SPA looks like:
add vpn sessionAction AC_OS_172.16.1.106 -transparentInterception OFF -defaultAuthorizationAction ALLOW -SSO ON -ssoCredential PRIMARY -icaProxy ON -wihome "https://stf-lb.training.local/Citrix/StoreLBWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode OFF -storefronturl "https://stf-lb.training.local" -sfGatewayAuthType domain
Let’s create the authorization group and a new session action and modify it for Secure Private Access on-premises:
add aaa group SecureAccessGroup
add vpn sessionAction AC_OS_172.16.1.106_SPAOP -transparentInterception OFF -defaultAuthorizationAction ALLOW -authorizationGroup SecureAccessGroup -SSO ON -ssoCredential PRIMARY -useMIP NS -useIIP OFF -icaProxy OFF -wihome "https://stf-lb.training.local/Citrix/StoreLBWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode ON -clientlessModeUrlEncoding TRANSPARENT -SecureBrowse ENABLED -storefronturl "https://stf-lb.training.local" -sfGatewayAuthType domain
Switch the session policy for the Workspace app to the new session action:
set vpn sessionPolicy PL_OS_172.16.1.106 -action AC_OS_172.16.1.106_SPAOP
5. Run the new NetScaler command script with the batch command. For example, batch -fileName /var/tmp/ns_gateway_secure_access_update -outfile /var/tmp/ns_gateway_secure_access_update_output.log -ntimes 1.
6.
Verify in the log file that there are no errors. For example, enter shell and run cat /var/tmp/ns_gateway_secure_access_update_output.log.
Note: In this scenario, one error is shown in the log file because StoreFront and the SPA plug-in are installed on the same machine. ERROR: Specified pattern or range is already bound to dataset/patset
7. On the StoreFront and SPA plug-in machine, open Citrix Secure Private Access from the Start menu.
8. On the SPA admin console page, click Mark as done in the Configure Gateway section.
Scenario 3 – Geo deployment (Coming Soon)
Testing any scenario
1. Open the Citrix Workspace app and create a new account. In our scenarios, the URL https://citrix.training.com was used.
2. Log on to NetScaler Gateway.
3. Secure Private Access apps are displayed along with Citrix Virtual Apps and Desktops. In this scenario, no CVAD app is marked as a favorite, so they are only displayed under APPS.
4. Launch the web app Extranet. Note: All security controls are enabled on this application:
- Restrict clipboard access
- Restrict printing
- Restrict downloads
- Restrict uploads
- Display Watermark
- Restrict key logging
- Restrict screen capture
The above screenshot shows "Display Watermark". The screenshot below shows "Restrict screen capture".
Summary
Citrix Secure Private Access for on-premises allows zero-trust-based access to SaaS and internal web apps. This deployment guide covered publishing web apps and setting security controls. The result is an integrated solution with single sign-on for users to access SaaS and internal web apps just like virtual apps.
  26. Very helpful document! Thank you! Looks like many hours spent. I have 2 or 3 suggestions. I think maybe the title should be changed. The article is all about hibernation but hibernation is not mentioned. Also the title mentions that this is for Powershell but the instructions have both Powershell and GUI versions throughout. Maybe "Using hibernation with Azure VMs in Citrix DaaS" or something? A second thing is that many of the images are hard to read, too small when seeing them in the web page and then not enough quality to read them if I enlarge. The screen by screen for the Create Machine Catalog Wizard is the main one that I had a bad time with. Lastly, perhaps this is a wider site issue but the navigation columns on the right and left of the web page are so prominent that the content is only in the center third of my screen.
  27. Overview
Local Host Cache and Service Continuity should be at the forefront of conversations when building a resilient Citrix environment. Local Host Cache and Service Continuity are Citrix technologies that can maintain end-user access to business-critical workloads during potential service disruptions. While they function differently, both features serve the same purpose: to keep users launching their apps and desktops regardless of service health. Let's start by differentiating the two features:
Local Host Cache (LHC): LHC leverages a locally cached copy of the Site database hosted on Cloud Connectors. The local copy of the Site database is used to broker sessions if connectivity between the Cloud Connectors and Citrix Cloud is lost. LHC is enabled by default for DaaS environments, but some configurations must be considered to ensure LHC works properly during a service disruption.
Service Continuity: Service Continuity is a DaaS-only feature that uses Connection Lease files downloaded to a user’s endpoint when they log in to Citrix Workspace from either the Workspace app or a browser (the Citrix Workspace web extension is required for Service Continuity to work in a browser). Service Continuity uses the Connection Lease files when a normal end-user launch path cannot be established. It’s important to note that Service Continuity can leverage the LHC database on the Cloud Connectors, so many of the LHC misconfigurations below can also impact customers using Service Continuity for resiliency. Service Continuity is only supported for DaaS connections through the Workspace service. Service Continuity cannot be used with an on-premises StoreFront server.
Note: The deprecated Citrix feature called “connection leasing” resembles Workspace connection leases in that it improves connection resiliency during outages. Otherwise, that deprecated feature is unrelated to Service Continuity.
Understanding the resiliency features is a critical first step in configuring your environment correctly in the case of a service disruption. This article assumes a working knowledge of LHC and Service Continuity. There are a few crucial configurations to check for to ensure that users can continue to access their resources when LHC or Service Continuity activates. The table below lists common misconfigurations that may impact the availability of DaaS resources in the event of a service disruption. Review this list and update any potential misconfigurations in your environment before it’s too late! Impacts All Access Methods In the table below, you’ll see that some misconfigurations impact StoreFront only, some impact Workspace only, and some can impact both. That’s because, with Service Continuity, the Cloud Connectors still attempt to retrieve the VDA addresses from the Citrix Cloud-hosted session broker (using the information supplied from the cached connection leases on the endpoint). If the session broker is unreachable, the VDA addresses are determined using the LHC database. You can see the entire process here in more detail. Misconfiguration Description Impact Detection Mitigation Pooled single-session OS VDAs that are power managed are unavailable during LHC Because power management services reside on Citrix Cloud infrastructure rather than Cloud Connectors, power management becomes unavailable during an LHC event. This results in an inability to reboot power-managed pooled single-session OS VDAs and reset their differencing/write cache disks while LHC is active. For security reasons, these VDAs are unavailable by default during LHC to avoid changes and data from previous user sessions being available on subsequent sessions. Pooled single-session VDAs in power-managed delivery groups are unavailable during LHC events. Review the Delivery Groups node in Studio. 
Any pooled single-session delivery group that is power-managed and is not configured for access during LHC will show a warning icon. Edit the Local Host Cache settings within the delivery group. Note: Changing the default can result in changes and data from previous user sessions being present in subsequent sessions. Too many VDAs in a Resource Location The LHC broker on Cloud Connectors caps VDA registrations at 10,000 per resource location if the resource location goes into LHC mode. More than 10,000 VDAs in a resource location can result in excessive load on Cloud Connectors. This can result in stability issues and a subset of VDAs being unavailable during LHC. Review the Zones node to see if any alerts are detected. If your environment has too many VDAs in a resource location, a warning icon will show on the resource location (check out the Troubleshoot tab at the bottom of the Zones node after clicking the resource location to learn more about the errors and warnings in that resource location). Reconfigure Resource Locations to contain no more than 10,000 VDAs. 3 Consecutive failed config syncs Cloud Connectors periodically sync configurations from Citrix Cloud to the local database to ensure up-to-date configurations if LHC activates. Failed configuration syncs can result in stale or corrupted configurations used in case of an LHC event. Monitor Cloud Connectors for 505 events from the Config Synchronizer Service. Email alerts for failed config syncs are on the roadmap for Citrix Monitor! Review firewall configurations to ensure your firewall accepts XML and SOAP traffic. Review CTX238909. Open a support ticket to determine why config sync failures occur in your environment. Multiple elected brokers One Connector per Resource Location should be elected for LHC events. 
Cloud Connectors must be able to communicate with each other to determine the elected broker and understand the health of other peer connectors to make a go/no-go decision about entering LHC mode. “Split brain” scenario where multiple Connectors in the same Resource Location remain active during an LHC event. VDAs may register with any elected connectors in the Resource Location, and launches may fail intermittently while LHC is active. Monitor Connectors for 3504 events from the High Availability Service. Review to see if more than one Connector per Resource Location is being elected. Ensure Connectors can communicate with each other at http://<FQDN_OF_PEER_CONNECTOR>:80/Citrix/CdsController/ISecondaryBrokerElection. If using a proxy, bypassing the proxy for traffic between Connectors is recommended. Lack of Regular Testing It’s debatable whether a lack of testing can be considered a misconfiguration, but what’s not up for debate is the impact testing can have on ensuring this tech works in your environment! Testing can ensure infrastructure is scaled properly and works as expected before a disruption occurs. Testing should be done at regular intervals. For testing LHC, see Force an Outage. Forcing LHC is relevant to both on-prem StoreFront customers and customers leveraging Service Continuity. DaaS resources are potentially inaccessible during a service disruption. If you don’t have a testing plan for LHC or Service Continuity in your environment, consider this misconfiguration detected. Verify that Local Host Cache is working. Create a testing plan to test Service Continuity and Local Host Cache regularly. Low vCPU cores per socket configuration LHC operates using a Microsoft SQL Server Express LocalDB on the Cloud Connector. Microsoft has a limitation on SQL Express in which the Connector is limited to the lesser of 1 socket or 4 vCPU cores when using the LHC DB. 
If we configure Cloud Connectors to use one core per socket (e.g., 4 sockets, 4 cores), we limit LHC to operating on a single core during an LHC event. Because all VDA registration and brokering operations go through a single connector during LHC, this can negatively impact performance and cause issues with VDA registration during the service disruption. More info regarding LHC core and socket configurations can be found in the Recommended compute configuration for Local Host Cache. Negative impact on the stability of VDA re-registration during an LHC event and the performance of LHC brokering operations. Check Task Manager to view core and socket configurations. Divide the number of virtual processors by the number of sockets to get your core-per-socket ratio. Reconfigure your VM to use at least 4 cores per socket. A new instance type may have to be used for public cloud workloads. Rebooting your connector may be required to reconfigure the core and socket configuration. Undersized Cloud Connectors During an event in which LHC activates, a single Cloud Connector per resource location begins to broker sessions. The elected LHC broker handles all VDA registrations and session launch requests (see Resource locations with multiple Cloud Connectors for more information on the election process). Sizing connectors to handle this added load during an LHC event is important for ensuring consistent performance. Negative impact on stability and performance during LHC VDA registration and LHC steady state. Check out the Zones node within Citrix DaaS Web Studio! If your environment has undersized connectors, see https://docs.citrix.com/en-us/citrix-daas/manage-deployment/zones.html#troubleshooting. The sizing of connectors can also be checked on each connector machine at the hypervisor or VM level. Reconfigure connectors to have at least 4 vCPU and 6 GB of RAM. 
Review the Recommended compute configuration for local host cache for recommended sizing guidelines based on the number of VDAs in the Resource Location. Multiple Active Directory (AD) domains in a single resource location As per Citrix DaaS limits e-docs, only one Active Directory domain is supported per resource location. Issues may arise if you have multiple Cloud Connectors in a zone and the connectors are in different AD domains. Multiple AD domains in a single Resource Location can cause issues with VDA registration during LHC events. VDAs may have to try multiple Connectors before finding one they can register with. This can impact VDA registration times and add additional load on Connectors when VDAs must register, especially in VDA registration storms like Local Host Cache re-registration or Autoscale events. If your environment has multiple AD domains in a resource location, a warning icon will show on the resource location (check out the “Troubleshoot” tab at the bottom of the Zones node after clicking the Resource Location to learn more about the errors and warnings in that Resource Location). If you click the zone, it will show you the FQDN of the connectors within the zone. Reconfigure resource locations to only contain connectors in a single AD domain per resource location. Impacts Workspace The following misconfigurations only apply when Workspace is used as the access tier. Misconfiguration Description Impact Detection Mitigation Service Continuity not enabled Things can’t work if they’re not turned on! Service Continuity is a core resiliency feature for customers leveraging Citrix Workspace service as their access tier. You can manage Service Continuity on the Citrix Cloud Workspace configuration page. Without Service Continuity enabled, Connection Lease files won’t be downloaded, and users won’t be able to access their apps and desktops during a service disruption. 
View the Service Continuity tab in the Citrix Cloud Workspace configuration page to see if the feature is enabled. Enable Service Continuity. See Configure Service Continuity for more information. Access clients are unsupported for Service Continuity To download Service Continuity Connection Lease files, users must access their Workspace from a client that supports Service Continuity. See User device requirements to learn which client versions and access scenarios are supported. Users accessing from clients that do not support Service Continuity will be unable to launch DaaS resources during a service disruption. Review session launches in Monitor for Workspace app versions. Update Citrix Workspace app clients to versions that support Service Continuity. Encourage users using the web browser to install the Citrix Workspace web extension. Impacts StoreFront The following misconfigurations only apply when StoreFront is used as the access tier. Misconfiguration Description Impact Detection Mitigation StoreFront ‘Advanced Health Check’ setting not configured StoreFront’s Advanced Health Check feature gives StoreFront additional information about the Resource Location where a published app or desktop can be launched. Without Advanced Health Check, StoreFront may send launch requests to a Resource Location that does not deliver that particular resource, resulting in intermittent launch failures during an LHC event. On StoreFront, run “Get-STFStoreFarmConfiguration” via PowerShell. Automated detection of the StoreFront Advanced Health Check feature is on our roadmap for Web Studio! Enable the StoreFront Advanced Health Check feature. For StoreFront 2308 forward, StoreFront Advanced Health Check is enabled by default. If you upgrade your StoreFront to the 2024 LTSR once it is released, Advanced Health Check will be enabled automatically. 
Incorrect load balancing monitor Some customers opt to use a load-balancing vServer to balance XML traffic between StoreFront and Connectors for optimized manageability and traffic management. When connectors in a resource location go into LHC, only the primary broker can service launch requests. The remaining Connectors send health checks to try to reconnect again. If an incorrect monitor is used on the load balancing server, StoreFront may continue to send launch requests to all connectors in the resource location rather than just the elected broker. Potential intermittent launch failures during an LHC event. Check your load balancer to ensure the monitor bound to the load balancer is monitoring for brokering capabilities, not just TCP responses. NetScaler has this functionality out of the box with the CITRIX-XD-DDC monitor. Note: The CITRIX-XML-SERVICE monitor is for previous versions of Citrix Virtual Apps and Desktops and does not perform the same checks as the CITRIX-XD-DDC monitor. Configure your load balancing vServer to monitor Connectors based on brokering capabilities (e.g., use the CITRIX-XD-DDC monitor for connector load balancing). Connectors not listed as a single resource feed in StoreFront With the addition of StoreFront’s Advanced Health Check feature, Citrix recommends that all Cloud Connectors within a single Cloud tenant be included as a single set of Delivery Controllers in StoreFront. Check out this Citrix TIPs blog for more information regarding recommended configurations. Duplicate icons for end users or more complex multi-site aggregation configurations are required in StoreFront. View your resource feed configuration in the “Manage Delivery Controllers” tab in StoreFront. Ensure that all Connectors (or all Connector load balancing vServers) are listed as a single Site within StoreFront. Review Add resource feeds for Desktops as a Service for more information. 
Misconfiguration: Tags used to restrict launches to a subset of Resource Locations in a Delivery Group
Description: With Advanced Health Check, StoreFront knows which Resource Locations a published app or desktop can launch from, using Delivery Group to Machine Catalog mappings. However, StoreFront is not aware of tags. Consider a scenario in which a Delivery Group contains Machine Catalogs from Resource Location "A" and Resource Location "B". If tags restrict app/desktop launches to only Resource Location "A", StoreFront still sends launch requests to both Resource Location "A" and "B" during an LHC event because it has no tag information.
Impact: Potential intermittent launch failures during LHC events.
Detection: Review the tags used in your environment. Automated detection of Resource Location-based tag restrictions is on our roadmap for Web Studio!
Mitigation: Configure tags so that at least one (preferably multiple!) VDA in each Resource Location delivered from a Delivery Group carries each tag.

Misconfiguration: Not all Connectors are receiving Secure Ticket Authority (STA) requests
Description: A subset of Connectors in a Resource Location is not receiving STA requests from StoreFront, either because they are not listed in StoreFront or because of another communication problem, such as an expired certificate on the Connector.
Impact: During a Local Host Cache event, a single Connector acts as the STA server for the Resource Location. If the elected broker is not receiving STA traffic, all launches through a NetScaler can fail during an LHC event.
Detection: Check that all Connectors are listed as STAs on your StoreFront and NetScaler Gateway servers. Automated detection of STA traffic is on our roadmap for Web Studio!
Mitigation: View the NetScaler Gateway configuration in StoreFront and ensure all Connectors are listed as STA servers. Review your NetScaler appliances and ensure all STAs listed in StoreFront appear in the same format in the NetScaler Gateway vServers.
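As a quick reachability check to support the detection steps above, you can confirm from the StoreFront server that each Connector responds on the XML/STA port. This is a hedged sketch: the Connector names are placeholders, and port 80 assumes the default unencrypted XML port (use 443 if your XML traffic runs over TLS):

```powershell
# Placeholder Connector FQDNs; replace with your own
$connectors = "connector01.example.local", "connector02.example.local"

foreach ($c in $connectors) {
    # TcpTestSucceeded = True means the port is reachable; a failure here
    # (or an expired certificate on 443) can explain missing STA traffic
    Test-NetConnection -ComputerName $c -Port 80 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```

Note that a successful TCP test only proves reachability; certificate validity on port 443 still needs to be checked separately (for example, by browsing to the Connector's XML URL).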
STA service health can also be monitored on the Gateway vServer.

Misconfiguration: StoreFront not communicating with all Cloud Connectors
Description: StoreFront can contact only a subset of the Connectors in a Resource Location, either because they are not listed in StoreFront or because of another communication problem, such as an expired certificate on the Connector.
Impact: StoreFront not communicating with a subset of Cloud Connectors can negatively impact the scalability and performance of an environment during both steady-state and LHC operations. If the elected broker is not receiving StoreFront traffic, all LHC launch attempts may fail.
Detection: Review the resource feeds in StoreFront to ensure that all Connectors are listed. If they are, test that StoreFront can communicate with every listed Connector over the port configured in the resource feed. Automated detection of StoreFront traffic is on our roadmap for Web Studio!
Mitigation: Add all Connectors to the resource feed in StoreFront and fix any communication issues between StoreFront and the Connectors. One of the most common communication issues between StoreFront and Connectors is an expired certificate on the Connector when XML traffic runs over port 443. Note: For customers with many Connectors, it may be beneficial to configure a load-balancing vServer for each Resource Location to reduce management overhead and simplify troubleshooting. Review Citrix TIPs: Integrating Citrix Virtual Apps and Desktops service and StoreFront for more information.

Summary

Correctly configuring your Citrix environment significantly impacts its availability and performance. Review your environment for these potential misconfigurations to keep your business running, no matter what!