Citrix Tech Zone - Document History

Overview

This guide provides an overview of using Terraform to create a complete Citrix DaaS Resource Location on Amazon EC2. At the end of the process, you will have created:

- A new Citrix Cloud Resource Location (RL) running on Amazon EC2
- Two Cloud Connector virtual machines registered with the domain and the Resource Location
- A Hypervisor Connection and a Hypervisor Pool pointing to the new Resource Location in Amazon EC2
- A Machine Catalog based on an uploaded Master Image VHD or on an Amazon EC2-based Master Image
- A Delivery Group based on the Machine Catalog with full Autoscale support
- Example policies and a policy scope bound to the Delivery Group

What is Terraform

Terraform is an Infrastructure-as-Code (IaC) tool that defines cloud and on-premises resources in easily readable configuration files rather than through a GUI. IaC lets you build, change, and manage your infrastructure safely and consistently by defining resource configurations. These configurations can be versioned, reused, and shared, and are written in Terraform's native declarative configuration language, the HashiCorp Configuration Language (HCL), or optionally in JSON. Terraform creates and manages resources on cloud platforms and other services through their application programming interfaces (APIs). Terraform providers are compatible with virtually any platform or service with an accessible API. More information about Terraform can be found at https://developer.hashicorp.com/terraform/intro.

Installation

HashiCorp distributes Terraform as a binary package. You can also install Terraform using popular package managers. In this example, we use Chocolatey for Windows to deploy Terraform. Chocolatey is a free and open-source package management system for Windows. Install the Terraform package from the CLI.
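As a minimal illustration of HCL's declarative style (a generic sketch, not part of this guide's deployment; the AMI ID and instance type are placeholder values):

```hcl
# A resource block names a resource type ("aws_instance"), a local name
# ("example"), and the arguments that describe the desired state.
# Terraform compares this desired state with the real infrastructure and
# computes the API calls needed to reconcile the two.
resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.medium"

  tags = {
    Name = "terraform-example"
  }
}
```

Running terraform plan against such a file shows the pending create/update/destroy actions without touching the infrastructure; terraform apply performs them.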
Installation of Chocolatey

Open a PowerShell shell with administrative rights and paste the following command:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

Chocolatey downloads and installs all necessary components automatically:

PS C:\TACG> Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
Forcing web requests to allow TLS v1.2 (Required for requests to Chocolatey.org)
Getting latest version of the Chocolatey package for download.
Not using proxy.
Getting Chocolatey from https://community.chocolatey.org/api/v2/package/chocolatey/2.2.2.
Downloading https://community.chocolatey.org/api/v2/package/chocolatey/2.2.2 to C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip
Not using proxy.
Extracting C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall\chocolatey.zip to C:\TACG\AppData\Local\Temp\chocolatey\chocoInstall
Installing Chocolatey on the local machine
Creating ChocolateyInstall as an environment variable (targeting 'Machine')
Setting ChocolateyInstall to 'C:\ProgramData\chocolatey'
WARNING: It's very likely you will need to close and reopen your shell before you can use choco.
Restricting write permissions to Administrators
We are setting up the Chocolatey package repository.
The packages themselves go to 'C:\ProgramData\chocolatey\lib' (i.e. C:\ProgramData\chocolatey\lib\yourPackageName).
A shim file for the command line goes to 'C:\ProgramData\chocolatey\bin' and points to an executable in 'C:\ProgramData\chocolatey\lib\yourPackageName'.
Creating Chocolatey folders if they do not already exist.
chocolatey.nupkg file not installed in lib. Attempting to locate it from bootstrapper.
PATH environment variable does not have C:\ProgramData\chocolatey\bin in it. Adding...
WARNING: Not setting tab completion: Profile file does not exist at 'C:\TACG\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'.
Chocolatey (choco.exe) is now ready.
You can call choco from anywhere, command line or powershell by typing choco.
Run choco /? for a list of functions.
You may need to shut down and restart powershell and/or consoles first prior to using choco.
Ensuring Chocolatey commands are on the path
Ensuring chocolatey.nupkg is in the lib folder
PS C:\TACG>

Run choco --version to check whether Chocolatey was installed successfully:

PS C:\TACG> choco --version
Chocolatey v2.2.2
PS C:\TACG>

Installation of Terraform

After the successful installation of Chocolatey, you can install Terraform by running this command in the PowerShell session:

PS C:\TACG> choco install terraform
Chocolatey v2.2.2
Installing the following packages:
terraform
By installing, you accept licenses for the packages.
Progress: Downloading terraform 1.7.4...
100%
terraform v1.7.4 [Approved]
terraform package files install completed. Performing other installation steps.
The package terraform wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider: choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): A
Removing old terraform plug-ins
Downloading terraform 64 bit from 'https://releases.hashicorp.com/terraform/1.7.4/terraform_1.7.4_windows_amd64.zip'
Progress: 100% - Completed download of C:\TACG\AppData\Local\Temp\chocolatey\terraform\1.7.4\terraform_1.7.4_windows_amd64.zip (25.05 MB).
Download of terraform_1.7.4_windows_amd64.zip (25.05 MB) completed.
Hashes match.
Extracting C:\TACG\AppData\Local\Temp\chocolatey\terraform\1.7.4\terraform_1.7.4_windows_amd64.zip to C:\ProgramData\chocolatey\lib\terraform\tools...
C:\ProgramData\chocolatey\lib\terraform\tools
ShimGen has successfully created a shim for terraform.exe
The install of terraform was successful.
Software installed to 'C:\ProgramData\chocolatey\lib\terraform\tools'
Chocolatey installed 1/1 packages.
See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).
PS C:\TACG>

Run terraform -version to check whether Terraform installed successfully:

PS C:\TACG> terraform -version
Terraform v1.7.4
on windows_amd64
PS C:\TACG>

The installation of Terraform is now complete.

Terraform - Basics and Commands

Terraform Block

The terraform {} block contains Terraform settings, including the required providers needed to provision your infrastructure. Terraform installs providers from the Terraform Registry.

Providers

The provider block configures the specified provider.
A provider is a plug-in that Terraform uses to create and manage your resources. Defining multiple provider blocks in the Terraform configuration enables managing resources from different providers.

Resources

Resource blocks define the components of the infrastructure - physical, virtual, or logical. These blocks contain arguments to configure the resource. The provider's reference lists the required and optional arguments for each resource.

The core Terraform workflow consists of three stages:

Write: You define the resources to be deployed, altered, or deleted.
Plan: Terraform creates an execution plan describing the infrastructure it will create, update, or destroy based on the existing infrastructure and your configuration.
Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies.

Terraform not only adds complete configurations, it also allows you to change previously added ones. For example, changing the DNS servers of a NIC of an Amazon EC2 VM does not require redeploying the whole configuration - Terraform only alters the needed resources.

Terraform Provider for Citrix

Citrix has developed a custom Terraform provider for automating Citrix product deployments and configurations. Using Terraform with the Citrix provider, you can manage your Citrix products via Infrastructure as Code. Terraform gives you higher efficiency and consistency in infrastructure management, and better reusability of infrastructure configuration. The provider defines individual units of infrastructure and currently supports both the Citrix Virtual Apps and Desktops and the Citrix DaaS solutions. You can automate the creation of a site setup including host connections, machine catalogs, and delivery groups. You can deploy resources on Amazon EC2 (AWS), Microsoft Azure, and GCP in addition to supported on-premises hypervisors.
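The terraform {} block and provider requirement described above can be sketched as follows; the citrix/citrix source and the version constraint mirror those shown in the terraform init output later in this guide:

```hcl
# Declare which providers this configuration requires.
# Terraform downloads them from the Terraform Registry during "terraform init"
# and records the exact selections in .terraform.lock.hcl.
terraform {
  required_providers {
    citrix = {
      source  = "citrix/citrix"
      version = ">= 0.5.3" # constraint used in this guide; adjust as needed
    }
  }
}
```

With this block in place, terraform init resolves and installs the newest provider release that satisfies the constraint.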
Terraform expects to be invoked from a working directory that contains configuration files written in the Terraform language. Terraform uses configuration content from this directory and also uses the directory to store settings, cached plug-ins and modules, and state data. A working directory must be initialized before Terraform can perform any operations in it. Initialize the working directory by using the command:

PS C:\TACG> terraform init

Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding citrix/citrix versions matching ">= 0.5.3"...
- Installing citrix/citrix v0.5.3...
- Installed citrix/citrix v0.5.3 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
PS C:\TACG>

The provider defines how Terraform can interact with the underlying API. Configurations must declare which providers they require so that Terraform can install and use them. The Terraform CLI finds and installs providers when initializing a working directory. It can automatically download providers from a Terraform registry, or load them from a local mirror or cache.

Example: Terraform configuration files - provider.tf

The file provider.tf contains the information on the target site to which the configuration is applied. Depending on whether the target is a Citrix Cloud site or a Citrix on-premises site, the provider needs to be configured differently:

# Cloud Provider
provider "citrix" {
  customer_id   = "idofthexxxxxxxxx"                   # Citrix Cloud Customer ID
  client_id     = "3edrxxxx-XXXX-XXXX-XXXX-XXXXXXXXXX" # ID of the Secure API client planned to use
  client_secret = "*********************"              # Secret of the Secure API client planned to use
}

A guide for creating a secure API client can be found in the Citrix Developer Docs and is also shown later.

# On-Premises Provider
provider "citrix" {
  hostname      = "10.0.0.6"
  client_id     = "foo.local\\admin"
  client_secret = "foo"
}

Example - Schema used for the provider configuration

client_id (String): Client ID for Citrix DaaS service authentication.
For Citrix on-premises customers: use this variable to specify the Domain-Admin username.
For Citrix Cloud customers: use this variable to specify the Cloud API key Client ID.
Can be set via the environment variable CITRIX_CLIENT_ID.

client_secret (String, Sensitive): Client Secret for Citrix DaaS service authentication.
For Citrix on-premises customers: use this variable to specify the Domain-Admin password.
For Citrix Cloud customers: use this variable to specify the Cloud API key Client Secret.
Can be set via the environment variable CITRIX_CLIENT_SECRET.

customer_id (String): Citrix Cloud customer ID.
Only applicable for Citrix Cloud customers.
Can be set via the environment variable CITRIX_CUSTOMER_ID.
disable_ssl_verification (Boolean): Disable SSL verification against the target DDC.
Only applicable to on-premises customers; Citrix Cloud customers do not need this option.
Set to true to skip SSL verification only when the target DDC does not have a valid SSL certificate issued by a trusted CA. When set to true, make sure that your provider config is set to a known DDC hostname. It is recommended to configure a valid certificate for the target DDC.
Can be set via the environment variable CITRIX_DISABLE_SSL_VERIFICATION.

environment (String): Citrix Cloud environment of the customer.
Only applicable for Citrix Cloud customers. Available options: Production, Staging, Japan, JapanStaging.
Can be set via the environment variable CITRIX_ENVIRONMENT.

hostname (String): Hostname/base URL of the Citrix DaaS service.
For Citrix on-premises customers (required): use this variable to specify the Delivery Controller hostname.
For Citrix Cloud customers (optional): use this variable to override the Citrix DaaS service hostname.
Can be set via the environment variable CITRIX_HOSTNAME.

Deploying a Citrix Cloud Resource Location on Amazon EC2 using Terraform

Overview

This guide aims to showcase the possibility of creating a complete Citrix Cloud Resource Location on Amazon EC2 using Terraform. We want to reduce manual intervention to the absolute minimum. All Terraform configuration files will be available later on GitHub - we will update this guide when the GitHub repository is ready. In this guide we use an existing domain and do not deploy a new one - for instructions on deploying a new domain, refer to the guide Citrix DaaS and Terraform - Automatic Deployment of a Resource Location on Microsoft Azure. Note that this guide will be reworked soon. The AD deployment used for this guide follows a hub-and-spoke model - each Resource Location running on a hypervisor/hyperscaler is connected to the main Domain Controller using IPsec-based site-to-site VPNs.
Each Resource Location has its own sub-domain.

The Terraform flow is split into different parts:

Part One - this part can be run on any computer where Terraform is installed. Creating the initially needed resources on Amazon EC2:
- Creating all needed IAM roles on Amazon EC2
- Creating all needed IAM instance profiles on Amazon EC2
- Creating all needed IAM policies on Amazon EC2
- Creating all needed Secrets Manager configurations on Amazon EC2
- Creating all needed DHCP configurations on Amazon EC2
- Creating a Windows Server 2022-based Master Image VM used for deploying the Machine Catalog in step 3
- Creating two Windows Server 2022-based VMs used as Cloud Connector VMs in step 2
- Creating a Windows Server 2022-based VM acting as an administrative workstation for running Terraform steps 2 and 3 - this is necessary because WinRM is used for further configuration and deployment in steps 2 and 3
- Creating all necessary scripts for joining the VMs to the existing sub-domain
- Putting the VMs into the existing sub-domain

Part Two - this part can only be run on the previously created administrative VM, as the deployment in steps 2 and 3 relies heavily on WinRM. Configuring the three previously created virtual machines on Amazon EC2:
- Installing the needed software on the CCs
- Installing the needed software on the Admin-VM
Creating the necessary resources in Citrix Cloud:
- Creating a Resource Location in Citrix Cloud
- Configuring the two CCs as Cloud Connectors
- Registering the two CCs in the newly created Resource Location

Part Three - creating the Machine Catalog and Delivery Group in Citrix Cloud:
- Retrieving the Site- and Zone-ID of the Resource Location
- Creating a dedicated Hypervisor Connection to Amazon EC2
- Creating a dedicated Hypervisor Resource Pool
- Creating a Machine Catalog (MC) in the newly created Resource Location
- Creating a Delivery Group (DG) based on the MC in the newly created Resource Location

Determine if WinRM
connections/communications are functioning

We strongly recommend a quick check of the WinRM communication before starting the Terraform scripts. Open a PowerShell console and type the following command:

PS C:\TACG> Test-WSMan -ComputerName <IP address of the computer you want to reach> -Credential <IP address of the computer you want to reach>\administrator -Authentication Basic

The response should look like:

wsmid           : http://schemas.dmtf.org/wbem/wsman/identity/1/wsmanidentity.xsd
ProtocolVersion : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor   : Microsoft Corporation
ProductVersion  : OS: 0.0.0 SP: 0.0 Stack: 3.0

Another possibility is to open a PowerShell console and type:

Enter-PSSession -ComputerName <IP address of the computer you want to reach> -Credential <IP address of the computer you want to reach>\administrator

The response should look like:

[172.31.22.104]: PS C:\Users\Administrator\Documents>

A short Terraform script also checks whether the communication via WinRM between the Admin-VM and, in this example, the CC1-VM is working as intended:

locals {
  #### Test the WinRM communication
  #### Need to invoke PowerShell as a Domain User, as the provisioner does not allow running in a Domain User's context
  TerraformTestWinRMScript = <<-EOT
    $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
    $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
    $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
    $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
    Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
      $FileNameForData = 'C:\temp\xdinst\Processes.txt'
      If (Test-Path $FileNameForData) {Remove-Item -Path $FileNameForData -Force}
      Get-Process | Out-File -FilePath 'C:\temp\xdinst\Processes.txt'
    }
  EOT
}

#### Write script into local data directory
resource "local_file" "WriteWinRMTestScriptIntoDataDirectory" {
  filename = "${path.module}/data/Terraform-Test-WinRM.ps1"
  content  = local.TerraformTestWinRMScript
}

resource "null_resource" "CreateTestScriptOnCC1" {
  connection {
    type     = var.Provisioner_Type
    user     = var.Provisioner_Admin-Username
    password = var.Provisioner_Admin-Password
    host     = var.Provisioner_CC1-IP
    timeout  = var.Provisioner_Timeout
  }

  provisioner "file" {
    source      = "${path.module}/data/Terraform-Test-WinRM.ps1"
    destination = "C:/temp/xdinst/Terraform-Test-WinRM.ps1"
  }

  provisioner "remote-exec" {
    inline = [
      "powershell -File 'C:/temp/xdinst/Terraform-Test-WinRM.ps1'"
    ]
  }
}

If you see something like the following in the Terraform console...:

null_resource.CreateTestScriptOnCC1: Creating...
null_resource.CreateTestScriptOnCC1: Provisioning with 'remote-exec'...
null_resource.CreateTestScriptOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CreateTestScriptOnCC1 (remote-exec):   Host: 172.31.22.103
null_resource.CreateTestScriptOnCC1 (remote-exec):   Port: 5985
null_resource.CreateTestScriptOnCC1 (remote-exec):   User: administrator
null_resource.CreateTestScriptOnCC1 (remote-exec):   Password: true
null_resource.CreateTestScriptOnCC1 (remote-exec):   HTTPS: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   Insecure: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   NTLM: false
null_resource.CreateTestScriptOnCC1 (remote-exec):   CACert: false
null_resource.CreateTestScriptOnCC1 (remote-exec): Connected!
null_resource.CreateTestScriptOnCC1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/Terraform-Test-WinRM.ps1
#< CLIXML ... (PowerShell progress records serialized as CLIXML - this noise can be ignored)
null_resource.CreateTestScriptOnCC1: Creation complete after 3s
[id=1571484748961023525]

...then you can be sure that the provisioning using WinRM is working as intended.

Configuration using variables

All needed configuration settings are stored in the corresponding variables, which need to be set. Some configuration settings are propagated throughout the whole Terraform configuration. You need to start each of the 3 modules manually using the Terraform workflow terraform init, terraform plan, and terraform apply in the corresponding module directory. Terraform then completes the necessary configuration steps of the corresponding module.

File system structure

Root directory

Module 1: _CConAWS-Creation:

Filename | Purpose
_CConAWS-Creation-Create.tf | Resource configuration and primary flow definition
_CConAWS-Creation-Create-variables.tf | Definition of variables
_CConAWS-Creation-Create.auto.tfvars.json | Setting the values of the variables
_CConAWS-Creation-Provider.tf | Provider definition and configuration
_CConAWS-Creation-Provider-variables.tf | Definition of variables
_CConAWS-Creation-Provider.auto.tfvars.json | Setting the values of the variables
Add-EC2InstanceToDomainAdminVM.ps1 | PowerShell script for joining the Admin-VM to the domain
Add-EC2InstanceToDomainCC1.ps1 | PowerShell script for joining the CC1-VM to the domain
Add-EC2InstanceToDomainCC2.ps1 | PowerShell script for joining the CC2-VM to the domain
Add-EC2InstanceToDomainWMI.ps1 | PowerShell script for joining the CC2-VM to the domain
DATA directory | Place to put files to upload using file provisioning (NOT RECOMMENDED - see later explanation)

Module 2: _CConAWS-Install:

Filename | Purpose
_CCOnAWS-Install-CreatePreReqs.tf | Resource configuration and primary flow definition
_CCOnAWS-Install-CreatePreReqs-variables.tf | Definition of variables
_CCOnAWS-Install-CreatePreReqs.auto.tfvars.json | Setting the values of the variables
_CConAWS-Install-Provider.tf | Provider definition and configuration
_CConAWS-Install-Provider-variables.tf | Definition of variables
_CConAWS-Install-Provider.auto.tfvars.json | Setting the values of the variables
GetSiteID.ps1 | PowerShell script that makes a REST API call to determine the CC Site-ID
GetZoneID.ps1 | PowerShell script that makes a REST API call to determine the CC Zone-ID
DATA/InstallPreReqsOnAVM1.ps1 | PowerShell script to deploy needed prerequisites on the Admin-VM
DATA/InstallPreReqsOnAVM2.ps1 | PowerShell script to deploy needed prerequisites on the Admin-VM
DATA/InstallPreReqsOnCC.ps1 | PowerShell script to deploy needed prerequisites on the CC-VMs

Module 3: _CConAWS-CCStuff:

Filename | Purpose
_CCOnAWS-CCStuff-CreateCCEntities.tf | Resource configuration and primary flow definition
_CCOnAWS-CCStuff-CreateCCEntities-variables.tf | Definition of variables
_CCOnAWS-CCStuff-CreateCCEntities.auto.tfvars.json | Setting the values of the variables
_CConAWS-CCStuff-Provider.tf | Provider definition and configuration
_CConAWS-CCStuff-Provider-variables.tf | Definition of variables
_CConAWS-CCStuff-Provider.auto.tfvars.json | Setting the values of the variables

Change the settings in the .json files according to your needs. The following prerequisites must be met before changing the corresponding settings or running the Terraform workflow, to ensure a smooth and error-free build.

Prerequisites

Installing AWS.Tools for PowerShell and AWS CLI

In this guide, we use the AWS CLI and PowerShell cmdlets to determine further needed information. More information about the AWS CLI and its installation/configuration can be found at AWS Command Line Interface; more information about the PowerShell cmdlets for AWS can be found at Installing the AWS Tools for PowerShell on Windows.
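As an illustration of how the .auto.tfvars.json files carry variable values (a sketch only - the variable names below are taken from the provisioner example earlier in this guide, and the values are placeholders; consult each module's -variables.tf file for the exact names it expects):

```json
{
  "Provisioner_Type": "winrm",
  "Provisioner_Admin-Username": "administrator",
  "Provisioner_CC1-IP": "172.31.22.103",
  "Provisioner_Timeout": "10m"
}
```

Terraform automatically loads any file in the working directory whose name ends in .auto.tfvars.json, so no -var-file argument is needed when running terraform plan or terraform apply.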
Examples:

PS C:\TACG> Install-AWSToolsModule AWS.Tools.EC2 -Force
Installing module AWS.Tools.EC2 version 4.1.533.0
PS C:\TACG> Install-AWSToolsModule AWS.Tools.IdentityManagement -Force
Installing module AWS.Tools.IdentityManagement version 4.1.533.0
PS C:\TACG> Install-AWSToolsModule AWS.Tools.SimpleSystemsManagement -Force
Installing module AWS.Tools.SimpleSystemsManagement version 4.1.533.0
PS C:\TACG>

Existing Amazon EC2 entities

We assume that the following resources already exist and are configured on Amazon EC2:
- A working tenant
- All needed rights for the IAM user on the tenant
- A VPC
- At least one subnet in the VPC
- A security group configured to allow inbound connections from the subnet and partially from the Internet: WinRM-HTTP, WinRM-HTTPS, UDP, DNS (UDP and TCP), ICMP (for testing purposes), HTTP, HTTPS, TCP (for testing purposes), RDP. No blocking rules for outbound connections should be in place
- An access key with its secret (a description of how to create the key follows later)

We can get the needed information about the VPC by using PowerShell:

PS C:\TACG> Get-EC2Vpc

CidrBlock                   : 172.31.0.0/16
CidrBlockAssociationSet     : {vpc-cidr-assoc-0a91XXXXXXXXXXX}
DhcpOptionsId               : dopt-0a71XXXXXXXXXXX
InstanceTenancy             : default
Ipv6CidrBlockAssociationSet : {}
IsDefault                   : True
OwnerId                     : 968XXXXXX
State                       : available
Tags                        : {}
VpcId                       : vpc-0f9XXXXXXXXXXXX

Note the VpcId and put it into the corresponding .auto.tfvars.json file.
We can get the needed information about one or more subnets also by using PowerShell:

PS C:\TACG> Get-EC2Subnet

AssignIpv6AddressOnCreation   : False
AvailabilityZone              : eu-central-1b
AvailabilityZoneId            : euc1-az3
AvailableIpAddressCount       : 4091
CidrBlock                     : 172.31.32.0/20
CustomerOwnedIpv4Pool         :
DefaultForAz                  : True
EnableDns64                   : False
EnableLniAtDeviceIndex        : 0
Ipv6CidrBlockAssociationSet   : {}
Ipv6Native                    : False
MapCustomerOwnedIpOnLaunch    : False
MapPublicIpOnLaunch           : True
OutpostArn                    :
OwnerId                       : 968XXXXXX
PrivateDnsNameOptionsOnLaunch : Amazon.EC2.Model.PrivateDnsNameOptionsOnLaunch
State                         : available
SubnetArn                     : arn:aws:ec2:eu-central-1:968XXXXXX:subnet/subnet-02e91c49df134f849
SubnetId                      : subnet-02eXXXXXXXXXX
Tags                          : {Name}
VpcId                         : vpc-0f9XXXXXXXXXXXX

AssignIpv6AddressOnCreation   : False
AvailabilityZone              : eu-central-1a
AvailabilityZoneId            : euc1-az2
AvailableIpAddressCount       : 4089
CidrBlock                     : 172.31.16.0/20
CustomerOwnedIpv4Pool         :
DefaultForAz                  : True
EnableDns64                   : False
EnableLniAtDeviceIndex        : 0
Ipv6CidrBlockAssociationSet   : {}
Ipv6Native                    : False
MapCustomerOwnedIpOnLaunch    : False
MapPublicIpOnLaunch           : True
OutpostArn                    :
OwnerId                       : 968XXXXXX
PrivateDnsNameOptionsOnLaunch : Amazon.EC2.Model.PrivateDnsNameOptionsOnLaunch
State                         : available
SubnetArn                     : arn:aws:ec2:eu-central-1:968XXXXXX:subnet/subnet-07eXXXXXXXXXX
SubnetId                      : subnet-07eXXXXXXXXXX
Tags                          : {Name}
VpcId                         : vpc-0f9XXXXXXXXXXXX

AssignIpv6AddressOnCreation   : False
AvailabilityZone              : eu-central-1c
AvailabilityZoneId            : euc1-az1
AvailableIpAddressCount       : 4090
CidrBlock                     : 172.31.0.0/20
CustomerOwnedIpv4Pool         :
DefaultForAz                  : True
EnableDns64                   : False
EnableLniAtDeviceIndex        : 0
Ipv6CidrBlockAssociationSet   : {}
Ipv6Native                    : False
MapCustomerOwnedIpOnLaunch    : False
MapPublicIpOnLaunch           : True
OutpostArn                    :
OwnerId                       : 968XXXXXX
PrivateDnsNameOptionsOnLaunch : Amazon.EC2.Model.PrivateDnsNameOptionsOnLaunch
State                         : available
SubnetArn                     : arn:aws:ec2:eu-central-1:968XXXXXX:subnet/subnet-0359XXXXXXXXXXX
SubnetId                      : subnet-0359XXXXXXXXXXX
Tags                          : {Name}
VpcId                         : vpc-0f9XXXXXXXXXXXX

Note the SubnetId of the Availability Zone that you want to use and put it into the corresponding .auto.tfvars.json file.

Getting the region in Amazon EC2 where the resources will be deployed

The currently configured default region can be found by using, for example, the AWS CLI - open a PowerShell window and type:

PS C:\TACG> aws configure get region
eu-central-1
PS C:\TACG>

Write down the region, as we need to assign it to variables.

Getting the available AMI Image-IDs from Amazon EC2

We want to automatically deploy the virtual machines necessary for the DC and the CCs - so we need detailed configuration settings.

Set the credentials needed to allow PowerShell to access the EC2 tenant:

PS C:\TACG> Set-AWSCredential -AccessKey AKIXXXXXXXXXXXXXXXX -SecretKey RiXXXXXXXXXXXXXXXXXXXXXXXXXX -StoreAs default

Get the available AMI images:

PS C:\TACG> Initialize-AWSDefaultConfiguration -ProfileName default -Region eu-central-1
PS C:\TACG> Get-SSMLatestEC2Image -Path ami-windows-latest -ProfileName default -Region eu-central-1

Name Value
---- -----
EC2LaunchV2-Windows_Server-2016-English-Full-Base ami-05da8c5b8c31e1071
Windows_Server-2016-English-Full-SQL_2014_SP3_Enterprise ami-0792b126a5682d6e8
Windows_Server-2016-German-Full-Base ami-04482c384c5f44eba
Windows_Server-2016-Japanese-Full-SQL_2016_SP3_Standard ami-06bae50c6434d597c
Windows_Server-2016-Japanese-Full-SQL_2017_Web ami-069867bf028ce1d11
Windows_Server-2019-English-Core-EKS_Optimized-1.25 ami-0dc34920ee17ff0c7
Windows_Server-2019-Italian-Full-Base ami-0f6d5ffbe2b4e6daa
Windows_Server-2022-Japanese-Full-SQL_2019_Enterprise ami-0ce4c5ab9a9ee18e0
Windows_Server-2022-Portuguese_Brazil-Full-Base ami-0fe6028dce619a01c
amzn2-ami-hvm-2.0.20191217.0-x86_64-gp2-mono ami-0f7a4c9d36399c73f
Windows_Server-2016-English-Deep-Learning ami-0873c2c3320a70d5b
Windows_Server-2016-Japanese-Full-SQL_2016_SP3_Web ami-08565efb3c4b556ba
Windows_Server-2016-Korean-Full-Base ami-08a0270377841480d
Windows_Server-2019-English-STIG-Core ami-0b4eb638a465efce5
Windows_Server-2019-French-Full-Base ami-0443c855ecad9de50
Windows_Server-2022-English-Full-Base ami-0ad8b6fa068e0299a
...

Note the value of the AMI that you want to use - for example ami-0ad8b6fa068e0299a.

Getting the available VM sizes from Amazon EC2

We need to determine the available VM sizes. A PowerShell script helps us list the available instance types on Amazon EC2:

PS C:\TACG> (Get-EC2InstanceType -Region eu-central-1).Count
694

We need to filter the results to narrow down usable instances - we want to use instances with a maximum of 4 vCPUs and 8 GB RAM:

PS C:\TACG> Get-EC2InstanceType -Region eu-central-1 | Select-Object -Property InstanceType, @{Name="vCPUs"; Expression={$_.VCpuInfo.DefaultVCpus}}, @{Name="Memory in GB"; Expression={$_.MemoryInfo.SizeInMiB / 1024}} | Where-Object {$_.vCPUs -le 4 -and $_."Memory in GB" -le 8 } | Sort-Object InstanceType | Format-Table InstanceType,vCPUs,"Memory in GB"

InstanceType vCPUs Memory in GB
------------ ----- ------------
a1.large 2 4
a1.medium 1 2
a1.xlarge 4 8
c3.large 2 3,75
c3.xlarge 4 7,5
c4.large 2 3,75
c4.xlarge 4 7,5
c5.large 2 4
c5.xlarge 4 8
c5a.large 2 4
c5a.xlarge 4 8
c5ad.large 2 4
c5ad.xlarge 4 8
c5d.large 2 4
c5d.xlarge 4 8
c5n.large 2 5,25
c6a.large 2 4
c6a.xlarge 4 8
c6g.large 2 4
c6g.medium 1 2
c6g.xlarge 4 8
c6gd.large 2 4
c6gd.medium 1 2
c6gd.xlarge 4 8
c6gn.large 2 4
c6gn.medium 1 2
c6gn.xlarge 4 8
c6i.large 2 4
c6i.xlarge 4 8
c6id.large 2 4
c6id.xlarge 4 8
c6in.large 2 4
c6in.xlarge 4 8
c7a.large 2 4
c7a.medium 1 2
c7a.xlarge 4 8
c7g.large 2 4
c7g.medium 1 2
c7g.xlarge 4 8
c7gd.large 2 4
c7gd.medium 1 2
c7gd.xlarge 4 8
c7i.large 2 4
c7i.xlarge 4 8
t2.large 2 8
t2.medium 2 4
t2.micro 1 1
t2.nano 1 0,5
t2.small 1 2
t3.large 2 8
t3.medium 2 4
t3.micro 2 1
t3.nano 2 0,5
t3.small 2 2
t3a.large 2 8
t3a.medium 2 4
t3a.micro 2 1
t3a.nano 2 0,5
t3a.small 2 2
t4g.large 2 8
t4g.medium 2 4
t4g.micro 2 1
t4g.nano 2 0,5
t4g.small 2 2
...
PS C:\TACG>

Note the InstanceType parameter of the instance size that you want to use. We use t3.medium and t3.large.
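Instead of looking these values up manually, the AMI ID and the instance type can also be resolved inside the Terraform configuration itself. A minimal sketch using the AWS provider's SSM parameter data source - the public parameter path is the same source Get-SSMLatestEC2Image queries; the variable and resource names here are illustrative, not taken from the guide's modules:

```hcl
# Resolve the latest Windows Server 2022 AMI from the public SSM parameter store
data "aws_ssm_parameter" "win2022_ami" {
  name = "/aws/service/ami-windows-latest/Windows_Server-2022-English-Full-Base"
}

# Instance type chosen from the filtered list above
variable "cc_instance_type" {
  description = "Instance type for the Cloud Connector VMs"
  type        = string
  default     = "t3.medium"
}

# Example usage in an instance definition
resource "aws_instance" "Example" {
  ami           = data.aws_ssm_parameter.win2022_ami.value
  instance_type = var.cc_instance_type
}
```

Using the data source keeps the configuration region-independent, since the parameter always resolves to the latest AMI in the configured region.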
Be sure to use Citrix Terraform Provider version 0.5.3 or higher; otherwise the Instance Type must be passed to the Terraform provider in a different format (examples):

Amazon EC2 syntax - Needed format:
t3.nano - T3 Nano Instance
t3.small - T3 Small Instance
t3.medium - T3 Medium Instance
t3.large - T3 Large Instance
e2.small - E2 Small Instance
e2.medium - E2 Medium Instance

Example: checking the current quotas for the instances (N-family) which we plan to use in this guide:

PS C:\TACG> Install-AWSToolsModule AWS.Tools.ServiceQuotas
PS C:\TACG> Get-SQServiceList

ServiceCode ServiceName
----------- -----------
AWSCloudMap AWS Cloud Map
access-analyzer Access Analyzer
acm AWS Certificate Manager (ACM)
acm-pca AWS Private Certificate Authority
...
ec2 Amazon Elastic Compute Cloud (Amazon EC2)
ec2-ipam IPAM
ec2fastlaunch EC2 Fast Launch
...
PS C:\TACG> Get-SQServiceQuota -ServiceCode ec2 -QuotaCode L-1216C47A

Adjustable : True
ErrorReason :
GlobalQuota : False
Period :
QuotaAppliedAtLevel : ACCOUNT
QuotaArn : arn:aws:servicequotas:eu-central-1:9XXXXXXXXX:ec2/L-1216C47A
QuotaCode : L-1216C47A
QuotaContext :
QuotaName : Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances
ServiceCode : ec2
ServiceName : Amazon Elastic Compute Cloud (Amazon EC2)
Unit : None
UsageMetric : Amazon.ServiceQuotas.Model.MetricInfo
Value : 256
PS C:\TACG>

The output of the cmdlet shows that we should have enough available resources on the instance level. The quotas can be increased using the Amazon EC2 Console or PowerShell. More information about increasing vCPU quotas can be found here: Amazon Elastic Compute Cloud (Amazon EC2) quotas.
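The same quota check can also be expressed declaratively, so the value is available inside the Terraform configuration. A sketch using the AWS provider's Service Quotas data source, with the service and quota codes taken from the output above:

```hcl
# Read the current On-Demand Standard instance vCPU quota (L-1216C47A)
data "aws_servicequotas_service_quota" "ec2_standard_ondemand" {
  service_code = "ec2"
  quota_code   = "L-1216C47A"
}

# Expose the quota so it can be inspected after terraform apply
output "ec2_vcpu_quota" {
  value = data.aws_servicequotas_service_quota.ec2_standard_ondemand.value
}
```

This could be combined with a precondition or check block to fail the plan early when the quota is too low for the planned deployment.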
Further Software Components for Configuration and Deployment

The Terraform deployment needs current versions of the following software components:

- Citrix Cloud Connector Installer: cwcconnector.exe. Download the Citrix Cloud Connector Installer
- Citrix Remote PowerShell SDK Installer: CitrixPoshSdk.exe. Download the Citrix Remote PowerShell SDK Installer

These components are required during the workflow, and the Terraform engine looks for these files. In this guide we assume that the necessary software can be downloaded from a storage repository - we use an Azure Storage Blob to which all necessary software has been uploaded. The URIs of the storage repository can be set in the corresponding variables:

For the Cloud Connector: "CC_Install_CWCURI":"https://wmwblob.blob.core.windows.net/tfdata/cwcconnector.exe"
For the Remote PowerShell SDK: "CC_Install_RPoSHURI":"https://wmwblob.blob.core.windows.net/tfdata/CitrixPoshSdk.exe"

Creating an Access Key with a Secret for Amazon EC2 Authentication in AWS CLI and/or AWS.Tools for PowerShell

Access Keys are long-term credentials for an IAM user or the root user, used for authentication and authorization in EC2. They consist of two parts: an Access Key ID and a Secret Access Key. Both are needed for authentication and authorization. Further information about Access Key management and configuration can be found in Managing access keys for IAM users.

The needed security information for the IAM Policies is stored in AWS Secrets Manager. Further information about Secrets Manager and its configuration can be found at AWS Secrets Manager.

Creating a Secure Client in Citrix Cloud

The Secure Client in Citrix Cloud is the equivalent of the Access Key in Amazon EC2. It is used for authentication. API clients in Citrix Cloud are always tied to one administrator and one customer. API clients are not visible to other administrators. If you want to access more than one customer, you must create API clients within each customer.
API clients are automatically restricted to the rights of the administrator who created them. For example, if an administrator is restricted to access only notifications, then the administrator's API clients have the same restrictions:

- Reducing an administrator's access also reduces the access of the API clients owned by that administrator.
- Removing an administrator's access also removes the administrator's API clients.

To create an API client, select the Identity and Access Management option from the menu. If this option does not appear, you do not have adequate permissions to create an API client. Contact your administrator to get the required permissions.

Open Identity and Access Management in WebStudio:

Creating an API Client in Citrix Cloud

Click API Access, Secure Clients and enter a name in the textbox next to the button Create Client. After entering a name, click Create Client:

Creating an API Client in Citrix Cloud

After the Secure Client is created, copy and write down the shown ID and Secret:

Creating an API Client in Citrix Cloud

The Secret is only visible during creation - after closing the window you cannot retrieve it anymore.

The client-id and client-secret fields are needed by the Citrix Terraform provider. The customer-id field, which is also needed, can be found in your Citrix Cloud account details. Put the values in the corresponding .auto.tfvars.json file:

...
"cc-customerId": "uzXXXXXXXX",
"cc-apikey-clientId": "f4eXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"cc-apikey-clientSecret": "VXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
"cc-apikey-type": "client_credentials",
...

Creating a Bearer Token in Citrix Cloud

The Bearer Token is needed for the authorization of some REST-API calls in Citrix Cloud. As the Citrix provider has not implemented all functionalities yet, some REST-API calls are still needed. The Bearer Token authorizes these calls.
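The Secure Client values feed the Citrix Terraform provider configuration. A sketch of how the provider block can consume them - the argument names shown here are the ones used by early 0.5.x versions of the citrix/citrix provider; verify them against the provider documentation for the version you install:

```hcl
# Citrix Cloud authentication via the Secure Client created above;
# the variable names mirror the .auto.tfvars.json keys from this guide
provider "citrix" {
  customer_id   = var.cc-customerId          # Citrix Cloud Customer ID
  client_id     = var.cc-apikey-clientId     # Secure Client ID
  client_secret = var.cc-apikey-clientSecret # Secure Client Secret
}
```

Keeping the secret in a tfvars file excluded from version control (or injecting it via an environment variable) avoids committing credentials to the repository.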
It is important to set the URI and the required parameters correctly. The URI must follow this syntax: for example [https://api-us.cloud.com/cctrustoauth2/{customerid}/tokens/clients], where {customerid} is the Customer ID you obtained from the Account Settings page. If your Customer ID is, for example, 1234567890, the URI is [https://api-us.cloud.com/cctrustoauth2/1234567890/tokens/clients].

In this example, we use the Postman application to create a Bearer Token: paste the correct URI into Postman's address bar and select POST as the method. Verify the correct settings of the API call.

Creating a Bearer Token using Postman

If everything is set correctly, Postman shows a Response containing a JSON-formatted body with the Bearer Token in the field access-token. The token is normally valid for 3600 seconds.

Put the values in the corresponding .auto.tfvars.json file:

...
"cc-apikey-type": "client_credentials",
"cc-apikey-bearer": "CWSAuth bearer=eyJhbGciOiJSUzI1NiIsI...0q0IW7SZFVzeBittWnEwTYOZ7Q "
...
You can also use PowerShell to request a Bearer Token - for this you need a valid Secure Client stored in Citrix Cloud:

asnp Citrix*
$key = "f4eXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
$secret = "VJCXXXXXXXXXXXXXX"
$customer = "uXXXXXXXX"
$XDStoredCredentials = Set-XDCredentials -StoreAs default -ProfileType CloudApi -CustomerId $customer -APIKey $key -SecretKey $secret
$auth = Get-XDAuthentication
$GLOBAL:XDAuthToken | Out-File "<Path where to store the Token>\BT.txt"

Module 1: Create the initially needed Resources on Amazon EC2

This module is split into the following configuration parts:

Creating the initially needed resources on Amazon EC2:
- Creating all needed IAM roles on Amazon EC2
- Creating all needed IAM Instance profiles on Amazon EC2
- Creating all needed IAM policies on Amazon EC2
- Creating all needed Secrets Manager configurations on Amazon EC2
- Creating all needed DHCP configurations on Amazon EC2
- Creating a Windows Server 2022-based Master Image VM used for deploying the Machine Catalog in step 3
- Creating two Windows Server 2022-based VMs which will be used as Cloud Connector VMs in step 2
- Creating a Windows Server 2022-based VM acting as an administrative workstation for running Terraform steps 2 and 3 - this is necessary because WinRM is used for further configuration and deployment in steps 2 and 3
- Creating all necessary scripts for joining the VMs to the existing sub-domain
- Putting the VMs into the existing sub-domain
- Fetching and saving a valid Bearer Token

All these steps are done automatically by Terraform. Please make sure you have configured the variables according to your needs.
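The per-VM domain-join scripts this module generates (visible later in the plan output as data.template_file.Add-EC2InstanceToDomainScriptCC1 and its siblings) are rendered from a shared template. A minimal sketch of one such data source - the template file name and the template variables are illustrative, not taken from the guide's modules:

```hcl
# Render a per-VM domain-join PowerShell script from a shared template
data "template_file" "Add-EC2InstanceToDomainScriptCC1" {
  template = file("${path.module}/templates/JoinDomain.ps1.tpl")

  vars = {
    # Hypothetical template variables; the guide's own modules define their own set
    domain_admin = var.Provisioner_DomainAdmin-Username
    computer_tag = "TACG-AWS-CC1"
  }
}
```

The rendered result (data.template_file.Add-EC2InstanceToDomainScriptCC1.rendered) is then typically passed to the instance as user_data, which matches the hashed user_data values shown in the plan output.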
The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and - if no errors occur - terraform apply.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> terraform init

Initializing the backend...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding mastercard/restapi versions matching "1.18.2"...
- Finding citrix/citrix versions matching ">= 0.5.0"...
- Finding hashicorp/aws versions matching ">= 5.4.0"...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/template...
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing mastercard/restapi v1.18.2...
- Installed mastercard/restapi v1.18.2 (self-signed, key ID DCB8C431D71C30AB)
- Installing citrix/citrix v0.5.2...
- Installed citrix/citrix v0.5.2 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)
- Installing hashicorp/aws v5.41.0...
- Installed hashicorp/aws v5.41.0 (signed by HashiCorp)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure.
All Terraform commands should now work. If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation>
PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> terraform plan

data.template_file.Add-EC2InstanceToDomainScriptCC2: Reading...
data.template_file.Add-EC2InstanceToDomainScriptWMI: Reading...
data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Reading...
data.template_file.Add-EC2InstanceToDomainScriptCC1: Reading...
data.template_file.Add-EC2InstanceToDomainScriptWMI: Read complete after 0s [id=85de6bac9e35231cbd60a4c1636a554940abb789938916a626a5193f27f22498]
data.template_file.Add-EC2InstanceToDomainScriptCC1: Read complete after 0s [id=24ee722eca6982b33be472de4f84edbae000d0bff0a139dec2ce97c8ea14a0ca]
data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Read complete after 0s [id=91b3ae99f8d4a2effb377f35dda69a583de194739c8191ee665c96e663ad8615]
data.template_file.Add-EC2InstanceToDomainScriptCC2: Read complete after 0s [id=15075f0d18ca3e200ab603e397339245a8ff055fd688facfc5165dd5e455d151]
data.aws_iam_policy_document.ec2_assume_role: Reading...
data.aws_iam_policy_document.ec2_assume_role: Read complete after 0s [id=2851119427]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols: + create <= read (data resources) Terraform will perform the following actions: # data.local_file.Retrieve_BT will be read during apply # (depends on a resource or a module with changes pending) <= data "local_file" "Retrieve_BT" { + content = (known after apply) + content_base64 = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + filename = "./GetBT.txt" + id = (known after apply) } # aws_iam_instance_profile.ec2_profile will be created + resource "aws_iam_instance_profile" "ec2_profile" { + arn = (known after apply) + create_date = (known after apply) + id = (known after apply) + name = "ec2-profile" + name_prefix = (known after apply) + path = "/" + role = "ec2-iam-role" + tags_all = (known after apply) + unique_id = (known after apply) } # aws_iam_policy.secret_manager_ec2_policy will be created + resource "aws_iam_policy" "secret_manager_ec2_policy" { + arn = (known after apply) + description = "Secret Manager EC2 policy" + id = (known after apply) + name = "secret-manager-ec2-policy" + name_prefix = (known after apply) + path = "/" + policy = jsonencode( { + Statement = [ + { + Action = [ + "secretsmanager:*", ] + Effect = "Allow" + Resource = "*" }, ] + Version = "2012-10-17" } ) + policy_id = (known after apply) + tags_all = (known after apply) } # aws_iam_policy_attachment.api_secret_manager_ec2_attach will be created + resource "aws_iam_policy_attachment" "api_secret_manager_ec2_attach" { + id = (known after apply) + name = "secret-manager-ec2-attachment" + policy_arn = (known after apply) + roles = (known after apply) } # aws_iam_policy_attachment.ec2_attach1 will be created + resource "aws_iam_policy_attachment" "ec2_attach1" { + id = (known after apply) + name = "ec2-iam-attachment" + 
policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" + roles = (known after apply) } # aws_iam_policy_attachment.ec2_attach2 will be created + resource "aws_iam_policy_attachment" "ec2_attach2" { + id = (known after apply) + name = "ec2-iam-attachment" + policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM" + roles = (known after apply) } # aws_iam_role.ec2_iam_role will be created + resource "aws_iam_role" "ec2_iam_role" { + arn = (known after apply) + assume_role_policy = jsonencode( { + Statement = [ + { + Action = "sts:AssumeRole" + Effect = "Allow" + Principal = { + Service = "ec2.amazonaws.com" } }, ] + Version = "2012-10-17" } ) + create_date = (known after apply) + force_detach_policies = false + id = (known after apply) + managed_policy_arns = (known after apply) + max_session_duration = 3600 + name = "ec2-iam-role" + name_prefix = (known after apply) + path = "/" + tags_all = (known after apply) + unique_id = (known after apply) } # aws_instance.AdminVM will be created + resource "aws_instance" "AdminVM" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.large" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after 
apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.107" + public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after apply) + subnet_id = "subnet-07e1XXXXXXXXXX" + tags = { + "Name" = "TACG-AWS-AVM" } + tags_all = { + "Name" = "TACG-AWS-AVM" } + tenancy = (known after apply) + user_data = "975296c878XXXXXXXXXX" + user_data_base64 = (known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_instance.CC1 will be created + resource "aws_instance" "CC1" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.medium" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.104" + 
public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after apply) + subnet_id = "subnet-07e1XXXXXXXXXX" + tags = { + "Name" = "TACG-AWS-CC1" } + tags_all = { + "Name" = "TACG-AWS-CC1" } + tenancy = (known after apply) + user_data = "5daf6ab616e8eXXXXXXXXXX" + user_data_base64 = (known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_instance.CC2 will be created + resource "aws_instance" "CC2" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.medium" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.105" + public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after 
apply) + subnet_id = "subnet-07e1XXXXXXXXXX" + tags = { + "Name" = "TACG-AWS-CC2" } + tags_all = { + "Name" = "TACG-AWS-CC2" } + tenancy = (known after apply) + user_data = "71b8c58dcf57XXXXXXXXXX" + user_data_base64 = (known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_instance.WMI will be created + resource "aws_instance" "WMI" { + ami = "ami-0ad8b6fa068e0299a" + arn = (known after apply) + associate_public_ip_address = (known after apply) + availability_zone = (known after apply) + cpu_core_count = (known after apply) + cpu_threads_per_core = (known after apply) + disable_api_stop = (known after apply) + disable_api_termination = (known after apply) + ebs_optimized = (known after apply) + get_password_data = false + host_id = (known after apply) + host_resource_group_arn = (known after apply) + iam_instance_profile = (known after apply) + id = (known after apply) + instance_initiated_shutdown_behavior = (known after apply) + instance_lifecycle = (known after apply) + instance_state = (known after apply) + instance_type = "t2.medium" + ipv6_address_count = (known after apply) + ipv6_addresses = (known after apply) + key_name = (sensitive value) + monitoring = (known after apply) + outpost_arn = (known after apply) + password_data = (known after apply) + placement_group = (known after apply) + placement_partition_number = (known after apply) + primary_network_interface_id = (known after apply) + private_dns = (known after apply) + private_ip = "172.31.22.106" + public_dns = (known after apply) + public_ip = (known after apply) + secondary_private_ips = (known after apply) + security_groups = (known after apply) + source_dest_check = true + spot_instance_request_id = (known after apply) + subnet_id = "subnet-07e168f0c2a28edf3" + tags = { + "Name" = "TACG-AWS-WMI" } + tags_all = { + "Name" = "TACG-AWS-WMI" } + tenancy = (known after apply) + user_data = "bd599dcdaa3dXXXXXXXXXXX" + user_data_base64 = 
(known after apply) + user_data_replace_on_change = false + vpc_security_group_ids = [ + "sg-072eXXXXXXXXXX", ] } # aws_vpc_dhcp_options.vpc-dhcp-options will be created + resource "aws_vpc_dhcp_options" "vpc-dhcp-options" { + arn = (known after apply) + domain_name_servers = [ + "172.31.22.103", ] + id = (known after apply) + owner_id = (known after apply) + tags_all = (known after apply) } # aws_vpc_dhcp_options_association.dns_resolver will be created + resource "aws_vpc_dhcp_options_association" "dns_resolver" { + dhcp_options_id = (known after apply) + id = (known after apply) + vpc_id = "vpc-0f9aXXXXXXXXXX" } # local_file.GetBearerToken will be created + resource "local_file" "GetBearerToken" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0777" + file_permission = "0777" + filename = "./GetBT.ps1" + id = (known after apply) } # terraform_data.GetBT will be created + resource "terraform_data" "GetBT" { + id = (known after apply) } Plan: 14 to add, 0 to change, 0 to destroy. PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation> terraform apply data.template_file.Add-EC2InstanceToDomainScriptCC2: Reading... data.template_file.Add-EC2InstanceToDomainScriptWMI: Reading... data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Reading... data.template_file.Add-EC2InstanceToDomainScriptCC1: Reading... 
data.template_file.Add-EC2InstanceToDomainScriptWMI: Read complete after 0s [id=85de6bac9e35231cbd60a4c1636a554940abb789938916a626a5193f27f22498]
data.template_file.Add-EC2InstanceToDomainScriptCC1: Read complete after 0s [id=24ee722eca6982b33be472de4f84edbae000d0bff0a139dec2ce97c8ea14a0ca]
data.template_file.Add-EC2InstanceToDomainScriptAdminVM: Read complete after 0s [id=91b3ae99f8d4a2effb377f35dda69a583de194739c8191ee665c96e663ad8615]
data.template_file.Add-EC2InstanceToDomainScriptCC2: Read complete after 0s [id=15075f0d18ca3e200ab603e397339245a8ff055fd688facfc5165dd5e455d151]
data.aws_iam_policy_document.ec2_assume_role: Reading...
data.aws_iam_policy_document.ec2_assume_role: Read complete after 0s [id=2851119427]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.local_file.Retrieve_BT will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "local_file" "Retrieve_BT" {
      + content              = (known after apply)
      + content_base64       = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + filename             = "./GetBT.txt"
      + id                   = (known after apply)
    }

... ** Output shortened ** ...

  # terraform_data.GetBT will be created
  + resource "terraform_data" "GetBT" {
      + id = (known after apply)
    }

Plan: 14 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.AdminVM: Creating...
aws_instance.AdminVM: Still creating... [10s elapsed]
aws_instance.AdminVM: Still creating... [20s elapsed]
aws_instance.AdminVM: Creation complete after 22s [id=i-0ad3352d673db8068]

... ** Output shortened ** ...

aws_instance.WMI: Creating...
aws_instance.WMI: Still creating... [10s elapsed]
aws_instance.WMI: Still creating... [20s elapsed]
aws_instance.WMI: Still creating... [30s elapsed]
aws_instance.WMI: Creation complete after 34s [id=i-0ad3352d673db8068]

Apply complete! Resources: 14 added, 0 changed, 0 destroyed.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Creation>

As no errors occurred, Terraform has completed the creation and partial configuration of the relevant prerequisites on Amazon EC2.

Example of a successful creation: all VMs were successfully created and registered on the Domain Controller. Now the next step can be started.

Module 2: Install and Configure all Resources in Amazon EC2

This module is split into the following configuration parts:

Configuring the three previously created Virtual Machines on Amazon EC2:
- Installing the needed software on the CCs
- Installing the needed software on the Admin-VM

Creating the necessary resources in Citrix Cloud:
- Creating a Resource Location in Citrix Cloud
- Configuring the two CCs as Cloud Connectors
- Registering the two CCs in the newly created Resource Location

The Citrix provider currently does not support creating a Resource Location in Citrix Cloud, therefore we create it with a REST-API call. Please make sure you have configured the variables according to your needs in the corresponding .auto.tfvars.json file.

Terraform runs various scripts before creating the configuration of the CCs to determine needed information such as the Site-ID, the Zone-ID, and the Resource Location-ID. These IDs are used in other scripts or files - for example, the parameter file for deploying the Cloud Connector needs the ID of the Resource Location which Terraform creates automatically.
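The Resource Location creation itself is handled by the Mastercard restapi provider (it appears later as restapi_object.CreateRL). A rough sketch of such a resource, assuming the restapi provider is configured elsewhere with the Citrix Cloud base URI and authentication headers; the payload field is an assumption based on the resourcelocations API used in this guide:

```hcl
# Create the Resource Location via the Citrix Cloud REST API
# (the provider block must supply the base URI, e.g. https://api-eu.cloud.com,
#  plus the Authorization and Citrix-CustomerId headers)
resource "restapi_object" "CreateRL" {
  path = "/resourcelocations"
  data = jsonencode({
    name = var.CC_RestRLName
  })
}
```

This explains the depends_on = [restapi_object.CreateRL] reference in the local_file resource shown below: the cwc.json file must only be written once the Resource Location exists.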
Unfortunately, the REST-API provider does not return the ID of the newly created Resource Location, so we need to run PowerShell after the creation of the Resource Location.

Examples of the necessary scripts:

At first, Terraform writes the configuration file without the Resource Location ID:

#### Create CWC-Installer configuration file based on variables and save it into Transfer directory
resource "local_file" "CWC-Configuration" {
  depends_on = [restapi_object.CreateRL]
  content = jsonencode(
    {
      "customerName"         = "${var.CC_CustomerID}",
      "clientId"             = "${var.CC_APIKey-ClientID}",
      "clientSecret"         = "${var.CC_APIKey-ClientSecret}",
      "resourceLocationId"   = "XXXXXXXXXX",
      "acceptTermsOfService" = true
    }
  )
  filename = "${var.CC_Install_LogPath}/DATA/cwc.json"
}

After installing further prerequisites, Terraform runs a PowerShell script to get the needed ID and updates the configuration file for the CWC installer:

#### Create PowerShell file for determining the correct RL-ID
resource "local_file" "InstallPreReqsOnAVM2-ps1" {
  content = <<-EOT
    #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context
    $PSUsername = '${var.Provisioner_DomainAdmin-Username}'
    $PSPassword = '${var.Provisioner_DomainAdmin-Password}'
    $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force
    $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword)
    Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock {
      $path = "${var.CC_Install_LogPath}"
      # Correct the Resource Location ID in cwc.json file
      $requestUri = "https://api-eu.cloud.com/resourcelocations"
      $headers = @{ "Accept"="application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}"}
      $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | ConvertTo-Json
      $RLs = ConvertFrom-Json $response
      $RLFiltered = $RLs.items | Where-Object name -in "${var.CC_RestRLName}"
      Add-Content ${var.CC_Install_LogPath}/log.txt $RLFiltered
      $RLID = $RLFiltered.id
      $OrigContent = Get-Content ${var.CC_Install_LogPath}/DATA/cwc.json
      Add-Content ${var.CC_Install_LogPath}/log.txt $RLID
      Add-Content ${var.CC_Install_LogPath}/log.txt $OrigContent
      $CorrContent = $OrigContent.Replace('XXXXXXXXXX', $RLID) | Out-File -FilePath ${var.CC_Install_LogPath}/DATA/cwc.json
      Add-Content ${var.CC_Install_LogPath}/DATA/GetRLID.txt $RLID
      Add-Content ${var.CC_Install_LogPath}/log.txt "`ncwc.json corrected."
      Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript completed."
    }
  EOT
  filename = "${path.module}/DATA/InstallPreReqsOnAVM2.ps1"
}

The Terraform configuration contains some idle time slots to make sure that background operations on Amazon EC2 or on the VMs can complete before the next configuration steps occur. We have seen different elapsed configuration times depending on the load on the Amazon EC2 systems.

Before running Terraform, we cannot see the Resource Location:

The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and - if no errors occur - terraform apply.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> terraform init

Initializing the backend...
Initializing provider plugins...
- terraform.io/builtin/terraform is built in to Terraform
- Finding latest version of hashicorp/random...
- Finding latest version of hashicorp/time...
- Finding latest version of hashicorp/local...
- Finding latest version of hashicorp/null...
- Finding hashicorp/aws versions matching ">= 5.4.0"...
- Finding mastercard/restapi versions matching "1.18.2"...
- Finding citrix/citrix versions matching ">= 0.5.2"...
- Installing citrix/citrix v0.5.2...
- Installed citrix/citrix v0.5.2 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)
- Installing hashicorp/random v3.6.0...
- Installed hashicorp/random v3.6.0 (signed by HashiCorp)
- Installing hashicorp/time v0.11.1...
- Installed hashicorp/time v0.11.1 (signed by HashiCorp)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing hashicorp/null v3.2.2...
- Installed hashicorp/null v3.2.2 (signed by HashiCorp)
- Installing hashicorp/aws v5.41.0...
- Installed hashicorp/aws v5.41.0 (signed by HashiCorp)
- Installing mastercard/restapi v1.18.2...
- Installed mastercard/restapi v1.18.2 (self-signed, key ID DCB8C431D71C30AB)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Install>
PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> terraform plan

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.local_file.input_site will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "local_file" "input_site" {
      + content              = (known after apply)
      + content_base64       = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + filename             = "c:/temp/xdinst/DATA/GetSiteID.txt"
      + id                   = (known after apply)
    }

... ** Output shortened **...

  # terraform_data.SiteID will be created
  + resource "terraform_data" "SiteID" {
      + id = (known after apply)
    }

  # terraform_data.ZoneID will be created
  + resource "terraform_data" "ZoneID" {
      + id = (known after apply)
    }

  # time_sleep.wait_300_seconds will be created
  + resource "time_sleep" "wait_300_seconds" {
      + create_duration = "300s"
      + id              = (known after apply)
    }

Plan: 18 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

PS C:\TACG\_CCOnAWS\_CCOnAWS-Install>
PS C:\TACG\_CCOnAWS\_CCOnAWS-Install> terraform apply

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # data.local_file.input_site will be read during apply
  # (depends on a resource or a module with changes pending)
 <= data "local_file" "input_site" {
      + content              = (known after apply)
      + content_base64       = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + filename             = "c:/temp/xdinst/DATA/GetSiteID.txt"
      + id                   = (known after apply)
    }

  # local_file.CWC-Configuration will be created
  + resource "local_file" "CWC-Configuration" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "c:/temp/xdinst/DATA/cwc.json"
      + id                   = (known after apply)
    }

... ** Output shortened **...

  # terraform_data.SiteID will be created
  + resource "terraform_data" "SiteID" {
      + id = (known after apply)
    }

  # terraform_data.ZoneID will be created
  + resource "terraform_data" "ZoneID" {
      + id = (known after apply)
    }

  # time_sleep.wait_300_seconds will be created
  + resource "time_sleep" "wait_300_seconds" {
      + create_duration = "300s"
      + id              = (known after apply)
    }

Plan: 18 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

time_sleep.wait_300_seconds: Creating...
random_uuid.IDforCCRL: Creating...
random_uuid.IDforCCRL: Creation complete after 0s [id=76ef126a-4b14-cc24-e067-62dbf8c98d5c]
local_file.InstallPreReqsOnCC-ps1: Creating...
local_file.InstallPreReqsOnAVM1-ps1: Creating...
local_file.InstallPreReqsOnAVM2-ps1: Creating...
local_file.InstallPreReqsOnAVM1-ps1: Creation complete after 0s [id=c4e0ae63ee7a4c5ce89827fcf741942c19ae7aa0]
local_file.InstallPreReqsOnCC-ps1: Creation complete after 0s [id=1927ce7666e1be3a10049619e515d97c2f1e031d]
restapi_object.CreateRL: Creating...
local_file.InstallPreReqsOnAVM2-ps1: Creation complete after 0s [id=dccb6ef781887e459e04d6c3866871fbc11d9868]
restapi_object.CreateRL: Creation complete after 1s [id=76ef126a-4b14-cc24-e067-62dbf8c98d5c]
local_file.CWC-Configuration: Creating...
local_file.CWC-Configuration: Creation complete after 0s [id=0257d2b92f197fa4d043b6cd4c4959be284dddc2]
time_sleep.wait_300_seconds: Still creating... [10s elapsed]
time_sleep.wait_300_seconds: Still creating... [20s elapsed]
time_sleep.wait_300_seconds: Still creating... [30s elapsed]
time_sleep.wait_300_seconds: Still creating... [40s elapsed]
time_sleep.wait_300_seconds: Still creating... [50s elapsed]
... ** Output shortened **...
time_sleep.wait_300_seconds: Still creating... [4m31s elapsed]
time_sleep.wait_300_seconds: Still creating... [4m41s elapsed]
time_sleep.wait_300_seconds: Still creating... [4m51s elapsed]
time_sleep.wait_300_seconds: Creation complete after 5m0s [id=2024-03-15T09:29:54Z]
local_file.GetSiteIDScript: Creating...
local_file.GetSiteIDScript: Creation complete after 0s [id=11e3d863bb343b8e10beca1d1b8d2e1918aff757]
terraform_data.SiteID: Creating...
terraform_data.SiteID: Provisioning with 'local-exec'...
terraform_data.SiteID (local-exec): Executing: ["PowerShell" "-File" "GetSiteID.ps1"]
null_resource.UploadRequiredComponentsToAVM: Creating...
null_resource.UploadRequiredComponentsToAVM: Provisioning with 'file'...
terraform_data.SiteID: Creation complete after 2s [id=d8069268-d023-dc1f-b6e3-5eb4a96d7e34]
data.local_file.input_site: Reading...
data.local_file.input_site: Read complete after 0s [id=f684114a2b93cc095c4ac5f81999ee1a111d53b9]
local_file.GetZoneIDScript: Creating...
local_file.GetZoneIDScript: Creation complete after 0s [id=acf533cb047cc8f963f8bf53b792f236eb8d9cd3]
terraform_data.ZoneID: Creating...
terraform_data.ZoneID: Provisioning with 'local-exec'...
terraform_data.ZoneID (local-exec): Executing: ["PowerShell" "-File" "GetZoneID.ps1"]
null_resource.UploadRequiredComponentsToAVM: Provisioning with 'file'...
terraform_data.ZoneID: Creation complete after 0s [id=560905f8-976a-56ff-a73a-95f287a69761]
null_resource.UploadRequiredComponentsToAVM: Creation complete after 3s [id=1331150174844471895]
null_resource.CallRequiredScriptsOnAVM1: Creating...
null_resource.CallRequiredScriptsOnAVM1: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnAVM1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnAVM1 (remote-exec): Connected!
null_resource.CallRequiredScriptsOnAVM1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/InstallPreReqsOnAVM1.ps1
null_resource.CallRequiredScriptsOnAVM1: Still creating... [10s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [20s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [30s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [40s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [50s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m0s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m11s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m21s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Still creating... [1m31s elapsed]
null_resource.CallRequiredScriptsOnAVM1: Creation complete after 1m32s [id=1765564400293269918]
null_resource.CallRequiredScriptsOnAVM2: Creating...
null_resource.CallRequiredScriptsOnAVM2: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnAVM2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnAVM2 (remote-exec): Connected!
null_resource.CallRequiredScriptsOnAVM2 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/InstallPreReqsOnAVM2.ps1
null_resource.CallRequiredScriptsOnAVM2: Creation complete after 3s [id=1571484748961023525]
null_resource.UploadRequiredComponentsToCC1: Creating...
null_resource.UploadRequiredComponentsToCC2: Creating...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Creation complete after 8s [id=2593060114553604983]
null_resource.CallRequiredScriptsOnCC2: Creating...
null_resource.CallRequiredScriptsOnCC2: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connected!
null_resource.CallRequiredScriptsOnCC2 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/InstallPreReqsOnCC.ps1}
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Windows PowerShell
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Processing -File 'c:/temp/xdinst/InstallPreReqsOnCC.ps1}'
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Copyright (C) Microsoft Corporation. All rights reserved.
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
null_resource.UploadRequiredComponentsToCC1: Still creating... [10s elapsed]
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC1: Creation complete after 15s [id=2911410724639395293]
null_resource.CallRequiredScriptsOnCC1: Creating...
null_resource.CallRequiredScriptsOnCC1: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connected!
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
... ** Output shortened **...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC2: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC1: Provisioning with 'file'...
null_resource.UploadRequiredComponentsToCC1: Creation complete after 7s [id=8529594446213212779]
null_resource.CallRequiredScriptsOnCC1: Creating...
null_resource.CallRequiredScriptsOnCC1: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec):   CACert: false
null_resource.UploadRequiredComponentsToCC2: Creation complete after 8s [id=5071991036813727940]
null_resource.CallRequiredScriptsOnCC2: Creating...
null_resource.CallRequiredScriptsOnCC2: Provisioning with 'remote-exec'...
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connecting to remote host via WinRM...
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Host: 172.31.22.107
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Port: 5985
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   User: administrator
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Password: true
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   HTTPS: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   Insecure: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   NTLM: false
null_resource.CallRequiredScriptsOnCC2 (remote-exec):   CACert: false
null_resource.CallRequiredScriptsOnCC1 (remote-exec): Connected!
null_resource.CallRequiredScriptsOnCC2 (remote-exec): Connected!
#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs>#< CLIXML <Objs Version="1.1.0.1" xmlns="http://schemas.microsoft.com/powershell/2004/04"><Obj S="progress" RefId="0"><TN RefId="0"><T>System.Management.Automation.PSCustomObject</T><T>System.Object</T></TN><MS><I64 N="SourceId">1</I64><PR N="Record"><AV>Preparing modules for first use.</AV><AI>0</AI><Nil /><PI>-1</PI><PC>-1</PC><T>Completed</T><SR>-1</SR><SD> </SD></PR></MS></Obj></Objs> null_resource.CallRequiredScriptsOnCC1 (remote-exec): C:\Users\Administrator>powershell -File c:/temp/xdinst/DATA/InstallPreReqsOnCC.ps1 ... ** Output shortened **... Apply complete! Resources: 18 added, 0 changed, 0 destroyed. 
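The upload-and-execute pattern visible in this transcript can be expressed with `null_resource` and `file`/`remote-exec` provisioners over WinRM. A minimal sketch, assuming hypothetical variable names (`var.cc1_ip`, `var.admin_password`) and the script path seen above — the real configuration ships with the guide's download package:

```hcl
# Sketch only: resource names follow the transcript above; the connection
# variables are hypothetical placeholders, not part of the original files.
resource "null_resource" "UploadRequiredComponentsToCC1" {
  connection {
    type     = "winrm"
    host     = var.cc1_ip
    user     = "administrator"
    password = var.admin_password
    port     = 5985
    https    = false
    insecure = false
  }

  # Copy the installation scripts and binaries to the Cloud Connector
  provisioner "file" {
    source      = "DATA/"
    destination = "c:/temp/xdinst/DATA"
  }
}

resource "null_resource" "CallRequiredScriptsOnCC1" {
  depends_on = [null_resource.UploadRequiredComponentsToCC1]

  connection {
    type     = "winrm"
    host     = var.cc1_ip
    user     = "administrator"
    password = var.admin_password
  }

  # Run the prerequisite installer exactly as seen in the transcript
  provisioner "remote-exec" {
    inline = [
      "powershell -File c:/temp/xdinst/DATA/InstallPreReqsOnCC.ps1"
    ]
  }
}
```

A second pair of resources targeting CC2 follows the same pattern, which is why the transcript shows both uploads running in parallel.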
PS C:\TACG\_CCOnAWS\_CCOnAWS-Install>

This configuration completes the creation and configuration of all initial resources:

Installing the needed software on the Cloud Connectors
Creating a Resource Location in Citrix Cloud
Configuring the two VMs as Cloud Connectors
Registering the two Cloud Connectors in the newly created Resource Location

After all scripts have run successfully, the new Resource Location is visible in Citrix Cloud with the two Cloud Connectors bound to it. The environment is now ready to deploy a Machine Catalog and a Delivery Group using Module 3.

Module 3: Create all Resources in Amazon EC2 and Citrix Cloud

This module is split into the following configuration parts:

Creating a Hypervisor Connection to Amazon EC2 and a Hypervisor Resource Pool
Creating a Machine Catalog (MC) in the newly created Resource Location
Creating a Delivery Group (DG) based on the MC in the newly created Resource Location
Deploying some example policies using Terraform

The Terraform configuration contains idle time slots to make sure that background operations on Amazon EC2 or the VMs can complete before the next configuration steps run. We have seen elapsed configuration times vary with the load on the Amazon EC2 systems.

Before Terraform can create the Hypervisor Connection and the Hypervisor Resource Pool, it needs to retrieve the Site-ID and Zone-ID of the newly created Resource Location. As the Citrix Terraform provider currently has no Cloud-level functionality implemented, Terraform relies on PowerShell scripts to retrieve the IDs. These scripts, with all needed variables, were created, saved, and run in Module 2. After retrieving the IDs, Terraform configures a Hypervisor Connection to Amazon EC2 and a Hypervisor Resource Pool associated with that connection. When these prerequisites are completed, the Machine Catalog is created.
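The ID hand-over and the idle time slots show up later in the plan output as `data.local_file.LoadZoneID` and `time_sleep` resources. A hedged sketch of how such a hand-over can look in HCL — the file name and the attribute wiring are illustrative assumptions, not the guide's exact code:

```hcl
# The PowerShell scripts from Module 2 write the retrieved Zone-ID to a
# local file; Terraform reads it back as a data source. The file name
# "GetSiteIDZoneID.txt" is an illustrative assumption.
data "local_file" "LoadZoneID" {
  filename = "${path.module}/DATA/GetSiteIDZoneID.txt"
}

# Idle time slot so background operations on Amazon EC2 can settle
# before the next configuration step runs.
resource "time_sleep" "wait_60_seconds_1" {
  create_duration = "60s"
}

# The file content can then be referenced by later resources, e.g.:
# zone = data.local_file.LoadZoneID.content
```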
After the successful creation of the Hypervisor Connection, the Hypervisor Resource Pool, and the Machine Catalog, the last step of the deployment process starts: the creation of the Delivery Group. The Terraform configuration assumes that all machines in the created Machine Catalog are used in the Delivery Group and that Autoscale is configured for this Delivery Group. More information about Autoscale can be found here: https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/autoscale.html

The deployment of Citrix policies is a new feature introduced in version 0.5.2 of the provider. We need to know the internal policy names, as the localized policy names and descriptions are not usable. Therefore we use a PowerShell script to determine all internal names; some prerequisites are necessary for the script to work. You can use any machine except the Cloud Connectors:

Install the Citrix Supportability Pack
Install the Citrix Group Policy Management module - scroll down to Group Policy

Import-Module "C:\TACG\Supportability Pack\Tools\Scout\Current\Utilities\Citrix.GroupPolicy.commands.psm1" -Force
New-PSDrive -Name LocalFarmGpo -PSProvider CitrixGroupPolicy -Controller localhost
Get-PSDrive
cd LocalFarmGpo:
Get-CtxGroupPolicyConfiguration

Type: User

ProfileLoadTimeMonitoring_Threshold ICALatencyMonitoring_Enable ICALatencyMonitoring_Period ICALatencyMonitoring_Threshold EnableLossless ProGraphics FRVideos_Part FRVideosPath_Part FRStartMenu_Part FRStartMenuPath_Part FRSearches_Part FRSearchesPath_Part FRSavedGames_Part FRSavedGamesPath_Part FRPictures_Part FRPicturesPath_Part FRMusic_Part FRMusicPath_Part FRLinks_Part FRLinksPath_Part FRFavorites_Part FRFavoritesPath_Part FRDownloads_Part FRDownloadsPath_Part FRDocuments_Part FRDocumentsPath_Part
FRDesktop_Part FRDesktopPath_Part FRContacts_Part FRContactsPath_Part FRAdminAccess_Part FRIncDomainName_Part FRAppData_Part FRAppDataPath_Part StorefrontAccountsList AllowFidoRedirection AllowWIARedirection ClientClipboardWriteAllowedFormats ClipboardRedirection ClipboardSelectionUpdateMode DesktopLaunchForNonAdmins DragDrop LimitClipboardTransferC2H LimitClipboardTransferH2C LossTolerantModeAvailable NonPublishedProgramLaunching PrimarySelectionUpdateMode ReadonlyClipboard RestrictClientClipboardWrite RestrictSessionClipboardWrite SessionClipboardWriteAllowedFormats FramesPerSecond PreferredColorDepthForSimpleGraphics VisualQuality ExtraColorCompression ExtraColorCompressionThreshold LossyCompressionLevel LossyCompressionThreshold ProgressiveHeavyweightCompression MinimumAdaptiveDisplayJpegQuality MovingImageCompressionConfiguration ProgressiveCompressionLevel ProgressiveCompressionThreshold TargetedMinimumFramesPerSecond ClientUsbDeviceOptimizationRules UsbConnectExistingDevices UsbConnectNewDevices UsbDeviceRedirection UsbDeviceRedirectionRules USBDeviceRulesV2 UsbPlugAndPlayRedirection TwainCompressionLevel TwainRedirection LocalTimeEstimation RestoreServerTime SessionTimeZone EnableSessionWatermark WatermarkStyle WatermarkTransparency WatermarkCustomText WatermarkIncludeClientIPAddress WatermarkIncludeConnectTime WatermarkIncludeLogonUsername WatermarkIncludeVDAHostName WatermarkIncludeVDAIPAddress EnableRemotePCDisconnectTimer SessionConnectionTimer SessionConnectionTimerInterval SessionDisconnectTimer SessionDisconnectTimerInterval SessionIdleTimer SessionIdleTimerInterval LossTolerantThresholds EnableServerConnectionTimer EnableServerDisconnectionTimer EnableServerIdleTimer ServerConnectionTimerInterval ServerDisconnectionTimerInterval ServerIdleTimerInterval MinimumEncryptionLevel AutoCreationEventLogPreference ClientPrinterRedirection DefaultClientPrinter PrinterAssignments SessionPrinters WaitForPrintersToBeCreated UpsPrintStreamInputBandwidthLimit 
DPILimit EMFProcessingMode ImageCompressionLimit UniversalPrintingPreviewPreference UPDCompressionDefaults InboxDriverAutoInstallation UniversalDriverPriority UniversalPrintDriverUsage AutoCreatePDFPrinter ClientPrinterAutoCreation ClientPrinterNames DirectConnectionsToPrintServers GenericUniversalPrinterAutoCreation PrinterDriverMappings PrinterPropertiesRetention ClientComPortRedirection ClientComPortsAutoConnection ClientLptPortRedirection ClientLptPortsAutoConnection MaxSpeexQuality MSTeamsRedirection MultimediaOptimization UseGPUForMultimediaOptimization VideoLoadManagement VideoQuality WebBrowserRedirectionAcl WebBrowserRedirectionAuthenticationSites WebBrowserRedirectionBlacklist WebBrowserRedirectionIwaSupport WebBrowserRedirectionProxy WebBrowserRedirectionProxyAuth MultiStream AutoKeyboardPopUp ComboboxRemoting MobileDesktop TabletModeToggle ClientKeyboardLayoutSyncAndIME EnableUnicodeKeyboardLayoutMapping HideKeyboardLayoutSwitchPopupMessageBox AllowVisuallyLosslessCompression DisplayLosslessIndicator OptimizeFor3dWorkload ScreenSharing UseHardwareEncodingForVideoCodec UseVideoCodecForCompression EnableFramehawkDisplayChannel AllowFileDownload AllowFileTransfer AllowFileUpload AsynchronousWrites AutoConnectDrives ClientDriveLetterPreservation ClientDriveRedirection ClientFixedDrives ClientFloppyDrives ClientNetworkDrives ClientOpticalDrives ClientRemoveableDrives HostToClientRedirection ReadOnlyMappedDrive SpecialFolderRedirection AeroRedirection DesktopWallpaper GraphicsQuality MenuAnimation WindowContentsVisibleWhileDragging AllowLocationServices AllowBidirectionalContentRedirection BidirectionalRedirectionConfig ClientURLs VDAURLs AudioBandwidthLimit AudioBandwidthPercent ClipboardBandwidthLimit ClipboardBandwidthPercent ComPortBandwidthLimit ComPortBandwidthPercent FileRedirectionBandwidthLimit FileRedirectionBandwidthPercent HDXMultimediaBandwidthLimit HDXMultimediaBandwidthPercent LptBandwidthLimit LptBandwidthLimitPercent OverallBandwidthLimit 
PrinterBandwidthLimit PrinterBandwidthPercent TwainBandwidthLimit TwainBandwidthPercent USBBandwidthLimit USBBandwidthPercent AllowRtpAudio AudioPlugNPlay AudioQuality ClientAudioRedirection EnableAdaptiveAudio MicrophoneRedirection FlashAcceleration FlashBackwardsCompatibility FlashDefaultBehavior FlashEventLogging FlashIntelligentFallback FlashLatencyThreshold FlashServerSideContentFetchingWhitelist FlashUrlColorList FlashUrlCompatibilityList HDXFlashLoadManagement HDXFlashLoadManagementErrorSwf Type: Computer WemCloudConnectorList VirtualLoopbackPrograms VirtualLoopbackSupport EnableAutoUpdateOfControllers AppFailureExclusionList EnableProcessMonitoring EnableResourceMonitoring EnableWorkstationVDAFaultMonitoring SelectedFailureLevel CPUUsageMonitoring_Enable CPUUsageMonitoring_Period CPUUsageMonitoring_Threshold VdcPolicyEnable EnableClipboardMetadataCollection EnableVdaDiagnosticsCollection XenAppOptimizationDefinitionPathData XenAppOptimizationEnabled ExclusionList_Part IncludeListRegistry_Part LastKnownGoodRegistry DefaultExclusionList ExclusionDefaultReg01 ExclusionDefaultReg02 ExclusionDefaultReg03 PSAlwaysCache PSAlwaysCache_Part PSEnabled PSForFoldersEnabled PSForPendingAreaEnabled PSPendingLockTimeout PSUserGroups_Part StreamingExclusionList_Part ApplicationProfilesAutoMigration DeleteCachedProfilesOnLogoff LocalProfileConflictHandling_Part MigrateWindowsProfilesToUserStore_Part ProfileDeleteDelay_Part TemplateProfileIsMandatory TemplateProfileOverridesLocalProfile TemplateProfileOverridesRoamingProfile TemplateProfilePath DisableConcurrentAccessToOneDriveContainer DisableConcurrentAccessToProfileContainer EnableVHDAutoExtend EnableVHDDiskCompaction GroupsToAccessProfileContainer_Part PreventLoginWhenMountFailed_Part ProfileContainerExclusionListDir_Part ProfileContainerExclusionListFile_Part ProfileContainerInclusionListDir_Part ProfileContainerInclusionListFile_Part ProfileContainerLocalCache DebugFilePath_Part DebugMode 
LogLevel_ActiveDirectoryActions LogLevel_FileSystemActions LogLevel_FileSystemNotification LogLevel_Information LogLevel_Logoff LogLevel_Logon LogLevel_PolicyUserLogon LogLevel_RegistryActions LogLevel_RegistryDifference LogLevel_UserName LogLevel_Warnings MaxLogSize_Part LargeFileHandlingList_Part LogonExclusionCheck_Part AccelerateFolderMirroring MirrorFoldersList_Part ProfileContainer_Part SyncDirList_Part SyncFileList_Part ExclusionListSyncDir_Part ExclusionListSyncFiles_Part DefaultExclusionListSyncDir ExclusionDefaultDir01 ExclusionDefaultDir02 ExclusionDefaultDir03 ExclusionDefaultDir04 ExclusionDefaultDir05 ExclusionDefaultDir06 ExclusionDefaultDir07 ExclusionDefaultDir08 ExclusionDefaultDir09 ExclusionDefaultDir10 ExclusionDefaultDir11 ExclusionDefaultDir12 ExclusionDefaultDir13 ExclusionDefaultDir14 ExclusionDefaultDir15 ExclusionDefaultDir16 ExclusionDefaultDir17 ExclusionDefaultDir18 ExclusionDefaultDir19 ExclusionDefaultDir20 ExclusionDefaultDir21 ExclusionDefaultDir22 ExclusionDefaultDir23 ExclusionDefaultDir24 ExclusionDefaultDir25 ExclusionDefaultDir26 ExclusionDefaultDir27 ExclusionDefaultDir28 ExclusionDefaultDir29 ExclusionDefaultDir30 SharedStoreFileExclusionList_Part SharedStoreFileInclusionList_Part SharedStoreProfileContainerFileSizeLimit_Part CPEnable CPMigrationFromBaseProfileToCPStore CPPathData CPSchemaPathData CPUserGroups_Part DATPath_Part ExcludedGroups_Part MigrateUserStore_Part OfflineSupport ProcessAdmins ProcessedGroups_Part PSMidSessionWriteBack PSMidSessionWriteBackReg PSMidSessionWriteBackSessionLock ServiceActive AppAccessControl_Part CEIPEnabled CredBasedAccessEnabled DisableDynamicConfig EnableVolumeReattach FreeRatio4Compaction_Part FSLogixProfileContainerSupport LoadRetries_Part LogoffRatherThanTempProfile MultiSiteReplication_Part NDefrag4Compaction NLogoffs4Compaction_Part OneDriveContainer_Part OrderedGroups_Part OutlookEdbBackupEnabled OutlookSearchRoamingConcurrentSession OutlookSearchRoamingConcurrentSession_Part 
OutlookSearchRoamingEnabled ProcessCookieFiles SyncGpoStateEnabled UserGroupLevelConfigEnabled UserStoreSelection_Part UwpAppsRoaming VhdAutoExpansionIncrement_Part VhdAutoExpansionLimit_Part VhdAutoExpansionThreshold_Part VhdContainerCapacity_Part VhdStorePath_Part UplCustomizedUserLayerSizeInGb UplGroupsUsingCustomizedUserLayerSize UplRepositoryPath UplUserExclusions UplUserLayerSizeInGb ConcurrentLogonsTolerance CPUUsage CPUUsageExcludedProcessPriority DiskUsage MaximumNumberOfSessions MemoryUsage MemoryUsageBaseLoad ApplicationLaunchWaitTimeout HDXAdaptiveTransport HDXDirect HDXDirectMode HDXDirectPortRange IcaListenerPortNumber IcaListenerTimeout LogoffCheckerStartupDelay RemoteCredentialGuard RendezvousProtocol RendezvousProxy SecureHDX VdaUpgradeProxy VirtualChannelWhiteList VirtualChannelWhiteListLogging VirtualChannelWhiteListLogThrottling AcceptWebSocketsConnections WebSocketsPort WSTrustedOriginServerList SessionReliabilityConnections SessionReliabilityPort SessionReliabilityTimeout IdleTimerInterval LoadBalancedPrintServers PrintServersOutOfServiceThreshold UpcHttpConnectTimeout UpcHttpReceiveTimeout UpcHttpSendTimeout UpcSslCgpPort UpcSslCipherSuite UpcSslComplianceMode UpcSslEnable UpcSslFips UpcSslHttpsPort UpcSslProtocolVersion UpsCgpPort UpsEnable UpsHttpPort HTML5VideoRedirection MultimediaAcceleration MultimediaAccelerationDefaultBufferSize MultimediaAccelerationEnableCSF MultimediaAccelerationUseDefaultBufferSize MultimediaConferencing WebBrowserRedirection MultiPortPolicy MultiStreamAssignment MultiStreamPolicy RtpAudioPortRange UDPAudioOnServer AllowLocalAppAccess URLRedirectionBlackList URLRedirectionWhiteList IcaKeepAlives IcaKeepAliveTimeout DisplayDegradePreference DisplayDegradeUserNotification DisplayMemoryLimit DynamicPreview ImageCaching LegacyGraphicsMode MaximumColorDepth QueueingAndTossing FramehawkDisplayChannelPortRange PersistentCache EnhancedDesktopExperience IcaRoundTripCalculation IcaRoundTripCalculationInterval 
IcaRoundTripCalculationWhenIdle ACRTimeout AutoClientReconnect AutoClientReconnectAuthenticationRequired AutoClientReconnectLogging ReconnectionUiTransparencyLevel AppProtectionPostureCheck AdvanceWarningFrequency AdvanceWarningMessageTitle AdvanceWarningPeriod AgentTaskInterval FinalForceLogoffMessageBody FinalForceLogoffMessageTitle ForceLogoffGracePeriod ForceLogoffMessageTitle ImageProviderIntegrationEnabled RebootMessageBody

Using these internal names allows the deployment of policies with the Terraform provider.

Caution: Before running Terraform, no Terraform-related entities are available.

The configuration can be started by following the normal Terraform workflow: terraform init, terraform plan, and - if no errors occur - terraform apply:

PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> terraform init

Initializing the backend...
Initializing provider plugins...
- Finding citrix/citrix versions matching ">= 0.5.2"...
- Finding hashicorp/aws versions matching ">= 5.4.0"...
- Finding mastercard/restapi versions matching "1.18.2"...
- Finding latest version of hashicorp/local...
- Installing hashicorp/aws v5.41.0...
- Installed hashicorp/aws v5.41.0 (signed by HashiCorp)
- Installing mastercard/restapi v1.18.2...
- Installed mastercard/restapi v1.18.2 (self-signed, key ID DCB8C431D71C30AB)
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing citrix/citrix v0.5.2...
- Installed citrix/citrix v0.5.2 (signed by a HashiCorp partner, key ID 25D62DD8407EA386)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here: https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff>
PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> terraform plan

data.local_file.LoadZoneID: Reading...
data.local_file.LoadZoneID: Read complete after 0s [id=a7592ebe91057eab80084fc014fa06ca52453732]
data.aws_vpc.AWSAZ: Reading...
data.aws_vpc.AWSVPC: Reading...
data.aws_subnet.AWSSubnet: Reading...
data.aws_subnet.AWSSubnet: Read complete after 0s [id=subnet-07e168f0c2a28edf3]
data.aws_vpc.AWSVPC: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a]
data.aws_vpc.AWSAZ: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # aws_ami_from_instance.CreateAMIFromWMI will be created + resource "aws_ami_from_instance" "CreateAMIFromWMI" { + architecture = (known after apply) + arn = (known after apply) + boot_mode = (known after apply) + ena_support = (known after apply) + hypervisor = (known after apply) + id = (known after apply) + image_location = (known after apply) + image_owner_alias = (known after apply) + image_type = (known after apply) + imds_support = (known after apply) + kernel_id = (known after apply) + manage_ebs_snapshots = (known after apply) + name = "TACG-AWS-TF-AMIFromWMI" + owner_id = (known after apply) + platform = (known after apply) + platform_details = (known after apply) + public = (known after apply) + ramdisk_id = (known after apply) + root_device_name = (known after apply) + root_snapshot_id = (known after apply) + source_instance_id = "i-024f77470f3f63c08" + sriov_net_support = (known after apply) + tags_all = (known after apply) + tpm_support = (known after apply) + usage_operation = (known after apply) + virtualization_type = (known after apply) + timeouts { + create = "45m" } } # citrix_aws_hypervisor.CreateHypervisorConnection will be created + resource "citrix_aws_hypervisor" "CreateHypervisorConnection" { + api_key = (sensitive value) + id = (known after apply) + name = "TACG-AWS-TF-HypConn" + region = (sensitive value) + secret_key = (sensitive value) + zone = "8d5dXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" } # citrix_aws_hypervisor_resource_pool.CreateHypervisorPool will be created + resource "citrix_aws_hypervisor_resource_pool" "CreateHypervisorPool" { + availability_zone = "eu-central-1a" + hypervisor = (known after apply) + id = (known after apply) + name = "TACG-AWS-TF-HypConnPool" + subnets = [ + "172.31.16.0/20", ] + vpc = "TACG-VPC" } # citrix_machine_catalog.CreateMCSCatalog will be created + resource "citrix_machine_catalog" 
"CreateMCSCatalog" { + allocation_type = "Random" + description = "Terraform-based Machine Catalog" + id = (known after apply) + is_power_managed = true + is_remote_pc = false + name = "MC-TACG-AWS-TF" + provisioning_scheme = { + aws_machine_config = { + image_ami = (known after apply) + master_image = "TACG-AWS-TF-AMIFromWMI" + service_offering = "T2 Large Instance" } + hypervisor = (known after apply) + hypervisor_resource_pool = (known after apply) + identity_type = "ActiveDirectory" + machine_account_creation_rules = { + naming_scheme = "TACG-AWS-WM-#" + naming_scheme_type = "Numeric" } + machine_domain_identity = { + domain = "aws.the-austrian-citrix-guy.at" + domain_ou = "CN=Computers,DC=aws,DC=the-austrian-citrix-guy,DC=at" + service_account = (sensitive value) + service_account_password = (sensitive value) } + network_mapping = { + network = "172.31.16.0/20" + network_device = "0" } + number_of_total_machines = 1 } + provisioning_type = "MCS" + session_support = "MultiSession" + zone = "8d5dXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" + minimum_functional_level = "L7_20" } # citrix_delivery_group.CreateDG will be created + resource "citrix_delivery_group" "CreateDG" { + associated_machine_catalogs = [ + { + machine_catalog = "f4e34a11-6e31-421f-8cb4-060bc4a13fef" + machine_count = 1 }, ] + autoscale_settings = { + autoscale_enabled = true + disconnect_off_peak_idle_session_after_seconds = 0 + disconnect_peak_idle_session_after_seconds = 300 + log_off_off_peak_disconnected_session_after_seconds = 0 + log_off_peak_disconnected_session_after_seconds = 300 + off_peak_buffer_size_percent = 0 + off_peak_disconnect_action = "Nothing" + off_peak_disconnect_timeout_minutes = 0 + off_peak_extended_disconnect_action = "Nothing" + off_peak_extended_disconnect_timeout_minutes = 0 + off_peak_log_off_action = "Nothing" + peak_buffer_size_percent = 0 + peak_disconnect_action = "Nothing" + peak_disconnect_timeout_minutes = 0 + peak_extended_disconnect_action = "Nothing" + 
peak_extended_disconnect_timeout_minutes = 0 + peak_log_off_action = "Nothing" + power_off_delay_minutes = 30 + power_time_schemes = [ + { + days_of_week = [ + "Monday", + "Tuesday", + "Wednesday", + "Thursday", + "Friday", ] + display_name = "TACG-AWS-TF-AS-Weekdays" + peak_time_ranges = [ + "09:00-17:00", ] + pool_size_schedules = [ + { + pool_size = 1 + time_range = "09:00-17:00" }, ] + pool_using_percentage = false }, ] } + desktops = [ + { + description = "Terraform-based Delivery Group running on AWS EC2" + enable_session_roaming = true + enabled = true + published_name = "DG-TF-TACG-AWS" + restricted_access_users = { + allow_list = [ + "TACG-AWS\\vdaallowed", ] } }, ] + id = (known after apply) + name = "DG-TF-TACG-AWS" + reboot_schedules = [ + { + days_in_week = [ + "Sunday", ] + frequency = "Weekly" + frequency_factor = 1 + ignore_maintenance_mode = true + name = "TACG-AWS-Reboot Schedule" + natural_reboot_schedule = false + reboot_duration_minutes = 0 + reboot_schedule_enabled = true + start_date = "2024-01-01" + start_time = "02:00" }, ] + restricted_access_users = { + allow_list = [ + "TACG-AWS\\vdaallowed", ] } + total_machines = (known after apply) } # time_sleep.Wait_60_Seconds_2 will be created + resource "time_sleep" "Wait_60_Seconds_2" { + create_duration = "60s" + id = (known after apply) } # time_sleep.wait_60_seconds_1 will be created + resource "time_sleep" "wait_60_seconds_1" { + create_duration = "60s" + id = (known after apply) } Plan: 8 to add, 0 to change, 0 to destroy. ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now. 
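The internal setting names gathered earlier with Get-CtxGroupPolicyConfiguration are what the provider's policy resource expects. A hedged sketch of such a policy definition, assuming the policy-set resource introduced with provider v0.5.2; the exact attribute names, value formats, and scope wiring should be verified against the citrix/citrix provider documentation:

```hcl
# Sketch only: the setting name comes from the list above; the resource
# schema and the value format ("Disabled") are assumptions to be checked
# against the provider documentation.
resource "citrix_policy_set" "ExamplePolicies" {
  name        = "TACG-AWS-TF-Policies"
  description = "Example policies deployed via Terraform"
  type        = "DeliveryGroupPolicies"

  policies = [
    {
      name    = "Example-Session-Policy"
      enabled = true
      policy_settings = [
        {
          name        = "ClipboardRedirection" # internal name, not the localized one
          use_default = false
          value       = "Disabled"
        },
      ]
      # Scope the policy to the Delivery Group created in this module
      delivery_groups = [citrix_delivery_group.CreateDG.id]
    },
  ]
}
```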
PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> terraform apply data.local_file.LoadZoneID: Reading... data.local_file.LoadZoneID: Read complete after 0s [id=a7592ebe91057eab80084fc014fa06ca52453732] data.aws_subnet.AWSSubnet: Reading... data.aws_vpc.AWSVPC: Reading... data.aws_vpc.AWSAZ: Reading... data.aws_subnet.AWSSubnet: Read complete after 0s [id=subnet-07e168f0c2a28edf3] data.aws_vpc.AWSAZ: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a] data.aws_vpc.AWSVPC: Read complete after 0s [id=vpc-0f9ac384f3bf8cb3a] Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols: + create Terraform will perform the following actions: # aws_ami_from_instance.CreateAMIFromWMI will be created + resource "aws_ami_from_instance" "CreateAMIFromWMI" { + architecture = (known after apply) + arn = (known after apply) + boot_mode = (known after apply) + ena_support = (known after apply) + hypervisor = (known after apply) + id = (known after apply) + image_location = (known after apply) + image_owner_alias = (known after apply) + image_type = (known after apply) + imds_support = (known after apply) + kernel_id = (known after apply) + manage_ebs_snapshots = (known after apply) + name = "TACG-AWS-TF-AMIFromWMI" + owner_id = (known after apply) + platform = (known after apply) + platform_details = (known after apply) + public = (known after apply) + ramdisk_id = (known after apply) + root_device_name = (known after apply) + root_snapshot_id = (known after apply) + source_instance_id = "i-024f77470f3f63c08" + sriov_net_support = (known after apply) + tags_all = (known after apply) + tpm_support = (known after apply) + usage_operation = (known after apply) + virtualization_type = (known after apply) + timeouts { + create = "45m" } } # citrix_aws_hypervisor.CreateHypervisorConnection will be created + resource "citrix_aws_hypervisor" "CreateHypervisorConnection" { + api_key = 
(sensitive value) + id = (known after apply) + name = "TACG-AWS-TF-HypConn" + region = (sensitive value) + secret_key = (sensitive value) + zone = "8d5d77ba-4803-4b71-9a6b-6e28071304c1" } # citrix_aws_hypervisor_resource_pool.CreateHypervisorPool will be created + resource "citrix_aws_hypervisor_resource_pool" "CreateHypervisorPool" { + availability_zone = "eu-central-1a" + hypervisor = (known after apply) + id = (known after apply) + name = "TACG-AWS-TF-HypConnPool" + subnets = [ + "172.31.16.0/20", ] + vpc = "TACG-VPC" } # citrix_machine_catalog.CreateMCSCatalog will be created + resource "citrix_machine_catalog" "CreateMCSCatalog" { + allocation_type = "Random" + description = "Terraform-based Machine Catalog" + id = (known after apply) + is_power_managed = true + is_remote_pc = false + name = "MC-TACG-AWS-TF" + provisioning_scheme = { + aws_machine_config = { + image_ami = (known after apply) + master_image = "TACG-AWS-TF-AMIFromWMI" + service_offering = "T2 Large Instance" } + hypervisor = (known after apply) + hypervisor_resource_pool = (known after apply) + identity_type = "ActiveDirectory" + machine_account_creation_rules = { + naming_scheme = "TACG-AWS-WM-#" + naming_scheme_type = "Numeric" } + machine_domain_identity = { + domain = "aws.the-austrian-citrix-guy.at" + domain_ou = "CN=Computers,DC=aws,DC=the-austrian-citrix-guy,DC=at" + service_account = (sensitive value) + service_account_password = (sensitive value) } + network_mapping = { + network = "172.31.16.0/20" + network_device = "0" } + number_of_total_machines = 1 } + provisioning_type = "MCS" + session_support = "MultiSession" + zone = "8d5dXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" } # citrix_delivery_group.CreateDG will be created + resource "citrix_delivery_group" "CreateDG" { + associated_machine_catalogs = [ + { + machine_catalog = "f4e3XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX" + machine_count = 1 }, ] + autoscale_settings = { + autoscale_enabled = true + disconnect_off_peak_idle_session_after_seconds = 0 + 
disconnect_peak_idle_session_after_seconds = 300 + log_off_off_peak_disconnected_session_after_seconds = 0 + log_off_peak_disconnected_session_after_seconds = 300 + off_peak_buffer_size_percent = 0 + off_peak_disconnect_action = "Nothing" + off_peak_disconnect_timeout_minutes = 0 + off_peak_extended_disconnect_action = "Nothing" + off_peak_extended_disconnect_timeout_minutes = 0 + off_peak_log_off_action = "Nothing" + peak_buffer_size_percent = 0 + peak_disconnect_action = "Nothing" + peak_disconnect_timeout_minutes = 0 + peak_extended_disconnect_action = "Nothing" + peak_extended_disconnect_timeout_minutes = 0 + peak_log_off_action = "Nothing" + power_off_delay_minutes = 30 + power_time_schemes = [ + { + days_of_week = [ + "Monday", + "Tuesday", + "Wednesday", + "Thursday", + "Friday", ] + display_name = "TACG-AWS-TF-AS-Weekdays" + peak_time_ranges = [ + "09:00-17:00", ] + pool_size_schedules = [ + { + pool_size = 1 + time_range = "09:00-17:00" }, ] + pool_using_percentage = false }, ] } + desktops = [ + { + description = "Terraform-based Delivery Group running on AWS EC2" + enable_session_roaming = true + enabled = true + published_name = "DG-TF-TACG-AWS" + restricted_access_users = { + allow_list = [ + "TACG-AWS\\vdaallowed", ] } }, ] + id = (known after apply) + name = "DG-TF-TACG-AWS" + reboot_schedules = [ + { + days_in_week = [ + "Sunday", ] + frequency = "Weekly" + frequency_factor = 1 + ignore_maintenance_mode = true + name = "TACG-AWS-Reboot Schedule" + natural_reboot_schedule = false + reboot_duration_minutes = 0 + reboot_schedule_enabled = true + start_date = "2024-01-01" + start_time = "02:00" }, ] + restricted_access_users = { + allow_list = [ + "TACG-AWS\\vdaallowed", ] } + total_machines = (known after apply) } # time_sleep.Wait_60_Seconds_2 will be created + resource "time_sleep" "Wait_60_Seconds_2" { + create_duration = "60s" + id = (known after apply) } # time_sleep.wait_60_seconds_1 will be created + resource "time_sleep" "wait_60_seconds_1" { + 
create_duration = "60s" + id = (known after apply) } Plan: 8 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes aws_ami_from_instance.CreateAMIFromWMI: Creating... citrix_aws_hypervisor.CreateHypervisorConnection: Creating... aws_ami_from_instance.CreateAMIFromWMI: Still creating... [10s elapsed] citrix_aws_hypervisor.CreateHypervisorConnection: Still creating... [10s elapsed] citrix_aws_hypervisor.CreateHypervisorConnection: Creation complete after 10s [id=706c408a-6eed-42b1-8102-93888db8a0eb] citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Creating... aws_ami_from_instance.CreateAMIFromWMI: Still creating... [20s elapsed] citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Still creating... [10s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [30s elapsed] citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Still creating... [20s elapsed] citrix_aws_hypervisor_resource_pool.CreateHypervisorPool: Creation complete after 22s [id=b460dfc2-f760-4b52-bc94-fad9c85a0b8f] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [40s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [50s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m0s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m10s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m20s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m30s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m40s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [1m50s elapsed] ... ** Output shortened **... aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m10s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m20s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... 
[6m30s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m40s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [6m50s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Still creating... [7m0s elapsed] aws_ami_from_instance.CreateAMIFromWMI: Creation complete after 7m7s [id=ami-0e01b8c8d09a5fbe5] time_sleep.wait_30_seconds: Creating... time_sleep.wait_30_seconds: Still creating... [10s elapsed] time_sleep.wait_30_seconds: Still creating... [20s elapsed] time_sleep.wait_30_seconds: Creation complete after 30s [id=2024-03-15T17:40:18Z] citrix_machine_catalog.CreateMCSCatalog: Creating... citrix_machine_catalog.CreateMCSCatalog: Still creating... [10s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [20s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [30s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [40s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [50s elapsed] ... ** Output shortened **... citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m1s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m11s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m21s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m31s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m41s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [16m51s elapsed] citrix_machine_catalog.CreateMCSCatalog: Still creating... [17m1s elapsed] citrix_machine_catalog.CreateMCSCatalog: Creation complete after 17m4s [id=f4e34a11-6e31-421f-8cb4-060bc4a13fef] time_sleep.wait_60_seconds_1: Creating... time_sleep.wait_60_seconds_1: Still creating... [10s elapsed] time_sleep.wait_60_seconds_1: Still creating... [20s elapsed] time_sleep.wait_60_seconds_1: Still creating... [30s elapsed] time_sleep.wait_60_seconds_1: Still creating... 
[40s elapsed] time_sleep.wait_60_seconds_1: Still creating... [50s elapsed] time_sleep.wait_60_seconds_1: Creation complete after 1m0s [id=2024-03-20T08:06:45Z] time_sleep.Wait_60_Seconds_2: Creating... time_sleep.Wait_60_Seconds_2: Still creating... [10s elapsed] time_sleep.Wait_60_Seconds_2: Still creating... [20s elapsed] time_sleep.Wait_60_Seconds_2: Still creating... [30s elapsed] time_sleep.Wait_60_Seconds_2: Still creating... [40s elapsed] time_sleep.Wait_60_Seconds_2: Still creating... [50s elapsed] time_sleep.Wait_60_Seconds_2: Creation complete after 1m0s [id=2024-03-20T08:07:45Z] citrix_delivery_group.CreateDG: Creating... citrix_delivery_group.CreateDG: Still creating... [10s elapsed] citrix_delivery_group.CreateDG: Creation complete after 12s [id=7e2c73bf-f8b1-4e37-8cd8-efa1338304dc] Apply complete! Resources: 8 added, 0 changed, 0 destroyed. PS C:\TACG\_CCOnAWS\_CCOnAWS-CCStuff> This configuration completes the full deployment of a Citrix Cloud Resource Location on Amazon EC2. The environment created by Terraform is now ready for use. 
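Rather than copying the object IDs out of the apply log, they can be exposed as Terraform outputs. The following is a minimal sketch that could be appended to Module 3; the output names are illustrative assumptions, while the resource addresses are the ones used in the configuration excerpts in the appendix:

```hcl
# Hedged sketch: surface the created object IDs as outputs so they can be
# read with `terraform output` instead of being parsed from the apply log.
output "hypervisor_connection_id" {
  value = citrix_aws_hypervisor.CreateHypervisorConnection.id
}

output "hypervisor_resource_pool_id" {
  value = citrix_aws_hypervisor_resource_pool.CreateHypervisorPool.id
}

output "machine_catalog_id" {
  value = citrix_machine_catalog.CreateMCSCatalog.id
}

output "delivery_group_id" {
  value = citrix_delivery_group.CreateDG.id
}
```

After a successful apply, `terraform output machine_catalog_id` would return the Machine Catalog GUID that appears in the log above.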
All entities are in place: The Resource Location: The Hypervisor Connection and the Hypervisor Pool: The Machine Catalog: The Worker VM in the Machine Catalog: The Delivery Group: The AutoScale settings of the Delivery Group: The Desktop in the Library: Connection to the Worker VM's Desktop: Appendix Examples of the Terraform scripts Module 1: CConAWS-Creation These are the Terraform configuration files for Module 1 (excerpts): _CCOnAWS-Creation-Provider.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Definition of all required Terraform providers terraform { required_version = ">= 1.4.0" required_providers { aws = { source = "hashicorp/aws" version = ">= 5.4.0" } restapi = { source = "Mastercard/restapi" version = "1.18.2" } citrix = { source = "citrix/citrix" version = ">=0.5.3" } } } # Configure the AWS Provider provider "aws" { region = "${var.AWSEC2_Region}" access_key = "${var.AWSEC2_AccessKey}" secret_key = "${var.AWSEC2_AccessKeySecret}" } # Configure the Citrix Provider provider "citrix" { customer_id = "${var.CC_CustomerID}" client_id = "${var.CC_APIKey-ClientID}" client_secret = "${var.CC_APIKey-ClientSecret}" } # Configure the REST-API provider provider "restapi" { alias = "restapi_rl" uri = "${var.CC_RestAPIURI}" create_method = "POST" write_returns_object = true debug = true headers = { "Content-Type" = "application/json", "Citrix-CustomerId" = "${var.CC_CustomerID}", "Accept" = "application/json", "Authorization" = "${var.CC_APIKey-Bearer}" } } _CCOnAWS-Creation-Create.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Creation of all required VMs - two Cloud Connectors and one Worker Master image locals { } ### Create needed IAM roles #### IAM EC2 Policy with Assume Role data "aws_iam_policy_document" "ec2_assume_role" { statement { actions = ["sts:AssumeRole"] principals { type = "Service" identifiers = ["ec2.amazonaws.com"] } } } #### Create EC2 IAM Role resource "aws_iam_role" "ec2_iam_role" { 
name = "ec2-iam-role" path = "/" assume_role_policy = data.aws_iam_policy_document.ec2_assume_role.json } #### Create EC2 IAM Instance Profile resource "aws_iam_instance_profile" "ec2_profile" { name = "ec2-profile" role = aws_iam_role.ec2_iam_role.name } #### Attach Policies to Instance Role resource "aws_iam_policy_attachment" "ec2_attach1" { name = "ec2-iam-attachment" roles = [aws_iam_role.ec2_iam_role.id] policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore" } resource "aws_iam_policy_attachment" "ec2_attach2" { name = "ec2-iam-attachment" roles = [aws_iam_role.ec2_iam_role.id] policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM" } #### Create Secret Manager IAM Policy resource "aws_iam_policy" "secret_manager_ec2_policy" { name = "secret-manager-ec2-policy" description = "Secret Manager EC2 policy" policy = jsonencode({ Version = "2012-10-17" Statement = [ { Action = [ "secretsmanager:*" ] Effect = "Allow" Resource = "*" }, ] }) } #### Attach Secret Manager Policies to Instance Role resource "aws_iam_policy_attachment" "api_secret_manager_ec2_attach" { name = "secret-manager-ec2-attachment" roles = [aws_iam_role.ec2_iam_role.id] policy_arn = aws_iam_policy.secret_manager_ec2_policy.arn } data "template_file" "Add-EC2InstanceToDomainScriptCC1" { template = file("${path.module}/Add-EC2InstanceToDomainCC1.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } data "template_file" "Add-EC2InstanceToDomainScriptCC2" { template = file("${path.module}/Add-EC2InstanceToDomainCC2.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } data "template_file" "Add-EC2InstanceToDomainScriptWMI" { template = file("${path.module}/Add-EC2InstanceToDomainWMI.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } data "template_file" "Add-EC2InstanceToDomainScriptAdminVM" { template = 
file("${path.module}/Add-EC2InstanceToDomainAdminVM.ps1") vars = { ad_secret_id = "AD/SA/DomainJoin" ad_domain = "aws.the-austrian.citrix.guy.at" } } #### Create DHCP settings resource "aws_vpc_dhcp_options" "vpc-dhcp-options" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptCC1 ] domain_name_servers = [ var.AWSEC2_DC-IP ] } resource "aws_vpc_dhcp_options_association" "dns_resolver" { vpc_id = var.AWSEC2_VPC-ID dhcp_options_id = aws_vpc_dhcp_options.vpc-dhcp-options.id } ### Create CC1-VM resource "aws_instance" "CC1" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptCC1 ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-CC1 instance_type = var.AWSEC2_Instance-Type key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-CC1 } user_data = data.template_file.Add-EC2InstanceToDomainScriptCC1.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } ### Create CC2-VM resource "aws_instance" "CC2" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptCC2 ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-CC2 instance_type = var.AWSEC2_Instance-Type key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-CC2 } user_data = data.template_file.Add-EC2InstanceToDomainScriptCC2.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } ### Create Admin-VM resource "aws_instance" "AdminVM" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptAdminVM ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-AdminVM instance_type = var.AWSEC2_Instance-Type-Worker key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-AdminVM } user_data = 
data.template_file.Add-EC2InstanceToDomainScriptAdminVM.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } ### Create WMI-VM resource "aws_instance" "WMI" { depends_on = [ data.template_file.Add-EC2InstanceToDomainScriptWMI ] ami = var.AWSEC2_AMI subnet_id = var.AWSEC2_Subnet-ID private_ip = var.AWSEC2_PrivIP-WMI instance_type = var.AWSEC2_Instance-Type key_name = var.AWSEC2_AMI-KeyPairName vpc_security_group_ids = [ var.AWSEC2_SecurityGroup ] tags = { Name = var.AWSEC2_InstanceName-WMI } user_data = data.template_file.Add-EC2InstanceToDomainScriptWMI.rendered iam_instance_profile = aws_iam_instance_profile.ec2_profile.id } _CCOnAWS-Creation-GetBearerToken.tf ### Create PowerShell file for retrieving the Bearer Token resource "local_file" "GetBearerToken" { content = <<-EOT asnp Citrix* $key= "${var.CC_APIKey-ClientID}" $secret= "${var.CC_APIKey-ClientSecret}" $customer= "${var.CC_CustomerID}" $XDStoredCredentials = Set-XDCredentials -StoreAs default -ProfileType CloudApi -CustomerId $customer -APIKey $key -SecretKey $secret $auth = Get-XDAuthentication $BT = $GLOBAL:XDAuthToken | Out-File "${path.module}/GetBT.txt" EOT filename = "${path.module}/GetBT.ps1" } ### Running GetBearertoken-Script to retrieve the Bearer Token resource "terraform_data" "GetBT" { depends_on = [ local_file.GetBearerToken ] provisioner "local-exec" { command = "${path.module}/GetBT.ps1" interpreter = ["PowerShell", "-File"] } } ### Retrieving the Bearer Token data "local_file" "Retrieve_BT" { depends_on = [ terraform_data.GetBT ] filename = "${path.module}/GetBT.txt" } output "terraform_data_BR_Read" { value = data.local_file.Retrieve_BT.content } Module 2: CConAWS-Install These are the Terraform configuration files for Module 2 (excerpts): _CCOnAWS-Install-CreatePreReqs.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Creating a dedicated Resource Location on Citrix Cloud resource "random_uuid" "IDforCCRL" { } ### Create local directory resource 
"local_file" "Log" { content = "Directory created." filename = "${var.CC_Install_LogPath}/log.txt" } resource "local_file" "LogData" { depends_on = [ local_file.Log ] content = "Directory created." filename = "${var.CC_Install_LogPath}/DATA/log.txt" } ### Create PowerShell command to be run on the CC machines locals { randomuuid = random_uuid.IDforCCRL.result } ### Create a dedicated Resource Location in Citrix Cloud resource "restapi_object" "CreateRL" { depends_on = [ local_file.Log ] provider = restapi.restapi_rl path="/resourcelocations" data = jsonencode( { "id" = "${local.randomuuid}", "name" = "${var.CC_RestRLName}", "internalOnly" = false, "timeZone" = "GMT Standard Time", "readOnly" = false } ) } ### Create PowerShell files with configuration and next steps and save it into Transfer directory #### Create CWC-Installer configuration file based on variables and save it into Transfer directory resource "local_file" "CWC-Configuration" { depends_on = [restapi_object.CreateRL] content = jsonencode( { "customerName" = "${var.CC_CustomerID}", "clientId" = "${var.CC_APIKey-ClientID}", "clientSecret" = "${var.CC_APIKey-ClientSecret}", "resourceLocationId" = "XXXXXXXXXX", "acceptTermsOfService" = true } ) filename = "${var.CC_Install_LogPath}/DATA/cwc.json" } #### Wait 5 mins after RL creation to settle Zone creation resource "time_sleep" "wait_300_seconds" { create_duration = "300s" } ### Create PowerShell file for determining the SiteID resource "local_file" "GetSiteIDScript" { depends_on = [restapi_object.CreateRL, time_sleep.wait_300_seconds] content = <<-EOT $requestUri = "https://api-eu.cloud.com/cvad/manage/me" $headers = @{ "Accept"="application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}" } $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | Select-Object Customers $responsetojson = $response | Convertto-Json -Depth 3 $responsekorr = $responsetojson -replace("null","""empty""") 
$responsefromjson = $responsekorr | Convertfrom-json $SitesObj=$responsefromjson.Customers[0].Sites[0] $Export1 = $SitesObj -replace("@{Id=","") $SplittedString = $Export1.Split(";") $SiteID= $SplittedString[0] $PathCompl = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" Set-Content -Path $PathCompl -Value $SiteID EOT filename = "${path.module}/GetSiteID.ps1" } ### Running the SiteID-Script to generate the SiteID resource "terraform_data" "SiteID" { depends_on = [ local_file.GetSiteIDScript ] provisioner "local-exec" { command = "GetSiteID.ps1" interpreter = ["PowerShell", "-File"] } } ### Retrieving the SiteID data "local_file" "input_site" { depends_on = [ terraform_data.SiteID ] filename = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ### Create PowerShell file for determining the ZoneID resource "local_file" "GetZoneIDScript" { depends_on = [ data.local_file.input_site ] content = <<-EOT $requestUri = "https://api-eu.cloud.com/cvad/manage/Zones" $headers = @{ "Accept"="application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}"; "Citrix-InstanceId" = "${data.local_file.input_site.content}" } $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | Convertto-Json $responsedejson = $response | ConvertFrom-Json $ZoneId = $responsedejson.Items | Where-Object { $_.Name -eq "${var.CC_RestRLName}" } | Select-Object id $Export1 = $ZoneId -replace("@{Id=","") $ZoneID = $Export1 -replace("}","") $PathCompl = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" Set-Content -Path $PathCompl -Value $ZoneID EOT filename = "${path.module}/GetZoneID.ps1" } ### Running the ZoneID-Script to generate the ZoneID resource "terraform_data" "ZoneID" { depends_on = [ local_file.GetZoneIDScript ] provisioner "local-exec" { command = "GetZoneID.ps1" interpreter = ["PowerShell", "-File"] } } #### Create PowerShell file for installing the Citrix Cloud Connector - we need to determine the correct RL-ID resource "local_file" 
"InstallPreReqsOnCC-ps1" { content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { $path = "${var.CC_Install_LogPath}" If(!(test-path -PathType container $path)) { New-Item -ItemType Directory -Path $path } Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript started." # Download the Citrix Cloud Connector-Software to CC Invoke-WebRequest ${var.CC_Install_CWCURI} -OutFile '${var.CC_Install_LogPath}/DATA/CWCConnector.exe' # Install Citrix Cloud Controller based on the cwc.json configuration file Add-Content ${var.CC_Install_LogPath}/log.txt "`nInstalling Cloud Connector." Start-Process -Filepath "${var.CC_Install_LogPath}/DATA/CWCConnector.exe" -ArgumentList "/q /ParametersFilePath:${var.CC_Install_LogPath}/DATA/cwc.json" Add-Content ${var.CC_Install_LogPath}/log.txt "`nInstalled Cloud Connector." 
Restart-Computer -Force -Timeout 1800 } EOT filename = "${path.module}/DATA/InstallPreReqsOnCC.ps1" } #### Create PowerShell file for installing the Citrix Remote PoSH SDK on AVM - we need to determine the correct RL-ID resource "local_file" "InstallPreReqsOnAVM1-ps1" { content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { $path = "${var.CC_Install_LogPath}" If(!(test-path -PathType container $path)) { New-Item -ItemType Directory -Path $path } Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript started." # Download Citrix Remote PowerShell SDK Invoke-WebRequest '${var.CC_Install_RPoSHURI}' -OutFile '${var.CC_Install_LogPath}/DATA/CitrixPoshSdk.exe' Add-Content ${var.CC_Install_LogPath}/log.txt "`nPowerShell SDK downloaded." # Install Citrix Remote PowerShell SDK Start-Process -Filepath "${var.CC_Install_LogPath}/DATA/CitrixPoshSdk.exe" -ArgumentList "-quiet" Add-Content ${var.CC_Install_LogPath}/log.txt "`nPowerShell SDK installed." # Timeout to settle all processes Start-Sleep -Seconds 60 Add-Content ${var.CC_Install_LogPath}/log.txt "`nTimeout elapsed." 
} EOT filename = "${path.module}/DATA/InstallPreReqsOnAVM1.ps1" } #### Create PowerShell file for installing the Citrix Remote PoSH SDK on AVM - we need to determine the correct RL-ID resource "local_file" "InstallPreReqsOnAVM2-ps1" { content = <<-EOT #### Need to invoke PowerShell as Domain User as the provisioner does not allow to be run in a Domain Users-context $PSUsername = '${var.Provisioner_DomainAdmin-Username}' $PSPassword = '${var.Provisioner_DomainAdmin-Password}' $PSSecurePassword = ConvertTo-SecureString $PSPassword -AsPlainText -Force $PSCredential = New-Object System.Management.Automation.PSCredential ($PSUsername, $PSSecurePassword) Invoke-Command -ComputerName localhost -Credential $PSCredential -ScriptBlock { $path = "${var.CC_Install_LogPath}" # Correct the Resource Location ID in cwc.json file $requestUri = "https://api-eu.cloud.com/resourcelocations" $headers = @{ "Accept"="application/json"; "Authorization" = "${var.CC_APIKey-Bearer}"; "Citrix-CustomerId" = "${var.CC_CustomerID}"} $response = Invoke-RestMethod -Uri $requestUri -Method GET -Headers $headers | Convertto-Json $RLs = ConvertFrom-Json $response $RLFiltered = $RLs.items | Where-Object name -in "${var.CC_RestRLName}" Add-Content ${var.CC_Install_LogPath}/log.txt $RLFiltered $RLID = $RLFiltered.id $OrigContent = Get-Content ${var.CC_Install_LogPath}/DATA/cwc.json Add-Content ${var.CC_Install_LogPath}/log.txt $RLID Add-Content ${var.CC_Install_LogPath}/log.txt $OrigContent $CorrContent = $OrigCOntent.Replace('XXXXXXXXXX', $RLID) | Out-File -FilePath ${var.CC_Install_LogPath}/DATA/cwc.json Add-Content ${var.CC_Install_LogPath}/DATA/GetRLID.txt $RLID Add-Content ${var.CC_Install_LogPath}/log.txt "`ncwc.json corrected." Add-Content ${var.CC_Install_LogPath}/log.txt "`nScript completed." 
} EOT filename = "${path.module}/DATA/InstallPreReqsOnAVM2.ps1" } ### Upload required components to AVM #### Set the Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToAVM" { depends_on = [ local_file.InstallPreReqsOnAVM1-ps1, local_file.GetSiteIDScript ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } ###### Upload PreReqs script to AVM provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnAVM1.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM1.ps1" } provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnAVM2.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM2.ps1" } } ### Call the required scripts on AVM #### Set the Provisioner-Connection resource "null_resource" "CallRequiredScriptsOnAVM1" { depends_on = [ null_resource.UploadRequiredComponentsToAVM ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } ###### Execute the PreReqs script on AVM provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM1.ps1" ] } } resource "null_resource" "CallRequiredScriptsOnAVM2" { depends_on = [ null_resource.CallRequiredScriptsOnAVM1 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } ###### Execute the PreReqs script on AVM provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnAVM2.ps1" ] } } ############################################################################################################################## ### Upload required components to CC1 #### Set the 
Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToCC1" { depends_on = [ local_file.InstallPreReqsOnCC-ps1, null_resource.CallRequiredScriptsOnAVM2] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_DDC1-IP timeout = var.Provisioner_Timeout } ###### Upload Cloud Connector configuration file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/cwc.json" destination = "${var.CC_Install_LogPath}/DATA/cwc.json" } ###### Upload SiteID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ###### Upload ZoneID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } ###### Upload RLID file to CC1 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" } ###### Upload PreReqs script to CC1 provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" } } ###### Execute the PreReqs script on CC1 resource "null_resource" "CallRequiredScriptsOnCC1" { depends_on = [ null_resource.UploadRequiredComponentsToCC1 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" ] } } ### Upload required components to CC2 #### Set the Provisioner-Connection resource "null_resource" "UploadRequiredComponentsToCC2" { depends_on = [ local_file.InstallPreReqsOnCC-ps1,null_resource.CallRequiredScriptsOnAVM2 ] connection { type = var.Provisioner_Type user = 
var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_DDC2-IP timeout = var.Provisioner_Timeout } ###### Upload Cloud Connector configuration file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/cwc.json" destination = "${var.CC_Install_LogPath}/DATA/cwc.json" } ###### Upload SiteID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetSiteID.txt" } ###### Upload ZoneID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } ###### Upload RLID file to CC2 provisioner "file" { source = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" destination = "${var.CC_Install_LogPath}/DATA/GetRLID.txt" } ###### Upload PreReqs script to CC2 provisioner "file" { source = "${path.module}/DATA/InstallPreReqsOnCC.ps1" destination = "${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" } } ###### Execute the PreReqs script on CC2 resource "null_resource" "CallRequiredScriptsOnCC2" { depends_on = [ null_resource.UploadRequiredComponentsToCC2 ] connection { type = var.Provisioner_Type user = var.Provisioner_Admin-Username password = var.Provisioner_Admin-Password host = var.Provisioner_AVM-IP timeout = var.Provisioner_Timeout } provisioner "remote-exec" { inline = [ "powershell -File ${var.CC_Install_LogPath}/DATA/InstallPreReqsOnCC.ps1" ] } } Module 3: CConAWS-CCStuff These are the Terraform configuration files for Module 3 (excerpts): _CCOnAWS-CCStuff-CreateCCEntities.tf # Terraform deployment of Citrix DaaS on Amazon AWS EC2 ## Creating all Citrix Cloud-related entities ### Creating a Hypervisor Connection #### Retrieving the ZoneID data "local_file" "LoadZoneID" { filename = "${var.CC_Install_LogPath}/DATA/GetZoneID.txt" } #### Creating the Hypervisor Connection resource "citrix_aws_hypervisor" "CreateHypervisorConnection" { depends_on = [ 
data.local_file.LoadZoneID ] name = "${var.CC_AWSEC2-HypConn-Name}" zone = data.local_file.LoadZoneID.content api_key = "${var.AWSEC2_AccessKey}" secret_key = "${var.AWSEC2_AccessKeySecret}" region = "${var.AWSEC2_Region}" } ### Creating a Hypervisor Resource Pool #### Retrieving the VPC name based on the VPC ID data "aws_vpc" "AWSVPC" { id = "${var.AWSEC2_VPC-ID}" } #### Retrieving the Availability Zone data "aws_vpc" "AWSAZ" { id = "${var.AWSEC2_VPC-ID}" } #### Retrieving the Subnet Mask based on the Subnet ID data "aws_subnet" "AWSSubnet" { id = "${var.AWSEC2_Subnet-ID}" } #### Create the Hypervisor Resource Pool resource "citrix_aws_hypervisor_resource_pool" "CreateHypervisorPool" { depends_on = [ citrix_aws_hypervisor.CreateHypervisorConnection ] name = "${var.CC_AWSEC2-HypConnPool-Name}" hypervisor = citrix_aws_hypervisor.CreateHypervisorConnection.id subnets = [ "${data.aws_subnet.AWSSubnet.cidr_block}", ] vpc = "${var.AWSEC2_VPC-Name}" availability_zone = data.aws_subnet.AWSSubnet.availability_zone } #### Create AMI from WMI instance resource "aws_ami_from_instance" "CreateAMIFromWMI" { name = "TACG-AWS-TF-AMIFromWMI" source_instance_id = "${var.AWSEC2_AMI-ID}" timeouts { create = "45m" } } #### Sleep 60s to let AWS Background processes settle resource "time_sleep" "wait_60_seconds" { depends_on = [ citrix_aws_hypervisor_resource_pool.CreateHypervisorPool, aws_ami_from_instance.CreateAMIFromWMI ] create_duration = "60s" } #### Create the Machine Catalog resource "citrix_machine_catalog" "CreateMCSCatalog" { depends_on = [ time_sleep.wait_60_seconds ] name = "${var.CC_AWSEC2-MC-Name}" description = "${var.CC_AWSEC2-MC-Description}" allocation_type = "${var.CC_AWSEC2-MC-AllocationType}" session_support = "${var.CC_AWSEC2-MC-SessionType}" is_power_managed = true is_remote_pc = false provisioning_type = "MCS" zone = data.local_file.LoadZoneID.content provisioning_scheme = { hypervisor = citrix_aws_hypervisor.CreateHypervisorConnection.id 
        hypervisor_resource_pool = citrix_aws_hypervisor_resource_pool.CreateHypervisorPool.id
        identity_type            = "${var.CC_AWSEC2-MC-IDPType}"
        machine_domain_identity = {
            domain                   = "${var.CC_AWSEC2-MC-Domain}"
            #domain_ou               = "${var.CC_AWSEC2-MC-DomainOU}"
            service_account          = "${var.Provisioner_DomainAdmin-Username-UPN}"
            service_account_password = "${var.Provisioner_DomainAdmin-Password}"
        }
        aws_machine_config = {
            image_ami        = aws_ami_from_instance.CreateAMIFromWMI.id
            master_image     = aws_ami_from_instance.CreateAMIFromWMI.name
            service_offering = "${var.CC_AWSEC2-MC-Service_Offering}"
        }
        number_of_total_machines = "${var.CC_AWSEC2-MC-Machine_Count}"
        network_mapping = {
            network_device = "0"
            network        = "${data.aws_subnet.AWSSubnet.cidr_block}"
        }
        machine_account_creation_rules = {
            naming_scheme      = "${var.CC_AWSEC2-MC-Naming_Scheme_Name}"
            naming_scheme_type = "${var.CC_AWSEC2-MC-Naming_Scheme_Type}"
        }
    }
}

#### Sleep 60s to let CC background processes settle
resource "time_sleep" "wait_60_seconds_1" {
    # The dependency was commented out in the original listing; it is required
    # so that the pause actually happens after the catalog is created.
    depends_on      = [ citrix_machine_catalog.CreateMCSCatalog ]
    create_duration = "60s"
}

#### Create an example Policy Set
resource "citrix_policy_set" "SetPolicies" {
    count       = var.CC_AWSEC2-Policy-IsNotDaaS ? 1 : 0
    depends_on  = [ time_sleep.wait_60_seconds_1 ]
    name        = "${var.CC_AWSEC2-Policy-Name}"
    description = "${var.CC_AWSEC2-Policy-Description}"
    type        = "DeliveryGroupPolicies"
    scopes      = [ "All" ]
    policies = [
        {
            name        = "TACG-AWS-TF-Pol1"
            description = "Policy to enable use of Universal Printer"
            is_enabled  = true
            policy_settings = [
                {
                    name        = "UniversalPrintDriverUsage"
                    value       = "Use universal printing only"
                    use_default = false
                },
            ]
            policy_filters = [
                {
                    type       = "DesktopGroup"
                    is_enabled = true
                    is_allowed = true
                },
            ]
        },
        {
            name        = "TACG-AWS-TF-Pol2"
            description = "Policy to prohibit Client Drive Redirection"
            is_enabled  = true
            policy_settings = [
                {
                    # The original listing repeated UniversalPrintDriverUsage here;
                    # the setting matching this policy's purpose is ClientDriveRedirection.
                    name        = "ClientDriveRedirection"
                    value       = "Prohibited"
                    use_default = false
                },
            ]
            policy_filters = [
                {
                    type       = "DesktopGroup"
                    is_enabled = true
                    is_allowed = true
                },
            ]
        }
    ]
}

#### Sleep 60s to let CC background processes settle
resource "time_sleep" "Wait_60_Seconds_2" {
    depends_on      = [ citrix_policy_set.SetPolicies ]
    create_duration = "60s"
}

#### Create the Delivery Group based on the Machine Catalog
resource "citrix_delivery_group" "CreateDG" {
    depends_on = [ time_sleep.Wait_60_Seconds_2 ]
    name       = "${var.CC_AWSEC2-DG-Name}"
    associated_machine_catalogs = [
        {
            # Reference the catalog created above instead of a hard-coded ID
            machine_catalog = citrix_machine_catalog.CreateMCSCatalog.id
            machine_count   = "${var.CC_AWSEC2-MC-Machine_Count}"
        }
    ]
    desktops = [
        {
            published_name = "${var.CC_AWSEC2-DG-PublishedDesktopName}"
            description    = "${var.CC_AWSEC2-DG-Description}"
            restricted_access_users = {
                allow_list = [ "TACG-AWS\\vdaallowed" ]
            }
            enabled                = true
            enable_session_roaming = var.CC_AWSEC2-DG-SessionRoaming
        }
    ]
    autoscale_settings = {
        autoscale_enabled                               = true
        disconnect_peak_idle_session_after_seconds      = 300
        log_off_peak_disconnected_session_after_seconds = 300
        peak_log_off_action                             = "Nothing"
        power_time_schemes = [
            {
                days_of_week     = [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" ]
                name             = "${var.CC_AWSEC2-DG-AS-Name}"
                display_name     = "${var.CC_AWSEC2-DG-AS-Name}"
                peak_time_ranges = [ "09:00-17:00" ]
                pool_size_schedules = [
                    {
                        time_range = "09:00-17:00",
                        pool_size  = 1
                    }
                ]
                pool_using_percentage = false
            },
        ]
    }
    restricted_access_users = {
        allow_list = [ "TACG-AWS\\vdaallowed" ]
    }
    reboot_schedules = [
        {
            name                    = "TACG-AWS-Reboot Schedule"
            reboot_schedule_enabled = true
            frequency               = "Weekly"
            frequency_factor        = 1
            days_in_week            = [ "Sunday" ]
            start_time              = "02:00"
            start_date              = "2024-01-01"
            reboot_duration_minutes = 0
            ignore_maintenance_mode = true
            natural_reboot_schedule = false
        }
    ]
    # SetPolicies uses count, so take the single instance's ID (null when count is 0)
    policy_set_id = one(citrix_policy_set.SetPolicies[*].id)
}
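The configuration above references its inputs through `var.*` names such as `var.CC_AWSEC2-MC-Name`. These are supplied in a variable definitions file. The following `terraform.tfvars` sketch shows the shape of such a file for a subset of the variables used in this guide; every value shown is a placeholder, not a value from this guide, and secrets such as access keys are better passed via environment variables than stored in the file:

```hcl
# terraform.tfvars -- example values only; all values below are placeholders
CC_AWSEC2-HypConn-Name     = "TACG-AWS-HypConn"
CC_AWSEC2-HypConnPool-Name = "TACG-AWS-HypConnPool"
AWSEC2_Region              = "eu-central-1"
AWSEC2_VPC-ID              = "vpc-0123456789abcdef0"
AWSEC2_Subnet-ID           = "subnet-0123456789abcdef0"
CC_AWSEC2-MC-Name          = "TACG-AWS-MC"
CC_AWSEC2-MC-Machine_Count = 2
CC_AWSEC2-DG-Name          = "TACG-AWS-DG"
```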
  6. This is an amazing article. Thank you
Overview

Citrix DaaS supports zone selection on Google Cloud Platform (GCP) to enable sole-tenant node functionality. You specify the zones where you want to create VMs in Citrix Studio. Sole-tenant nodes allow you to group your VMs together on the same hardware or to separate your VMs from those of other projects. Sole-tenant nodes also enable you to comply with network access control policy, security, and privacy requirements such as HIPAA.

This document covers:

Configuring a Google Cloud environment to support zone selection on the Google Cloud Platform in Citrix DaaS environments.
Provisioning virtual machines on Sole Tenant nodes.
Common error conditions and how to resolve them.

Note: The GCP console screenshots in this article may not be fully up to date, but the functionality is unchanged.

Prerequisites

You must have existing knowledge of Google Cloud and Citrix DaaS for provisioning machine catalogs in a Google Cloud Project. To set up a GCP project for Citrix DaaS, follow the instructions here.

Google Cloud sole tenant

Sole tenancy provides exclusive access to a sole-tenant node, which is a physical Compute Engine server dedicated to hosting only your project's VMs. Sole Tenant nodes allow you to group your VMs together on the same hardware or separate your VMs. These nodes can help you meet dedicated hardware requirements for Bring Your Own License (BYOL) scenarios. Sole Tenant nodes enable customers to comply with network access control policy, security, and privacy requirements such as HIPAA. Customers can create VMs in desired locations where Sole Tenant nodes are allocated. This functionality supports Windows 10-based VDI deployments. A detailed description of Sole Tenant nodes can be found on the Google documentation site.
Reserving a Google Cloud sole tenant node

1. To reserve a Sole Tenant Node, access the Google Cloud Console menu, select Compute Engine, and then select Sole-tenant nodes:
2. Sole tenants in Google Cloud are captured in Node Groups. The first step in reserving a sole tenant platform is to create a node group. In the GCP Console, select Create Node Group:
3. Start by configuring the new node group. Citrix recommends that the Region and Zone selected for your new node group allow access to your domain controller and the subnets utilized for provisioning catalogs. Consider the following: Fill in a name for the node group. In this example, we used mh-sole-tenant-node-group-1. Select a Region. For example, us-east1. Select a Zone where the reserved system resides. For example, us-east1-b. All node groups are associated with a node template, which indicates the performance characteristics of the systems reserved in the node group. These characteristics include the number of virtual CPUs, the quantity of memory dedicated to the node, and the machine type used for machines created on the node. Select the drop-down menu for the Node template. Then select Create node template:
4. Enter a name for the new template. For example: mh-sole-tenant-node-group-1-template-n1.
5. The next step is to select a Node Type. Select the Node type most applicable to your needs in the drop-down menu. Note: You can refer to this Google documentation page for more information on different node types.
6. Once you have chosen a node type, click Create:
7. The Create node group screen reappears after creating the node template. Click Create:

Creating the VDA master image

To deploy machines on the sole-tenant node, the catalog creation process requires extra steps when creating and preparing the machine image for the provisioned catalog. Machine Instances in Google Cloud have a property called Node affinity labels.
Instances used as master images for catalogs deployed to sole-tenant environments need to have a Node affinity label that matches the name of the target node group. There are two ways to apply the affinity label:

Set the label in the Google Cloud Console when creating an Instance.
Use the gcloud command line to set the label on existing instances.

An example of both approaches follows.

Set the node affinity label at instance creation

This section does not cover all the steps necessary for creating a GCP Instance. It provides sufficient information and context to understand the process of setting the Node affinity label. Recall that in the examples above, the node group was named mh-sole-tenant-node-group-1. This is the node group we need to apply to the Node affinity label on the Instance.

New instance screen

The new instance screen appears. At the bottom of the screen, a section for managing settings related to management, security, disks, networking, and sole tenancy appears. To set up a new Instance:

1. Click the section once to open the Management settings panel.
2. Then click Sole Tenancy to see the related settings panel.
3. The panel for setting the Node affinity label appears. Click Browse to see the available Node Groups in the currently selected Google Cloud project:
4. The Google Cloud Project used for these examples contains one node group, the one that was created in the earlier example. To select the node group: Click the desired node group from the list. Then click Select at the bottom of the panel.
5. After clicking Select in the previous step, you are returned to the Instance creation screen. The Node affinity labels field contains the needed value to ensure catalogs created from this master image are deployed to the indicated node group:

Set the node affinity label for an existing instance

1. To set the Node affinity label for an existing Instance, access the Google Cloud Shell and use the gcloud compute instances command.
More information about the gcloud compute instances command can be found on the Google Developer Tools page. Include three pieces of information with the gcloud command:

The name of the VM. This example uses an existing VM named s2019-vda-base.
The name of the Node group. The node group name, previously defined, is mh-sole-tenant-node-group-1.
The Zone where the Instance resides. In this example, the VM resides in the us-east1-b zone.

2. The buttons shown in the following image are at the top right of the Google Cloud Console window. Click the Cloud Shell button:
3. When the Cloud Shell first opens, it looks similar to the following:
4. Run this command in the Cloud Shell window: gcloud compute instances set-scheduling "s2019-vda-base" --node-group="mh-sole-tenant-node-group-1" --zone="us-east1-b"
5. Finally, verify the details for the s2019-vda-base instance:

Google shared VPCs

If you intend to use Google Sole-tenants with a Shared VPC, refer to the GCP Shared VPC Support with Citrix DaaS document. Shared VPC support requires extra configuration steps for Google Cloud permissions and service accounts.

Create a Machine Catalog

You can create a machine catalog after performing the previous steps in this document. Use the following steps to access Citrix Cloud and navigate to the Citrix Studio Console.

1. In Citrix Studio, select Machine Catalogs:
2. Select Create Machine Catalog:
3. Click Next to begin the configuration process:
4. Select an operating system type for the machines in the catalog. Click Next:
5. Accept the default setting that the catalog utilizes power-managed machines. Then, select MCS resources. In this example case, we are using the Resources named GCP1-useast1(Zone:My Resource Location). Click Next:

Note: These resources come from a previously created host connection, representing the network and other resources like the domain controller and reserved sole tenants. These elements are used when deploying the catalog.
The process of creating the host connection is not covered in this document. More information can be found on the Connections and resources page.

6. The next step is to select the master image for the catalog. Recall that to utilize the reserved Node Group, we must select an image with the Node affinity value set accordingly. For this example, we use the image from the previous example, s2019-vda-base.
7. Click Next:
8. This screen indicates the storage type used for the virtual machines in the machine catalog. For this example, we use the Standard Persistent Disk. Click Next:
9. This screen indicates the number of virtual machines and the zones to which the machines are deployed. In this example, we have specified three machines in the catalog. When using Sole-tenant node groups for machine catalogs, you must only select Zones containing reserved node groups. In our example, we have a single node group that resides in Zone: us-central1-a, so that is the only zone selected. Click Next:
10. This screen provides the option to enable Write-back cache. For this example, we are not enabling this setting. Click Next:
11. During the provisioning process, MCS communicates with the domain controller to create hostnames for all the machines being created: Select the Domain into which the machines are created. Specify the Account naming scheme used when generating the machine names. Since the catalog in this example has three machines, we have specified a naming scheme of MySTVms-## for the machines: MySTVms-01, MySTVms-02, MySTVms-03
12. Click Next:
13. Specify the credentials used to communicate with the domain controller, as mentioned in the previous step: Select Enter Credentials. Supply the credentials, then click OK.
14. This screen displays a summary of key information during the catalog creation process. The final step is to enter a catalog name and an optional description. In this example, the catalog name is My Sole Tenant Catalog.
Enter the catalog name and click Finish:
15. When the catalog creation process finishes, the Citrix Studio Console resembles:
16. Use the Google Console to verify that the machines were created on the node group as expected:

Currently, migrating machine catalogs from Google Cloud general/shared space to sole tenant nodes is not possible.

Commonly encountered issues and errors

Working with any complex system containing interdependencies results in unexpected situations. This section shows a few common issues and errors encountered when setting up and configuring CVAD and GCP Sole-tenants.

The catalog was created successfully, but machines are not provisioned to the reserved node group

If the catalog was created successfully but the machines were not provisioned to the reserved node group, the most likely reasons are:

The node affinity label was not set on the master image.
The node affinity label value does not match the name of the Node group.
Incorrect zones were selected in the Virtual Machines screen during the catalog creation.

Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone.’

This situation presents itself with this error when View details is selected in the Citrix Studio dialog window:

System.Reflection.TargetInvocationException: One or more errors occurred. Citrix.MachineCreationAPI.MachineCreationException: One or more errors occurred. System.AggregateException: One or more errors occurred. Citrix.Provisioning.Gcp.Client.Exceptions.OperationException: Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone.

One or more of the following are the likely causes of receiving this message:

You are attempting to provision a new catalog to a zone without a reserved Sole Tenant Node. Ensure the zone selection is correct on the Virtual Machines screen.
You have a Sole Tenant Node reserved, but the value of the VDA master image's Node affinity label does not match the name of the reserved Node group.
Refer to these two sections: Set the node affinity label at instance creation and Set the node affinity label for an existing instance.

Upgrading an existing catalog fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’

There are two cases in which this occurs:

You are upgrading an existing sole tenant catalog that has already been provisioned using Sole Tenancy and Zone Selection. The causes are the same as those found in the earlier entry Catalog creation fails with ‘Instance could not be scheduled due to absence of sole-tenant nodes in specified project and zone’.
You are upgrading an existing non-sole tenant catalog and do not have a sole tenant node reserved in each zone that is already provisioned with machines for the catalog. This case is considered a migration, intending to migrate machines from Google Cloud common/shared runtime space to a Sole Tenant node group. As noted in Migrating Non-Sole Tenant Catalogs, this is not possible.

Unknown errors during catalog provisioning

If you encounter a dialog like this when creating the catalog, selecting View details produces a screen resembling the following. There are a few things you can check:

Ensure that the Machine Type specified in the Node Group Template matches the Machine Type of the master image Instance.
Ensure that the Machine Type of the master image has 2 or more CPUs.

Test plan

This section contains some exercises you can try to get a feel for CVAD support of Google Cloud Sole-tenants.

Single tenant catalog

Reserve a node group in a single zone and provision both a persistent and a non-persistent catalog. During the steps below, monitor the node group using the Google Cloud Console to ensure proper behavior:

Power off the machines.
Add machines.
Power all machines on.
Power all machines off.
Delete some machines.
Delete the machine catalog.
Update the catalog:
Update the catalog from a non-sole tenant template to a sole tenant template.
Update the catalog from a sole tenant template to a non-sole tenant template.

Two zone catalog

Like the exercise above, reserve two node groups and provision a persistent catalog in one zone and a non-persistent catalog in the other. During the steps below, monitor the node groups using the Google Cloud Console to ensure proper behavior:

Power off the machines.
Add machines.
Power all machines on.
Power all machines off.
Delete some machines.
Delete the machine catalog.
Update the catalog:
Update the catalog from the non-sole tenant template to the sole tenant template.
Update the catalog from the sole tenant template to the non-sole tenant template.
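For readers following the Terraform theme of this series, the console-based node-group reservation and affinity-label steps described in this article can be sketched with the Google provider for Terraform. This is a minimal sketch, not the article's method: the node type, machine type, boot image, and network are assumptions; the resource names come from the article's examples:

```hcl
# Node template describing the hardware reserved in the group
resource "google_compute_node_template" "st_template" {
  name      = "mh-sole-tenant-node-group-1-template-n1"
  region    = "us-east1"
  node_type = "n1-node-96-624" # assumption -- pick the node type that fits your needs
}

# The sole-tenant node group itself
resource "google_compute_node_group" "st_group" {
  name          = "mh-sole-tenant-node-group-1"
  zone          = "us-east1-b"
  node_template = google_compute_node_template.st_template.id
  initial_size  = 1
}

# A master-image instance pinned to the node group via a node affinity label,
# the equivalent of setting the label in the console or via gcloud
resource "google_compute_instance" "vda_base" {
  name         = "s2019-vda-base"
  zone         = "us-east1-b"
  machine_type = "n1-standard-4" # assumption -- must be compatible with the node type

  boot_disk {
    initialize_params {
      image = "windows-cloud/windows-2019" # assumption
    }
  }

  network_interface {
    network = "default" # assumption
  }

  scheduling {
    node_affinities {
      key      = "compute.googleapis.com/node-group-name"
      operator = "IN"
      values   = [google_compute_node_group.st_group.name]
    }
  }
}
```

The `compute.googleapis.com/node-group-name` affinity key is what the console's Node affinity labels field sets behind the scenes.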
Overview

This document describes the steps required to create an MCS Machine Catalog by using a Windows 10 VDA, a Google Cloud Shared VPC, and Google Cloud Sole Tenant Nodes.

Prerequisites

Citrix DaaS and Google Cloud. For details, see the product documentation.
GCP Zone Selection Support with Citrix DaaS.
GCP Windows 10 VDA with Citrix DaaS.

The following prerequisite applies only if you want to use a Shared VPC in addition to Sole Tenancy:

GCP Shared VPC Support with Citrix DaaS.

Once you meet all prerequisites, you must set up and configure the following environment and technical items:

Google Cloud Service Project with permissions to use the Shared VPC
Sole Tenant Node Group Reservation that resides in the Service Project
Windows 10 VDA
(Optional) Google Cloud Host Project with a Shared VPC and required firewall rules

Example environment

Creating the desired Windows 10-based MCS Machine Catalog in Google Cloud is similar to creating other catalogs. Once you have completed all of the prerequisites described in the preceding section, you select the proper VDA and network resources. For this example, the following elements are in place:

Host Connection

The Host Connection in this example uses Google Cloud Shared VPC resources. This is not mandatory when using Zone Selection; a standard Local VPC-based Host Connection can be used instead.

Connection Name: Shared VPC Resources
Connection Resources: SharedVPCSubnet
Virtual Shared VPC Network: gcp-test-vpc
Shared VPC Subnet: subnet-good

Sole Tenant Node Reservation

A Sole Tenant Node Group named mh-windows10-node-group located in Zone us-east1-b.

Windows 10 VDA Image

A Windows 10-based VDA image that resides in a local project, named windows10-1909-vda-base, also in Zone us-east1-b.
Catalog Creation

The following steps cover creation of the Windows 10-based Machine Catalog that uses a Google Cloud Shared VPC and Zone Selection. The final steps describe how to validate that the resulting machines are using the desired resources.

Start with Full Configuration, and select Machine Catalogs. The Machine Catalogs screen opens. Click Create Machine Catalog. The standard Catalog Creation Introduction screen may appear. Click Next.

On the screen that appears, you specify the type of operating system the catalog is based upon:

Multi-Session OS, which indicates a Windows Server-based catalog
Single-Session OS, which indicates a Windows Client-based catalog
Remote PC Access, which indicates a catalog that includes physical machines

This will be a Windows 10-based catalog, so a Single-Session OS is used. Select Single-Session OS and then click Next.

The next screen is used to indicate whether the machines are power managed. The machines are power managed in this example. The screen also indicates the technology used to deploy the machines. Because MCS is being used, you must indicate the network resources to be used when deploying the machines. Note that in the following case, the Shared VPC resources SharedVPCSubnet noted in the Example Environment have been selected. Select the resources associated with your Shared VPC on the following screen and then click Next.

Consider whether users connect to a random desktop each time they log on or to the same (static) desktop. Here we choose the Random desktop type. This option means that all changes that users make to the machine are discarded. Click Next.

Select the image to be used as the base disk in the catalog. Here, we select windows10-1909-vda-base as noted in the Example Environment. Click Next.

Leave the defaults selected for Storage and click Next.

The Virtual Machines screen is another critical one.
Zone Selection is what enables MCS to use the reserved Sole Tenant Node for placement of the provisioned Windows 10 virtual machines. The Example Environment section noted that both the Sole Tenant Node and the VDA image reside in Zone us-east1-b. Because we have a single Sole Tenant Node reserved, this is the only zone that should be selected. To distribute your machines across zones, reserve a Sole Tenant Node in each zone to be used. Click Next.

The key thing to ensure on the Active Directory Computer Accounts screen is that the AD Domain you select is the correct domain for provisioning machines in the Shared VPC network. Select the desired AD Domain, enter the Account naming scheme, and then click Next.

On the Domain Credentials screen, enter credentials with sufficient privileges to create and delete computer accounts in the domain. Enter the credentials and then click Next.

The Catalog Summary and Name screen shows a summary of the catalog to be created. You can also provide a name for the catalog. In this case, the catalog name is Windows 10 Shared VPC and Sole Tenant. Click Finish.

It may take a few minutes for the catalog creation to complete. Then, you can view machines in the catalog through the Search node on the tree.

Validate Resource Utilization

To validate resource utilization and ensure that the newly provisioned machines are using the expected resources, check the following:

Are the machines running on the reserved Sole Tenant Node?
Are the machines on the desired Shared VPC subnet? Remember that use of a Shared VPC is optional, so this validation step may not be applicable to your configuration.

Machines Running on Sole Tenant Node

The following figure shows that the three newly provisioned machines are running on the reserved Sole Tenant Node.

Instance Details

The details for the first Instance confirm the following:

The proper Node Affinity Label tag is in place.
The correct network gcp-test-vpc is being used.
The correct subnet subnet-good is being used.
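The two validation points in this article — node-group placement and Shared VPC subnet attachment — can also be expressed declaratively. The following Google-provider Terraform sketch is illustrative only: the instance name, project IDs, machine type, and image usage are assumptions, while the network, subnet, and node-group names come from the Example Environment:

```hcl
# An instance pinned to the sole-tenant node group AND attached to the shared subnet
resource "google_compute_instance" "w10_vda" {
  name         = "mystvms-01"         # assumption -- example machine name
  project      = "my-service-project" # assumption -- the service project ID
  zone         = "us-east1-b"
  machine_type = "n1-standard-4"      # assumption

  boot_disk {
    initialize_params {
      image = "windows10-1909-vda-base" # assumes the VDA base exists as a custom image
    }
  }

  # Attach to the Shared VPC subnet owned by the host project
  network_interface {
    subnetwork         = "subnet-good"
    subnetwork_project = "my-host-project" # assumption -- the host project ID
  }

  # Pin the VM to the reserved sole-tenant node group
  scheduling {
    node_affinities {
      key      = "compute.googleapis.com/node-group-name"
      operator = "IN"
      values   = ["mh-windows10-node-group"]
    }
  }
}
```

Note the `subnetwork_project` argument: it is what distinguishes a Shared VPC attachment from a local-subnet attachment.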
Overview

Citrix DaaS supports Google Cloud Platform (GCP) Shared VPCs. This document covers:

An overview of Citrix support for Google Cloud Shared VPCs.
An overview of terminology related to Google Cloud Shared VPCs.
Configuring a Google Cloud environment to support the use of Shared VPCs.
Use of Google Shared VPCs for host connections and machine catalog provisioning.
Common error conditions and how to resolve them.

Prerequisites

This document assumes knowledge of Google Cloud and the use of Citrix DaaS for provisioning machine catalogs in a Google Cloud Project. To set up a GCP project for Citrix DaaS, see the product documentation.

Summary

Citrix MCS support for provisioning and managing machine catalogs deployed to Shared VPCs is functionally equivalent to what is supported with Local VPCs today. They differ in two ways:

A few more permissions must be granted to the Service Account used to create the Host Connection, to allow MCS to access and use the Shared VPC resources.
The site administrator must create two firewall rules, one each for ingress and egress, to be used during the image mastering process.

Both of these are discussed in greater detail later in this document.

Note: The GCP console screenshots in this article may not be fully up to date, but the functionality is unchanged.

Google Cloud Shared VPCs

GCP Shared VPCs comprise a Host Project, from which the shared subnets are made available, and one or more Service Projects that use the resources. Use of Shared VPCs is a good option for larger installations because they provide more centralized control, usage, and administration of shared corporate Google Cloud resources.
Google Cloud describes it this way: "Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network." This paragraph was taken from the Google Documentation site.

New Permissions Required

When working with Citrix DaaS and Google Cloud, a GCP Service Account with specific permissions must be provided when creating the Host Connection. As noted earlier, to use GCP Shared VPCs, some additional permissions must be granted to any Service Account used to create Shared VPC-based Host Connections. Technically speaking, the permissions required are not "new", since they are already necessary to use Citrix DaaS with GCP and Local VPCs. The change is that the permissions must be granted so as to allow access to the Shared VPC resources. This is accomplished by adding the Service Account to the IAM roles of the Host Project, and is covered in detail in the "How To" section of this document.

Note: To review the permissions required for the currently shipping Citrix DaaS product, see the Citrix documentation site describing resource locations.

In total, up to four additional permissions must be granted to the Service Account associated with the Host Connection:

compute.firewalls.list - Mandatory. This permission is necessary to allow Citrix MCS to retrieve the list of firewall rules present on the Shared VPC (discussed in detail below).
compute.networks.list - Mandatory. This permission is necessary to allow Citrix MCS to identify the Shared VPC networks available to the Service Account.
compute.subnetworks.list - May be Mandatory (see below). This permission is necessary to allow MCS to identify the subnets within the visible Shared VPCs. Note: This permission is already required for using Local VPCs but must also be assigned in the Shared VPC Host Project.
compute.subnetworks.use - May be Mandatory (see below). This permission is necessary to use the subnet resources in the provisioned machine catalogs. Note: This permission is already required for using Local VPCs but must also be assigned in the Shared VPC Host Project.

The last two items are noted as "May be Mandatory" because there are two different approaches to consider when dealing with these permissions:

Project-level permissions: Allows access to all Shared VPCs within the Host Project. Requires that permissions #3 and #4 be assigned to the Service Account.
Subnet-level permissions: Allows access to specific subnets within the Shared VPC. Permissions #3 and #4 are intrinsic to the subnet-level assignment and therefore do not need to be assigned directly to the Service Account.

Examples of both approaches are provided below in the "How To" section of this document. Either approach works equally well; select the model more in tune with your organizational needs and security standards. More detailed information regarding the difference between project-level and subnet-level permissions can be obtained from the Google Cloud documentation.

Host Project

To use Shared VPCs in Google Cloud, first choose and enable one Google Cloud Project to be the Host Project. This Host Project contains one or more Shared VPC networks used by other Google Cloud Projects within the organization. Configuring the Shared VPC Host Project, creating subnets, and sharing either the entire project or specific subnets with other Google Cloud Projects are purely Google Cloud-related activities and are not included within the scope of this document.
The Google Cloud documentation related to creating and working with Shared VPCs can be found here.

Firewall Rules

A key step in the behind-the-scenes processing that occurs when provisioning or updating a machine catalog is called mastering. This is when the selected machine image is copied and prepared to become the master image system disk for the catalog. During mastering, this disk is attached to a temporary virtual machine, the prep machine, and started up to allow preparation scripts to run. This virtual machine needs to run in an isolated environment that prevents all inbound and outbound network traffic. This is accomplished through a pair of deny-all firewall rules: one for ingress and one for egress. When using GCP Local VPCs, MCS creates this firewall rule pair on the fly in the local network, applies the rules to the machine being mastered, and removes them when mastering has completed.

Citrix recommends keeping the number of new permissions required to use Shared VPCs to a minimum, because Shared VPCs are higher-level corporate resources and typically have more rigid security protocols in place. For this reason, the site administrator must create a pair of firewall rules (one ingress and one egress) on each Shared VPC with the highest priority, and apply a new Target Tag to each of the rules. The Target Tag value is:

citrix-provisioning-quarantine-firewall

When MCS is creating or updating a machine catalog, it searches for firewall rules containing this Target Tag, examines the rules for correctness, and applies them to the prep machine. If the firewall rules are not found, or the rules are found but the rules or their priority are incorrect, a message of this form is returned:

Unable to find valid INGRESS and EGRESS quarantine firewall rules for VPC <name> in project <project>. Please ensure you have created deny all firewall rules with the network tag citrix-provisioning-quarantine-firewall and proper priority. Refer to Citrix Documentation for details.
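If you manage the host project with Terraform, the required deny-all rule pair can be sketched as follows with the Google provider. The project ID is an assumption; the network name comes from this series' examples, and the target tag is the literal value MCS searches for. Priority 0 is the highest priority a GCP firewall rule can have, so these rules win over any allow rule:

```hcl
# Deny-all INGRESS rule for the mastering (prep) machine
resource "google_compute_firewall" "citrix_quarantine_ingress" {
  name          = "citrix-quarantine-deny-ingress"
  project       = "my-host-project" # assumption -- the Shared VPC host project ID
  network       = "gcp-test-vpc"
  direction     = "INGRESS"
  priority      = 0 # highest possible priority
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["citrix-provisioning-quarantine-firewall"]

  deny {
    protocol = "all"
  }
}

# Deny-all EGRESS rule for the mastering (prep) machine
resource "google_compute_firewall" "citrix_quarantine_egress" {
  name               = "citrix-quarantine-deny-egress"
  project            = "my-host-project" # assumption
  network            = "gcp-test-vpc"
  direction          = "EGRESS"
  priority           = 0
  destination_ranges = ["0.0.0.0/0"]
  target_tags        = ["citrix-provisioning-quarantine-firewall"]

  deny {
    protocol = "all"
  }
}
```

Because the rules apply only to instances carrying the quarantine target tag, they isolate the prep machine without affecting other workloads on the Shared VPC.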
Cloud Connectors

When using a Shared VPC for Citrix DaaS machine catalogs, you create two or more Cloud Connectors to access the Domain Controller that resides within the Shared VPC. The recommendation in this case is to create a GCP machine instance in your local project and add an additional network interface to the instance. The first interface is connected to a subnet in the Shared VPC. The second network interface connects to a subnet in your Local VPC to allow access for administrative control and maintenance via your Local VPC Bastion Server. Note that you cannot add a network interface to a GCP instance after it has been created, so both interfaces must be configured when the instance is created. It is a simple process and is covered below in one of the How To entries.

How To Section

The following section contains instructional examples to help you understand the steps needed to perform the configuration changes necessary to use Google Shared VPCs with Citrix DaaS. The examples presented in Google Console screenshots all occur in a hypothetical Google Project named Shared VPC Project 1.

How To: Create a New IAM Role

Some additional permissions need to be granted to the Service Account used when creating the Host Connection. Since the intent of deploying to Shared VPCs is to allow multiple projects to deploy to the same Shared VPC, the most efficient approach is to create a role in the Host Project with the desired permissions and then assign that role to any Service Account that requires access to the Shared VPC. Below, we create the Project-Level role named Citrix-ProjectLevel-SharedVpcRole. The Subnet-Level role follows the same steps but assigns only the first two permissions. 1. Access the IAM & Admin configuration option in the Google Cloud Console: 2. Select Create Role: 3. A screen resembling the following appears: 4. Specify the Role Name. Click ADD PERMISSIONS to apply the update: 5. After clicking ADD PERMISSIONS, a screen resembling the one below appears.
In this image, the “Filter Table” text entry field has been highlighted: 6. Clicking the Filter Table text entry field displays a contextual menu: 7. Copy and paste (or type) the string compute.firewalls.list into the text field, as shown below: 8. Selecting the compute.firewalls.list entry that has been filtered out of the table of permissions results in this dialog: 9. Click the toggle box to enable the permission: 10. Click ADD. The Create Role screen reappears. The compute.firewalls.list permission has been added to the role: 11. Add the compute.networks.list permission using the same steps as above. However, make sure to select the proper permission. As you can see below, two permissions are listed when the permission text is entered into the filter table field. Choose the compute.networks.list entry: 12. Click ADD. 13. The two mandatory permissions have now been added to our role: 14. Determine what level of access the Role has, such as Project-Level access or a more restricted model using Subnet-Level access. For the purposes of this document, we are creating the role named Citrix-ProjectLevel-SharedVpcRole, so we add the compute.subnetworks.list and compute.subnetworks.use permissions using the same steps as above. The resulting screen looks like this, with the four permissions granted, just before clicking Create: 15. Click CREATE. Note: If we were creating the Subnet-Level Role here, we would have clicked CREATE without adding the compute.subnetworks.list and compute.subnetworks.use permissions.

How To: Add Service Account to Host Project IAM Role

Now that we have created the Citrix-ProjectLevel-SharedVpcRole role, we need to add a Service Account to it within the Host Project. For this example, we use a Service Account named citrix-shared-vpc-service-account. 1. The first step is to navigate to the IAM & Roles screen for the project. In the console, select IAM and Admin. Select IAM: 2. Add members with the specified permissions.
Click ADD to display the list of members: 3. Clicking ADD displays a small panel as shown in the image below. Data is entered in the following step. 4. Start typing the name of your Service Account into the field. As you type, Google Cloud searches the projects you have permission to access and presents a narrowed list of possible matches. In this case, we have one match (displayed directly below the fill-in), so we select that entry: 5. After specifying the Member Name (in our case, the Service Account), select the role the Service Account is to hold in the Shared VPC Project. Start this process by clicking the indicated list: 6. The Select a Role process is similar to the one used in the previous How To: Create a New IAM Role. In this case, several more options are displayed, including the fill-in. 7. Since we know the Role we want to apply, we can start typing. Once the intended Role appears, select it: 8. After selecting the Role, click Save: We have now successfully added the Service Account to the Host Project.

How To: Subnet-Level Permissions

If you have chosen to use Subnet-Level access rather than Project-Level access, you must add the Service Accounts to be used with the Shared VPC as members for each subnet representing the resources to be accessed. For this How To section, we provide the Service Account named sharedvpc-sa@citrix-mcs-documentation.iam.gserviceaccount.com with access to a single subnet in our Shared VPC. 1. The first step is to navigate to the Shared VPC screen in the Google Cloud Console: 2. This is the landing page for the Google Cloud Console Shared VPC screen. This project shows five subnets. The Service Account in this example requires access to the second subnet-good subnet (the last subnet on the list below). Select the checkbox next to the second subnet-good subnet: 3. Now that the checkbox for the last subnet has been selected, note that the ADD MEMBER option appears on the upper right of the screen.
It is also useful for this exercise to take note of the number of users this subnet has been shared with. As indicated, one user has access to this subnet. 4. Click ADD MEMBER: 5. Similar to the steps required to add the Service Account to the Host Project in the preceding How To section, the New member name must also be provided here. After filling in the name, Google Cloud lists all related items (as before) so that we can select the relevant Service Account. In this case, it is a single entry. Double-click the Service Account to select it: 6. After a Service Account has been selected, a Role for the new Member must also be chosen. In the list, click Select a role, then double-click the Compute Network User Role. 7. The image shows that the Service Account and Role have been specified. The only remaining step is to click SAVE to commit the changes: 8. After the changes have been saved, the main Shared VPC screen appears. Observe that the number of users who have access to the last subnet has, as expected, increased to two:

How To: Add Project CloudBuild Service Account to the Shared VPC

Every Google Cloud subscription has a service account named after the project ID number followed by @cloudbuild.gserviceaccount.com. A full name example (using a made-up project ID) is: 705794712345@cloudbuild.gserviceaccount.com. This cloudbuild Service Account also needs to be added as a member of the Shared VPC, just as the Service Account you use for creating Host Connections was in Step 3 of How To: Add Service Account to Host Project IAM Role. 1. You can determine the Project ID number for your project by selecting Home and Dashboard in the Google Cloud Console menu: 2. Find the Project Number under the Project Info area of the screen. 3. Enter the project number followed by @cloudbuild.gserviceaccount.com into the Add Member field. Assign a Role of Compute Network User: 4. Select Save.
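As a rough CLI equivalent of the console steps in the How To sections above, the role and membership assignments could be sketched with gcloud as follows. The project IDs, region, role ID, and project number are illustrative assumptions:

```shell
# Sketch: create the custom role in the Host Project with the four
# permissions discussed earlier (placeholder project/role names).
gcloud iam roles create CitrixProjectLevelSharedVpcRole \
  --project=shared-vpc-host-project \
  --title="Citrix-ProjectLevel-SharedVpcRole" \
  --permissions=compute.firewalls.list,compute.networks.list,compute.subnetworks.list,compute.subnetworks.use

# Project-Level model: grant the custom role to the Host Connection
# Service Account at the Host Project level.
gcloud projects add-iam-policy-binding shared-vpc-host-project \
  --member="serviceAccount:citrix-shared-vpc-service-account@developer-project.iam.gserviceaccount.com" \
  --role="projects/shared-vpc-host-project/roles/CitrixProjectLevelSharedVpcRole"

# Subnet-Level alternative: grant Compute Network User on a single subnet
# (region is a placeholder).
gcloud compute networks subnets add-iam-policy-binding subnet-good \
  --project=shared-vpc-host-project --region=us-east1 \
  --member="serviceAccount:sharedvpc-sa@citrix-mcs-documentation.iam.gserviceaccount.com" \
  --role="roles/compute.networkUser"

# The cloudbuild service account (made-up project number) also needs
# network access, as described above.
gcloud projects add-iam-policy-binding shared-vpc-host-project \
  --member="serviceAccount:705794712345@cloudbuild.gserviceaccount.com" \
  --role="roles/compute.networkUser"
```

This is a sketch, not a definitive procedure; the console steps above remain the authoritative path described by this document.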
How To: Firewall Rules

Creating the needed firewall rules is a bit easier than creating the Roles. As noted earlier in this document, the two firewall rules must be created in the Host Project. 1. Make certain you have selected the Host Project. 2. From the Google Console menu, navigate to VPC > Firewall, as shown below: 3. The top of the Firewall screen in Google Console includes a button to create a rule. Click CREATE FIREWALL RULE: 4. The screen used to create a firewall rule is shown below: 5. First, create the needed Deny-All Ingress rule by adding or changing values in the following fields: Name Give your Deny-All Ingress firewall rule a name. For example, citrix-deny-all-ingress-rule. Network Select the Shared VPC network to which this ingress firewall rule applies. For example, gcp-test-vpc. Priority The value in this field is critical. In the world of firewall rules, the lower the priority value, the higher the rule's priority. This is why the default rules have a value of 65535, so that any custom rule with a lower value takes priority over the default rules. We need these two rules to be the highest priority rules on the network, so we use a value of 10. Direction of traffic The default for creating a rule is Ingress, which should already be selected. Action on match This value defaults to Allow. We must change it to Deny. Targets This is the other critical field. The default target type is Specified target tags, precisely what we want. In the text box labeled Target Tags, enter the value citrix-provisioning-quarantine-firewall. Source Filter For the source filter, retain the default IP range filter type and enter a range that matches all traffic. We use a value of 0.0.0.0/0. Protocols and ports Under Protocols and ports, select Deny All. 6. The completed screen should look like this: 7. Click CREATE to generate the new rule. 8. The egress rule is almost identical to the previously created ingress rule.
Click CREATE FIREWALL RULE again, as was done above, and fill in the fields as detailed below: Name Give your Deny-All Egress firewall rule a name. Here, we call it citrix-deny-all-egress-rule. Network Select the same Shared VPC network used when creating the ingress firewall rule above. For example, gcp-test-vpc. Priority As noted above, we use a value of 10. Direction of traffic For this rule, we must change from the default and select Egress. Action on match This value defaults to Allow. We must change it to Deny. Targets Enter the value citrix-provisioning-quarantine-firewall into the Target Tags field. Destination Filter Retain the default IP range filter type and enter a range that matches all traffic. We use 0.0.0.0/0 for that. Protocols and ports Under Protocols and ports, select Deny All. 9. The completed screen should look like this: 10. Click CREATE to generate the new rule. Both of the necessary firewall rules have now been created. If multiple Shared VPCs are used when deploying machine catalogs, repeat the above steps, creating two rules for each identified Shared VPC in its respective Host Project.

How To: Add Network Interface to Cloud Connector Instances

When creating Cloud Connectors for use with the Shared VPC, an additional network interface must be added to the instance when it is being created. Additional network interfaces cannot be added once the instance exists. To add the second network interface: 1. This is the initial panel for network settings presented when first creating an instance. 2. Since we want to use the first network interface for the Shared VPC, click the Pencil icon to enter Edit mode. The expanded network settings screen is below. A key item to note is that we can now see the option for Networks shared with me (from host project: citrix-shared-vpc-project-1) directly beneath the Network Interface banner: 3.
The Network Settings panel with the Shared VPC selected shows that we have: Selected the Shared VPC network. Selected the subnet-good subnet. Changed the External IP setting to None. 4. Click Done to save the changes. Click Add Network Interface. 5. Our first interface is connected to the Shared VPC (as indicated). We can configure the second interface using the same steps as when normally creating a Cloud Connector. Select a network subnet, decide on an external IP address, and then click Done:

How To: Creating Host Connection and Hosting Unit

Creating a Host Connection for use with Shared VPCs is not much different from creating one for use with a Local VPC. The difference is in selecting the resources to be associated with the Host Connection. When creating a Host Connection to access Shared VPC resources, use Service Account JSON files related to the project where the provisioned machines reside. 1. Creating a Host Connection for using Shared VPC resources is similar to creating any other GCP-related Host Connection: 2. Once your project, shown as Developer Project in the following figure, has been added to the list that can access the Shared VPC, you may see both your project and the Shared VPC project in Studio. It is important to ensure you select the project where the deployed machine catalog should reside and not the Shared VPC: 3. Select the resources associated with the Host Connection. Consider the following: The Name given for the resources is SharedVPCResources. The list of virtual networks to choose from includes those from the local project and those from the Shared VPC, as indicated by (Shared) appended to the network names. Note: If you do not see any networks with (Shared) appended to the name, click the Back button and verify you have chosen the correct Project. If you verify the project chosen is correct and still do not see any shared VPCs, something is misconfigured in the Google Cloud Console.
See the Commonly Encountered Issues and Errors section later in this document. 4. The following figure shows that the gcp-test-vpc (Shared) virtual network was selected in the previous step. It also shows that the subnet named subnet-good has been selected. Click Next: 5. After clicking Next, the Summary screen appears. In this screen, consider: The Project is Developer Project. The virtual network is gcp-test-vpc, one from the Shared VPC. The subnet is subnet-good.

How To: Creating a Catalog

Everything from this point forward, such as catalog creation, starting/stopping machines, updating machines, and so on, is performed exactly the same way as when using Local VPCs.

Commonly Encountered Issues and Errors

Working with any complex system with interdependencies may result in unexpected situations. Below are a few common issues and errors that may be encountered when performing the setup and configuration to use Citrix DaaS and GCP Shared VPCs. Missing or Incorrect Firewall Rules If the firewall rules are not found, or the rules are found but the rules or their priority are incorrect, a message of this form is returned: "Unable to find valid INGRESS and EGRESS quarantine firewall rules for VPC <name> in project <project>. Please ensure you have created 'deny all' firewall rules with the network tag 'citrix-provisioning-quarantine-firewall' and proper priority. Refer to Citrix Documentation for details." If you encounter this message, you must review the firewall rules, their priorities, and the networks they apply to. For details, refer to the section How To: Firewall Rules. Missing Shared Resources When Creating Host Connection There are a few reasons this situation may occur: The incorrect project was selected when creating the Host Connection. For example, if the Shared VPC Host Project was selected when creating the Host Connection instead of your project, you will still see the network resources from the Shared VPC, but they will not have (Shared) appended to them.
If you see the shared subnets without that extra information, the Host Connection was likely made with the wrong Service Account. See How To: Creating Host Connection and Hosting Unit. Wrong Role was assigned to the Service Account If the wrong role was assigned to the Service Account, you may not be able to access the desired resources in the Shared VPC. See How To: Add Service Account to Host Project IAM Role. Incomplete or incorrect permissions granted to Role The correct Role may be assigned to the Service Account, but the Role itself may be incomplete. See How To: Create a New IAM Role. Service Account not added as subnet Member If you use Subnet-Level access, ensure the Service Account was properly added as a Member (User) of the desired subnet resources. See How To: Subnet-Level Permissions. Cannot find path error in Studio If you receive an error while creating a catalog in Studio of the form: Cannot find path "XDHyp:\\Connections …" because it does not exist, it is most likely that a new Cloud Connector was not created to facilitate the use of the Shared VPC resources. This is a simple thing to overlook after going through all the above steps. Refer to the Cloud Connectors section for important points on creating them.
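As a reference for the Cloud Connector point above, the dual-interface instance described in How To: Add Network Interface to Cloud Connector Instances could be sketched with gcloud at creation time. All names, the zone, and the image are illustrative assumptions; the key point is that both interfaces must be specified at creation, because a second interface cannot be added later:

```shell
# Sketch: create a Cloud Connector instance with two NICs at creation time.
# First NIC: subnet in the Shared VPC Host Project, no external IP.
# Second NIC: subnet in the local project's VPC for administrative access.
gcloud compute instances create cloud-connector-01 \
  --project=developer-project --zone=us-east1-b \
  --image-family=windows-2019 --image-project=windows-cloud \
  --network-interface=subnet=projects/shared-vpc-host-project/regions/us-east1/subnetworks/subnet-good,no-address \
  --network-interface=subnet=local-subnet
```

The Cloud Connector software itself is still installed and registered with Citrix Cloud afterwards; this sketch covers only the GCP instance layout.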
Audience

This document is intended for architects, network designers, technical professionals, partners, and consultants interested in implementing the Citrix Secure Private Access On-Premises solution. It is also designed for network administrators, Citrix administrators, managed service providers, or anyone looking to deploy this solution.

Solution Overview

Citrix Secure Private Access On-Premises is a customer-managed Zero Trust Network Access (ZTNA) solution that provides VPN-less access to internal web and SaaS applications with the least-privilege principle, single sign-on (SSO), multifactor authentication, device posture assessment, application-level security controls, and app protection features, along with a seamless end-user experience. The solution uses StoreFront on-premises and Citrix Workspace app to enable a seamless and secure access experience to web and SaaS apps within Citrix Enterprise Browser. This solution also uses NetScaler Gateway to enforce authentication and authorization controls. The Citrix Secure Private Access On-Premises solution enhances an organization’s overall security and compliance posture by easily delivering Zero Trust access to browser-based apps (internal web apps and SaaS apps), using the StoreFront on-premises portal as a unified access portal to web and SaaS apps, along with virtual apps and desktops, as an integrated part of Citrix Workspace. Citrix Secure Private Access combines elements of NetScaler Gateway and StoreFront to deliver an integrated experience for end users and administrators.
Functionality and the service/component providing it:
Consistent UI to access apps: StoreFront on-premises / Citrix Workspace app
SSO to SaaS and Web apps: NetScaler Gateway
Multifactor Authentication (MFA) and device posture (aka End-Point Analysis): NetScaler Gateway
Security controls and App protection controls for web and SaaS apps: Citrix Enterprise Browser
Authorization policies: Secure Private Access
Configuration and Management: Citrix Secure Private Access UI, NetScaler UI
Visibility, Monitoring, and Troubleshooting: Citrix Secure Private Access UI and Citrix Director

Use Cases

The Citrix Secure Private Access (SPA) On-Premises solution with Citrix Virtual Apps and Desktops On-Premises provides a unified and secure end-user experience for virtualized and browser-based apps (web apps and SaaS apps) with consistent security. The SPA On-Premises solution is designed to address the following use cases via a customer-managed solution. Use case #1: Secure access for employees and contractors to internal web and SaaS apps from managed or unmanaged devices without publishing a browser or using a VPN. Use case #2: Provide comprehensive last-mile Zero Trust enforcement with admin-configurable browser security controls for internal web and SaaS apps from managed or unmanaged devices without publishing a browser or using a VPN. Use case #3: Accelerate Mergers and Acquisitions (M&A) user access across multiple identity providers, ensure consistent security, and provide seamless end-user access across multiple user groups.

System Requirements

This article guides you through deploying Secure Private Access with StoreFront and NetScaler Gateway. Citrix Enterprise Browser (included in Citrix Workspace app) is the client software used to interact with your SaaS or internal web apps securely. Global App Config Service (GACS) is a requirement for browser management of Citrix Enterprise Browser. Note: This article does not include guidance on deploying Citrix Virtual Apps and Desktops.
This guide assumes that the reader has a basic understanding of the following Citrix and NetScaler offerings and general Windows administrative experience: Citrix Workspace app, StoreFront, NetScaler Gateway, Global App Configuration service, Windows Server, SQL Express or Server. Product communication matrix: Secure Private Access for on-premises (Secure Private Access plug-in).

Versions:
Citrix Workspace app: Windows – 2311 and above; macOS – 2311 and above
Citrix Virtual Apps and Desktops – Supported LTSR and current versions
StoreFront – LTSR 2203 or CR 2212 and above
NetScaler Gateway – 13.0 and above. We recommend using the latest build of NetScaler 13.1 or 14.1 for optimized performance.
Windows Server – 2019 and above (.NET 6.x and above runtime must be supported)
SQL Express or Server – 2019 and above

Note: Citrix Secure Private Access On-Premises is not supported on Citrix Workspace app for iOS and Android. Refer to the following documentation for more details as needed: Citrix Workspace app (Windows, macOS); StoreFront (System requirements, Plan your StoreFront deployment); NetScaler Gateway (Before Getting Started, Common Citrix Gateway deployments); Global App Configuration service (GACS) (Manage Citrix Enterprise Browser through Global App Configuration service, Manage single sign‑on for Web and SaaS apps through the Global App Configuration service).

Technical Overview

Access to internal web apps is possible from any location, with any device, at any time through NetScaler Gateway with Citrix Enterprise Browser (included in Citrix Workspace app) installed. The same applies to SaaS apps, with the difference that the access can be direct or indirect through NetScaler Gateway. Citrix Enterprise Browser and Citrix Workspace app connect to NetScaler Gateway using a TLS-encrypted connection. NetScaler Gateway provides zero trust-based access by assessing the user’s device, strong nFactor user authentication, app authorization, and single sign-on (SSO).
StoreFront enumerates virtual and non-virtual apps through the Citrix Desktop Delivery Controller and the Secure Private Access (SPA) plug-in. Citrix Enterprise Browser tunnels internal traffic (for example, https://website.company.local) to NetScaler Gateway to allow access without needing a public-facing DNS entry. SaaS application access can be direct or, for special use cases, indirect through NetScaler Gateway. Citrix Secure Private Access with Citrix Enterprise Browser allows the configuration of additional security controls for web and SaaS apps, such as watermarking and copy/paste, upload/download, and print restrictions. These restrictions are dynamically applied on a per-app basis.

Scenarios

Citrix Secure Private Access On-Premises can be deployed in any environment with one or more StoreFront servers and NetScaler Gateways. This section describes a few different scenarios that have been successfully implemented and validated. Scenario 1 – Single server deployment: for testing purposes only; it should not be used in production environments because of its limited redundancy. Scenario 2 – Scalable deployment: designed for performance and redundancy. This is the recommended production deployment. Scenario 3 – Geo deployment (Coming Soon): for large enterprises with geographical data center redundancy.

Scenario 1 - Simple deployment

Scenario 1 is a straightforward deployment that uses the fewest infrastructure resources. Because of its limited redundancy, this scenario is not recommended for use in production. Note: We assume that a working Citrix Virtual Apps and Desktops infrastructure is installed and a NetScaler is deployed in a DMZ. On-premises infrastructure environment: Active Directory, NetScaler VPX/MPX (Gateway), combined StoreFront and SPA plug-in server, web server containing websites, web server certificate. Note: This is a simplified architectural overview of scenario 1.
For more detailed communication information, please see Secure Private Access for on-premises (Secure Private Access plug-in).

Installation (Scenario 1)

StoreFront 1. Install a web server certificate on the StoreFront and Secure Private Access machine. 2. Download the Citrix Virtual Apps and Desktops ISO file from the Citrix Download Center. 3. Run the ISO installer AutoSelect.exe. 4. Select Start from Virtual Apps and Desktops. 5. Because we want a combined StoreFront and SPA plug-in server, we first install Citrix StoreFront. 6. In the Citrix StoreFront installer, accept the license agreement and click Next. 7. On the Review prerequisites page, click Next. 8. On the Ready to Install page, click Install. 9. When the installation has finished successfully, click Finish. 10. Click Yes in the reboot dialog to restart the server.

Secure Private Access 1. After the reboot, run the ISO installer again. 2. Now that Citrix StoreFront is installed, let’s continue by installing Secure Private Access. 3. Accept the license agreement in the Secure Private Access installer and click Next. 4. On the Core Components page, click Next. 5. On the Additional Components page, select Use SQL Express on the same machine and click Next. Note: In a production environment, it is recommended to use a dedicated database server. 6. On the Firewall page, click Next to create local Windows Firewall rules automatically. 7. On the Summary page, review your installation settings and click Install. 8. On the Finish Installation page, click Finish. Note: The SPA admin console opens automatically in a browser window. Before we start configuring SPA, we need to configure a StoreFront store.

Configuration (Scenario 1)

StoreFront 1. Open the Internet Information Services (IIS) Manager console and verify that the correct web server certificate is assigned. 2. Open the Citrix StoreFront console and create a new deployment. 3. Enter the base URL and click Next.
Note: In a production environment, multiple StoreFront servers are load-balanced for redundancy and scalability. Therefore, the base URL is the FQDN of the load balancer virtual server IP. 4. On the getting started page, click Next. 5. On the store name and access page, enter a store name, for example, Store, and click Next. 6. On the Delivery Controllers page, enter your Citrix Delivery Controller and click Next. 7. On the Remote Access page, enable Remote Access, select No VPN tunnel, add your NetScaler Gateway appliance, and click Next. 8. On the Configure Authentication Methods page, verify that User name and password and Pass-through from Citrix Gateway are selected, and click Next. 9. On the XenApp Services URL page, click Create. 10. Verify that the store was successfully created on the Summary page and click Finish.

Secure Private Access – Initial configuration wizard

Note: Please create a StoreFront store before running the Secure Private Access initial configuration wizard. It is recommended that you configure Kerberos authentication for the browser that you use for the Secure Private Access admin console, because Secure Private Access uses Integrated Windows Authentication (IWA) for its admin authentication. If Kerberos authentication isn’t set, the browser prompts you to enter your credentials when accessing the Secure Private Access admin console. Please refer to our SSO to admin console documentation. 1. From the Start menu, open Citrix Secure Private Access. 2. Click Continue to start the initial configuration wizard on the SPA admin console page. 3. On the Step 1 page, select Create a new Secure Private Access site and click Next. 4. On the Step 2 page, enter your SQL server host and Site name and click Test connection. The resulting database name is a combination of "CitrixAccessSecurity" and the site name. 5. Select the type of deployment, Automatically or Manually. In this scenario, select Automatically and click Next.
Note: For more information on a manual database setup, follow the instructions documented at Step 2: Configure databases - Manual configuration. 6. On the Step 3 page, enter the Secure Private Access address, StoreFront Store URL, public NetScaler Gateway address, the NetScaler Gateway virtual IP address, and callback URL. When all URLs are successfully verified, click Next. 7. On the Step 4 page, click Save to start the configuration process. Note: Because the SPA plug-in is installed on the StoreFront machine, we do not need to run the StoreFront script manually on the StoreFront server. This is done automatically by the setup routine. 8. After the configuration process is completed, click Close.

Secure Private Access – App creation

1. In the menu on the left, click Applications. 2. On the right side, click Add an app. 3. In the Add an app dialog, fill in the required fields marked with a red star and click Save. Note: For details on application parameters, see Configure applications. 4. In the menu on the left, click Access Policies. 5. On the right side, click Create policy. 6. In the Create policy dialog, fill in the required fields marked with a red star and click Save. Note: For details on application access policies, see Configure access policies for the applications.

NetScaler Gateway

1. Open a new browser tab and navigate to https://www.citrix.com/downloads/citrix-secure-private-access/Shell-Script/Shell-Script-for-Gateway-Configuration.html. 2. When prompted, log on with your Citrix Cloud account. 3. Download the Shell Script for Gateway Configuration file archive and extract it to your local computer. Note: To create a new NetScaler Gateway configuration, use ns_gateway_secure_access.sh. To update an existing NetScaler Gateway configuration, use ns_gateway_secure_access_update.sh. 4. In this scenario, we have a working NetScaler Gateway configuration and must update it for Secure Private Access on-premises.
Use a tool of your choice to upload the script ns_gateway_secure_access_update.sh to the NetScaler /var/tmp folder. 5. Connect to the NetScaler CLI using an SSH client and log on. 6. Enter shell, press the return key, and change the directory to /var/tmp. 7. Change the file permissions using the command chmod +x /var/tmp/ns_gateway_secure_access_update.sh to make the script executable. 8. Run the script /var/tmp/ns_gateway_secure_access_update.sh. Note: If you see the error -bash: ./ns_gateway_secure_access_update.sh: /bin/sh^M: bad interpreter: No such file or directory, run the command tr -d '\r' < /var/tmp/ns_gateway_secure_access_update.sh > /var/tmp/ns_gateway_secure_access_update_unix.sh to convert the Windows line endings to Unix. Change the file permissions using the command chmod +x /var/tmp/ns_gateway_secure_access_update_unix.sh to make the converted script executable. Run the converted script and insert the required parameters.

Support for smart access tags

Starting with the following versions, NetScaler Gateway sends the smart access tags automatically. This enhancement removes the previously required gateway callback from the SPA plug-in to NetScaler Gateway. 13.1 - 48.47 and later 14.1 - 4.42 and later The above script automatically enables the enhancement flags ns_vpn_enable_spa_onprem and toggle_vpn_enable_securebrowse_client_mode. To make the changes persistent, run the following commands in the NetScaler shell. root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=ns_vpn_enable_spa_onprem" >> /nsconfig/rc.netscaler root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=toggle_vpn_enable_securebrowse_client_mode" >> /nsconfig/rc.netscaler For more details, see Support for smart access tags. 1. A new NetScaler command script (the default is /var/tmp/ns_gateway_secure_access) is generated. 2. Switch back to the NetScaler CLI using the command exit. 3.
Before executing the new NetScaler command script, let us verify the current NetScaler Gateway configuration and update it for Secure Private Access on-premises.
4. On the Gateway virtual server, verify the following:
- ICA only is set to false (OFF)
- TCP Profile is set to nstcp_default_XA_XD_profile
- Deployment Type is set to ICA_STOREFRONT
On the Gateway session action for the Workspace app, verify the following:
- transparentInterception is set to OFF
- SSO is set to ON
- ssoCredential is set to PRIMARY
- useMIP is set to NS
- useIIP is set to OFF
- icaProxy is set to OFF
- wihome is set to "https://xa04-spa.training.local/Citrix/StoreWeb" - replace with the real store URL
- ClientChoices is set to OFF
- ntDomain is set to "training.local" - used for SSO
- defaultAuthorizationAction is set to ALLOW
- authorizationGroup is set to SecureAccessGroup (Make sure that this group is created in NetScaler, not Active Directory. It’s used to bind Secure Private Access specific authorization policies)
- clientlessVpnMode is set to ON
- clientlessModeUrlEncoding is set to TRANSPARENT
- SecureBrowse is set to ENABLED
- Storefronturl is set to "https://xa04-spa.training.local" - replace with the StoreFront FQDN
- sfGatewayAuthType is set to domain
Note: For details on session action parameters, see the Command line reference for vpn-sessionAction.
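To spot drift quickly, the checklist above can be turned into a small grep loop against a config export. This is a hedged sketch, not a Citrix-provided tool: the dump file is a stand-in for a real `show ns runningConfig` export copied off the appliance, and only three of the settings are checked.

```shell
# Illustrative only: /tmp/ns_running.conf stands in for a real
# "show ns runningConfig" export copied off the NetScaler.
cat > /tmp/ns_running.conf <<'EOF'
add vpn sessionAction AC_OS_172.16.1.106 -transparentInterception OFF -SSO ON -icaProxy OFF -clientlessVpnMode ON -SecureBrowse ENABLED
EOF

# Check a subset of the SPA-relevant settings listed above.
for opt in "-icaProxy OFF" "-clientlessVpnMode ON" "-SecureBrowse ENABLED"; do
  if grep -q -- "$opt" /tmp/ns_running.conf; then
    echo "OK: $opt"
  else
    echo "MISSING: $opt"
  fi
done
```

Extending the list of checked options to the full parameter set above is a matter of adding entries to the loop.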
Based on the above example, the default session action before adding SPA looks like:
add vpn sessionAction AC_OS_172.16.1.106 -transparentInterception OFF -defaultAuthorizationAction ALLOW -SSO ON -ssoCredential PRIMARY -icaProxy ON -wihome "https://xa04-spa.training.local/Citrix/StoreWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode OFF -storefronturl "https://xa04-spa.training.local" -sfGatewayAuthType domain
Let’s create the authorization group and a new session action and modify it for Secure Private Access on-premises:
add aaa group SecureAccessGroup
add vpn sessionAction AC_OS_172.16.1.106_SPAOP -transparentInterception OFF -defaultAuthorizationAction ALLOW -authorizationGroup SecureAccessGroup -SSO ON -ssoCredential PRIMARY -useMIP NS -useIIP OFF -icaProxy OFF -wihome "https://xa04-spa.training.local/Citrix/StoreWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode ON -clientlessModeUrlEncoding TRANSPARENT -SecureBrowse ENABLED -storefronturl "https://xa04-spa.training.local" -sfGatewayAuthType domain
Switch the session policy for the Workspace app to the new session action:
set vpn sessionPolicy PL_OS_172.16.1.106 -action AC_OS_172.16.1.106_SPAOP
1. Run the new NetScaler commands script with the batch command:
batch -fileName /var/tmp/ns_gateway_secure_access_update -outfile /var/tmp/ns_gateway_secure_access_update_output.log -ntimes 1
2. Verify in the log file that there are no errors. For example:
shell cat /var/tmp/ns_gateway_secure_access_update_output.log
Note: In this scenario, one error is shown in the log file because StoreFront and the SPA plug-in are installed on the same machine:
ERROR: Specified pattern or range is already bound to dataset/patset
3. On the StoreFront and SPA plug-in machine, open Citrix Secure Private Access from the Start menu.
4. On the SPA admin console page, click Mark as done in the Configure Gateway section.
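Since the batch run is expected to log exactly one benign error in this setup, the log check can be scripted. A minimal sketch, assuming a Unix shell: the sample log content below is fabricated for illustration, and the filter pattern matches the benign dataset/patset message quoted above.

```shell
LOG=/tmp/ns_gateway_secure_access_update_output.log

# Fabricated sample log for illustration; on a real system this file is
# produced by the batch command.
cat > "$LOG" <<'EOF'
Done
ERROR: Specified pattern or range is already bound to dataset/patset
Done
EOF

# Flag any ERROR line other than the known benign dataset/patset message.
if grep "ERROR" "$LOG" | grep -v "already bound to dataset/patset"; then
  echo "Unexpected errors found - review the log"
else
  echo "Only the expected benign error is present"
fi
```

The same filter works unchanged on the real log path used by the guide.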
Scenario 2 – Scalable deployment

In Scenario 2, the NetScaler Gateway, StoreFront, SPA plug-in, and SQL server are deployed in Microsoft Azure, whereas all other services are deployed on-premises.
Note: NetScaler Gateway, StoreFront, SPA plug-in, and SQL server can also be deployed in the local data center. This scenario simply showcases that a deployment in a public cloud is possible too.
We assume that a working Citrix Virtual Apps and Desktops infrastructure is installed and a NetScaler is deployed in Azure.

Cloud Infrastructure environment
- Azure Load Balancer for NetScaler with static public IP
- 2x NetScaler VPX (Gateway) on Azure
- 2x StoreFront server
- 2x SPA plug-in server
- 1x Database server
- 2x Active Directory server
- Webserver containing websites
- Webserver certificates
Note: This is a simplified architectural overview of scenario 2. For more detailed communication information, see Secure Private Access for on-premises (Secure Private Access plug-in).

Installation (Scenario 2)

StoreFront
1. On the StoreFront machine, install a web server certificate containing the load balancing FQDN and StoreFront server FQDNs. For more information about certificates, have a look at StoreFront certificate requirements.
2. Download the Citrix Virtual Apps and Desktops ISO file from the Citrix Download Center.
3. Run the ISO installer AutoSelect.exe.
4. Select Start from Virtual Apps and Desktops.
5. Because we want to have a combined StoreFront and SPA plug-in server, we first install Citrix StoreFront.
6. In the Citrix StoreFront installer, accept the license agreement and click Next.
7. On the Review prerequisites page, click Next.
8. On the Ready to install page, click Install.
9. When the installation is successfully finished, click Finish.
10. Click Yes in the reboot dialog to restart the server.
11. For redundancy, install a second StoreFront server following the same steps.

Secure Private Access
1.
On the Secure Private Access machine, install a web server certificate matching the load balancer FQDN. The same certificate must be installed on the other SPA plug-in nodes. If the load balancing protocol used is SSL, the same certificate must be used on the load balancer.
2. Mount the downloaded Citrix Virtual Apps and Desktops ISO file and run the installer AutoSelect.exe.
3. Select Start from Virtual Apps and Desktops.
4. Click Secure Private Access to start the installation.
5. Accept the license agreement in the Secure Private Access installer and click Next.
6. On the Core Components page, click Next.
7. On the Additional Components page, deselect Use SQL Express on the same machine and click Next.
Note: A dedicated database server is recommended for production deployments.
8. On the Firewall page, click Next to automatically create local Windows Firewall rules.
9. On the Summary page, review your installation settings and click Install.
10. On the Finish Installation page, click Finish.
11. For redundancy, install a second SPA plug-in server following the same steps.
Note: The SPA admin console opens automatically in a browser window. Before we start configuring SPA, we need to configure a StoreFront store.

Configuration (Scenario 2)

StoreFront
1. Open the Internet Information Services (IIS) Manager console and verify that the correct web server certificate is assigned.
2. Open the Citrix StoreFront console and create a new deployment.
3. Enter the base URL using the load balancer FQDN, for example https://stf-lb.training.local/, and click Next.
Note: The load balancing configuration is done later.
4. On the getting started page, click Next.
5. On the store name and access page, enter a store name, for example StoreLB, and click Next.
6. On the Delivery Controllers page, enter your Citrix Delivery Controller and click Next.
7. On the Remote Access page, enable Remote Access, select No VPN tunnel, add your NetScaler Gateway appliance, and click Next.
8.
On the Configure Authentication Methods page, verify that User name and password and Pass-through from Citrix Gateway are selected, and click Next.
9. On the XenApp Services URL page, click Create.
10. On the Summary page, verify that the store was successfully created and click Finish.
11. Open Windows PowerShell to update the StoreFront monitoring service URL and run the following commands:
$ServiceUrl = "https://localhost:443/StorefrontMonitor"
Set-STFServiceMonitor -ServiceUrl $ServiceUrl
Get-STFServiceMonitor
If you want to revert to the default StoreFront monitoring service URL, run the above commands again with $ServiceUrl = "http://localhost:8000/StorefrontMonitor".
1. Verify that the Receiver for Web Sites loopback communication is set to On.
Get-STFWebReceiverService -VirtualPath "/Citrix/StoreLBWeb" | Get-STFWebReceiverCommunication | Format-Table Loopback
Loopback
--------
On
2. Join the second StoreFront server to the server group. Please follow the documented instructions for Join an existing server group.

Secure Private Access – Initial configuration wizard

Note: Create a StoreFront store before running the Secure Private Access initial configuration wizard!
Information: It is recommended that you configure Kerberos authentication for the browser that you use for the Secure Private Access admin console, because Secure Private Access uses Integrated Windows Authentication (IWA) for its admin authentication. If Kerberos authentication isn’t set up, the browser prompts you to enter your credentials when accessing the Secure Private Access admin console. Refer to our SSO to admin console documentation.
1. From the Start menu, open Citrix Secure Private Access.
Important: Within the web browser, verify the web server certificate that protects the SPA admin console. The certificate must be uploaded before the Secure Private Access installation.
2.
On the SPA admin console page, click Continue to start the initial configuration wizard.
3. On the Step 1 page, select Create a new Secure Private Access site and click Next.
4. On the Step 2 page, enter your SQL server host and Site name and click Test connection. The resulting database name is a combination of "CitrixAccessSecurity" and the site name.
5. Select the type of deployment, Automatically or Manually. In this scenario, select Manually and click Download script.
Note: The displayed error is expected because the database does not exist yet.

Secure Private Access – manual database setup
1. Open SQL Server Management Studio and connect to the database engine using a database administrator account.
2. In SQL Server Management Studio, click File, select Open, and select File.
3. In the Open File dialog, search for the downloaded SQL script and click Open.
4. Verify the script content and click Execute. The script creates the database and a login for the Windows server training\xa05-spa.
5. Switch back to the SPA admin console and click Test connection. The connection is now successful and the server has write permissions to the database.
6. Click Next.
7. On the Step 3 page, enter the Secure Private Access address, StoreFront Store URL, Public NetScaler Gateway address, the NetScaler Gateway virtual IP address, and callback URL. When all URLs are successfully verified, click Next.
8. On the Step 4 page, click Save to start the configuration process.
Note: Because StoreFront is installed on a different server, the SPA plug-in PowerShell script must be executed manually on the StoreFront server. The StoreFront server group replication mechanism propagates the changes to all members.
9. After the configuration process is completed, click Close.
10. Join the second SPA plug-in server to the cluster. Open another browser, open the second SPA plug-in admin console, and click Continue.
11. On the Step 1 page, select Join an existing Secure Private Access site and click Next.
12. On the Step 2 page, enter your SQL server host and Site name, click Test connection, select Manually, and click Download script.

Secure Private Access – manual database setup
1. Open SQL Server Management Studio and connect to the database engine using a database administrator account.
2. In SQL Server Management Studio, click File, select Open, and select File.
3. In the Open File dialog, search for the downloaded SQL script and click Open.
4. Verify the script content and click Execute. The script verifies that the database exists and creates the login for the Windows server training\xa04-spa.
5. Switch back to the SPA admin console. The server now has write permissions to the database. Click Next.
6. On the Step 4 page, click Save to start the configuration process.
7. After the configuration process is completed, click Close.
8. The SPA plug-in cluster can be managed from each node.

Secure Private Access – App creation
1. In the menu on the left, click Applications.
2. On the right side, click Add an app.
3. In the Add an app dialog, fill in the required fields marked with a red star and click Save.
Note: For details on application parameters, see Configure applications.
4. In the menu on the left, click Access Policies.
5. On the right side, click Create policy.
6. In the Create policy dialog, fill in the required fields marked with a red star and click Save.
Note: For details on application access policies, see Configure access policies for the applications.

Secure Private Access – StoreFront configuration
1. On the Secure Private Access server, open the Start menu and open Citrix Secure Private Access.
2. In the menu on the left, click Settings.
3. In the menu on the left, click Settings and select the Integrations tab.
4. In the StoreFront Store URL section, click Download script.
5. Copy the downloaded file StoreFrontScripts.zip to a StoreFront server and extract the files to any folder.
6.
Open a 64-bit Windows PowerShell window with administrative privileges and run the PowerShell script ConfigureStorefront.ps1. The script modifies the StoreFront store (in this scenario, StoreLB) to support Secure Private Access applications.

NetScaler StoreFront and SPA Plug-in Load Balancing

Note: The example below does not have SSL Default Profiles enabled. If your NetScaler configuration does, add the cipher directly to the SSL profile and ignore the virtual server cipher configuration.
The following servers are used:
- xa04-stf.training.local
- xa05-stf.training.local
- xa04-spa.training.local
- xa05-spa.training.local
IP addresses:
- 172.16.1.107 (StoreFront load balancing VIP)
- 172.16.1.108 (SPA plug-in load balancing VIP)
Certificates:
- dh5-2048.key (Diffie-Hellman key, group 5, 2048 bit)
- stf-lb.training.local
- spa-lb.training.local
Make sure to create the Diffie-Hellman key and replace the server names, IP addresses, and certificates before running the commands in the NetScaler CLI. Connect to the NetScaler CLI using an SSH client and run the following commands:
## SSL Profile ##
## Do not forget to replace the Diffie-Hellman key name ##
add ssl profile SECURE_ssl_profile_frontend -dhCount 1000 -dh ENABLED -dhFile "/nsconfig/ssl/dh5-2048.key" -eRSA ENABLED -eRSACount 1000 -sessReuse ENABLED -sessTimeout 120 -tls1 DISABLED -tls11 DISABLED
## Monitors ##
add lb monitor mon-StoreFront STOREFRONT -scriptName nssf.pl -dispatcherIP 127.0.0.1 -dispatcherPort 3013 -LRTM DISABLED -secure YES -storefrontcheckbackendservices YES
add lb monitor mon-SPA-Plugin HTTP -respCode 200 -httpRequest "GET /secureAccess/health" -LRTM DISABLED -secure YES
add lb monitor mon-SPA-Admin-console HTTP -respCode 200 -httpRequest "GET /accessSecurity/health" -LRTM DISABLED -secure YES
## Server ##
## Do not forget to replace server names ##
add server xa04-stf.training.local xa04-stf.training.local
add server xa05-stf.training.local xa05-stf.training.local
add server xa04-spa.training.local xa04-spa.training.local
add server xa05-spa.training.local xa05-spa.training.local
## Services ##
## Do not forget to replace service names ##
add service xa04-stf.training.local_443 xa04-stf.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip ENABLED X-Forwarded-For -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO -state DISABLED
bind service xa04-stf.training.local_443 -monitorName mon-StoreFront
add service xa05-stf.training.local_443 xa05-stf.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip ENABLED X-Forwarded-For -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO
bind service xa05-stf.training.local_443 -monitorName mon-StoreFront
add service xa04-spa.training.local_443 xa04-spa.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO -state DISABLED
bind service xa04-spa.training.local_443 -monitorName mon-SPA-Plugin
add service xa05-spa.training.local_443 xa05-spa.training.local SSL 443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO
bind service xa05-spa.training.local_443 -monitorName mon-SPA-Plugin
add service xa04-spa.training.local_4443 xa04-spa.training.local SSL 4443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO -state DISABLED
bind service xa04-spa.training.local_4443 -monitorName mon-SPA-Admin-console
add service xa05-spa.training.local_4443 xa05-spa.training.local SSL 4443 -gslb NONE -maxClient 0 -maxReq 0 -cip DISABLED -usip NO -useproxyport YES -sp OFF -cltTimeout 180 -svrTimeout 360 -CKA NO -TCPB NO -CMP NO
bind service xa05-spa.training.local_4443 -monitorName mon-SPA-Admin-console
## LB vServer ##
## Do not forget to replace vServer names and IP addresses ##
add lb vserver lbvs-stf-lb.training.local_443 SSL 172.16.1.107 443 -persistenceType COOKIEINSERT -persistenceBackup SOURCEIP -cookieName STFPersistence -cltTimeout 180
add lb vserver lbvs-spa-lb.training.local_443 SSL 172.16.1.108 443 -persistenceType NONE -cltTimeout 180
add lb vserver lbvs-spa-lb.training.local_4443 SSL 172.16.1.108 4443 -persistenceType NONE -cltTimeout 180
## Do not forget to replace vServer names and service bindings ##
bind lb vserver lbvs-stf-lb.training.local_443 xa04-stf.training.local_443
bind lb vserver lbvs-stf-lb.training.local_443 xa05-stf.training.local_443
bind lb vserver lbvs-spa-lb.training.local_443 xa04-spa.training.local_443
bind lb vserver lbvs-spa-lb.training.local_443 xa05-spa.training.local_443
bind lb vserver lbvs-spa-lb.training.local_4443 xa04-spa.training.local_4443
bind lb vserver lbvs-spa-lb.training.local_4443 xa05-spa.training.local_4443
## Do not forget to replace vServer names ##
set ssl vserver lbvs-stf-lb.training.local_443 -sslProfile SECURE_ssl_profile_frontend
set ssl vserver lbvs-spa-lb.training.local_443 -sslProfile SECURE_ssl_profile_frontend
set ssl vserver lbvs-spa-lb.training.local_4443 -sslProfile SECURE_ssl_profile_frontend
## Do not forget to replace vServer names ##
bind ssl vserver lbvs-stf-lb.training.local_443 -cipherName SECURE
bind ssl vserver lbvs-spa-lb.training.local_443 -cipherName SECURE
bind ssl vserver lbvs-spa-lb.training.local_4443 -cipherName SECURE
## Do not forget to replace vServer names and certificates ##
bind ssl vserver lbvs-stf-lb.training.local_443 -certkeyName stf-lb.training.local
bind ssl vserver lbvs-spa-lb.training.local_443 -certkeyName spa-lb.training.local
bind ssl vserver lbvs-spa-lb.training.local_4443 -certkeyName spa-lb.training.local

NetScaler Gateway

Note: To create a new NetScaler Gateway configuration, use ns_gateway_secure_access.sh.
To update an existing NetScaler Gateway configuration, use ns_gateway_secure_access_update.sh.
1. Open a new browser tab and navigate to https://www.citrix.com/downloads/citrix-secure-private-access/Shell-Script/Shell-Script-for-Gateway-Configuration.html.
2. When prompted, log on with your Citrix Cloud account.
3. Download the Shell Script for Gateway Configuration file archive and extract it to your local computer.
4. In this scenario, we have a working NetScaler Gateway configuration and must update it for Secure Private Access on-premises.
5. Use a tool of your choice to upload the script ns_gateway_secure_access_update.sh to the NetScaler /var/tmp folder.
6. Connect to the NetScaler CLI using an SSH client and log on.
7. Enter shell, press the return key, and change the directory to /var/tmp.
8. Change the file permissions using the command chmod +x /var/tmp/ns_gateway_secure_access_update.sh to make the script executable.
9. Run the script /var/tmp/ns_gateway_secure_access_update.sh.
Note: If you see the error -bash: ./ns_gateway_secure_access_update.sh: /bin/sh^M: bad interpreter: No such file or directory, run the command tr -d '\r' < /var/tmp/ns_gateway_secure_access_update.sh > /var/tmp/ns_gateway_secure_access_update_unix.sh to convert the Windows line endings to Unix. Change the file permissions using the command chmod +x /var/tmp/ns_gateway_secure_access_update_unix.sh to make the converted script executable.
Run the converted script and insert the required parameters.

Support for smart access tags
Starting with the following versions, NetScaler Gateway sends the smart access tags automatically. This enhancement removes the gateway callback that was previously required from the SPA plug-in to NetScaler Gateway.
13.1 - 48.47 and later
14.1 - 4.42 and later
The above script automatically enables the enhancement flags ns_vpn_enable_spa_onprem and ns_vpn_disable_spa_onprem. To make the changes persistent, run the following commands in the NetScaler shell.
root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=ns_vpn_enable_spa_onprem">> /nsconfig/rc.netscaler
root@xa04-adc01# echo "nsapimgr_wr.sh -ys call=toggle_vpn_enable_securebrowse_client_mode">> /nsconfig/rc.netscaler
For more details, see Support for smart access tags.
1. A new NetScaler command script (the default is /var/tmp/ns_gateway_secure_access) is generated.
2. Switch back to the NetScaler CLI using the command exit.
3. Before executing the new NetScaler command script, let us verify the current NetScaler Gateway configuration and update it for Secure Private Access on-premises.
4. On the Gateway virtual server, verify the following:
- ICA only is set to false (OFF)
- TCP Profile is set to nstcp_default_XA_XD_profile
- Deployment Type is set to ICA_STOREFRONT
On the Gateway session action for the Workspace app, verify the following:
- transparentInterception is set to OFF
- SSO is set to ON
- ssoCredential is set to PRIMARY
- useMIP is set to NS
- useIIP is set to OFF
- icaProxy is set to OFF
- wihome is set to "https://stf-lb.training.local/Citrix/StoreLBWeb" - replace with the real store URL
- ClientChoices is set to OFF
- ntDomain is set to "training.local" - used for SSO
- defaultAuthorizationAction is set to ALLOW
- authorizationGroup is set to SecureAccessGroup (Make sure that this group is created in NetScaler, not Active Directory. It’s used to bind Secure Private Access specific authorization policies)
- clientlessVpnMode is set to ON
- clientlessModeUrlEncoding is set to TRANSPARENT
- SecureBrowse is set to ENABLED
- Storefronturl is set to "https://stf-lb.training.local" - replace with the StoreFront FQDN
- sfGatewayAuthType is set to domain
Note: For details on session action parameters, see the Command line reference for vpn-sessionAction.
Based on the above example, the default session action before adding SPA looks like:
add vpn sessionAction AC_OS_172.16.1.106 -transparentInterception OFF -defaultAuthorizationAction ALLOW -SSO ON -ssoCredential PRIMARY -icaProxy ON -wihome "https://stf-lb.training.local/Citrix/StoreLBWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode OFF -storefronturl "https://stf-lb.training.local" -sfGatewayAuthType domain
Let’s create the authorization group and a new session action and modify it for Secure Private Access on-premises:
add aaa group SecureAccessGroup
add vpn sessionAction AC_OS_172.16.1.106_SPAOP -transparentInterception OFF -defaultAuthorizationAction ALLOW -authorizationGroup SecureAccessGroup -SSO ON -ssoCredential PRIMARY -useMIP NS -useIIP OFF -icaProxy OFF -wihome "https://stf-lb.training.local/Citrix/StoreLBWeb" -ClientChoices OFF -ntDomain training.local -clientlessVpnMode ON -clientlessModeUrlEncoding TRANSPARENT -SecureBrowse ENABLED -storefronturl "https://stf-lb.training.local" -sfGatewayAuthType domain
Switch the session policy for the Workspace app to the new session action:
set vpn sessionPolicy PL_OS_172.16.1.106 -action AC_OS_172.16.1.106_SPAOP
5. Run the new NetScaler commands script with the batch command. For example:
batch -fileName /var/tmp/ns_gateway_secure_access_update -outfile /var/tmp/ns_gateway_secure_access_update_output.log -ntimes 1
6.
Verify in the log file that there are no errors. For example:
shell cat /var/tmp/ns_gateway_secure_access_update_output.log
Note: In this scenario, one error is shown in the log file because StoreFront and the SPA plug-in are installed on the same machine:
ERROR: Specified pattern or range is already bound to dataset/patset
7. On the StoreFront and SPA plug-in machine, open Citrix Secure Private Access from the Start menu.
8. On the SPA admin console page, click Mark as done in the Configure Gateway section.

Scenario 3 – Geo deployment (Coming Soon)

Testing any scenario
1. Open the Citrix Workspace app and create a new account. In our scenarios, the URL https://citrix.training.com was used.
2. Log on to NetScaler Gateway.
3. Secure Private Access apps along with Citrix Virtual Apps and Desktops are displayed. In this scenario, no CVAD app is marked as a favorite, thus they are only displayed under APPS.
4. Launch the web app Extranet.
Note: All security controls are enabled on this application:
- Restrict clipboard access
- Restrict printing
- Restrict downloads
- Restrict uploads
- Display Watermark
- Restrict key logging
- Restrict screen capture
The above screenshot shows "Display Watermark". The screenshot below shows "Restrict screen capture".

Summary

Citrix Secure Private Access for on-premises allows zero trust-based access to SaaS and internal web apps. This deployment guide covered publishing web apps and setting security controls. The result is an integrated solution with single sign-on for users to access SaaS and internal web apps just like virtual apps.
  11. Very helpful document! Thank you! Looks like many hours spent. I have 2 or 3 suggestions. I think maybe the title should be changed. The article is all about hibernation but hibernation is not mentioned. Also the title mentions that this is for Powershell but the instructions have both Powershell and GUI versions throughout. Maybe "Using hibernation with Azure VMs in Citrix DaaS" or something? A second thing is that many of the images are hard to read, too small when seeing them in the web page and then not enough quality to read them if I enlarge. The screen by screen for the Create Machine Catalog Wizard is the main one that I had a bad time with. Lastly, perhaps this is a wider site issue but the navigation columns on the right and left of the web page are so prominent that the content is only in the center third of my screen.
  12. Overview

Local Host Cache and Service Continuity should be at the forefront of conversations when building a resilient Citrix environment. Local Host Cache and Service Continuity are Citrix technologies that can maintain end-user access to business-critical workloads during potential service disruptions. While they function differently, both features serve the same purpose: to keep users launching their apps and desktops regardless of service health. Let's start by differentiating the two features:

Local Host Cache (LHC): LHC leverages a locally cached copy of the Site database hosted on the Cloud Connectors. The local copy of the Site database is used to broker sessions if connectivity between the Cloud Connectors and Citrix Cloud is lost. LHC is enabled by default for DaaS environments, but some configurations must be considered to ensure LHC works properly during a service disruption.

Service Continuity: Service Continuity is a DaaS-only feature that uses Connection Lease files downloaded to a user’s endpoint when they log in to Citrix Workspace from either the Workspace app or a browser (the Citrix Workspace web extension is required for Service Continuity to work in a browser). Service Continuity uses the Connection Lease files when a normal end-user launch path cannot be established. It’s important to note that Service Continuity can leverage the LHC database on the Cloud Connectors, so many of the LHC misconfigurations below can also impact customers using Service Continuity for resiliency. Service Continuity is only supported for DaaS connections through the Workspace service; it cannot be used with an on-premises StoreFront server.
Note: The deprecated Citrix feature called “connection leasing” resembles Workspace connection leases in that it improves connection resiliency during outages. Otherwise, that deprecated feature is unrelated to Service Continuity.
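The division of labor between the two launch paths can be caricatured in a few lines of shell. This is purely conceptual pseudologic, not a Citrix component: `broker_for` is a hypothetical name standing in for the brokering decision, and its argument replaces a real health probe of the cloud broker.

```shell
# Conceptual sketch only: choose the brokering path the way LHC does --
# the Citrix Cloud broker when reachable, the local cache otherwise.
broker_for() {
  cloud_reachable=$1   # "yes"/"no" stands in for a real connectivity probe
  if [ "$cloud_reachable" = "yes" ]; then
    echo "broker: Citrix Cloud"
  else
    echo "broker: Local Host Cache on the elected Cloud Connector"
  fi
}

broker_for yes
broker_for no
```

The point of the sketch: the fallback is automatic and per-launch, which is why the misconfigurations below matter even when the cloud service is healthy most of the time.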
Understanding the resiliency features is a critical first step in configuring your environment correctly in the case of a service disruption. This article assumes a working knowledge of LHC and Service Continuity. There are a few crucial configurations to check for to ensure that users can continue to access their resources when LHC or Service Continuity activates. The table below lists common misconfigurations that may impact the availability of DaaS resources in the event of a service disruption. Review this list and update any potential misconfigurations in your environment before it’s too late! Impacts All Access Methods In the table below, you’ll see that some misconfigurations impact StoreFront only, some impact Workspace only, and some can impact both. That’s because, with Service Continuity, the Cloud Connectors still attempt to retrieve the VDA addresses from the Citrix Cloud-hosted session broker (using the information supplied from the cached connection leases on the endpoint). If the session broker is unreachable, the VDA addresses are determined using the LHC database. You can see the entire process here in more detail. Misconfiguration Description Impact Detection Mitigation Pooled single-session OS VDAs that are power managed are unavailable during LHC Because power management services reside on Citrix Cloud infrastructure rather than Cloud Connectors, power management becomes unavailable during an LHC event. This results in an inability to reboot power-managed pooled single-session OS VDAs and reset their differencing/write cache disks while LHC is active. For security reasons, these VDAs are unavailable by default during LHC to avoid changes and data from previous user sessions being available on subsequent sessions. Pooled single-session VDAs in power-managed delivery groups are unavailable during LHC events. Review the Delivery Groups node in Studio. 
Any pooled single-session delivery group that is power-managed and is not configured for access during LHC will show a warning icon. Edit the Local Host Cache settings within the delivery group. Note: Changing the default can result in changes and data from previous user sessions being present in subsequent sessions. Too many VDAs in a Resource Location The LHC broker on Cloud Connectors caps VDA registrations at 10,000 per resource location if the resource location goes into LHC mode. More than 10,000 VDAs in a resource location can result in excessive load on Cloud Connectors. This can result in stability issues and a subset of VDAs being unavailable during LHC. Review the Zones node to see if any alerts are detected. If your environment has too many VDAs in a resource location, a warning icon will show on the resource location (check out the Troubleshoot tab at the bottom of the Zones node after clicking the resource location to learn more about the errors and warnings in that resource location). Reconfigure Resource Locations to contain no more than 10,000 VDAs. 3 Consecutive failed config syncs Cloud Connectors periodically sync configurations from Citrix Cloud to the local database to ensure up-to-date configurations if LHC activates. Failed configuration syncs can result in stale or corrupted configurations used in case of an LHC event. Monitor Cloud Connectors for 505 events from the Config Synchronizer Service. Email alerts for failed config syncs are on the roadmap for Citrix Monitor! Review firewall configurations to ensure your firewall accepts XML and SOAP traffic. Review CTX238909. Open a support ticket to determine why config sync failures occur in your environment. Multiple elected brokers One Connector per Resource Location should be elected for LHC events. 
Cloud Connectors must be able to communicate with each other to determine the elected broker and understand the health of other peer connectors to make a go/no-go decision about entering LHC mode. “Split brain” scenario where multiple Connectors in the same Resource Location remain active during an LHC event. VDAs may register with any elected connectors in the Resource Location, and launches may fail intermittently while LHC is active. Monitor Connectors for 3504 events from the High Availability Service. Review to see if more than one Connector per Resource Location is being elected. Ensure Connectors can communicate with each other at http://<FQDN_OF_PEER_CONNECTOR>:80/Citrix/CdsController/ISecondaryBrokerElection. If using a proxy, bypassing the proxy for traffic between Connectors is recommended. Lack of Regular Testing It’s debatable whether a lack of testing can be considered a misconfiguration, but what’s not up for debate is the impact testing can have on ensuring this tech works in your environment! Testing can ensure infrastructure is scaled properly and works as expected before a disruption occurs. Testing should be done at regular intervals. For testing LHC, see Force an Outage. Forcing LHC is relevant to both on-prem StoreFront customers and customers leveraging Service Continuity. DaaS resources are potentially inaccessible during a service disruption. If you don’t have a testing plan for LHC or Service Continuity in your environment, consider this misconfiguration detected. Create a testing plan to test Service Continuity and Local Host Cache regularly. Low vCPU cores per socket configuration LHC operates using a Microsoft SQL Server Express LocalDB on the Cloud Connector. Microsoft has a limitation on SQL Express in which the Connector is limited to the lesser of 1 socket or 4 vCPU cores when using the LHC DB. 
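The peer-communication check above can be sketched as a quick probe from any Connector (or a nearby admin host). The FQDN in this sketch is a placeholder, not a real host; substitute one of your own Connectors:

```shell
# Sketch: build the broker-election probe URL for a peer Connector.
# "connector2.example.local" is a placeholder FQDN, not a real host.
probe_url() {
  echo "http://$1:80/Citrix/CdsController/ISecondaryBrokerElection"
}

# From a Connector, receiving any HTTP status code (rather than a timeout)
# suggests the peer's election endpoint is reachable:
#   curl -s -o /dev/null -w '%{http_code}' "$(probe_url connector2.example.local)"
probe_url connector2.example.local
```

If a proxy sits between Connectors, remember the recommendation above to bypass the proxy for this traffic, or the probe may test the proxy rather than the peer.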
If we configure Cloud Connectors to use one core per socket (e.g., 4 sockets with 1 core each), we limit LHC to a single core during an LHC event. Because all VDA registration and brokering operations go through a single connector during LHC, this can negatively impact performance and cause issues with VDA registration during the service disruption. More info regarding LHC core and socket configurations can be found in the Recommended compute configuration for Local Host Cache. Negative impact on the stability of VDA re-registration during an LHC event and the performance of LHC brokering operations. Check Task Manager to view core and socket configurations. Divide the number of virtual processors by the number of sockets to get your core-per-socket ratio. Reconfigure your VM to use at least 4 cores per socket. A new instance type may have to be used for public cloud workloads. Rebooting your connector may be required to reconfigure the core and socket configuration. Undersized Cloud Connectors During an event in which LHC activates, a single Cloud Connector per resource location begins to broker sessions. The elected LHC broker handles all VDA registrations and session launch requests (see Resource locations with multiple Cloud Connectors for more information on the election process). Sizing connectors to handle this added load during an LHC event is important for ensuring consistent performance. Negative impact on stability and performance during LHC VDA registration and LHC steady state. Check out the Zones node within Citrix DaaS Web Studio! If your environment has undersized connectors, see https://docs.citrix.com/en-us/citrix-daas/manage-deployment/zones.html#troubleshooting for troubleshooting guidance. The sizing of connectors can also be checked on each connector machine at the hypervisor or VM level. Reconfigure connectors to have at least 4 vCPU and 6 GB of RAM. 
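The detection arithmetic above (virtual processors divided by sockets) can be sketched as follows; the values are hypothetical examples, not read from a real machine:

```shell
# Hypothetical values as read from Task Manager on a Cloud Connector VM
vcpus=4      # number of virtual processors
sockets=4    # number of sockets
cores_per_socket=$(( vcpus / sockets ))
echo "$cores_per_socket"   # prints 1: the one-core-per-socket layout to avoid
```

A result of 1, as here, is exactly the configuration the article warns about; reconfigure toward at least 4 cores per socket.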
Review the Recommended compute configuration for local host cache for recommended sizing guidelines based on the number of VDAs in the Resource Location. Multiple Active Directory (AD) domains in a single resource location Per the Citrix DaaS limits documentation, only one Active Directory domain is supported per resource location. Issues may arise if you have multiple Cloud Connectors in a zone and the connectors are in different AD domains. Multiple AD domains in a single Resource Location can cause issues with VDA registration during LHC events. VDAs may have to try multiple Connectors before finding one they can register with. This can impact VDA registration times and add additional load on Connectors when VDAs must register, especially in VDA registration storms like Local Host Cache re-registration or Autoscale events. If your environment has multiple AD domains in a resource location, a warning icon will show on the resource location (check out the “Troubleshoot” tab at the bottom of the Zones node after clicking the Resource Location to learn more about the errors and warnings in that Resource Location). If you click the zone, it will show you the FQDN of the connectors within the zone. Reconfigure resource locations so that each contains connectors from a single AD domain. Impacts Workspace The following misconfigurations only apply when Workspace is used as the access tier. Each entry below covers the misconfiguration, a description, its impact, how to detect it, and how to mitigate it. Service Continuity not enabled Things can’t work if they’re not turned on! Service Continuity is a core resiliency feature for customers leveraging Citrix Workspace service as their access tier. You can manage Service Continuity on the Citrix Cloud Workspace configuration page. Without Service Continuity enabled, Connection Lease files won’t be downloaded, and users won’t be able to access their apps and desktops during a service disruption. 
View the Service Continuity tab in the Citrix Cloud Workspace configuration page to see if the feature is enabled. Enable Service Continuity. See Configure Service Continuity for more information. Access clients are unsupported for Service Continuity To download Service Continuity Connection Lease files, users must access their Workspace from a client that supports Service Continuity. See User device requirements to learn which client versions and access scenarios are supported. Users accessing from clients that do not support Service Continuity will be unable to launch DaaS resources during a service disruption. Review session launches in Monitor for Workspace app versions. Update Citrix Workspace app clients to versions that support Service Continuity. Encourage users who access through a web browser to install the Citrix Workspace web extension. Impacts StoreFront The following misconfigurations only apply when StoreFront is used as the access tier. Each entry below covers the misconfiguration, a description, its impact, how to detect it, and how to mitigate it. StoreFront ‘Advanced Health Check’ setting not configured StoreFront’s Advanced Health Check feature gives StoreFront additional information about the Resource Location where a published app or desktop can be launched. Without Advanced Health Check, StoreFront may send launch requests to a Resource Location that does not deliver that particular resource, resulting in intermittent launch failures during an LHC event. On StoreFront, run “Get-STFStoreFarmConfiguration” via PowerShell. Automated detection of the StoreFront Advanced Health Check feature is on our roadmap for Web Studio! Enable the StoreFront Advanced Health Check feature. From StoreFront 2308 onward, StoreFront Advanced Health Check is enabled by default. If you upgrade your StoreFront to the 2024 LTSR once it is released, Advanced Health Check will be enabled automatically. 
Incorrect load balancing monitor Some customers opt to use a load-balancing vServer to balance XML traffic between StoreFront and Connectors for optimized manageability and traffic management. When connectors in a resource location go into LHC, only the primary broker can service launch requests. The remaining Connectors send health checks to try to reconnect again. If an incorrect monitor is used on the load balancing server, StoreFront may continue to send launch requests to all connectors in the resource location rather than just the elected broker. Potential intermittent launch failures during an LHC event. Check your load balancer to ensure the monitor bound to the load balancer is monitoring for brokering capabilities, not just TCP responses. NetScaler has this functionality out of the box with the CITRIX-XD-DDC monitor. Note: The CITRIX-XML-SERVICE monitor is for previous versions of Citrix Virtual Apps and Desktops and does not perform the same checks as the CITRIX-XD-DDC monitor. Configure your load balancing vServer to monitor Connectors based on brokering capabilities (e.g., use the CITRIX-XD-DDC monitor for connector load balancing). Connectors not listed as a single resource feed in StoreFront With the addition of StoreFront’s Advanced Health Check feature, Citrix recommends that all Cloud Connectors within a single Cloud tenant be included as a single set of Delivery Controllers in StoreFront. Check out this Citrix TIPs blog for more information regarding recommended configurations. Duplicate icons for end users or more complex multi-site aggregation configurations are required in StoreFront. View your resource feed configuration in the “Manage Delivery Controllers” tab in StoreFront. Ensure all Connectors (or all Connector load balancing vServers) are listed as a single Site within StoreFront. Review Add resource feeds for Desktops as a Service for more information. 
Tags used to restrict launches to a subset of Resource Locations in a Delivery Group With Advanced Health Check, StoreFront knows what resource locations a published app or desktop can launch from. StoreFront does this using Delivery Group to Machine Catalog mappings. However, StoreFront is not aware of tags. Consider a scenario in which a Delivery Group contains Machine Catalogs from Resource Location “A” and Resource Location “B”. If we use tags to restrict app/desktop launches to only Resource Location “A”, StoreFront will continue to send launch requests to both Resource Location “A” and “B” during an LHC event because it does not have tag information. Potential intermittent launch failures during LHC events. Review tags used in your environment. Automated detection of Resource Location-based tag restrictions is on our roadmap for Web Studio! Configure tags so that at least one (preferably multiple!) VDA in each Resource Location delivered from a Delivery Group contains each tag. Not all connectors are receiving Secure Ticket Authority (STA) requests A subset of connectors in a resource location are not receiving STA requests from StoreFront. This can be because either they are not listed in StoreFront or there is another problem with communication, such as an expired certificate on the connector. During a Local Host Cache event, a single Connector acts as the STA server for the Resource Location. If the elected broker is not receiving STA traffic, all launches through a NetScaler could fail during an LHC event. Check that all connectors are listed as STAs on your StoreFront and NetScaler Gateway servers. Automated detection of STA traffic is on our roadmap for Web Studio! View NetScaler Gateway configurations in StoreFront and ensure all Connectors are listed as STA servers. Review NetScaler appliances and ensure all STAs listed in StoreFront are in the same format in the NetScaler Gateway vServers. 
STA service health can also be monitored in the Gateway vServer. StoreFront not communicating with all Cloud Connectors StoreFront can only contact a subset of connectors in a resource location. This can be because either they are not listed in StoreFront or there is another problem with communication, such as an expired certificate on the connector. StoreFront not communicating with a subset of Cloud Connectors can negatively impact the scalability and performance of an environment during both steady-state and LHC operations. If the elected broker is not receiving StoreFront traffic, all LHC launch attempts may fail. Review the resource feeds in StoreFront to ensure that all connectors are listed. If so, test that StoreFront can communicate with all listed connectors over the port configured in the resource feed. Automated detection of StoreFront traffic is on our roadmap for Web Studio! Add all connectors to the resource feed in StoreFront and fix any communication issues between StoreFront and the connectors. One of the most common communication issues between StoreFront and connectors is an expired certificate on the connector when XML traffic is over port 443. Note: For customers with many connectors, it may be beneficial to configure load-balancing vServers for each resource location to reduce the management overhead and simplify troubleshooting. Review Citrix TIPs: Integrating Citrix Virtual Apps and Desktops service and StoreFront for more information. Summary Correctly configuring your Citrix environment significantly impacts its availability and performance. Review your environments for these potential misconfigurations to keep your business running, no matter what!
  13. If you want to retrieve more than the default 100 rows, use this query (example for connections):
let
    Source = OData.Feed("https://api-us.cloud.com/monitorodata/connections", null, [Headers=[Authorization=GetAccessToken, #"Citrix-CustomerId"="xxxxxxxxx"]])
in
    Source
  14. Overview This article examines the concept of policy labels and gives a few examples of how they might be useful. If you have ever used nFactor authentication, you have most likely been exposed to policy labels. However, they can be used in more places, for example together with responder, rewrite, and content switching policies. What are policy labels? A policy label can be seen as a container to which one or more policies are bound. The policy label is invoked when a policy expression evaluates as true. Why use policy labels? It is always important to have the policies with the most hits as early as possible in the evaluation chain. When you have many policies that require evaluation, it can become hard to understand the flow and predict the result. It might even be risky to add additional policies when required, as you might add them incorrectly. There is also a small amount of latency added when evaluating policy expressions. With only a few policies it doesn't matter much, but with many policy expressions it can start to matter. When you group policies that belong together into policy labels, you can skip evaluating many policies. If the rules are written properly, this lowers the overall resources used for policy evaluation. Another good reason to use policy labels is when several policies should be bound to several services. This limits the number of bindings you need to make, and it also makes it easier to update all services when required: you only need to update the bindings within the policy label, which affects every place where it is used. Examples Example 1: In this example, policy labels are used together with rewrite policies to create a simple method of scrubbing HTTP response headers of unwanted information. There are many HTTP headers that the server might set that leak a lot of information about how the server is set up. 
Some HTTP headers are optional and not required by the client for the applications to work properly, and they can be removed from the response that is sent. If you open the developer tools in a browser, you can inspect which HTTP headers a client receives. We have picked a few header examples that a client should normally not be able to see. Example headers to be scrubbed: Server, X-Powered-By, X-Aspnet-Version, X-Aspnetmvc-Version.

First, create the rewrite actions that will do the HTTP header scrubbing:

add rewrite action scrubb-Server-rwact delete_http_header Server
add rewrite action scrubb-X-Powered-By-rwact delete_http_header X-Powered-By
add rewrite action scrubb-X-Aspnet-Version-rwact delete_http_header X-Aspnet-Version
add rewrite action scrubb-X-Aspnetmvc-Version-rwact delete_http_header X-Aspnetmvc-Version

Next, create the rewrite policies that will do the HTTP header scrubbing:

add rewrite policy scrubb-Server-rwpol "HTTP.RES.HEADER(\"Server\").EXISTS" scrubb-Server-rwact
add rewrite policy scrubb-X-Aspnet-Version-rwpol "HTTP.RES.HEADER(\"X-Aspnet-Version\").EXISTS" scrubb-X-Aspnet-Version-rwact
add rewrite policy scrubb-X-Aspnetmvc-Version-rwpol "HTTP.RES.HEADER(\"X-Aspnetmvc-Version\").EXISTS" scrubb-X-Aspnetmvc-Version-rwact
add rewrite policy scrubb-X-Powered-By-rwpol "HTTP.RES.HEADER(\"X-Powered-By\").EXISTS" scrubb-X-Powered-By-rwact

Now, create the rewrite policy that will select which traffic should pass through the policy label:

add rewrite policy scrubb-HTTP-headers-rwpol true NOREWRITE

Create the policy label; in this example we want the “Transform Name” to be http_res:

add rewrite policylabel scrubb-HTTP-headers-label http_res

Bind the scrubbing policies to the policy label:

bind rewrite policylabel scrubb-HTTP-headers-label scrubb-Server-rwpol 100 NEXT
bind rewrite policylabel scrubb-HTTP-headers-label scrubb-X-Powered-By-rwpol 110 NEXT
bind rewrite policylabel scrubb-HTTP-headers-label scrubb-X-Aspnet-Version-rwpol 120 NEXT
bind rewrite policylabel scrubb-HTTP-headers-label scrubb-X-Aspnetmvc-Version-rwpol 130 NEXT

Let's create a content switch virtual server where the policy label can be bound:

add cs vserver scrubb-HTTP-headers-csvs HTTP 10.1.1.1 80 -cltTimeout 180 -persistenceType NONE

Lastly, let's bind the selection policy and use it to invoke the policy label:

bind cs vserver scrubb-HTTP-headers-csvs -policyName scrubb-HTTP-headers-rwpol -priority 100 -gotoPriorityExpression NEXT -type RESPONSE -invoke policylabel scrubb-HTTP-headers-label

To reuse the same policy label for several HTTP services, add the last policy bind with the invocation for each of the virtual servers where it is required. If you later need to modify which HTTP headers are being scrubbed, you modify the policy label, and the change is applied to all your HTTP services simultaneously. Example 2: In this example, policy labels are used to group content switch policies instead of binding all policies directly on the content switch. Two applications are consolidated under the same content switch virtual server. The plex application requires traffic to some URLs to be directed to a specific load-balanced virtual server that handles traffic requiring authentication, while all other traffic is sent to the normal load-balanced virtual server. The web application is a web hosting service that only allows traffic destined for specific sites to be forwarded; the sites are all hosted on the same load-balanced virtual server. 
First, we create the Content Switch virtual server:

add cs vserver webservices-csvs HTTP 10.1.1.100 80 -cltTimeout 180 -persistenceType NONE

Create the content switch actions and policies for the plex application:

add cs action plex-csact -targetLBVserver plex-lbvs
add cs action plexauth-csact -targetLBVserver plexauth-lbvs
add cs policy plex-csvs -rule "HTTP.REQ.HOSTNAME.SERVER.EQ(\"plex.example.local\")"
add cs policy plex-all-csvs -rule HTTP.REQ.IS_VALID -action plex-csact
add cs policy plex-admin-csvs -rule "HTTP.REQ.URL.STARTSWITH(\"/admin/\")" -action plexauth-csact
add cs policy plex-upload-csvs -rule "HTTP.REQ.URL.STARTSWITH(\"/upload/\")" -action plexauth-csact

Create and bind policies to the plex policy label:

add cs policylabel plex-plabel HTTP
bind cs policylabel plex-plabel plex-upload-csvs 100
bind cs policylabel plex-plabel plex-admin-csvs 110
bind cs policylabel plex-plabel plex-all-csvs 120

Create the content switch actions and policies for the web application:

add cs action web-csact -targetLBVserver web-lbvs
add cs policy web-cspol -rule "HTTP.REQ.HOSTNAME.SERVER.EQ(\"web.example.local\")"
add cs policy web-site1-cspol -rule "HTTP.REQ.URL.STARTSWITH(\"/site1/\")" -action web-csact
add cs policy web-site2-cspol -rule "HTTP.REQ.URL.STARTSWITH(\"/site2/\")" -action web-csact
add cs policy web-site3-cspol -rule "HTTP.REQ.URL.STARTSWITH(\"/site3/\")" -action web-csact

Create and bind policies to the web policy label:

add cs policylabel web-plabel HTTP
bind cs policylabel web-plabel web-site1-cspol 100
bind cs policylabel web-plabel web-site2-cspol 110
bind cs policylabel web-plabel web-site3-cspol 120

Bind the selection policies and invoke the policy labels on the content switch virtual server:

bind cs vserver webservices-csvs -policyName plex-csvs -priority 100 -gotoPriorityExpression USE_INVOCATION_RESULT -invoke policylabel plex-plabel
bind cs vserver webservices-csvs -policyName web-cspol -priority 110 -gotoPriorityExpression USE_INVOCATION_RESULT -invoke policylabel web-plabel

Note that this is a simple example to show how policy labels can be used; it could be written in different ways without using so many policies.
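After configuring Example 2, you can verify the bindings and watch policy hit counters from the NetScaler CLI. A quick sketch, assuming the object names used above (the exact output fields vary by firmware version):

```
show cs vserver webservices-csvs
show cs policylabel plex-plabel
show cs policylabel web-plabel
show cs policy plex-admin-csvs
```

The hit counters reported for each policy help confirm that the most frequently matched policies sit as early as possible in the evaluation chain, as recommended above.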
  15. Hello Team, Really good lab, simple and objective ! regards
  16. Overview Here are the configuration steps for setting up an ADC, configuring SSL Forward Proxy, and SSL Interception using the latest Citrix ADC marketplace template. The URL Redirection to Secure Browser capability of the ADC enables administrators to define specific website categories to be redirected from the local browser to Secure Browser automatically. The Citrix ADC acts as an intermediate proxy to do the interception between local browsing and the internet, thus achieving web isolation and protecting the corporate network. This capability increases security without compromising user experience. Conceptual Architecture Scope This proof-of-concept guide describes the following: Obtain Secure Browser Trial Account Set up ADC in Azure Set up Citrix ADC appliance as proxy Set up SSL Interception Set up Rewrite Policy and Actions Deployment Steps Section 1: Obtain Secure Browser Trial Account (Reference: Remote Browser Isolation service documentation, /en-us/citrix-remote-browser-isolation) Request a Secure Browser trial Navigate to your Citrix Cloud account and enter user name and password Click Sign In. If your account manages more than one customer, select the appropriate one Double-click the Secure Browser Tile. If you know who your account team is, then reach out to them to get the trial approved. If you are unsure who your account team is, then continue to the next step. Click Request a Call Enter your details and in the Comments section specify “Remote Browser Isolation service trial.” Click Submit. Once you have the Secure Browser trial approved, refer to the Publish a Secure Browser section of the Citrix Doc to publish a Secure Browser app. 
Enable URL Parameters In your Citrix Cloud subscription, double-click the Secure Browser tile On your published browser, called “browser” in this example, click the three dots and select Policies Enable URL Parameters policy on your published browser Section 2: Set up ADC in Azure The ADC can be set up in any cloud of choice. In this example Azure is our Cloud of choice. Configure an ADC instance Navigate to All Resources and click + Add button, search for Citrix ADC Select Citrix ADC template Select the software plan according to your requirements (in this example Bring Your Own License) Click Create Configure NIC Card Navigate to All Resources and select the NIC card for the ADC instance Select IP Configurations, make a note of the ADC management address Enable IP Forwarding Settings, save the changes. Configure Virtual IP Click Add, set virtualip as the name of the new config Select Static and add new IP address after the management address Enable Public address option and create a new public IP address Save the changes Set up the FQDN on the client Navigate to the Public IP address resource created for the virtualip configuration Click Configuration, and add a DNS label (in this example, urlredirection.eastus.cloudapp.azure.com) Set up Networking rules Add the following Networking rules At this point the ADC instance in Azure is set up Section 3: Set up Citrix ADC appliance as proxy Set up the ADC as a proxy to route the traffic from the client browser to the Internet. Log in to ADC management console Navigate to the Citrix ADC management console by inputting the instance's public IP address in the search bar of your browser Log in to the console by inputting the user name and password you set up in the previous steps From the initial configuration screen, click Continue Upload the licenses Navigate to System > Licenses > Manage Licenses Upload the necessary licenses for ADC. Reboot the server after uploading both licenses. 
After the reboot, log in to the management console again Navigate to System > Settings > Configure Modes Only two options must be enabled: MAC-based forwarding and Path MTU Discovery Navigate to System > Settings > Configure Basic Features Select: SSL Offloading, Load Balancing, Rewrite, Authentication, Authorization, and Auditing, Content Switching, and Integrated Caching Navigate to System > Settings > Configure Advanced Features Select: Cache Redirection, IPv6 Protocol Translation, AppFlow, Reputation, Forward Proxy, Content Inspection, Responder, URL Filtering, and SSL Interception Set up the NTP Server Navigate to System > NTP Servers > Add Create a server, for example pool.ntp.org Enable NTP when prompted and set the server to enabled Save the configuration from the management portal save action Open an SSH session to the ADC management address and log in with the credentials you used while provisioning the ADC from Azure Set up TCP Profile and vServer Get the virtualip from the steps in Section 2 and use it in the commands (in this example 10.1.0.5) Run the following commands, substituting your virtualip for the sslproxy address: To add the TCP profile:

add ns tcpProfile proxy-tcpprofile01 -dynamicReceiveBuffering ENABLED -KA ENABLED -mptcp ENABLED -mptcpDropDataOnPreEstSF ENABLED -mptcpSessionTimeout 360 -builtin MODIFIABLE

To add the virtual server:

add cs vserver sslproxy01 PROXY 10.1.0.5 8080 -cltTimeout 360 -tcpProfileName proxy-tcpprofile01 -persistenceType NONE
bind cs vserver sslproxy01 -lbvserver azurelbdnsvserver
add netProfile proxy-netprofile01 -srcIP 10.1.0.5 -srcippersistency ENABLED -MBF ENABLED -proxyProtocol ENABLED -proxyProtocoltxversion V2
set cs vserver sslproxy01 -netProfile proxy-netprofile01
set ssl vserver sslproxy01 -sslProfile ns_default_ssl_profile_frontend
save ns config

To change the cache settings, go back to the management session in the browser Navigate to Optimization > Integrated Caching Navigate to Settings > Change cache settings Set Memory Usage Limit to 250 MB and click OK 
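Before moving on, you can confirm from the same SSH session that the proxy virtual server and profiles were created. A quick check, assuming the names used in this example:

```
show cs vserver sslproxy01
show ns tcpProfile proxy-tcpprofile01
show netProfile proxy-netprofile01
```

The virtual server should report state UP and type PROXY on port 8080 before you point a client browser at it.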
Set up the client for URL Redirection On a client, for example Firefox Configure your browser proxy to the virtualip, public IP, or FQDN on port 8080 that you configured in Section 2 (for example, urlredirection.eastus.cloudapp.azure.com:8080) Now that we have an ADC set up, test connectivity to any website from the browser with the ADC acting as a proxy. Section 4: Set up SSL Interception SSL interception uses a policy that specifies which traffic to intercept, block, or allow. Citrix recommends that you configure one generic policy to intercept traffic and more specific policies to bypass some traffic. References: SSL Interception URL categories Video example of configuration Create an RSA Key Navigate to Traffic management > SSL > SSL Files > Keys Select Create RSA Key Select the key file name and required key size Once the key is created, download the .key file for later use Create a Certificate Signing Request (CSR) Navigate to Traffic management > SSL > SSL Files > CSRs > Create Certificate Signing Request (CSR) Name the request file, for example smesec_req1.req Click Key Filename > Appliance; the key file name is the one created in the previous step, in this example smesec_key1.key After selecting the key, continue to fill in the required fields: Common Name, Organization Name, and State or Province Click Create Create a Certificate Navigate to Traffic management > SSL > SSL Files > Certificates > Create Certificate Give the certificate a name and choose both the Certificate Request File (.req) and the Key File name (.key) created in the previous steps Click Create Once the certificate is created, download the .cert file for later use Create SSL INTERCEPT policy Navigate to Traffic management > SSL > Policies Click Add Give the policy a name and select the INTERCEPT action Expression to intercept news: client.ssl.detected_domain.url_categorize(0,0).category.eq("News") Click Create To bind the Intercept policy to the virtual server navigate to Security > SSL Forward 
Proxy > Proxy Virtual Servers Select the virtual server, in this example sslproxy01 Select add SSL Policies and click No SSL Policy Binding Bind the intercept policy: Create SSL BYPASS policy Navigate to Traffic management > SSL > Policies Click Add Give the policy a name and select the NOOP action (there is no BYPASS option in this dialog; see the next step) Expression for the bypass policy: CLIENT.SSL.DETECTED_DOMAIN.CONTAINS("cloud") Navigate to Security > SSL Forward Proxy > SSL Interception Policies Select the policy to edit it Change Action from NOOP to BYPASS Click OK Double-check that the Action is now BYPASS Go back to Traffic management > SSL > Policies to double-check the change To bind the Bypass policy to the virtual server navigate to Security > SSL Forward Proxy > Proxy Virtual Servers Double-click the virtual server, in this example sslproxy01 Select add SSL Policies and click SSL Policy Binding Bind the bypass policy > Add Click Bind Create SSL Profile Navigate to System > Profiles > SSL Profile > Add Create the profile by giving it a name, in this example smesec_swg_sslprofile Check the box to enable SSL Sessions Interception, then click OK Click OK to create the SSL Profile Next, install the cert-key pair. Make sure you have the cert-key pair in .pfx format first. See the following step for guidance on how to generate a .pfx file from the .cert and .key files that you previously downloaded. 
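The GUI steps above can also be expressed from the CLI. A sketch, reusing the intercept and bypass expressions from this section; the policy names here are hypothetical, and the bindings assume the sslproxy01 virtual server from the earlier steps:

```
add ssl policy news-intercept-pol -rule "client.ssl.detected_domain.url_categorize(0,0).category.eq(\"News\")" -action INTERCEPT
add ssl policy cloud-bypass-pol -rule "CLIENT.SSL.DETECTED_DOMAIN.CONTAINS(\"cloud\")" -action BYPASS
bind ssl vserver sslproxy01 -policyName news-intercept-pol -priority 100 -type INTERCEPT_REQ
bind ssl vserver sslproxy01 -policyName cloud-bypass-pol -priority 10 -type INTERCEPT_REQ
```

Giving the bypass policy a lower priority number means it is evaluated before the generic intercept policy, matching the recommendation to configure one generic intercept policy and more specific bypass policies.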
Prepare cert-key pair Start by installing OpenSSL Add the openssl installation path to the system environment variables From PowerShell, run the command: openssl pkcs12 -export -out smesec_cert1.pfx -inkey smesec_key1.key -in smesec_cert1.cert Bind an SSL Interception CA Certificate to the SSL Profile Navigate to System > Profiles > SSL Profile Select the profile created previously Click + Certificate Key Click Install Choose the .pfx file prepared previously Create a password (you need it later) Click Install Bind the SSL Profile to the virtual server Navigate to Security > SSL Forward Proxy > Proxy Virtual Servers Select the virtual server, in this example sslproxy01 Click to edit SSL Profile Choose the SSL profile created previously, in this example smesec_swg_sslprofile Done Section 5: Set up Rewrite Policies and Actions A rewrite policy consists of a rule and an action. The rule determines the traffic on which rewrite is applied, and the action determines the action to be taken by the Citrix ADC. The rewrite policy is necessary for URL redirection to Secure Browser to happen based on the category of the URL entered in the browser, in this example "News". 
Reference Create rewrite policy and action Navigate to AppExpert > Rewrite > Policy Click Add Create the policy by naming it, cloud_pol in this example, and use the expression: HTTP.REQ.HOSTNAME.APPEND(HTTP.REQ.URL).URL_CATEGORIZE(0,0).CATEGORY.EQ("News") Click Create Create the action in an SSH session (PuTTY, for example) Run the following command: add rewrite action cloud_act REPLACE_HTTP_RES q{"HTTP/1.1 302 Found" + "\r\n" + "Location: https://launch.cloud.com/<customername>/<appname>?url=https://" + HTTP.REQ.HOSTNAME.APPEND(HTTP.REQ.URL.PATH) + "\r\n\r\n"} Bind rewrite policy to virtual server Back in the ADC management console Navigate to AppExpert > Rewrite > Policy Go to the policy cloud_pol and change the action to cloud_act (the one created previously) To choose the type of the rewrite policy navigate to Security > SSL Forward Proxy > Proxy Virtual Servers Select “+ Policies” Policy: Rewrite Type: Response Select the policy created, in this example cloud_pol Priority: 10 Bind Click Done Save configuration Bind Certificate key to Profile Navigate to System > Profiles > SSL Profile Select the profile created, for example smesec_swg_sslprofile Double-click + Certificate Key Select the certificate key, for example smesec_cert_overall Click Select Click Bind Click Done Save configuration Import the certificate file to the browser Upload the certificate into Firefox (per our example with News category websites) Go to Options in your browser of choice, Firefox in this example Search “certs” > click “View Certificates” In the Certificate Manager window click “Import…” Browse for your cert and click Open, smesec_cert1.cert in this example Input the password you created when making the certificate Your certificate authority must be installed properly Demo News websites from the local browser are redirected to Secure Browser automatically. See the following demo Summary In this PoC guide, you have learned how to set up Citrix ADC in Azure and configure SSL Forward Proxy and SSL Interception. 
This integration allows the dynamic delivery of resources by redirecting browsing to the Remote Browser Isolation service, protecting the company network without sacrificing user experience.
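For admins who prefer the ADC shell over the GUI, the rewrite configuration in Section 5 can also be scripted. A hedged sketch: the action command is the one given in the guide, the policy line assumes the standard `add rewrite policy <name> <rule> <action>` syntax, and `<customername>`/`<appname>` remain placeholders for your Citrix Cloud tenant.

```
# Create the action first so the policy can reference it
add rewrite action cloud_act REPLACE_HTTP_RES q{"HTTP/1.1 302 Found" + "\r\n" + "Location: https://launch.cloud.com/<customername>/<appname>?url=https://" + HTTP.REQ.HOSTNAME.APPEND(HTTP.REQ.URL.PATH) + "\r\n\r\n"}

# Policy rule: match requests whose URL categorizes as "News"
add rewrite policy cloud_pol "HTTP.REQ.HOSTNAME.APPEND(HTTP.REQ.URL).URL_CATEGORIZE(0,0).CATEGORY.EQ(\"News\")" cloud_act

save ns config
```

Binding the policy to the proxy virtual server as a Response-type rewrite policy is still done as shown in the GUI steps above.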
  17. Overview This guide was written for people with experience running and/or building a traditional, customer managed virtualization system based on Citrix Virtual Apps and Desktops technology. It is intended to help Citrix customers modernize their system to take advantage of the overwhelming benefits provided by Google Cloud and Citrix Cloud DaaS. It provides guidance on how to migrate a customer managed deployment of Citrix virtualization technology to Google Cloud and Citrix DaaS, dramatically simplifying and modernizing the system in the process. For many existing Citrix customers, migration is a journey, and many will run parallel systems for some time to facilitate migration with minimal disruption. These customers will likely implement something that resembles the Cloud Migration design pattern. Migration and Modernization Strategy Citrix customers of all shapes and sizes are moving to cloud-based infrastructure and services, often as part of digital transformation and/or IT modernization initiatives. Many of these organizations are turning to Google Cloud and Citrix’s cloud services. This shift is often driven by the need to do more than lift and shift everything to the cloud – they’re looking to transform their people and processes as well as their infrastructure. Citrix and Google provide products, services, and guidance that meet customers where they are on their digital transformation journey. They also support and even accelerate the transformation process across all ‘planes’ of technology in an organization: people, processes, and technology. In the Google world, customer managed Citrix virtualization systems are often referred to as 'VDI', and the migration and modernization of such systems typically falls into their infrastructure modernization pillar. 
Google provides multiple service offerings which can help customers migrate and modernize these 'VDI' workloads for Google Cloud, led by either Google, Citrix, or partner professional services organizations. These engagements typically follow a process flow that starts with assessing existing infrastructure, then planning priorities and preparing foundations, migrating workloads in waves, and finally optimizing workloads and operations to get more value out of Google Cloud. Google also provides a couple of toolsets which are often used in these engagements, including the StratoZone assessment, analysis, and planning tool, Google Migrate for Compute Engine, and VMware HCX for VMware Engine. More information on the overall Google Cloud migration journey can be found in Migration to Google Cloud: Getting Started. Migration and Modernization Process Overview As you migrate and modernize your Citrix virtualization system and move it to Google Cloud, you’ll find that there are a number of different components or functions that you’ll be touching along the way. With each component or function, we’ll be moving from a customer managed layer to cloud managed services. Citrix and Google provide a variety of tools and services that are designed to dramatically simplify the process. 
At a glance, they are:

Session brokering and administration: migrate from customer managed Citrix Virtual Apps and Desktops (CVAD) to Citrix DaaS (CVADS) using the Citrix Automated Configuration tool.
Workspace environment management: migrate from Citrix Workspace Environment Manager (WEM) to the Citrix Workspace Environment Management service using the Citrix WEM Migration toolkit.
User interface (UI) services: migrate from Citrix StoreFront to the Citrix Workspace service (or run in parallel); migration tooling is NA - initial configuration only.
HDX session proxy: migrate from Citrix ADC/Gateway to the Citrix Gateway service (or run in parallel); migration tooling is NA - initial configuration only.
VDA infrastructure: migrate from self-managed infrastructure to Google Cloud Compute Engine (GCE) or Google Cloud VMware Engine (GCVE) using the Citrix Image Portability Service or VMware HCX for VMware Engine.
Application infrastructure: migrate from self-managed infrastructure to Google Cloud Compute Engine (GCE) or Google Cloud VMware Engine (GCVE) using Google Migrate for Compute Engine or VMware HCX for VMware Engine.
Authentication and Authorization: migrate from Microsoft Active Directory to Google Cloud Identity or another SAML/OIDC identity provider using Google Cloud Directory Sync or other tools.

At the beginning of this process, most customers will likely be running their Citrix virtualized workloads, and the applications running inside of them, as virtual machines on top of self-managed compute, storage, and networking infrastructure. In most environments, these services are provided by a traditional hypervisor and management stack based upon VMware, Microsoft, Nutanix, or Citrix hypervisor platforms. Google provides a broad variety of different tools and services which can be used to host any application on Google Cloud. Citrix VDAs – which run as virtual machines – can be run on either Google Cloud Compute Engine (GCE) or Google Cloud VMware Engine (GCVE). The migration and modernization process can be broken down into three generalized phases and a couple of parallel workstreams. 
These can be summarized as: Phase 1: Migrate to Citrix Cloud services and prepare Google Cloud to receive workloads Workstream 1a – migrate virtualization infrastructure to Citrix Cloud services Workstream 1b – set up landing zone and enterprise networking on Google Cloud Phase 2: Build Citrix Cloud resource location on Google Cloud Phase 3: Migrate applications and workloads to Google Cloud Workstream 3a – migrate app infrastructure to Google Cloud Workstream 3b – port VDA images to Google Cloud In the sections below, we're going to outline these phases/workstreams, and provide some pointers to external material that will provide additional details. That said - we'll most likely miss something that's important to you. We'd appreciate the opportunity to learn from your feedback! What did we miss? What can we do better? Please start the conversation with an email to our Citrix on Google SME working group. Thank you in advance for allowing us to be part of your journey! Phase 1: Migrate to Citrix Cloud services and prepare Google Cloud to receive workloads The first phase includes two workstreams, and these workstreams can be run in parallel. They are: Workstream 1a – migrate virtualization infrastructure to Citrix Cloud services. This process is outlined in Deployment Guide: Migrating Citrix Virtual Apps and Desktops from on-premises to Citrix Cloud. This deployment guide will help the reader migrate from a typical customer managed, on-premises deployment architecture to an architecture serviced by Citrix Cloud services. See example before and after diagrams below: Workstream 1b – set up landing zone and enterprise networking on Google Cloud. This includes extending Active Directory services into Google Cloud. One of the foundational design/build activities Google will guide customers through is setting up the 'shell' of the customer's infrastructure on Google Cloud, often described as a landing zone. 
A landing zone provides you with a place to 'land' workloads on Google Cloud, and consists of at least one Google Cloud Project and basic networking services. While creating a basic landing zone and setting up a functional network on Google Cloud is simple, enterprise requirements often force more complicated configurations. You can set up a basic landing zone on Google Cloud by completing the first 5 steps detailed in the Getting Started guide, but for more complex scenarios you've got a bit more work to do. For insights into how to create a landing zone for enterprise environments, you can start with the links below or contact your Google Cloud Customer Engineer for more targeted guidance. Links Migration to Google Cloud: Getting Started YouTube Video Series - Kickstarting your Migration to Google Cloud Phase 2: Build Citrix Cloud resource location on Google Cloud This phase leverages the Citrix Cloud services plus the landing zone and networking established in phase 1 to create a Citrix Cloud resource location on Google Cloud. This includes preparing a Google Cloud project and creating an IAM service account for hosting VDAs on Google Cloud, installing Cloud Connectors, and defining the hosting connection and resources used for VDA provisioning and fleet management. Detailed instructions on how to build a resource location on Google Cloud can be found in the Getting Started with Citrix DaaS on Google Cloud Deployment Guide. The Getting Started POC Guide details 8 steps that need to be completed to have a functional Citrix Cloud Resource Location on Google Cloud Compute Engine (GCE). The details of the steps will differ if the resource location will be deployed on Google Cloud VMware Engine (GCVE), but the outcomes are the same. If you're migrating an existing, customer managed control plane to GCE, you'll essentially be completing steps 1-5 below as part of Workstream 1b outlined above. 
The full 8 steps detailed in the Getting Started Guide are: Step 1: Setup a Google Cloud Project Step 2: Configure Network Services Step 3: Create Virtual Machines Step 4: Configure Access to VM Consoles Step 5: Deploy Active Directory Step 6: Initialize the Citrix Cloud Resource Location Step 7: Configure Citrix DaaS Step 8: Validate the Configuration Once you've got a functional Citrix Cloud resource location running on Google Cloud, you can focus on completing the third phase of migration, which we'll describe for you below. Phase 3: Migrate applications and workloads to Google Cloud This phase also includes a couple of different workstreams which can run in parallel. In a Citrix virtualization environment, the 'workload' is often considered to be the VDA, as the VDA is where the UI for a given application is executed. Most applications, however, require additional infrastructure to run. This could be as simple as a file share where documents and data are stored, as complicated as the first and second tier of a client/server application, or anywhere in between. One long-standing leading practice in the Citrix virtualization world is to place your VDAs (where the application(s) run) as close as you can to the infrastructure the application is running on. While this isn't always strictly necessary, it often goes a long way towards optimizing the performance of the application(s) in question. We break this phase down into two separate workstreams because the tooling to migrate the 'applications' often differs from the tooling used to migrate the 'workloads'. The first workstream we've called "Workstream 3a - migrate app infrastructure to Google Cloud". This workstream could be relatively simple or quite involved depending upon the complexity of your environment. To complete this workstream, you'll need to have identified the components and dependencies of the application and supporting infrastructure. 
You'll also need to know what your target IaaS service will be on Google Cloud (Compute Engine or VMware Engine), and have prepared either or both to receive workloads. With this information in hand, you can then begin utilizing the migration tools available to move the app and supporting infrastructure to your target IaaS service. You can begin your learning journey on how to use these tools at the links below. Links Google Migrate for Compute Engine VMware HCX for VMware Engine The second workstream we've called Workstream 3b - port VDA images to Google Cloud. For most customers, a lot of work goes into creating and maintaining their 'golden' images, which are essentially the source from which their various VDA estates are cloned and maintained. These images often end up being bespoke (to each catalog of VDAs running the same application stack) and very customized for a specific environment. These golden images are often relatively complex as they include an operating system, patches and updates, supporting software and management or reporting agents, application binaries, helper applications, and more. They also change over time as the components of the stack evolve and change. Some customers build and update these golden images by hand, utilizing nothing more than a simple change log and manual change management process. Other customers invest in automation and tooling to create and manage their golden images, ultimately reducing the manual efforts required and decreasing the chance for inconsistencies and errors to develop. Some customers also incorporate Citrix's App Layering technology into their golden image management process. Regardless, most will leverage one of Citrix's provisioning and image management tools (MCS or PVS) to deploy and update their VDA fleet. 
In general, the more you've already invested in automation and tooling, the more options you'll have for completing this workstream, because you've likely already prepared to handle changes in the underlying compute platform. For a glimpse of what's possible with modern automation tooling for Windows, check out the work of Citrix Technology Professional Trond Eirik Haavarstein and his Automation Framework. The tooling and technique recommendations we provide here will differ depending upon your source compute environment, your target compute environment (GCE or GCVE), and how much you've invested in automating your image builds and updates. They'll also be influenced by your desire to modernize the workload (by, say, updating the operating system to a new version). We'll outline some of the available tools and techniques for porting your golden VDA images to Google Cloud below. Citrix Image Portability Service Citrix has invested heavily in creating a modern web service capable of porting VDA images from various source compute environments to various target environments, including Google Compute Engine (GCE). They call this the Image Portability Service, and as of the time of writing it is in Tech Preview. It's designed to work in concert with other Citrix services and tooling (such as the Citrix Automated Configuration tool) to allow customers to integrate golden image management into their preferred developer workflows. It currently works with source golden images used by PVS or MCS running on VMware, and outputs an image that can be used by MCS on Google Compute Engine. It is aware of many of the unique requirements for an image to be used by Citrix virtualization (such as ListOfDDCs for the Citrix VDA, machine identity, KMS/Office re-arming, etc.) and handles them appropriately for you. Refer to the documentation link provided for more information. 
Google Migrate for Compute Engine As introduced earlier, Google Migrate for Compute Engine is a powerful tool customers leverage to migrate virtual machines to GCE in bulk. Google Migrate could potentially be used to facilitate a one-time migration of your golden image virtual machines from a source compute platform to GCE. Once on GCE, the migrated machine should be usable as the golden image instance for MCS on GCE, though this process has not been tested by Citrix Engineering as of the time of this writing. VMware HCX for VMware Engine For customers who choose to deploy their workloads on Google Cloud using the VMware Engine service (GCVE) as the compute environment, Google offers VMware HCX for VMware Engine to help migrate their workloads. HCX could potentially be used to facilitate a one-time migration of your golden image virtual machines from your on-premises VMware environment into GCVE. Once on GCVE, the migrated machine should be usable as the golden image instance for MCS on GCVE, though this process has not been tested by Citrix Engineering as of the time of this writing.
  18. Overview This guide is designed to walk you through the technical prerequisites, use cases, and configuration of App protection policies for your Citrix Virtual Apps and Desktops or Citrix DaaS deployment. App protection is an add-on feature for Citrix Workspace app (CWA) that provides enhanced security when using Citrix published resources. Two policies provide anti-keylogging and anti-screen-capture capabilities in a Citrix HDX session. System Requirements The App protection policies feature requires specific versions of Citrix Workspace app, Citrix infrastructure components (for on-premises deployments), Virtual Delivery Agents (VDA), operating system platforms, and Citrix licenses (for both Citrix Virtual Apps and Desktops and DaaS), and supports various endpoints. Refer to the system requirements in the product documentation for the most up-to-date requirements. Licenses Valid Citrix licenses are required: Citrix Virtual Apps and Desktops App protection add-on license For Citrix DaaS, the App Protection feature is included as a part of certain Citrix Cloud service packages, and licenses are provided directly on Citrix Cloud. On-premises Citrix Virtual Apps and Desktops Infrastructure The following server components are required only for on-premises deployments to use Citrix Web Studio. For Citrix DaaS deployments, skip to the Workspace Installation section. StoreFront 2103 or higher Delivery Controller 2103 or higher Installation - Licensing Download the license file and import it into the Citrix License Server alongside an existing Citrix Virtual Desktops license. Use the Citrix Licensing Manager to import the license file. For more information, see Install licenses. Installation - Delivery Controller On your Delivery Controller, restart the Broker Service to enable the App Protection feature license in your environment. Open Citrix Web Studio. Select Settings, and turn on the Enable XML trust toggle. Select Delivery Groups, select a delivery group, then click Edit. 
Click App Protection, select the Anti-keylogging and Anti-screen capturing checkboxes, then click Save. Installation - Citrix Workspace app Include the App Protection component using one of the following methods: For Windows: Starting with Citrix Workspace app version 2212, the App Protection component is installed by default during the Citrix Workspace app installation. For more information on installing the App Protection feature with Citrix Workspace app versions prior to 2311, see here. For macOS: App protection requires no specific installation or configuration on Citrix Workspace for Mac. For Linux: When you install the Citrix Workspace app using the tarball package, the following message appears: Do you want to install the App Protection component? Warning: You can’t disable this feature. To disable it, you must uninstall Citrix Workspace app. For more information, contact your system administrator. [default $INSTALLER_N]: Enter Y to install App Protection. Restart your endpoint. Testing - Citrix Workspace app for Windows The following steps provide guidance for anti-screen-capture testing only. To test anti-keylogging protection, we recommend consulting with your own security team. Launch Citrix Workspace app and log in. Click a protected virtual app or virtual desktop (for example, Admin Desktop) and launch the HDX session. If you don't see protected resources, you are probably using a web store or an unsupported Citrix Receiver / Citrix Workspace app. (Optional) If App protection is not installed, you get the following popup when trying to launch a protected virtual app or desktop. Click Yes. Try to perform a screen capture and confirm you see a blank screen (expected behavior). 
When testing anti-keylogging and anti-screen-capture protection, be aware of the expected behavior: Anti-keylogging - This feature is active only when a protected window is in focus Anti-screen-capture - This feature is active when a protected window is visible (not minimized) Another simple method to test the anti-screen-capture protection is to use one of the popular conference tools (GoToMeeting, Microsoft Teams, Zoom, or Slack). Screen sharing should not be possible when protection is enabled. References Product Documentation - Citrix Workspace app Product Documentation - App protection
  19. Overview Citrix App Layering is a technology that allows you to simplify the management of virtual machine images. App Layering enables you to create a virtual desktop for users, a complete virtual machine for Citrix Machine Creation Services (MCS), or an entire virtual disk to use with Citrix Provisioning (PVS). Citrix App Layering creates layers that are containers for the file system objects and registry entries unique to that layer. These layers are virtual disks, created and updated independently of each other, and are compiled into an image. The layer types used in this guide are: OS Layer: The Windows OS is installed in the OS layer. You can reuse the same OS layer with all compatible platforms and app layers. Platform Layer: Infrastructure-specific software and tools are installed into this layer, for example specific on-premises or cloud tools or antivirus software. If you use more than one hypervisor, you can create a platform layer for each part of your infrastructure. App Layer: Applications get installed in the App Layers. Typically a single application is installed on each App Layer, though you can include more. This Proof of Concept guide is designed to help you get started with Citrix App Layering within a Microsoft Azure environment. The guide walks you through the following to begin using Citrix App Layering: Install the Citrix App Layering appliance in Microsoft Azure. Access the Citrix App Layering management interface. Set up an SMB file share. Configure the Azure Deployments connector configuration. Create an OS Layer. Create a Platform Layer. Create an App Layer. Publish the layered image. Create a machine catalog and delivery group from the new image. Architecture Overview The Citrix App Layering appliance, also known as the Enterprise Layer Manager (ELM), creates and manages layers which can be assigned to users or machines. 
With the Citrix App Layering appliance, administrators can create different layers such as application layers, OS layers, and platform layers, which are kept in a repository managed by the appliance. Administrators can then create a layered image from a specific OS layer and the application layers required by the end users. During the layered image creation process, the different layers are merged to form a single master image, which can be used by Citrix Machine Creation Services. Once the machine catalog is created, administrators can provision machines which can be assigned to the users through the delivery group. Users can then launch the desktops when logged into Citrix Workspace. For additional information on Citrix App Layering, review the Citrix App Layering Reference Architecture. Prerequisites Microsoft Azure Subscription A Resource Group set up for the POC. Visit here for more information on creating an Azure Resource Group. Resource Group Shared Image Gallery Resource Group Disk Access Disk Access Private Endpoint connection Azure PowerShell Module An SMB File Share Microsoft Active Directory Supported internet browser for management console access (Edge, Chrome, Firefox) Windows 11 21H2 OS A Citrix DaaS or Citrix Virtual Apps and Desktops entitlement Current Citrix Virtual Delivery Agent (VDA) installer for Windows Citrix account to download all software Deployment Steps Install App Layering Appliance Log in to Citrix downloads and download the latest version of the App Layering installation package for your hypervisor. We are using Microsoft Azure for our deployment, so we download the Microsoft Azure Appliance Installation Package. Extract the zip file to a folder on your local drive. Open Windows PowerShell and confirm that the Azure PowerShell module is installed by running the Get-InstalledModule -Name Az command. 
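If Get-InstalledModule reports that the Az module is missing, it can be pulled from the PowerShell Gallery before running the deployment script. A minimal sketch using standard PowerShellGet cmdlets; the CurrentUser scope is an assumption suitable for a single admin workstation:

```powershell
# Install the Az module for the current user if it is not already present.
if (-not (Get-InstalledModule -Name Az -ErrorAction SilentlyContinue)) {
    Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
}

# Confirm the module is now available before running AzureELMDeploymentV7.ps1.
Get-InstalledModule -Name Az | Select-Object Name, Version
```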
Open PowerShell, browse to the folder where the App Layering file was extracted, and run the installation script: AzureELMDeploymentV7.ps1 Enter R to choose Run Once. Enter the hostname for the appliance at the DeploymentName prompt. Choose your available Azure environment to install the appliance. By default, AzureCloud is selected. When prompted, sign in to your Azure subscription. Follow the prompts to enter the subscription name. Enter the resource group name where the appliance is installed, and hit Enter. Enter the storage account name if one exists. By default, a storage account is created if one does not exist. Enter the Azure location where the appliance is hosted, such as East US. Choose the virtual network to be used. In this setup, we are choosing our existing virtual network. Choose a subnet. In our case, default. Provide an IP address if using a static IP. In our case, we are using Dynamic, so hit Enter. Provide a VM size for the appliance. For our example, we are using the Standard_DS4_v2. Enter the user name for the appliance. Enter the password for the appliance. You are prompted to provide the location of the VHD file for the ELM appliance. Browse to the location, select the unidesk_azure_system VHD file, and click Open. The ELM appliance will now be created in Azure. Depending on your local connection, this process can take up to 60 minutes. When completed, the script output is as seen in the following screenshot: Configure App Layering Appliance Access App Layering Appliance Connect to the App Layering appliance from a machine in your Azure subscription by entering the IP address that you assigned earlier in a web browser. Enter the user name administrator and password Unidesk1, then click Login. Accept the EULA, then click Continue. Enter a new default password and confirm the new password, then click Save. The Getting Started with App Layering page loads. Create SMB File Share Connect via RDP to the virtual machine where the SMB share will be created. 
Create a file folder and open the folder properties. Click Sharing, then select Share. Add an administrator account for App Layering to the share, give it the Read/Write permission level, then click Share. Configure SMB Share on Appliance Return to the App Layering management screen, and select the Connect hyperlink on step 1. Click Edit on the Network File Share screen. Enter the SMB file share path, Username, and Password to access. Select Confirm and Complete. Click Save. Configure Azure Resource Manager (ARM) Templates As of App Layering 2211, all Azure resources created by the App Layering Azure Deployments connector are created by deploying a user-specified ARM template. For more information on ARM templates, refer to the Azure documentation here and the Citrix App Layering Azure Deployment documentation here. Create Azure Template Spec For our POC, we use the Citrix-provided Starter Templates that can be used with the Azure Deployments connector. Within the Azure Resource Group you created for the POC, create a Template Spec. Enter the template name (CacheDisk), confirm the Subscription and Resource Group details, enter a version number, then click Next: Edit Template. Copy the Cache Disk Starter Template code from here. Paste the copied code into the Edit Template screen, then click Review + create. Click Create. Repeat these steps for each of the remaining Starter Templates (Boot Image, Machine, and Layered Image). Configure Azure Connector Configuration The new Azure Deployments connector does not prompt for credentials within the Citrix App Layering management console and also no longer requires an Azure App Registration/Service Principal. Instead, the ELM must be assigned a managed identity within Microsoft Azure. Create User Assigned Managed Identity Sign in to your Azure portal. Search for and go to Managed Identities. Click + Create. Select your Subscription, Resource Group, Region, and Name for your Managed Identity, then click Review + create. 
Click Create. Your Managed Identity has now been created. Click Go to Resource. Select Access Control (IAM). Click + Add and Add role assignment. Select Contributor, then click Next. Choose User, group, or service principal, then Select Members. Select the Resource Group created for the POC. Click Select, then Review + assign. Select Review + assign. Go to your App Layering appliance in the Azure portal, then click Identity. Select the User assigned tab, then click + Add. Choose your App Layering managed identity, then click Add. Your managed identity has been added to the appliance. Click the System assigned tab, toggle Status to On, then click Azure role assignments. Click + Add role assignment, choose your Resource Group, and select Contributor for the role. Click Save. Create Azure Compute Gallery In the Azure portal, go to Azure compute galleries, then click + Create. Choose your Resource Group, Name, and Region, then click Review + Create. Click Create. The Azure compute gallery is now active. Azure Deployments Connector Configuration Return to Getting Started with App Layering and click Create a connector configuration. Click Add Connector Configuration. Choose Azure Deployments from the drop-down list, then click New. Provide a Name for the connector. Copy the following into the Custom Data field, substituting your own values:
{
  "location": "eastus",
  "gallery": "yourGalleryName",
  "generation": "V2",
  "vMSize": "Standard_D4s_v3",
  "subnetId": "/subscriptions/yourSubscriptionID/resourceGroups/yourResourceGroupName/providers/Microsoft.Network/virtualNetworks/MyVnet/subnets/yourSubNetName"
}
Select your Machine Template by clicking Browse. Select the Machine template spec that you created earlier in Azure, then click Save. Select Browse to select your Resource Group. Choose your Resource Group, then click Save. In Cache Disk, click Browse. Select the CacheDisk template spec, then click Save. Select Browse to select your Resource Group. Choose your Resource Group, then click Save. 
In Layered Image, click Browse. Select the LayeredImage template spec, then click Save. Select Browse to select your Resource Group. Choose your Resource Group, then click Save. Click Add Boot Image Deployment. Select Browse, choose your BootImage template spec, then click Save. Select Browse to select your Resource Group, select your Resource Group, then click Save. Click Confirm and Complete. Review the Configuration Summary, then click Save. Prepare the OS Layer You must meet all requirements so that the OS layer works correctly in your environment. Before proceeding, ensure that you have reviewed the following: Requirements and Recommendations. Open the Microsoft Azure portal and select Create a resource. Create a new Virtual Machine. Complete the Basics tab of the Create a virtual machine wizard, then select Next: Disks. Select the OS Disk Type, then click Next: Networking. Select the Virtual Network and Subnet, then click Next: Management. Select the options required for your configuration on the Management tab, then select Review + create. If validation passes, click Create. When the Azure virtual machine deployment has completed, connect to the virtual machine via RDP. Install all important updates, then reboot the machine. Once rebooted, reconnect to the virtual machine. Open File Explorer and browse to C:\Windows\OEM. Rename the Unattend script file to UnattendOld. Turn off Automatic Windows Updates by disabling the Windows Update service. Open an elevated PowerShell session and run the following command: Set-LocalUser -Name "youradminnamehere" -PasswordNeverExpires $true. Open Citrix Downloads and download the Citrix App Layering OS Machine Tools. Run citrix_app_layering_os_machine_tools_22.11.0.exe. Click Yes at the extraction prompt. Click the appropriate response if you are using KMS for your OS. For our purposes, we select Do not use KMS. The virtual machine prompts for a reboot. Click Close. Reconnect to the virtual machine after reboot. 
Open File Explorer and browse to C:\Windows\Setup\Scripts\. Run setup_x64.exe. Click Next. On the Specify your answer file screen, verify C:\Windows\Panther\unattend.xml is selected, then click Next. Once completed, click Finish. Open the command prompt as administrator, and browse to the Microsoft .NET Framework directory currently in use. Type in the following command: ngen eqi 3 and press Enter. Run Citrix Optimizer Download the Citrix Optimizer Tool. Once downloaded, unzip the package and then open the Citrix Optimizer Tool. Select the appropriate Citrix-prepared template to run. For our setup, we choose the recommended template for Windows 11, then click Analyze. Once the analysis process is completed, review the status, then click Done. Click Select All, then click Optimize. Once optimization completes, close the Citrix Optimizer Tool. Import the OS Layer to ELM Open an elevated PowerShell window. Run the command in the screenshot below. Enter the IP address of your ELM appliance. Provide the user name and password for the App Layering appliance when prompted. Enter the LayerName, VersionName, LayerSizeGib, LayerDescription, VersionDescription, and Comment. The virtual machine will disconnect and reboot. Connect to your admin virtual machine and open the Citrix App Layering management console. Select Tasks to view the status of the import process. Your OS Layer is complete once the import process completes. Select Layers, then OS Layers. Your new OS Layer is now Deployable. Create Platform Layer In the Citrix App Layering management console, select Layers, Platform Layers, then click Create Platform Layer. 
Provide the information for the following, then click Confirm and Complete:
Layer Name = Windows 11 Platform Layer
Initial Version Name = Initial Platform
Max Layer Size = 10 GB
OS Layer = Windows11OSLayer and Initial version
Select This platform layer will be used for publishing Layered images
Hypervisor = Microsoft Azure
Provisioning Service = Machine Creation
Connection Broker = Citrix Virtual Desktops
Connector Configuration = Azure Deployments-AppLayerAzure
Packaging Disk file name = Windows 11 Platform Layer
Review the Summary, then click Create Layer.
Review the Platform Layer creation process by clicking Tasks. The task status changes to Action Required.
Highlight the task, then click the View Details icon.
Take note of the Packaging Machine name and connect to the virtual machine via RDP. Use the credentials for the OS Layer virtual machine you created earlier to log in.
Join the Platform Layer virtual machine to your domain.
Once the virtual machine has rebooted from the domain join, reconnect via RDP with the local administrator account.
Install the latest Citrix Virtual Delivery Agent (VDA) on the Platform Layer machine. Once the VDA has been installed, move on to the next step.
Double-click the Shutdown for Finalize icon on the desktop. The virtual machine shuts down if successful.
Open the Citrix App Layering Management Console, browse to Layers and Platform Layers, and select the Platform Layer you created.
Select the Initial Platform version, then select Version Information. The layer is in the Finalizing status. When completed, the Platform Layer status shows Deployable.

Create App Layer

In the Citrix App Layering Management Console, select Layers > App Layers, then click Create App Layer.
Provide the information for the following, then click Confirm and Complete:
Layer Name = Adobe Reader
Initial Version Name = AR Initial
Max Layer Size = 30 GB
Select the Windows 11 OS Layer and the Initial version
Connector Configuration = Azure Deployments-AppLayerAzure
Click Create Layer on the Layer Summary blade.
Select Tasks to review the app layer task process. The task status changes to Action Required.
Highlight the task, then click the View Details icon.
Connect to the virtual machine via RDP. Use the credentials for the OS Layer virtual machine you created earlier to log in.
Once connected to the virtual machine, download and install Adobe Acrobat Reader.
Upon completing the Adobe Acrobat Reader install, click the Shutdown to Finalize icon on the desktop. The virtual machine shuts down if successful.
Open the Citrix App Layering Management Console, browse to Layers and App Layers, and select the App Layer you created.
Select the AR Initial version, then select Version Information. After a few moments, the layer begins to finalize. When completed, the App Layer status shows Deployable.

Create an Image Template

Log in to the Citrix App Layering management console.
Select Images from the left navigation menu, then select Create Template.
Provide the following information in the Create Template blade, then click Confirm and Complete:
Name: Win11Template
Description: Windows 11 App Layering POC Template
Select the Windows11OSLayer
Click Edit Selection under App Layers, then select Adobe Reader
Select the Windows 11 Platform Layer
Select the correct connector in Connector Configuration
Leave all other selections at their defaults
Review the Template Summary, then click Create Template. The Windows 11 template is now publishable.
Select the Win11Template, then click Publish Layered Image. Click Publish.
Select Tasks to review the status of the image build process. The Published Layered Image task shows as Done when completed.
The virtual machine template is now ready to be used to create your Machine Catalog and Delivery Group.

Create Machine Catalog

Log in to Citrix DaaS and click Manage in the DaaS tile.
Click Machine Catalogs, then Create Machine Catalog.
Click the appropriate machine type, then click Next.
Select Machines that are power managed and Deploy machines using Citrix Machine Creation Services (MCS), then click Next.
Click Master Image. Select the template created earlier from the Image Gallery folder, then click Done.
Select the minimum functional level for the catalog, then click Next.
Select the appropriate Storage and License Types, then click Next.
Provide the number of virtual machines to create, select the Machine size, then click Next.
Select the NICs, then click Next.
Click Next on the Disk Settings page.
Choose whether to create a Resource Group to provision the machines or to use an existing resource group. We select our existing resource group for our deployment, then click Next.
Select the appropriate Active Directory, provide the OU location for the computer accounts, provide the machine name, then click Next.
Enter your domain credentials, then click Done. Click Next.
Click Next on the Scopes blade.
Click Next on the WEM blade.
Provide a name and description for the Machine Catalog, then click Finish.
The catalog is now being created. Once done, the new machine catalog is available.

Create a Delivery Group

Navigate to Delivery Groups, and select Create Delivery Group.
Select the correct machine catalog and the number of machines for the delivery group, then click Next.
On the Users blade, select how you assign your users. We select Allow any authenticated users to use this delivery group for our purposes, then click Next.
Click Next on the Applications blade.
On the Desktops blade, click Add. Provide a Display name and Description, then click OK. Click Next. Click Next.
Select the appropriate license for your Citrix DaaS deployment, then click Next.
Provide a Delivery Group name and click Finish. The Delivery Group is now available.

Launch Windows 11 Desktop

Launch the newly created Windows 11 Desktop by accessing your Workspace URL. The process can be seen below:

Summary

This guide walked you through installing and configuring Citrix App Layering in Microsoft Azure to simplify the image management of your virtual machines. You learned how to install and configure the Citrix App Layering Appliance and create OS, Platform, and App Layers. The process included how to publish a new virtual machine template in Azure from the layers you created, creating a machine catalog from the template machine, and then a delivery group. Lastly, the process walked you through assigning users to machines and allowing them to connect to the desktop using the Citrix Workspace app. To learn more about Citrix App Layering, visit the Citrix App Layering product documentation.
  20. Overview

The Citrix Automated Configuration Tool (ACT) migrates workloads and aids in daily Citrix administration tasks for both on-premises and Cloud deployments. The ACT Backup and Restore functionality allows Citrix administrators to create a Citrix Site backup as configuration files (.yml), which contain the current state of the site. Think of this as a point-in-time snapshot of your Citrix Site, rather than an incremental backup. ACT generates configuration files for each of the Citrix components (Machine Catalogs, Delivery Groups, Applications, and Host Connections). The ACT Backup and Restore feature is only available as part of the Citrix Web Studio console. The article Install Web Studio details the steps required to install and configure the Citrix Web Studio console.

Backup and Restore Process

When you select the Backup + Restore tab in Citrix Web Studio, you're prompted to download and install the Automated Configuration Tool. The Backup and Restore process is completed using the PowerShell SDK of the ACT. Download the .msi file and install the Automated Configuration Tool. We suggest installing it on a Citrix Delivery Controller server. After installing the Automated Configuration Tool, verify the installation by confirming it is listed under Programs and Features. A desktop shortcut named Auto Config is created for the Automated Configuration Tool.

Creating a Backup and Restore

Before you start working with the tool, let's get familiar with the main base cmdlets of the Automated Configuration Tool:

Backup-CvadAcToFile - Backs up all the configurations from your Citrix Site.
Restore-CvadAcToSite - Restores backup YAML files to the Citrix Site. This can refer to the same or a different Citrix Site.
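A minimal session with these two base cmdlets might look like the following sketch (the backup folder name is illustrative; ACT generates it with a timestamp, and both cmdlets come from the tool's own PowerShell SDK):

```powershell
# Back up the full site configuration to .yml files under
# %HOMEPATH%\Documents\Citrix\AutoConfig\Backup_<timestamp>.
Backup-CvadAcToFile

# Restore the site from a specific backup folder (fully qualified path required).
Restore-CvadAcToSite -RestoreFolder "$env:USERPROFILE\Documents\Citrix\AutoConfig\Backup_2024_01_15_10_30_00"
```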
Based on these cmdlets, you can do either a full restore or a granular restore of the components you want by adding a switch that points to the specific resource/configuration file (.yml), such as -IncludeByName (we explain this later and provide examples). Backing up your site is a straightforward process; as mentioned, the Automated Configuration Tool exports your site "snapshot" into configuration files. The ACT files are stored in the folder %HOMEPATH%\Documents\Citrix\AutoConfig. Below is a screenshot displaying the contents of a backup folder. A unique folder is created for each backup, with the date and time stamp of when the backup was completed. Every backup folder contains configuration files that can be used to restore.

Backup and Restore Example

This example walks you through the following processes to begin using the Automated Configuration Tool:

Creating the base folder structure %HOMEPATH%\Documents\Citrix\AutoConfig
Modifying the CustomerInfo.yml file for On-Premises
Creating a Backup
Restoring from Backup

Creating the Folder Structure

The ACT requires a folder structure to locate the .yml files generated during the backup. First, we need to export the initial configuration. Launch ACT and run the command Export-CvadAcToFile. ACT creates the folder structure and, in it, the configuration files. Below is a screenshot displaying the results.

Modifying the Configuration File for On-Premises

Open the CustomerInfo.yml file from C:\Users\%username%\Documents\Citrix\AutoConfig. Find the Environment variable and replace the value Production with OnPrem.

Creating a Backup

Once the folder structure is ready, you can continue to run the Backup cmdlet. As mentioned before, to create a backup you need to run the following command: Backup-CvadAcToFile

Restoring from Backup

Files that are imported come from the folder specified using the -RestoreFolder parameter. This parameter identifies the folder that holds the .yml files used to restore the site.
This must be a fully qualified folder path. This cmdlet can also be used to revert to a previous configuration or to back up and restore your Citrix Site. This command can add, delete, and update your Citrix Cloud Site. Here are some examples of restoration granularity and their corresponding commands:

Restore to a specific backup:
Restore-CvadAcToSite -RestoreFolder %HOMEPATH%\Documents\Citrix\AutoConfig\Backup_yyyy_mm_dd_hh_mm_ss

Restore granularly for specific components:

Restore a specific Machine Catalog:
Restore-CvadAcToSite -RestoreFolder %HOMEPATH%\Documents\Citrix\AutoConfig\Backup_yyyy_mm_dd_hh_mm_ss -MachineCatalogs -IncludeByName "name of the catalog"

Restore a specific Delivery Group:
Restore-CvadAcToSite -RestoreFolder %HOMEPATH%\Documents\Citrix\AutoConfig\Backup_yyyy_mm_dd_hh_mm_ss -DeliveryGroups -IncludeByName "name of the Delivery Group"

Running the Restore

Launch ACT and enter the Restore command depending on what you want to restore, either a full restore or by component. We're restoring a single Delivery Group in this example to demonstrate how detailed and granular the command syntax can be:

Restore-CvadAcToSite -RestoreFolder C:\Users\amiringectxadminaz\Documents\Citrix\AutoConfig\Backup_2023_04_14_10_50_31 -DeliveryGroups -IncludeByName "VDI-MANUAL"

Let's break down this command for a clear understanding of how it works. This also provides guidance on using other switches:

Restore-CvadAcToSite: The base Restore command or instruction.
-RestoreFolder: Followed by the folder name. It calls the specific path/location of the files that we want to restore.
-DeliveryGroups: Switch used to specify the Citrix component to migrate.
-IncludeByName: Specifies a single resource to migrate (add a suffix/prefix to restore multiple resources with similar names).

A prompt asks you to confirm that you want to continue. Type YES to continue.
The ACT starts the process by reading the backup folder that you specified in the command, followed by the specific component (Delivery Group, in this example). ACT reads and pulls the existing information in the DeliveryGroups.yml file, as demonstrated below:

Final Result

Once ACT finishes running the restore process, it generates a log file with details of what was done and lists recommendations in case something goes wrong. Below you can see the content of the log file. From the Citrix Web Studio console, click the Refresh button in the top right corner to refresh the console and show the restored components. The Restore process is complete.

Summary

This guide walked you through configuring the ACT folder structure, modifying the On-Premises configuration, creating a backup, and performing a restore. To learn more about the Citrix Automated Configuration Tool cmdlets, visit the Automated Configuration Tool product documentation.
  21. Overview

The Automated Configuration tool facilitates migrating and exporting configurations to Citrix DaaS. This Proof of Concept guide provides step-by-step instructions on how to use the tool. Administrators can easily test and explore the Citrix DaaS features and advantages while simultaneously running existing On-Premises environments, and can also facilitate moves between cloud regions, back up existing configurations, and address other use cases. The Automated Configuration download link also contains additional information and detailed documentation on these use cases and customizations.

Pre-requisites

On-Premises environment

Citrix Virtual Apps and Desktops (CVAD) On-Premises environment with at least one registered VDA.
Citrix Virtual Apps and Desktops On-Premises environment running one of the following versions: any Long Term Service Release (LTSR) version with its latest CU (1912, 2203), or one of the latest two Current Release (CR) versions (for example, 2308 and 2311).
The domain-joined machine where you plan on running the Automated Configuration tool commands must be running .NET Framework version 4.7.2 or higher.
A machine with the Citrix PowerShell SDK, which is automatically installed on the DDC.
Download the Automated Configuration tool MSI from the official Downloads website.

Cloud-related components

Note: If migrating between cloud sites (cloud to cloud migration), refer to the official documentation for detailed steps.

A valid Citrix DaaS license.
The administrator must be able to log in to the Cloud Portal and obtain: the resource location name, customer ID, and client secret (app ID and secret key).
The existing Citrix Cloud Resource Location has at least one Cloud Connector, which is marked as green (Healthy) and is part of the same domain as the On-Premises setup.
This proof of concept guide demonstrates how to:

Complete the On-Premises pre-requisites
Export your On-Premises site configuration into YAML (.yml) files
Complete the cloud pre-requisites
Complete the requisites for importing the site configuration when using different provisioning methods (Machine Creation Services (MCS) for both Pooled and Static Catalogs)
Import your site configuration into the cloud (by editing the required files)
Troubleshoot, with tips and pointers to more information

Complete Pre-requisites for Exporting from the On-Premises Site

These steps must be run on your DDC or the domain-joined machine where you want to run the Automated Configuration tool.

Download the latest Automated Configuration tool MSI to your On-Premises DDC or a domain-joined machine. The tool can be downloaded from here. Note: See the Pre-requisites section for more details on how to run it from a different machine.
Run the MSI on your On-Premises DDC by right-clicking the AutoConfig_PowerShell_x64.msi installer and clicking Install.
Read the License Agreement and check the box if you accept the terms. Then click Install.
Files are copied and the progress bar continues moving until the install finishes. After the MSI runs, a window indicating successful completion pops up. Click Finish to close the MSI setup window.

Note: Upon successful execution, a desktop icon called Auto Config is created. When launched, the corresponding folder structure, located in C:\Users\<username>\Documents\Citrix\AutoConfig, is created. This tool is used in the subsequent steps.

Export your On-Premises Site Configuration

Using an export PowerShell command, you can export your existing On-Premises configuration and obtain the necessary .yml files. These files are used to import your desired configuration into Citrix Cloud.
After running the MSI installer in the previous step, you get an Auto Config shortcut automatically created on the desktop. Right-click this shortcut and click Run as Administrator.
Run the Export-CvadAcToFile command. This command exports policies, manually provisioned catalogs, and delivery groups. It also exports applications, application folders, icons, zone mappings, tags, admin roles and scopes, and other items. Note: For MCS machine catalogs and delivery groups, refer to the steps in the Requisites for Importing Site Configuration using different Provisioning Methods section in this guide.
Once the tool finishes running, the Overall status shows as True and the export process is completed (the output lines shown match the following illustration). Note: If there are any errors, diagnostic files are created in the action-specific subfolders (Export, Import, Merge, Restore, Sync, Backup, Compare), which can be found under %HOMEPATH%\Documents\Citrix\AutoConfig. Refer to the Troubleshooting Tips section if you encounter any errors.
The resulting .yml files are now in the current user's Documents\Citrix\AutoConfig path. See the image below for an example of the contents of a .yml file (Application.yml).

Complete Prerequisites in Cloud

Go to your Resource Location and make sure your Cloud Connectors are both showing green (Available). Note: If you need instructions on how to set up your Cloud Connectors, refer to this guide.
To verify the health status of your Cloud Connectors, first log in to your cloud portal with your Citrix administrator credentials (or your Azure AD credentials, when applicable). If you have more than one Organization ID (Org ID), select your corresponding tenant.
Upon logon, go to the hamburger menu in the top left corner and then click Resource Locations.
Access the Cloud Connector(s) tile under your Resource Location.
Requisites for Importing Site Configuration using different Provisioning Methods

Dealing with Provisioning Services (PVS) Machine Catalogs, Delivery and Application Groups, and Policies

No extra steps are required to import your PVS Catalogs and their corresponding applications at this time. Follow the steps mentioned in the Import your Site Configuration into Cloud section in this guide.

Dealing with Machine Creation Services (MCS): Pooled VDI multi-session (Random) and RDS Machine Catalogs

The import and export commands now support this task. Both the golden image and the configuration in catalogs with User data: Discard can be migrated. However, the virtual machines in these catalogs do not get migrated, since the site you are importing from is responsible for maintaining the life cycle of the virtual machine. When machines are turned on, their state might change, affecting the import data for virtual machine synchronization. Therefore, when migrating these catalogs, the tool creates the catalog metadata and initiates master image creation, but zero machines are imported.

Important Considerations:

The MCS catalog import process can take a couple of hours, based on the size of the master image. Therefore, the import command within the tool only starts the MCS catalog creation and does not wait for it to finish. After the import has completed, the catalog creation progress can be monitored via Studio in the cloud deployment.
Once the master image is created, you can provision machines. Consider your hypervisor's existing capacity, since capacity is already being consumed by your on-premises usage.
All other objects (including the Delivery Group, applications, policies, and everything that uses the catalog) can be imported without having to wait for the master image creation. The same commands available within the tool can be used to migrate catalogs and all other objects.
When the catalog has finished creating, machines can be added to the imported catalog, and then users can launch their resources.

Dealing with Machine Creation Services (MCS): Static Assigned Machines

This process imports some low-level details that are stored in the database, so it needs to be run from a machine with database access. The tool's import process migrates the configuration, the master image, and the machines as well. It is a quick operation since no images are created.

Important Considerations:

The VDAs need to be pointed to the Cloud Connectors for them to register with Citrix Cloud. Refer to the Activating sites documentation to activate your Cloud site and thus control reboot schedules, power management, and more, via Citrix Cloud.
Once the migration is completed, if you want to delete the corresponding catalog from your On-Premises site, you must select the option to leave the VM and AD account. Otherwise, both records are deleted and the Cloud site is left pointing to a deleted virtual machine.

Import your Site Configuration into Cloud

During this step, you obtain the customer connection details, manually create your Zone mappings, and import the configuration to your Cloud tenant. Note: For MCS, first follow the corresponding subsections under the Import your Site Configuration into Cloud section in this guide.

Obtaining Customer Connection Details

Administrators must edit the CustomerInfo.yml file and add the corresponding CustomerName, CustomerId, and SecretKey values to it. These values can be obtained and generated from the Cloud portal, as shown in the following steps. First, open your CustomerInfo.yml file using a text editor application, such as Notepad.
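For orientation, a filled-in CustomerInfo.yml excerpt might look like the following sketch. All values are hypothetical placeholders, and exact key names can vary between tool versions, so keep the keys your exported file already contains:

```yaml
# CustomerInfo.yml (excerpt) — placeholder values only; obtain the real
# Customer ID, Client ID, and Secret from the Citrix Cloud API Access tab.
CustomerName: "examplecustomer"
CustomerId: "examplecustomer"
ClientId: "12345678-abcd-1234-abcd-1234567890ab"
Secret: "replace-with-downloaded-secret-key"
```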
The following screenshot shows the CustomerInfo.yml file values that must be edited (underlined in red).
On your Cloud portal, click the hamburger menu again and go to Identity and Access Management.
Go to the API Access tab and copy the Customer ID value, which can be found next to the customer ID text, as seen in the following screenshot (red rectangle).
Paste the retrieved value between the quotes that follow the CustomerId field in your CustomerInfo.yml file.
Back on your Cloud portal, go to the Identity and Access Management portal and the API Access tab. Enter the name you want to identify this API key with in the Name your Secure Client box. Then click the Create Client button. Note: This action generates the Client ID and the Secret Key.
Copy the ID and the Secret values, one by one (paste them into the CustomerInfo.yml file as shown in the following step). Then click Download to save the file for later reference.
Paste the ID and Secret values into the corresponding fields in the CustomerInfo.yml file.

Manually update the Zone Mapping file (ZoneMapping.yml)

On-Premises Zones cannot be automatically migrated to a cloud Resource Location, so they must be mapped using the ZoneMapping.yml file. Note: Migration failures occur if the zone is not mapped with a homonymous resource location (a Resource Location with the exact same name).
Back in the same directory where your .yml files reside (Documents\Citrix\AutoConfig), open the ZoneMapping.yml file using Notepad or your preferred text editor.
Note: The Primary value must be replaced with the name of the corresponding Zone you want to migrate objects from (in your On-Premises environment). You can find this name under your On-Premises Citrix Studio console > Configuration > Zones.
Note: If your Zone is named Primary in your On-Premises environment, this value in the ZoneMapping.yml file doesn't need to be changed.
Still in the ZoneMapping.yml file, the Name_Of_Your_Resource_Zone value must be replaced with your Cloud Resource Location name. This value can be found on your cloud portal under the hamburger menu > Resource Locations.
Copy your Resource Location name (shown as My Resource Location in the following screenshot).
Paste this value into the ZoneMapping.yml file in place of the Name_Of_Your_Resource_Zone value.
Multiple Zones in your On-Premises environment can also map to one Resource Location in the cloud. However, there must always be one row in the file for each zone in the On-Premises environment. For multiple zones on-premises and one Resource Location, the format of this file looks as follows:
When mapping Zones to different Resource Locations, the file must look like this instead:

Manually update the CvadAcSecurity.yml file

Host Connections and their associated hypervisors can be migrated to Citrix DaaS. Adding the Host Connections requires security information for the specific hypervisor. This information needs to be manually added to the CvadAcSecurity.yml file. Note: For this example we are using Citrix Hypervisor. For information on the security information required for other hypervisor types, visit the Citrix DaaS product documentation site.
Back in the same directory where your .yml files reside (Documents\Citrix\AutoConfig), open the CvadAcSecurity.yml file using Notepad or your preferred text editor.
In the CvadAcSecurity.yml file, enter the username and password for your hypervisor connection, then save the file.

Merging the Configuration

Back in the Migration tool PowerShell console, run the following command: Merge-CvadAcToSite. This merges the existing Cloud configuration (if any exists) with the configuration exported from the On-Premises site.
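Before running the merge, it is worth double-checking the edited zone mapping. As a sketch, a ZoneMapping.yml that maps two on-premises zones (Primary and a hypothetical Zone-West) to a single Resource Location named My Resource Location would contain one row per zone:

```yaml
# ZoneMapping.yml — one row per on-premises zone.
# Many-to-one: both zones map to the same Resource Location.
Primary: "My Resource Location"
Zone-West: "My Resource Location"
```

To map the zones to different Resource Locations instead, give each row its own Resource Location name on the right-hand side.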
When each task runs successfully, the output shows green as the .yml files are imported and the corresponding components are added to the cloud site. The resulting files show up in the following directory: <This PC>\Documents\Citrix\AutoConfig\Import_<YYYY_MM_DD_HH_mm_ss>. In this same folder, you can find a Backup_YYYY_MM_DD_HH_mm_ss folder. Note: Copy this folder somewhere safe, as it is a backup of the configuration. The Backup folder contains the following files, which are helpful for reverting changes, if needed:

Verify the Configuration Created in Cloud Studio

Access your Citrix DaaS Manage tab via the Cloud Console (My Services > Citrix DaaS > Manage tab). Refresh to make sure the Machine Catalogs, Delivery Groups, policies, tags, and applications are now showing as expected. Note: Depending on what you import, the results vary, as they are specific to your own unique configuration. Review each section to make sure the expected items are listed. Machine Catalogs list example:

If everything looks as expected, your Citrix DaaS migration is complete.

Migrating a Cloud Environment across Resource Locations

For guidance on how to migrate across Cloud regions or DaaS environments, refer to the Citrix DaaS Migrating from cloud to cloud documentation.

Troubleshooting Tips

General information for troubleshooting:

Refer to the Automated Configuration Tool Troubleshooting FAQ article.
Prior to opening a support ticket with Citrix, collect all log and *.yml files into a single zip by running New-CvadAcZipInfoForSupport. No customer security information is included. The zip file is created at the following location: %HOMEPATH%\Documents\Citrix\AutoConfig\CvadAcSupport_yyyy_mm_dd_hh_mm_ss.zip. Forward this zip file to support.
Running any cmdlet creates a log file and an entry in the master history log file. The entries contain the date, operation, result, backup, and log file locations of the execution. This log provides potential solutions and fixes to common errors.
The master history log is located in %HOMEPATH%\Documents\Citrix\AutoConfig, in the file named History.Log. All operation log files are placed in a backup folder. All log file names begin with CitrixLog, then show the auto-config operation and the date and timestamp of the cmdlet execution. Logs do not auto-delete. Console logging can be suppressed by using the -Quiet parameter.
  22. Overview

Many organizations need to support non-domain joined solutions where the Citrix-accessed virtual machine is not managed through Active Directory. Several use cases that can require this type of configuration include:

Providing non-domain joined desktops to developers or contractors where local administrator rights are needed to install specific applications.
Researchers in the healthcare space who require these same rights.
A temporary workforce where the workloads are only needed for a short time.

With Citrix DaaS and Citrix Gateway service support for non-domain joined Virtual Delivery Agents (VDAs), this is achievable. The following guide provides the requirements and step-by-step instructions to create and configure a non-domain joined Windows 11 virtual machine hosted in Azure, a machine catalog and delivery group using Citrix DaaS, and access for end users via Citrix Workspace or the Citrix Workspace app.

Requirements and Prerequisites

Review the requirements for creating and accessing non-domain joined virtual machines via Citrix DaaS. Both single-session (desktops only) and multi-session (apps and desktops) are supported. For this POC Guide, the following are being used:

A current Citrix DaaS subscription.
A single-session Windows 11 image hosted in Azure.
Citrix VDA 2303 with Rendezvous V2 enabled.
Azure Active Directory for Citrix Workspace authentication.

Enable Authentication for Citrix Workspace

Citrix Workspace supports several authentication identity providers to allow users access to non-domain joined virtual machines, including:

Azure Active Directory
Active Directory
Active Directory and Token
Okta
Google IdP
SAML
Citrix Gateway
Adaptive Authentication

Azure Active Directory is being used for this POC Guide. Ensure that the authentication option you have chosen is connected to your Citrix Cloud tenant in Identity and Access Management. Refer here for the instructions to connect an identity provider.
Configure Azure Active Directory authentication for Citrix Workspace

From the Citrix Cloud menu, select Workspace Configuration. Select Authentication.
Select Azure Active Directory, select I understand the impact on the subscriber experience, then click Confirm.

Create Windows Virtual Machine

Non-domain joined machines are supported on all platforms supported by Citrix Machine Creation Services (MCS). In this step, you create the Windows virtual machine on any hypervisor or hyperscaler supported by MCS. In our case, Microsoft Azure is being used. Once your virtual machine is created, follow these steps:

RDP into your virtual machine.
Download the latest release, for the correct OS type, of the Citrix Virtual Delivery Agent.
Install the required applications.
Run the VDA setup. Select Create a master MCS image, then click Next.
In the Core Components window, click Next.
Select any Additional Components your deployment requires, such as Citrix Profile Management and Machine Creation Services storage optimization, then click Next.
In Delivery Controller, select Let Machine Creation Services do it automatically, then click Next. Click Next.
Select Automatically, then click Next.
Review the summary page, then click Install.
When the installation is complete, click Finish and let the machine restart.
Once the machine restarts, edit the following registry value: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\VirtualDesktopAgent

Create Machine Catalogs

Click Machine Catalogs, then click Create Machine Catalog.
Select Single-session OS for the Machine Type, then click Next.
In Machine Management, select Machines that are power managed, Citrix Machine Creation Services (MCS), and the appropriate resources from the drop-down list, then click Next.
Select I want users to connect to the same (static) desktop each time they log on and Yes, create a dedicated virtual machine and save changes on the local disk for the Desktop Experience, then click Next.
Select the Master Image, set the VDA functional level to 2206 (or later), then click Next. Select your storage and Windows license type, then click Next. In Virtual Machines, choose the number of virtual machines to create and the machine size, then click Next. Choose Non-domain-joined for the Identity type, provide a name for the desktops, then click Next. On the summary page, provide a name for the Machine Catalog, then click Finish. The Machine Catalog is now being created. Once complete, move on to creating the Delivery Group. Create Delivery Groups Select Delivery Groups, then click Create Delivery Group. Select the desktops and the number of machines to add, then click Next. Select Desktops as the delivery type, then click Next. Choose the Leave user management to Citrix Cloud option, then click Next. Select your license type, then click Next. Review the summary, give the Delivery Group a name and a display name, then click Finish. Your non-domain joined Delivery Group is now ready. Create Rendezvous Citrix Policy Click Policies. Click Create Policy. Find the Rendezvous Protocol setting and click Select. Select Allowed, then click Save. Click Next. Choose the policy assignment method by Delivery Group. Select the delivery group in the drop-down list, ensure Enable is selected, then click Save. Click Next. Select Enable policy, name the policy, and then click Finish. The Rendezvous protocol policy is now enabled. Assign Desktops On the Citrix Cloud home page, click View Library. Click the ellipsis for the Non-domain Joined Windows 11 (Desktops) entry and select Manage Subscribers. Begin to type the name of the user, then select the user. Once all users/groups have been subscribed, close the screen. Launch Desktop Connect to your Workspace URL, and provide credentials for login. Select the desktop to launch. Summary This guide walked you through creating a non-domain joined Windows 11 virtual machine in Microsoft Azure. 
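The catalog choices above can also be expressed programmatically. The sketch below models them as a plain request body for the Citrix DaaS REST API; the field names and values here are illustrative assumptions, not the exact API schema, and the catalog name is hypothetical.

```python
# Hypothetical sketch of the Machine Catalog settings chosen in the wizard,
# expressed as a request body. Field names are assumptions for illustration.
catalog = {
    "Name": "NDJ-Win11-Desktops",        # assumed catalog name
    "SessionSupport": "SingleSession",   # Single-session OS
    "ProvisioningType": "MCS",           # Machines that are power managed, MCS
    "AllocationType": "Static",          # same (static) desktop each logon
    "PersistUserChanges": "OnLocal",     # save changes on the local disk
    "MinimumFunctionalLevel": "2206",    # VDA functional level 2206 or later
    "ProvisioningScheme": {
        "IdentityType": "Workgroup",     # non-domain-joined machines
        "NumTotalMachines": 2,           # example machine count
    },
}

# The key difference from a domain-joined catalog is the identity type.
assert catalog["ProvisioningScheme"]["IdentityType"] == "Workgroup"
```

A domain-joined catalog would differ mainly in the identity section, which would carry AD domain and OU details instead.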
You learned how to enable Azure Active Directory for your Citrix Cloud tenant, create a Windows 11 master image, Machine Catalog, Delivery Group, and a Citrix Policy to enable the Rendezvous protocol. Lastly, you assigned the desktops via the Citrix Cloud library and then accessed them via Citrix Workspace. Refer to the following references for additional information on the topics covered in this POC Guide. Citrix DaaS Non-domain joined VDA Rendezvous V2
23. Overview IT Service Management (ITSM) Adapter is a Citrix Cloud service that lets you extend ServiceNow capabilities into your Citrix DaaS and on-premises environments. With the service, IT teams and end users can deliver and manage Citrix Virtual Apps and Desktops using ITSM workflows in ServiceNow through the Self-Service capabilities. This PoC document shows you how to integrate the Citrix ITSM adapter service with ServiceNow for a test environment with out-of-the-box features for End Users and Administrators. Also, don’t forget to review the features released in new versions, what's new, enhancements, and what's in the roadmap. Conceptual Architecture Before getting started with the configuration guide, let's get a better understanding of the architecture: In the diagram, you can see that we’re working with two cloud services. The first is ServiceNow, and the second is the Citrix Cloud control plane. Beginning with the ServiceNow service, we require two components: the Orchestration plug-in, which provides access to all the workflow design, and the Citrix ITSM connector plug-in, which makes the connection with the Citrix Cloud control plane. Installing the Citrix ITSM connector plug-in from the ServiceNow store creates two tables in the ServiceNow database instance. These tables are related to the Citrix licensing and the Citrix VDA information. On the other hand, from a Citrix Cloud control plane standpoint, the Citrix ITSM adapter is already installed, so you only need to enable the service for your integration. If you have an on-premises environment, perform a site aggregation to discover that workload. Last but not least, the communication between both services happens through an OAuth profile from ServiceNow that provides a client ID and secret to register with the Citrix ITSM service. 
Now that you better understand the architecture and how Citrix ITSM works and communicates with your environments (Cloud and on-premises), it’s time to move forward with our Readiness Review. Prerequisites To set up the ITSM Adapter service, make sure you have the following ready: An account with system administrator permission on your ServiceNow instance. After the service is set up, you can implement role-based access control. For more information, see Access Management. Register with the Citrix Cloud control plane to get access to the ITSM service and sign up for a free trial. A Citrix Cloud administrator account with Full Access. Connectivity between your ServiceNow instance and the IP address of the region where your Citrix Cloud account is located. Scope We summarize the integration in the following steps: ServiceNow Enable the Orchestration plug-in. Install the Citrix IT Service Management Connector from the ServiceNow Store. Create an OAuth application. Citrix Cloud / ITSM service console Enable the ITSM service. Add your ServiceNow instance. Add the on-premises Citrix environment (Site aggregation). Orchestration plug-in Under the My Instance admin console, go to Instance Action and click Activate Plugin. Search for the Orchestration plug-in and click Activate. Once completed, you receive an email notification. Install Citrix IT Service Management Connector from the ServiceNow Store The Citrix ITSM plug-in for ServiceNow provides an easy way to extend ServiceNow’s capabilities into your Citrix infrastructure. While virtual app and desktop (Citrix Workspace) delivery allows organizations to be more flexible and end users to be more productive with anywhere, any-device access, this new plug-in further enhances the capabilities of Citrix environments with ServiceNow integration. 
With out-of-the-box workflows and the ability to create custom workflows, IT teams can now automate, monitor, and manage their Citrix environments, allowing them to focus on strategic projects while automating minor tasks. Go to the ServiceNow store, and in the search bar, type Citrix. Select the Citrix IT Service Management Connector and click Get to start the installation process. Click Install. Once completed, you can review the installation logs. Finally, the ITSM connector shows as Installed. So far, you’ve prepared your ServiceNow instance for the integration. A small piece (the OAuth client ID and Secret) is pending, but we return to the ServiceNow console later to do that when needed. Now is the time to start working in the Citrix Cloud console to activate the service and register your ServiceNow instance. Registering ServiceNow with the Citrix ITSM service On the Citrix Cloud console, go to My Services (under the hamburger menu). Go to Services > ITSM. Click the + Add a ServiceNow instance tab. Copy the Redirect URL in the window (you need it to create the OAuth application in the ServiceNow console). Create the OAuth Application To add the Client ID and secret, you must create the OAuth application: Go to the ServiceNow Admin Console, search for OAuth > System OAuth, and click New. Select Create an OAuth API endpoint for external clients. Enter the following information: Name: any friendly name (Citrix Cloud). Client ID: it’s shown automatically. Redirect URL: paste the URL copied from the Citrix Cloud Console in the ITSM window. Click Submit in the top-right corner. Return to the OAuth application under the OAuth console on ServiceNow (we named it POC-ITSM App in the following example). Select/open the new OAuth endpoint created. Copy the Client ID, Secret, and ServiceNow URL. Return to the Citrix Cloud Console, paste that information, and click Connect. Click Allow when prompted to connect to the ServiceNow account on instance XXXX. 
You get a message asking to allow your ServiceNow instance to connect with Citrix Cloud. Click Allow. Once completed, you can verify that your integration was completed. In the ServiceNow console, go to Citrix IT Service Management Connector > Home. You see your Citrix Cloud connection Status as Connected. Go to the ITSM adapter service management console in the Citrix Cloud Console. Click the Manage tab. You see your ServiceNow instance status as Registered. You’ve completed the integration of the Citrix ITSM service with ServiceNow. Now, start your POC. Log in to your ServiceNow Admin Console. You see all the Citrix IT Service Management features in your ServiceNow console. Under Citrix IT Service Management > Services, you find the out-of-the-box workflows. Add an on-premises Citrix Site For customers with on-premises environments, you can add your site to Citrix Cloud by doing a Site Aggregation. This does not modify how you manage your on-premises environments, as it only creates a connection to the ITSM service so your ServiceNow instance can discover and run Citrix + ServiceNow workflows. It’s a straightforward process. For a detailed step-by-step, follow this Citrix Documentation. We can summarize the process in the following steps: Install a Cloud Connector on your on-premises hypervisor (a domain-joined Windows Server VM). Add your on-premises Delivery Controller to Citrix Cloud. Complete the on-premises site registration. Enter the IP and FQDN of your Delivery Controller. Citrix Cloud establishes communication with it through the Cloud Connector. Validate that your on-premises site is registered with Citrix Cloud. It shows up as Green.
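The OAuth handshake described above can be summarized in code. The sketch below assembles the pieces exchanged between the two services; the instance name, client ID, secret, and authorization code are placeholders, and the `authorization_code` grant is an assumption about the flow behind the Allow prompt. `oauth_token.do` is ServiceNow's standard token endpoint.

```python
# Sketch: the OAuth pieces exchanged when registering ServiceNow with the
# Citrix ITSM service. All values are placeholders; the real Client ID and
# Secret come from the OAuth endpoint created in the ServiceNow console.
instance = "mycompany"  # assumed ServiceNow instance name
token_url = f"https://{instance}.service-now.com/oauth_token.do"

token_request = {
    "grant_type": "authorization_code",  # assumed flow behind the Allow prompt
    "client_id": "<client-id>",          # copied from the OAuth endpoint
    "client_secret": "<client-secret>",
    "redirect_uri": "<redirect-url-from-citrix-cloud>",
    "code": "<authorization-code>",
}

assert token_url == "https://mycompany.service-now.com/oauth_token.do"
```

In the actual integration, Citrix Cloud performs this exchange for you once you paste the Client ID, Secret, and instance URL and click Connect.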
24. Overview Microsoft Azure Virtual Desktop allows enterprises to deliver virtual applications and desktops from Azure. This proof of concept (PoC) guide is designed to help you quickly configure Citrix DaaS with Azure Virtual Desktop for a trial evaluation only, in a hybrid environment. At the end of this PoC guide you are able to bridge your on-premises Citrix DaaS deployment with a Microsoft Azure subscription using Citrix DaaS. You are able to let your users launch an Azure Virtual Desktop virtual app or desktop using the new Windows 11 Multi-Session experience, while also accessing on-premises resources. Conceptual Architecture Scope In this PoC guide, you take the role of a Citrix Cloud and Microsoft Azure administrator and create a hybrid environment that spans your organization’s on-premises deployment and Azure. You provide access to a virtualization environment consisting of the Windows 11 Multi-Session experience in Azure Virtual Desktop (AVD) and on-premises resources to an end user with Citrix DaaS. This guide showcases how to perform the following actions: Create a new Azure subscription and an Azure Active Directory (AAD) tenant (if you don’t have one) Connect your on-premises AD to your AAD using Azure AD Connect Create a master image using Windows 11 Enterprise for Virtual Desktops Create a Citrix Cloud account (if you don’t have one) Request a Citrix DaaS trial Create a Citrix DaaS account (Citrix Cloud account) and add the Azure tenant as a Resource Location Create a Windows Server VM and install the Citrix Cloud Connector in your Azure resource location Prepare the Azure Virtual Desktop template for the session host virtual machines (VMs). 
Install the Citrix Virtual Delivery Agent on the AVD VM Use your Citrix Virtual Apps and Desktops service account (Citrix Cloud account) to connect to your Azure subscription using the Citrix Cloud Connector Use Citrix Machine Creation Services to deploy a catalog and then create a delivery group Create a Windows Server VM, install the Citrix Cloud Connector in the on-premises Resource Location, and add it as a resource location Use your Citrix DaaS account (Citrix Cloud account) to connect to your on-premises resources using the Citrix Cloud Connector Let your users connect to the AVD or on-premises sessions via Citrix Workspace Microsoft requires that the AVD session hosts be joined to a Windows Active Directory (AD) domain that has been synchronized with Azure AD using either Azure AD Connect or Azure AD Domain Services. This requires you to connect your on-premises Active Directory to your organization’s Azure subscription. This is out of scope for this guide, but if you are also a Citrix Networking customer you can use a Site-to-Site VPN with Citrix ADC (which requires a public IP). The preceding options create IPsec tunnels between your on-premises environment and the AVD network in Azure. If you are looking for a solution that does much more than just set up a link between these two locations, then we suggest considering an end-to-end SDWAN solution. The main advantages this gives you are integrated security, orchestration, and policy-based configuration. 
SDWAN has further benefits: Enables direct access to video-on-demand that is rendered from the customer data center Provides intelligent traffic steering from the VDA to other on-premises properties VoIP and real-time video traffic are steered from the corporate data center Aggregates more than one link to give you resiliency and expanded bandwidth by combining the different links To set up an end-to-end SDWAN solution you can follow these guides: Express Route or Point-to-Site VPN, which don’t require a public IP, are other options to establish the connectivity. This guide provides detailed instructions on how to deploy and configure your environment, including VMs and connecting your AD to Azure AD. As a Citrix and Azure tenant administrator, you create the AVD environment to enable your users to test various scenarios that showcase Citrix DaaS and Azure Virtual Desktop integration. Create an Azure Subscription and an Azure Active Directory Tenant If you are an existing Microsoft O365 customer, you already have an Azure Active Directory (AAD), so you can log in to Azure as the global administrator of the subscription and skip to the next section. Go to the URL: Sign into Azure and log in to Azure. Enter your information, click Next. Verify your identity and then provide your credit card details for billing purposes. You may be asked to verify your card details by making a payment of ~USD 1. Once you are done, you see this in the Azure Portal. If that is the payment method you want to use, click Next. Else, change it and then click Next. If you agree, check the I agree checkbox for the subscription agreement, offer details, and privacy statement. Click Sign Up. Alternatively, you can enroll in an O365 Enterprise E3 trial by going to this link and providing your details. Click + Create a Resource, search for Azure Active Directory, and select it. Click Create. Provide the Organization name and Initial domain name of the AD that you want to create. 
Select the Country or Region and then click Create. Wait for the Azure AD to be created. Connect the on-premises AD to Azure AD using Azure AD Connect Launch an RDP session to the AD. Open a browser and log in to Azure as the global administrator of the subscription and Azure AD. Click Azure Active Directory and then Azure AD Connect. Click Download Azure AD Connect. In the browser window that opens, click Download. Click Run. In the Azure AD Connect dialog, click Continue. Click Use Express Settings. Provide the Azure Active Directory global administrator Username and Password. Click Next and log in again if requested. Provide the Active Directory enterprise administrator Username and Password. Click Next. Check Continue without matching all UPN suffixes to verified domains, click Next. Click Install. Once the config is complete, click Exit. Go back to the Azure Active Directory page in the Azure portal and click Users. Validate that one or more users you created are visible in the list. Create a master image using Windows 11 Enterprise Multi-session Select + or + Create a resource. Search for Microsoft Windows 11 and select the Microsoft Windows 11 option that shows Windows 11 Enterprise multi-session in the drop-down list. Select the Windows 11 Enterprise multi-session option from the drop-down list and then click Create. Select the appropriate Subscription and the Resource group created for AVD to deploy the machine in. Provide a name for the Master Image VM. Choose the same region as the AD VM. Enter the credentials for the administrator account. Click Next: Disks. Select the appropriate OS disk type and Encryption type for your deployment. Click Next: Networking. Select the virtual network that your other VMs are on and ensure that a Public IP is being created for the Master Image. Click Review + Create. Ensure that the Validation Passed message appears and check the machine settings. 
Click Create to begin the Master Image VM creation. Once the VM creation completes, click Go to resource. The VM must have a networking rule to allow incoming RDP traffic on its Public IP. Click Networking in the Favorites column. Click Add inbound port rule. Your public IP (needed to make RDP connections to the AD VM) can be obtained by running a Google search for "what is my IP". Select IP Address as Source, enter the Public IP Address of the machine you want to connect from in the Source IP field, set Destination Port ranges to 3389, and Protocol to TCP. Set an appropriate Priority value and provide a name for the rule. Click Add. Note: Leaving port 3389 open remotely long-term can pose a security risk. RDP in to the machine with the admin credentials you provided when creating the VM, join the VM to the domain, and reboot the machine. Create a Cloud Connector in your Azure subscription Select + or + Create a resource in Azure. Select Windows Server 2019 Datacenter under Get Started to create a new Windows Server 2019 machine. Select the appropriate Subscription and the Resource group created for AVD to deploy the machine in. Provide a name for the Cloud Connector VM. Choose the same region as the AD VM. Enter the credentials for the administrator account. Click Next: Disks. Select the appropriate OS disk type and Encryption Type for your deployment. Click Next: Networking. Select the virtual network that your other VMs are on and ensure that a Public IP is being created for the Cloud Connector VM. Click Review + Create. Ensure that the Validation Passed message appears and check the machine settings. Click Create to begin the Cloud Connector VM creation. Once the VM creation completes, click Go to Resource. The VM must have a networking rule to allow incoming RDP traffic on its Public IP. 
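The inbound port rule above maps directly onto Azure's network security group schema. The sketch below expresses it as an ARM security-rule fragment; the rule name and source IP are examples, while the property names follow the Microsoft.Network/networkSecurityGroups resource schema.

```python
# Sketch of the inbound RDP rule as an ARM securityRule fragment.
# Name and source IP are examples; lock the source to YOUR public IP only.
rdp_rule = {
    "name": "Allow-RDP-From-MyIP",              # example rule name
    "properties": {
        "protocol": "Tcp",
        "sourceAddressPrefix": "203.0.113.10",  # your public IP (example)
        "sourcePortRange": "*",
        "destinationAddressPrefix": "*",
        "destinationPortRange": "3389",         # RDP
        "access": "Allow",
        "priority": 300,                        # lower number = evaluated first
        "direction": "Inbound",
    },
}

assert rdp_rule["properties"]["destinationPortRange"] == "3389"
```

Scoping `sourceAddressPrefix` to a single IP, rather than `*`, is what keeps the temporary 3389 exposure tolerable during the PoC.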
Click Networking in the Favorites column and then click the name of the network interface. Click Network Security Group and then select the Network Security Group of your AVD VM, as it already has the port rules to allow access to your machine. Click Save. Note: Leaving port 3389 open remotely long-term can pose a security risk. RDP in to the machine with the admin credentials you provided when creating the VM, join the Cloud Connector VM to the domain, and reboot the machine. Create a Citrix Cloud account If you are new to Citrix Cloud, follow the instructions on the Citrix Cloud Sign Up page. If you are an existing Citrix Cloud customer, continue on to the next section. Ensure that you have an active Citrix Cloud account. If your account has expired, you need to contact sales to enable it. Create a new Resource Location RDP to the Cloud Connector VM and log in as the AD admin. Go to the URL: Citrix Cloud. Enter Username and Password. Click Sign In. (If your account manages more than one customer, select the appropriate one.) Under Resource Locations, click Edit or Add New. Click + Resource Location and enter the name of the new Resource Location. Click Save. Under the newly created resource location, click + Cloud Connectors. Click Download. Click Run when the download begins. The Citrix Cloud Connectivity test successful message is displayed. Click Sign in and Install to continue. If the test fails, check the following link to resolve the issue - CTX224133. From the drop-down lists, select the appropriate Customer and Resource Location. Click Install. Once the installation completes, a service connectivity test runs. Let it complete and you again see a successful result. Click Close. Refresh the Resource Location page in Citrix Cloud. Click Cloud Connectors. The newly added Cloud Connector is listed. 
In production environments, ensure that you have two Cloud Connectors per resource location. Request a Citrix DaaS trial Sign in to your Citrix Cloud account. From the management console, select Request Trial for the Citrix DaaS service. Install the Virtual Delivery Agent on the AVD host VM While we wait, we can install the Citrix Virtual Apps and Desktops Virtual Delivery Agent on the Windows 11 multi-session VM that we created. Connect to the AVD VM via RDP as the domain admin. Open Citrix.com in your browser. Hover over Sign in and click My Account. Sign in with your Username and Password. Click Downloads. From the Select a Product drop-down list, select Citrix Virtual Apps and Desktops. In the page that opens, select the latest version of Citrix Virtual Apps and Desktops 7 (without the .x at the end). Scroll down to Components that are on the product ISO but also packaged separately. Click the chevron to expand the section. Click Download File under Server OS Virtual Delivery Agent. Check the “I have read and certify that I comply with the Export Control Laws” checkbox, if you agree. Click Accept. The download begins. Save the file and run it when the download completes. Click Next in the Environment section to create a master MCS image. In the Core Components section, check the Citrix Workspace app checkbox if your users need to launch sessions from within their session. Click Next. In the Additional Components section, choose the components that you need and click Next. NOTE: To see logon information in Citrix Director, also select Citrix User Profile Manager. Enter the FQDN of the Cloud Connector VM and click Test Connection. Ensure that the test is successful and a green tick appears next to the entered FQDN. Click Add and click Next. Click Next in the Features section and Next again in the Firewall section. Click Install in the Summary section. Once the installation completes, in the Diagnostics section click Connect. Enter your Citrix Cloud credentials, click OK. 
Once the credentials are validated, click Next. Click Finish and let the VM reboot. Create a hosting connection between Citrix DaaS and Azure Configure Citrix DaaS to connect to the Azure subscription that hosts the Azure Virtual Desktop VMs. Once the trial is approved, log in to Citrix Cloud from your local machine. Scroll to My Services, locate the DaaS service tile, and click Manage. The Full Configuration page is displayed. In the left navigation menu, click Zones and verify that the Resource Location and Cloud Connector you have set up are visible. In the left menu under Configuration, click Hosting and then click Add Connection and Resources that host the machines. From the drop-down lists, select Create New Connection, Microsoft Azure as the Connection Type, Azure Global as the Azure environment, and the Azure zone set up as a Resource Location. Ensure Citrix provisioning tools (Machine Creation Services) is selected. Click Next. Paste your Azure Subscription ID in the Subscription ID text box and enter a Connection Name. Click Create New to create a new service principal. Alternatively, you can manually grant Citrix Cloud access to the Azure subscription (with more restrictive roles than the default Contributor) - CTX224110. Sign in to your Azure account when prompted. Ensure that the user is an owner and not an external user in the subscription. Check the Consent on behalf of your organization checkbox and click Accept if you agree. Once the validation completes, Connected is displayed. Click Next. Select the appropriate Region and click Next. Enter a Name for these resources and select the appropriate Virtual network and Subnet where the VMs are to be created. Click Next. Review the Summary and click Finish. You are returned to the Hosting page. Once you are done, click Machine Catalogs to start creating your catalog. Create a Machine Catalog and a Delivery Group Click Create Machine Catalog. In the Machine Catalog Setup dialog, click Next. Ensure Multi-session OS is selected. 
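The hosting connection gathered above boils down to a handful of values. The sketch below collects them as a plain configuration record; the field names are illustrative assumptions (not the exact Citrix DaaS API schema), and all values are placeholders for what the wizard or the service principal creation returns.

```python
# Hypothetical sketch of the hosting connection settings as a config record.
# Field names are assumptions; values are placeholders from the wizard.
hosting_connection = {
    "Name": "Azure-AVD-Connection",                 # assumed connection name
    "ConnectionType": "AzureRM",                    # Microsoft Azure
    "Environment": "AzureGlobal",
    "SubscriptionId": "<azure-subscription-id>",
    "ApplicationId": "<service-principal-app-id>",  # made via "Create New"
    "ApplicationSecret": "<service-principal-secret>",
    "Zone": "<resource-location-zone>",             # the Azure resource location
    "ProvisioningTool": "MCS",                      # Machine Creation Services
}

assert hosting_connection["ProvisioningTool"] == "MCS"
```

Whether the wizard creates the service principal or you grant access manually (per CTX224110), the same application ID and secret pair ends up backing the connection.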
Click Next. Ensure Machines that are power managed and Citrix Machine Creation Services are selected and the correct Azure network is shown in the Resources. Click Next. Choose the correct disk that is associated with the AVD VM. From the minimum functional level drop-down list, select 1811 (or newer). Click Next. A pop-up appears asking for the VM attached to the VHD to be stopped. Log in to the Azure portal and, under Virtual Machines, go to the AVD VM and click the Stop button. Ignore the warning about losing the Public IP. Wait for the status to show Stopped (deallocated). Return to the Citrix Cloud tab and click Close. Leave the defaults and click Next. Modify the number of VMs if you want and select the machine size you want for your VMs. Click Next. Click Next. Set the write-back cache size if you want it. Click Next. Click Next and click Close. Select the OU in which the VMs are placed. Enter the computer Account naming scheme. Ensure that the name is fewer than 15 characters long and ends with a #. Click Next. Click Enter Credentials. In the dialog that opens, enter the username and password of the AD admin. Click Done. Click Next. Click Next. Click Next. Enter a name for the machine catalog. Click Finish. Wait for the catalog to be created. When Machine Catalog creation is finished, from the left side menu click Delivery Groups, then Create Delivery Group. Select the Windows 11 multi-session catalog. Increment the number of machines to the number of VMs you want to add to the delivery group. Click Next. For our example we select the Allow any authenticated users to use this Delivery Group radio button. Click Next. If you want to also make apps available from this delivery group, click the Add drop-down list and choose From Start Menu. In the Add Applications from Start Menu dialog, check the boxes next to the apps you want to make available. Then click OK. Click Next. In the Desktops section click Add. Enter a Display Name for the delivery group. Ensure Enable desktop is checked. 
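The naming-scheme rule above (fewer than 15 characters, ending with a #, where the # is replaced by a counter) is easy to check up front. A minimal sketch, with example scheme names that are assumptions:

```python
# Validate an MCS account naming scheme: it must stay under the 15-character
# NetBIOS computer-name limit and end with "#", the counter placeholder.
def valid_naming_scheme(scheme: str) -> bool:
    return len(scheme) < 15 and scheme.endswith("#")

assert valid_naming_scheme("AVD-W11MS-#")            # 11 chars, ends with #
assert not valid_naming_scheme("AVD-W11MS-Catalog#") # too long: 18 chars
assert not valid_naming_scheme("AVD-W11MS")          # missing the # placeholder
```

Keeping the scheme a few characters shorter than the limit leaves room for the counter digits MCS substitutes for the #.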
Click OK. Click Next. Click Next. Select the appropriate License Type and click Next. Enter a Delivery Group name. Click Finish. Once the delivery group is created, your Delivery Group overview looks like this. If you want to add your on-premises resources to the Workspace, follow the steps below. Create a Cloud Connector in your on-premises data center Add a cloud connector in your on-premises data center. Create a Windows Server 2012 R2 or 2016 VM in your on-premises environment. Repeat the steps in the “Create a new Resource Location” section. Add the on-premises site to the Workspace Follow the steps in the guide to add the on-premises site to Citrix DaaS. Complete until the end of Task 3: Configure connectivity and confirm settings on that page. Launch the session from Citrix Workspace Open the Workspace URL you saved earlier (from Citrix Cloud) to reach Citrix Workspace. Log in as one of the domain users that you added to the Delivery Group. Click View all Desktops. Click the Windows 11 Multi-session DG desktop that you created in Azure. The session launch gives you access to the Azure Virtual Desktop. Summary The guide walked you through bringing your Azure-hosted Azure Virtual Desktop and on-premises resources (using Workspace Configuration) together, so users access them in one place. You learned how to create a hybrid setup to manage both Azure Virtual Desktop based VMs and on-premises resources using Citrix Virtual Apps and Desktops. The process included creating a network connection between the Azure virtual network and your on-premises data center. Also, you learned how to synchronize your on-premises Active Directory with Azure Active Directory using Azure AD Connect. We even looked at how to create a Citrix Cloud account, if you didn't have one, and get access to the Citrix Virtual Apps and Desktops service, which makes all this work. To learn more about migrating your on-premises Citrix Virtual Apps and Desktops setup to the cloud, read the deployment guide
25. Overview Citrix developed the Citrix Image Portability Service (IPS) to simplify moving workloads between different resource locations and hypervisor platforms. With IPS you can even move your workloads between on-premises and public cloud environments. Citrix Image Portability Service provides Citrix administrators with a simple workflow to manage workloads between on-premises and public cloud platforms like: Microsoft Azure Google Cloud Platform (GCP) Amazon Web Services (AWS) VMware vSphere Citrix XenServer Nutanix Developed using App Layering cross-platform technology, the Citrix Image Portability Service uses Citrix DaaS REST APIs to migrate on-premises Machine Creation or Provisioning Services Images. The Image Portability workflow is the framework for a migration of an Image from your on-premises location to your public cloud subscription. After exporting your Image, Image Portability Service helps you transfer the Image to your public cloud subscription and prepare it to run. Finally, Citrix Provisioning or Machine Creation Services provisions the Image in your public cloud subscription. The product documentation can be found at https://docs.citrix.com/en-us/citrix-daas/migrate-workloads.html. Scope In this guide, we show how to migrate workloads from on-premises (VMware vSphere 8) to public cloud (Microsoft Azure and AWS). We use three methods: Pure API calls using Postman PowerShell (PoSH) A self-written Windows application that encapsulates all necessary API calls and PowerShell scripts into a GUI Conceptual architecture The Citrix Image Portability Service (IPS) is a REST API with all the semantics and requirements of other Citrix DaaS and Virtual Apps and Desktops APIs. The headers and authentication work the same as the other Citrix APIs do. Citrix IPS relies on the Connector Appliance for Cloud Services to communicate with Citrix Cloud. 
IPS uses provisioning technology to create and manage resources in the resource location where the Connector Appliance is installed. More info: https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/Image-portability-service.html. Image Portability Service components Citrix Cloud services Citrix Credential Wallet Citrix Connector Appliance Compositing Engine VM PowerShell modules Citrix Cloud services The Citrix Cloud Services API is a REST API service that interacts with the Image Portability Service. Using the REST API service, you can create and monitor Image Portability jobs. For example, you make an API call to start an Image Portability job, such as to export a disk, and then make calls to get the status of the job. Citrix Credential Wallet The Citrix Credential Wallet service securely manages system credentials, allowing the Image Portability Service to interact with your assets. For example, when exporting a disk from vSphere to an SMB share, the Image Portability Service requires credentials to open a connection to vSphere and to the SMB share to write the disk. The credentials are stored in the Credential Wallet, and the Image Portability Service can retrieve and use those credentials. Create all necessary credentials in the right format before using IPS. This service gives you the ability to fully manage your credentials. The Cloud Services API acts as an access point, giving you the ability to create, update, and delete credentials. Compositing Engine The Compositing Engine is the workhorse of the Image Portability Service. The Compositing Engine (CE) is a single VM created at the start of an Image Portability job. The CE is created in the job's target hypervisor/hyperscaler. For example, when exporting a disk from vSphere, the job creates the CE on the vSphere server. When a prepare job runs in Azure, AWS, or Google Cloud, it creates the CE in Azure, AWS, or Google respectively. 
The CE mounts your disk to itself, and then makes the necessary manipulations to the disk. Upon completion of the preparation or export job, the CE VM and all of its components are deleted.

Connector Appliance

The Connector Appliance runs in your environment and acts as a controller for individual jobs. It receives job instructions from the Cloud service, and creates and manages the Compositing Engine VMs. The Connector Appliance VM acts as a single, secure point of communication between Citrix Cloud services and your resource location. Deploy one or more Connector Appliances in each of your Resource Locations. By co-locating the Connector Appliance and the Compositing Engine, the deployment's security posture increases greatly, as the Connector Appliance keeps all components and communications within the same Resource Location. The Connector Appliance needs access to the following URLs to prepare Images in the Image Portability Service:

*.layering.cloud.com
credentialwallet.citrixworkspaceapi.net
graph.microsoft.com
login.microsoftonline.com
management.azure.com
*.blob.storage.azure.net

PowerShell modules

Citrix provides a collection of PowerShell modules for use within scripts as a starting point to develop your own custom automation. The supplied modules are supported as is, but you can modify them if necessary for your deployment. The PowerShell automation uses the supplied configuration parameters to compose a REST call to the Citrix Cloud API service to start the job, and provides you with periodic updates as the job progresses. If you want to develop your own automation solution, you can make calls to the cloud service directly using your preferred programming language. See the API portal for detailed information about configuring and using the Image Portability Service REST endpoints and PowerShell modules: https://developer.cloud.com/citrixworkspace/citrix-daas/Image-portability-service/docs/overview.
To use the PowerShell scripts, you need the following:

The latest version of PowerShell
Connectivity to the Microsoft PowerShell Gallery to download the required PowerShell libraries

Install-Module -Name PowerShellGet -Force -Scope CurrentUser -AllowClobber
Install-Module -Name "Citrix.Workloads.Portability","Citrix.Image.Uploader" -Scope CurrentUser
Update-Module -Name "Citrix.Workloads.Portability","Citrix.Image.Uploader" -Force
Install-Module -Name Az.Accounts -Scope CurrentUser -AllowClobber -Force
Install-Module -Name Az.Compute -Scope CurrentUser -AllowClobber -Force
Install-Module -Name VMware.PowerCLI -Scope CurrentUser -AllowClobber -Force -SkipPublisherCheck
Install-Module -Name AWS.Tools.Installer
Install-AWSToolsModule AWS.Tools.EC2,AWS.Tools.S3

All PowerShell cmdlets have built-in help containing full syntax and examples:

Get-Help Start-IpsVsphereExportJob -Full

PS C:\TACG> Get-Help Start-IpsVsphereExportJob

NAME
Start-IpsVsphereExportJob

SYNOPSIS
Starts an Image Portability Service job to export an Image from vSphere.
SYNTAX
Start-IpsVsphereExportJob -CustomerId <String> -SmbHost <String> [-SmbPort <String>] -SmbShare <String> [-SmbPath <String>] -SmbDiskName <String> [-SmbDiskFormat <String>] -SmbCwId <String> [-Deployment <String>] -ResourceLocationId <String> -VsphereCwSecretId <String> -VsphereHost <String> [-VspherePort <Int32>] [-VsphereSslCaCertificateFilePath <String>] [-VsphereSslCaCertificate <String>] [-VsphereSslFingerprint <String>] [-VsphereSslNoCheckHostname <Boolean>] -VsphereDataCenter <String> -VsphereDataStore <String> [-VsphereResourcePool <String>] -VsphereNetwork <String> [-VsphereHostSystem <String>] [-VsphereCluster <String>] -SourceDiskName <String> [-AssetsId <String>] [-Tags <Hashtable>] [-Timeout <Int32>] [-Prefix <String>] [-JobDebug <Boolean>] [-Flags <String[]>] [-DryRun <Boolean>] [-SecureClientId <String>] [-SecureSecret <String>] [-LogFileDir <String>] [-LogFileName <String>] [-OverwriteLog] [-Force] [<CommonParameters>]

Start-IpsVsphereExportJob -ConfigJsonFile <String> [-SecureClientId <String>] [-SecureSecret <String>] [-LogFileDir <String>] [-LogFileName <String>] [-OverwriteLog] [-Force] [<CommonParameters>]

DESCRIPTION
Starts an Image Portability Service job to export an Image from vSphere to a virtual disk file on an SMB file share. Start-IpsVspherePublishJob is an alias for this cmdlet.

REMARKS
To see the examples, type: "Get-Help Start-IpsVsphereExportJob -Examples".
For more information, type: "Get-Help Start-IpsVsphereExportJob -Detailed".
Technical information can be obtained by using: "Get-Help Start-IpsVsphereExportJob -Full".

Requirements

A Citrix Connector Appliance in all resource locations where IPS is used
A Windows (SMB) file share that is locally accessible to any export, prepare, and publish job
A valid Citrix Cloud Customer ID and Citrix DaaS entitlement
An on-premises master Machine Creation Services (MCS) or Citrix Provisioning (PVS) Image
Access to the public cloud resource location
A Citrix Machine Catalog Image - IPS requires using Images that have one of the following tested configurations:

Windows Server 2016, 2019, and 2022H2
Windows 10 or 11
Provisioned using Machine Creation Services or Citrix Provisioning
Citrix Virtual Apps and Desktops VDA version 1912CU6, 1912CU7, 2203CU1, 2203CU2, 2212, 2303, 2305, 2308
Citrix PVS Agent version 1912CU6, 1912CU7, 2203CU1, 2203CU2, 2212, 2303, 2305, 2308
Remote Desktop Services enabled for console access in Azure

The Image Portability Service supports the following hypervisors and cloud platforms:

Source platforms: VMware vSphere 7.0 and 8.0, Citrix Hypervisor/XenServer 8.2, Nutanix Prism Element 3.x, Microsoft Azure, Google Cloud Platform

Destination platforms: VMware vSphere 8.0, Microsoft Azure, AWS, Google Cloud Platform

For using Postman

Download the Postman app for API calls: https://www.postman.com/downloads/

For using PowerShell

Download the Remote PowerShell SDK: https://docs.citrix.com/en-us/citrix-daas/sdk-api.html#citrix-virtual-apps-and-desktops-remote-powershell-sdk

Install-Module -Name "Citrix.Workloads.Portability","Citrix.Image.Uploader"
Add-PSSnapin Citrix.*
Get-Module -ListAvailable "Citrix.*"

Folder: C:\Program Files\WindowsPowerShell\Modules

ModuleType Version  Name                         ExportedCommands
---------- -------  ----                         ----------------
Script     2.1.11.0 Citrix.Image.Uploader        {Copy-ToAzDisk, Copy-ToAwsDisk, Get-VhdSize, Get-VhdConten...
Script     2.3.1    Citrix.Workloads.Portability {Start-IpsAwsPrepareJob, Start-IpsVsphereExportJob, Start-...

For using the Windows .NET application

Download and install the DaaS Remote PowerShell SDK
Download and install the latest version of PowerShell
Download and install the latest version of Azure PowerShell
Download and install .NET 7.0 Core
Download and install Chilkat .NET Core

The .NET application demonstrates how to automate workflows by invoking REST-API calls and PowerShell scripts. The source code implements all four steps of the Image Migration Workflow.
Complete Citrix Cloud Prerequisites

Furthermore, some account information needs to be obtained - the most important is the Customer ID. After logging in to Citrix Cloud, open the Account Settings window of the Cloud portal. In the Account Settings window you find the Customer ID. Write it down or save it for further use.

Create an API client

API clients in Citrix Cloud are always tied to one administrator and one customer. API clients are not visible to other administrators. If you want to access more than one customer, you must create API clients within each customer. API clients are automatically restricted to the rights of the administrator who created them. For example, if an administrator is restricted to access only notifications, then the administrator's API clients have the same restrictions:

Reducing an administrator's access also reduces the access of the API clients owned by that administrator.
Removing an administrator's access also removes the administrator's API clients.

To create an API client, select the Identity and Access Management option from the menu. If this option does not appear, you may not have adequate permissions to create an API client; contact your administrator to get the required permissions. On the next screen select the API Access tab. Name your Secure Client and click Create Client. A message appears confirming that ID and Secret have been created successfully. Download or copy the Client ID and Secret to access the APIs.

Accessing Citrix Cloud via API calls or PowerShell

To call APIs, you must also create a Bearer token. The Bearer token is used for API authentication and authorization. Tokens can be obtained using a standard OAuth 2.0 Client Credentials grant flow. For more information about the OAuth 2.0 Client Credentials grant, see https://tools.ietf.org/html/rfc6749#section-4.4.
To get a Bearer token, make a POST call to the trust service's authentication API:

POST https://api-us.cloud.com/cctrustoauth2/{customerid}/tokens/clients

Flow to obtain a Bearer token using the Postman app

It is important to set the URI to call and the required parameters correctly. The URI must follow the syntax https://api-us.cloud.com/cctrustoauth2/{customerid}/tokens/clients, where {customerid} is the Customer ID you obtained from the Account Settings page. If your Customer ID is, for example, 1234567890, the URI is https://api-us.cloud.com/cctrustoauth2/1234567890/tokens/clients.

Paste the correct URI into Postman's address bar and select POST as the method. Verify the correct settings of the API call - check the Headers tab to reflect the settings. The next step is to enter your API credentials into the Body of the API call. Make sure that the parameter type is set to x-www-form-urlencoded. Now you are ready to call the API to get a Bearer token by pressing the Send button.

If everything is set correctly, Postman shows a Response containing a JSON-formatted body with the Bearer token in the field access_token. The token is normally valid for 3600 seconds. If an error occurred, the Response contains some hints about the error.

Response:

{
"token_type": "bearer",
"access_token": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9…pZWFOBHuZ63tvGvRA",
"expires_in": "3600"
}

Now you have a valid Bearer token for further API calls.
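The same token flow can be sketched in code. The following Python sketch uses only the standard library; the client ID and secret are the Secure Client values created earlier, the sample values are placeholders, and the CWSAuth header format mirrors the API calls shown later in this guide:

```python
import json
import urllib.parse

def build_token_request(customer_id, client_id, client_secret,
                        api_host="api-us.cloud.com"):
    """Build URL and x-www-form-urlencoded body for the trust service POST."""
    url = f"https://{api_host}/cctrustoauth2/{customer_id}/tokens/clients"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return url, body

def parse_token_response(raw_json):
    """Extract the Bearer token and its lifetime from the JSON response."""
    payload = json.loads(raw_json)
    return payload["access_token"], int(payload["expires_in"])

def cwsauth_headers(bearer_token, customer_id):
    """Headers required by subsequent Citrix Cloud / IPS REST calls."""
    return {
        "Authorization": f"CWSAuth bearer={bearer_token}",
        "Citrix-CustomerId": customer_id,
        "Accept": "application/json",
    }

# Using the sample response shape from the guide (token truncated):
sample = '{"token_type": "bearer", "access_token": "eyJhb...", "expires_in": "3600"}'
token, ttl = parse_token_response(sample)
headers = cwsauth_headers(token, "1234567890")
```

Sending the POST itself (for example with urllib.request) then works exactly like pressing Send in Postman; the returned headers dictionary is what every later call in this guide attaches.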
Flow to obtain a Bearer token using PowerShell

PS C:\TACG> $tokenUrl = 'https://api-eu.cloud.com/cctrustoauth2/1234567890/tokens/clients'
PS C:\TACG> $response = Invoke-WebRequest $tokenUrl -Method POST -Body @{
>> grant_type = "client_credentials"
>> client_id = "50XXXXXXXX-XXXXX-XXXXX-XXXXXXXXXXX7"
>> client_secret = "8MXXXXXXXXXXXXXXXXXXX=="
>> }
PS C:\TACG> $token = $response.Content | ConvertFrom-Json
PS C:\TACG> $token | Format-List

token_type   : bearer
access_token : J9.eyJzdWI ... BTp3KN5N_qr7Hjk-VTQy3Qp2dCId_cnagZaQleo1E98ifw2eUch1vu8tjYR-_NkA
expires_in   : 3600

Flow to obtain a Bearer token using the Windows .NET application

All relevant data is stored in an adjacent JSON file - you can follow the guide to obtain the Customer ID and the API Client values. After entering the correct values you can use the Windows application to create the Bearer token. Now you have your Bearer token for authentication and authorization. It must be included in each API or PowerShell call.

Image Migration Workflow

Migrating Images with the Citrix Image Portability Service consists of a 4-stage workflow:

Export: The export stage exports Images from the on-premises hypervisor and preps them for upload to the public cloud resource location. This stage can include converting file system types to a common format for the cloud.

Upload: The upload stage uploads the Image to the target cloud subscription. This process is a point-to-point transfer using the configuration and credentials for the public cloud supplied by the administrator.

Prepare: The prepare stage is a complex set of steps:

Removal of source platform components
Injection of target platform components into the Image
Reconfiguration of the VDA
Rearm of the OS based on the supplied configuration properties

At the end of this phase, the Image boots once to allow Windows plug-and-play to run and configure the OS for the new platform. Once the preparation completes, the Image is ready for provisioning with MCS.
Publish: The final stage deploys the Image as a new machine catalog. In the case of MCS, the Citrix Virtual Apps and Desktops Remote PowerShell SDK can automate the creation of an MCS catalog from the migrated disk. In the case of PVS on a hyperscaler, the Image Portability Service provides a REST interface or a PowerShell cmdlet to publish the migrated disk directly into the PVS vDisk store.

Throughout the four stages, the Image Portability Service uses App Layering compositing engines in the background to modify the Image and drive the process. All Citrix Image Portability workflows are based on the configuration of the source Image and the provisioning targets, either MCS or PVS. The workflow chosen determines the steps required by the Image Portability Service.

Part 1: Exporting the Image - Prerequisites

Exporting the Image to an SMB file share is a multi-step process:

Determine the ID of the Resource Location where the Image resides
Add new credentials to the Credential Wallet
Export the Image to the SMB share

In this guide we export the Image from VMware vSphere. The following vCenter permissions are necessary to run the IPS export disk job in a VMware environment. For current permission requirements see the product documentation: https://docs.citrix.com/en-us/citrix-daas/migrate-workloads.html#vmware-vcenter-required-permissions.
- Cryptographic operations
  - Direct Access
- Datastore
  - Allocate space
  - Browse datastore
  - Low level file operations
  - Remove file
- Folder
  - Create folder
  - Delete folder
- Network
  - Assign network
- Resource
  - Assign virtual machine to resource pool
- Virtual machine
  - Change Configuration
    - Add existing disk
    - Add new disk
    - Remove disk
  - Edit Inventory
    - Create from existing
    - Create new
    - Remove
  - Interaction
    - Power off
    - Power on

Determining the ID of the Resource Location using Postman

GET https://api-eu.cloud.com/resourcelocations
Authorization: CWSAuth bearer={{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}

Response (shortened):

{
"items": [
{
"id": "58110e95-XXXX-XXXX-XXXX-XXXXXXXXXX",
"name": "The Austrian Citrix Guy - vSphere",
"internalOnly": false,
"timeZone": "GMT Standard Time",
"readOnly": false
},
{
"id": "5faae27c-XXXX-XXXX-XXXX-XXXXXXXXXX",
"name": "The Austrian Citrix Guy - Amazon AWS",
"internalOnly": false,
"timeZone": "GMT Standard Time",
"readOnly": false
},
{
"id": "76918c03-XXXX-XXXX-XXXX-XXXXXXXXXX",
"name": "The Austrian Citrix Guy - AzHib",
"internalOnly": false,
"timeZone": "GMT Standard Time",
"readOnly": false
},
...
]
}

Copy the value of the id field as you need it for further calls.
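Instead of copying the id by hand, the response can be filtered by Resource Location name. A small Python sketch, assuming the JSON shape of the response above; the sample ids below are shortened placeholders, not real values:

```python
import json

def resource_location_id(response_json, name):
    """Return the id of the Resource Location with the given name."""
    items = json.loads(response_json)["items"]
    for item in items:
        if item["name"] == name:
            return item["id"]
    raise KeyError(f"Resource Location {name!r} not found")

# Sample shaped like the (shortened) response above:
sample = '''{"items": [
  {"id": "58110e95-0000", "name": "The Austrian Citrix Guy - vSphere"},
  {"id": "5faae27c-0000", "name": "The Austrian Citrix Guy - Amazon AWS"}]}'''
rl_id = resource_location_id(sample, "The Austrian Citrix Guy - vSphere")
```

The same lookup works for the PowerShell output below by matching on the Name column instead.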
Determining the ID of the Resource Location using PowerShell

PS C:\TACG> Get-ConfigZone | select Name, ExternalUid

Name                                          ExternalUid
----                                          -----------
Initial Zone                                  00000000-XXXX-XXXX-XXXX-XXXXXXXXXX
The Austrian Citrix Guy - Amazon AWS          5faae27c-XXXX-XXXX-XXXX-XXXXXXXXXX
The Austrian Citrix Guy - AzGPU               b46c6873-XXXX-XXXX-XXXX-XXXXXXXXXX
The Austrian Citrix Guy - AzHib               76918c03-XXXX-XXXX-XXXX-XXXXXXXXXX
The Austrian Citrix Guy - AzStHCI local       e344b537-XXXX-XXXX-XXXX-XXXXXXXXXX
The Austrian Citrix Guy - Azure Connectorless eec20fa2-XXXX-XXXX-XXXX-XXXXXXXXXX
The Austrian Citrix Guy - On-Prem             dc4b9f88-XXXX-XXXX-XXXX-XXXXXXXXXX
The Austrian Citrix Guy - vSphere             58110e95-XXXX-XXXX-XXXX-XXXXXXXXXX

Copy the value of the ExternalUid field as you need it for further calls.

Determining the ID of the Resource Location using the Windows .NET Application

The .NET application requests all available Resource Locations automatically. You can choose the needed one using a drop-down field.

Add new credentials to the Credential Wallet using Postman

The correct type of credentials is imperative for each operation - for example, for Azure you need Azure-type credentials, for SMB shares SMB-type credentials, and so forth. The examples show the creation of UsernamePassword-type credentials. More info about the different credential types and their creation can be found here: https://developer-docs.citrix.com/en-us/citrix-daas-service-apis/Image-portability-service/credentials.
POST https://api.eu.layering.cloud.com/credentials
Authorization: CWSAuth bearer={{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}

Body (Raw):
{
"id": "vspXXXXXXXX",
"type": "UsernamePassword",
"username": "XXXXXXXXXXXX",
"domain": "XXXXXXXX",
"password": "XXXXXXXXXXXXX"
}

Response (shortened):
{
"id": "vspXXXXXXXXX",
"type": "UsernamePassword",
"state": "Ok",
"createdAt": "2023-11-13T09:00:18.2371226Z",
"updatedAt": "2023-11-13T09:00:18.2371226Z"
}

Add new credentials to the Credential Wallet using PowerShell

PS C:\TACG> $Params = @{
>> CustomerId = 'uzXXXXXXXXXXX'
>> CredentialType = 'UsernamePassword'
>> CredentialId = 'platformid'
>> UserDomain = 'XXXXXXXXXXXXXXXXXXXX'
>> UserName = 'XXXXXXXXXXXXXXXXXXXXXXX'
>> UserPassword = 'XXXXXXXXXXXXXXXXXXXXXXXXX'
>> }
PS C:\TACG> New-IpsCredentials @Params

Logging to C:\TACG\Credentials.log Verbose=False
Interactively authenticating for Citrix customer uzXXXXXXXXXXX.
Authenticated for Citrix customer uzXXXXXXXXXXX.
Creating new UsernamePassword credential platformid
geo EU api url https://api.eu.layering.cloud.com/
Created credential id platformid for name platformid

Part 1: Exporting the Image from on-premises vSphere using Postman

Before we can export the Image, we need more information from the vSphere platform. The following parameters must be obtained from vSphere:

vCenterHost
vCenterPort
datacenter
datastore
cluster
network
sourceDisk

The following parameters are also required:

- Prefix: name of the export job
- ResourceLocationID: ID of the Resource Location of the Image
- OutputStorageLocation:
  - Type (for example SMB)
  - Credential-ID (for example SMB)
  - Host-IP (for example 10.10.11.44): must be reachable by the Connector Appliance in the Resource Location of the Image
  - SharePath (for example SMB): must be reachable by the Connector Appliance in the Resource Location of the Image

When all values are set we can call the API to export the Image.
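The request body for the credential call can also be generated programmatically before POSTing it, which avoids copy-paste mistakes in the JSON. A Python sketch mirroring the UsernamePassword example above; all values are placeholders:

```python
import json

def username_password_credential(cred_id, username, domain, password):
    """JSON body for POST /credentials, matching the Postman example above."""
    return json.dumps({
        "id": cred_id,
        "type": "UsernamePassword",
        "username": username,
        "domain": domain,
        "password": password,
    })

# Placeholder values only - never hard-code real secrets in scripts:
body = username_password_credential("vsp01", "svc-ips", "TACG", "secret")
```

The resulting string is sent with the CWSAuth and Citrix-CustomerId headers shown earlier; other credential types (Azure, SMB) only differ in the fields of the dictionary.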
POST https://api.eu.layering.cloud.com/Images/$export?async=true
Authorization: CWSAuth bearer={{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}

Body (Raw):
{
"platform": "vSphere",
"platformCredentialId": "vsXXXXXXXXX",
"vCenterHost": "(FQDN)",
"vCenterPort": 443,
"vCenterSslNoCheckHostname": "true",
"datacenter": "XXXX",
"datastore": "XXXXX",
"cluster": "TACG",
"network": "VM Network",
"prefix": "my-cejob",
"resourceLocationId": "58110e95-XXXX-XXXX-XXXX-XXXXXXXXXX",
"outputStorageLocation": {
"type": "SMB",
"credentialId": "XXXXXXXXX",
"host": "10.10.11.99",
"sharePath": "_sw"
},
"outputImageFilename": "my-exported-Image",
"timeoutInSeconds": 3600,
"sourceDiskName": "ds:///vmfs/volumes/65245805-XXXX-XXXX-XXXX-48210b5c6cd7/TACG-VSP-W11-M/TACG-VSP-W11-M.vmdk"
}

Response:
{
"id": "abcXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX",
"type": "exportImage",
"overallProgressPercent": 0,
"isCancellable": true,
"parameters": [],
"status": "notStarted",
"resultLocation": null,
"additionalInfo": null,
"warnings": [],
"error": [],
"createdAt": "2023-10-31T14:05:15Z",
"updatedAt": "2023-10-31T14:05:15Z",
"startedAt": "2023-10-31T14:05:15Z"
}

As the job runs asynchronously, we need another REST-API call to get the status of the export job using the id parameter from the response:

GET https://api.eu.layering.cloud.com/jobs/{job-id}
Authorization: CWSAuth bearer={{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}

The Response of the call contains all progress and error information. If the call is successful, we see the exported Image on the referenced SMB share.
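Because the job runs asynchronously, clients repeat the GET on the jobs endpoint until the status leaves inProgress. The loop below sketches that polling pattern in Python; fetch_job stands in for the actual HTTP GET (it is an assumption, not part of the IPS API), and the simulated states mirror the status values shown in this guide:

```python
import time

TERMINAL_STATES = {"complete", "failed", "cancelled"}

def wait_for_job(fetch_job, job_id, poll_seconds=10, max_polls=1000):
    """Poll a job until it reaches a terminal state.

    fetch_job(job_id) must return a dict shaped like the job response
    above, with "status" and "overallProgressPercent" fields.
    """
    for _ in range(max_polls):
        job = fetch_job(job_id)
        if job["status"] in TERMINAL_STATES:
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {job_id} did not finish")

# Simulated job states, standing in for real GET /jobs/{job-id} responses:
states = iter([
    {"status": "notStarted", "overallProgressPercent": 0},
    {"status": "inProgress", "overallProgressPercent": 67},
    {"status": "complete", "overallProgressPercent": 100},
])
result = wait_for_job(lambda job_id: next(states), "abc-123", poll_seconds=0)
```

This is essentially what the Wait-IpsJob cmdlet used in the PowerShell sections does for you.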
Part 1: Exporting the Image from on-premises vSphere using PowerShell

Before we can export the Image, we need more information from the vSphere platform. The following parameters must be obtained from vSphere:

vCenterHost
vCenterPort
datacenter
datastore
cluster
network
sourceDisk

The following parameters are also required:

- Prefix: name of the export job
- ResourceLocationID: ID of the Resource Location of the Image
- OutputStorageLocation:
  - Type (for example SMB)
  - Credential-ID (for example SMB)
  - Host-IP (for example 10.10.11.44): must be reachable by the Connector Appliance in the Resource Location of the Image
  - SharePath (for example SMB): must be reachable by the Connector Appliance in the Resource Location of the Image

When all values are set we can call the API to export the Image:

PS C:\TACG> $ExportParams = @{
>> CustomerId = 'XXXXXXXXXX'
>> SecureClientId = '71XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXb'
>> SecureSecret = 'd-XXXXXXXXXXXXXX=='
>> SmbHost = '10.10.11.99'
>> SmbShare = '_SW'
>> SmbDiskName = 'ipsexp-ps'
>> SmbDiskFormat = 'VhdDiskFormat'
>> SmbCwId = 'SMB'
>> ResourceLocationId = '58XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX2'
>> VsphereCwSecretId = 'vsXXXXXXX'
>> VsphereHost = '(FQDN)'
>> VspherePort = '443'
>> VsphereSslNoCheckHostname = $true
>> VsphereDataCenter = 'TACG'
>> VsphereDataStore = 'XXXXXXXXXXXX'
>> VsphereNetwork = 'VM Network'
>> VsphereCluster = 'TAXXXXXXXXXX'
>> SourceDiskName = "ds:///vmfs/volumes/65245805-87e7af41-5aeb-48210b5c6cd7/TACG-VSP-W11-M/TACG-VSP-W11-M.vmdk"
>> }
PS C:\TACG> Start-IpsVsphereExportJob @ExportParams -Verbose | Wait-IpsJob

Logging to C:\TACG\ExportVsphereToSmb.log Verbose=True
Authenticating for Citrix customer XXXXXXXX using API key XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX.
VERBOSE: apikey
VERBOSE: __AllParameterSets
VERBOSE: Get-XDAuthentication: Enter
VERBOSE: invoking Get-XDAuthenticationEx:
VERBOSE: Get-XDAuthentication: Exit
Authenticated for Citrix customer XXXXXXXX.
Starting export workflow
***** Call Method: ExportImageJob overwrite: False *****
geo EU api url https://api.eu.layering.cloud.com/
VERBOSE: POST with -1-byte payload
VERBOSE: received 326-byte response of content type application/json; charset=utf-8
Image Export started with id 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX
Logging to C:\TACG\ExportVsphereToSmb.log Verbose=False
Interactively authenticating for Citrix customer XXXXXXXX.
Authenticated for Citrix customer XXXXXXXXX.
geo EU api url https://api.eu.layering.cloud.com/
Job 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX status: inProgress percent done: 0 parameters @{name=currentStep; value=GenerateSecurityData}
geo EU api url https://api.eu.layering.cloud.com/
Job 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX status: inProgress percent done: 27 parameters @{name=currentStep; value=CreateCeVm}
geo EU api url https://api.eu.layering.cloud.com/
Job 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX status: inProgress percent done: 67 parameters @{name=currentStep; value=WaitForExportImage}
geo EU api url https://api.eu.layering.cloud.com/
Job 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX status: inProgress percent done: 67 parameters @{name=currentStep; value=WaitForExportImage}
geo EU api url https://api.eu.layering.cloud.com/
Job 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX status: inProgress percent done: 73 parameters @{name=currentStep; value=GetCeLogs}
geo EU api url https://api.eu.layering.cloud.com/
Job 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX status: complete percent done: 100 parameters
Job 7a4XXXXXXX-XXXX-XXXX-XXXXXXXXXX final status: complete
Job profile @{created compositing engine=0:01:12.666709, prepared to create compositing engine=0:00:01.100751, deleted compositing engine=0:00:01.774440, uploaded export assets to compositing engine=00:00:03.0517790}

Artifacts                       Status   LogFileDir LogFileName
---------                       ------   ---------- -----------
{output, tags, disk, temporary} complete            ExportVsphereToSmb.log

If the PowerShell script completed successfully, we see the exported Image on the referenced
SMB share.

Part 1: Exporting the Image from on-premises vSphere using the Windows .NET Application

The first step obtains a Bearer token automatically. Before we can export the Image, we need more information from the vSphere platform. The following parameters must be obtained from vSphere:

vCenterHost
vCenterPort
datacenter
datastore
cluster
network
sourceDisk

The following parameters are also required:

- Prefix: name of the export job
- ResourceLocationID: ID of the Resource Location of the Image
- OutputStorageLocation:
  - Type (for example SMB)
  - Credential-ID (for example SMB)
  - Host-IP (for example 10.10.11.44): must be reachable by the Connector Appliance in the Resource Location of the Image
  - SharePath (for example SMB): must be reachable by the Connector Appliance in the Resource Location of the Image

Some of the parameters are stored in the adjacent JSON files of the Windows application:

{
"vCenterHost": "vcenter.the-austrian-citrix-guy.at",
"datacenter": "TACG",
"datastore": "datastore1 (1)",
"cluster": "TACG-CLUSTER",
"network": "VM Network",
"prefix": "my-cejob",
"outputImageFilename": "my-exported-Image",
"timeoutInSeconds": 36000,
"sourceDiskName": "ds:///vmfs/volumes/65XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXd7/TACG-VSP-W11-M/TACG-VSP-W11-M.vmdk",
"smb": {
"type": "SMB",
"host": "10.10.11.44",
"sharePath": "_TRANS"
}
}

The Resource Location ID, the platform credential, and the SMB share credential are loaded from Citrix Cloud and can be selected in the adjacent drop-down boxes. When all parameters are loaded, the export can be started by clicking the Export Image to SMB share button. After starting the export, the application requests a status update at regular intervals, as the export is an asynchronous task. Progress is shown until the export has completed.

Part 2: Uploading the Image - Prerequisites

The upload stage uploads the Image to the target cloud subscription.
This process is a point-to-point transfer using the configuration and credentials supplied. In this guide we upload the Image to Microsoft Azure. The following Azure permissions are necessary for uploading the Image (the Azure Resource group must be set in the parameters):

Microsoft.Compute/disks/beginGetAccess/action
Microsoft.Compute/disks/endGetAccess/action
Microsoft.Compute/disks/delete
Microsoft.Compute/disks/read
Microsoft.Compute/disks/write
Microsoft.Compute/virtualMachines/delete
Microsoft.Compute/virtualMachines/powerOff/action
Microsoft.Compute/virtualMachines/read
Microsoft.Compute/virtualMachines/write
Microsoft.Network/networkInterfaces/delete
Microsoft.Network/networkInterfaces/join/action
Microsoft.Network/networkInterfaces/read
Microsoft.Network/networkInterfaces/write
Microsoft.Network/networkSecurityGroups/delete
Microsoft.Network/networkSecurityGroups/join/action
Microsoft.Network/networkSecurityGroups/read
Microsoft.Network/networkSecurityGroups/write
Microsoft.Resources/deployments/operationStatuses/read
Microsoft.Resources/deployments/read
Microsoft.Resources/deployments/write
Microsoft.Resources/subscriptions/resourcegroups/read

For current permission requirements see the product documentation: https://docs.citrix.com/en-us/citrix-daas/migrate-workloads.html#microsoft-azure-required-permissions.

Part 2: Uploading the Image from an on-premises SMB share to Azure using PowerShell

Before we can upload the Image, at least the following parameters must be obtained:

Filename - Filename of the disk to be uploaded
ManagedDiskName - Name of the managed disk in Azure
SubscriptionID - Subscription ID of the Azure tenant used
Location - Azure location
ResourceGroup - Azure Resource group
Timeout - Timeout before the script fails

When all values are set we can call the PowerShell script to upload the Image.
Be aware that the upload takes quite a while - in this example more than 16 hours…

PS C:\TACG> $Params = @{
>> Filename = 'C:\_SW\w11exp.vhd'
>> ManagedDiskName = 'IPS_W11EXP.vhd'
>> SubscriptionID = '58daXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
>> Location = 'eastus'
>> ResourceGroup = 'CTX-HibTest2'
>> Timeout = '86400'
>> }
PS C:\TACG> Copy-DiskToAzure @Params -Verbose

Uploading VHD [ ] 15:06:19 689995776/85899345920
2023-11-13 11:53:30Z: VHD size for C:\_SW\w11exp.vhd is 85899346432
2023-11-13 11:53:30Z: Creating managed disk 'IPS_W11EXP.vhd' with size 85899346432 bytes in resource group CTX-HibTest2 location eastus
2023-11-13 11:54:05Z: Copying disk 'C:\_SW\w11exp.vhd' to managed disk 'IPS_W11EXP.vhd' (threads=default)
2023-11-13 11:54:05Z: Detecting source type
2023-11-13 11:54:05Z: Analyzing VHD
2023-11-13 11:54:06Z: Uploading VHD
2023-11-14 04:27:56Z: Copied disk to Azure managed disk 'IPS_W11EXP.vhd'

Part 2: Uploading the Image from an on-premises SMB share to Azure using the Windows .NET Application

The Windows .NET application encapsulates REST-API calls and invokes PowerShell scripts. The VHD file must reside on the computer where the .NET application is run. Before we can upload the Image, at least the following parameters must be obtained:

Filename - Filename of the disk to be uploaded
ManagedDiskName - Name of the managed disk in Azure
SubscriptionID - Subscription ID of the Azure tenant used
Location - Azure location
ResourceGroup - Azure Resource group
Timeout - Timeout before the script fails

These parameters are set in a JSON file or directly in the application. When all values are set, the upload can be started by pressing the "Upload VHD..." button. The upload is a long-lasting process depending on the size of the Image and the upload speed - in this example, it took over 16 hours. Set the timeout parameter accordingly!
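The 86400-second timeout above is deliberately generous. Given the disk size and a measured upload bandwidth, a suitable value can be estimated up front. A rough Python sketch; the bandwidth figure is an assumption you must measure in your own environment, and the helper name is illustrative only:

```python
def suggested_timeout_seconds(disk_bytes, mib_per_second, safety_factor=2.0):
    """Estimate an upload timeout from disk size and measured bandwidth.

    disk_bytes: size of the VHD in bytes (as reported in the transcript)
    mib_per_second: sustained upload rate in MiB/s (an assumption)
    safety_factor: headroom for retries and throttling
    """
    transfer_seconds = disk_bytes / (mib_per_second * 1024 * 1024)
    return int(transfer_seconds * safety_factor)

# The ~80 GiB disk from the transcript at an assumed sustained 1.5 MiB/s
# (roughly what the 16-hour transfer above works out to):
timeout = suggested_timeout_seconds(85899345920, 1.5)
```

With these assumptions the estimate lands above the 86400 seconds used in the example, which matches the guide's advice to size the timeout generously.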
After a successful upload, the next step of the Image Migration workflow can be started.

Part 3: Preparing the Image - Prerequisites

The Prepare stage is a complex set of steps removing the source platform components and injecting the target platform components into the Image. An important prerequisite is a correct App registration in Azure and its IAM settings. The needed permissions can be found in the IPS documentation. Create the credential in the Citrix Credential Wallet using the PowerShell script or REST-API.

Example for creating Azure credentials in the Credential Wallet using a REST-API call - the correct format:

{
"id": "azadminjson",
"name": "My Azure credential SPN",
"type": "Azure",
"tenantId": "e85XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"clientId": "fd7XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"clientSecret": "8cD8QXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_"
}

The XXXXX-marked values are the corresponding values of the App registration in Azure.

Full REST-API call to create the Azure credential:

POST https://api.eu.layering.cloud.com/credentials
Authorization: CWSAuth bearer={{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}
Accept: application/json
Content-Type: application/json

Body (Raw):
{
"id": "azadminjson",
"name": "My Azure credential SPN",
"type": "Azure",
"tenantId": "e85aXXXX-XXXX-XXXX-XXXXXXXXXXXX",
"clientId": "fd72XXXX-XXXX-XXXX-XXXXXXXXXXXX",
"clientSecret": "8cDXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_"
}

Response if the credential was successfully created:
{
"id": "azadminjson",
"name": "My Azure credential SPN",
"type": "Azure",
"state": "Ok",
"createdAt": "2023-11-15T18:01:49.8436445Z",
"updatedAt": "2023-11-15T18:01:49.8436445Z"
}

Part 3: Preparing the Image on Azure using PowerShell

Before we can call the PowerShell script to prepare the Image, at least the following parameters must be obtained:

CustomerID - Citrix Cloud Customer ID
CloudCwSecretId - Azure credentials stored in the Credential Wallet
ResourceLocationId - ID of the Citrix Cloud Resource Location
AzureSubscriptionID - Azure subscription where the prepared disk is placed
TargetResourceGroup - Resource group where the prepared disk is placed
AzureVirtualNetworkResourceGroupName - Resource group for placing the Compositing Engine
AzureVirtualNetworkName - Network for placing the Compositing Engine
AzureVirtualNetworkSubnetName - Subnet for placing the Compositing Engine
CloudProvisioningType - Deployment with MCS or PVS
DomainUnjoin - Remove the Image from the Domain
InstallMisa - Install the Machine Identity Service Agent from the VDA
ForceMisa - Install the latest Machine Identity Service Agent
CloudDiskName - Name of the disk to be prepared
AzureLocation - Azure region to deploy to
XdReconfigure - ParameterName "controllers", ParameterValue: FQDN of the Cloud Controllers

When all values are set we can call the PowerShell script to prepare the Image:

PS C:\TACG> $PrepareParams = @{
>> CustomerId = 'uzyo2lp7eh7j'
>> CloudProvisioningType = 'Mcs'
>> CloudCwSecretId = 'azadXXXXXXXXX'
>> DomainUnjoin = $true
>> InstallMisa = $false
>> ForceMisa = $false
>> CloudDiskName = 'diskfromipsss'
>> XdReconfigure = @(
>> [pscustomobject]@{
>> ParameterName = 'controllers'
>> ParameterValue = 'TACG-XXXXXX.hib.the-austrian-citrix-guy.at'
>> }
>> )
>> ResourceLocationId = '7691XXXX-XXXX-XXXX-XXXXXXXXXXXX'
>> AzureSubscriptionID = '58daXXXX-XXXX-XXXX-XXXXXXXXXXXX'
>> AzureLocation = 'eastus'
>> TargetResourceGroup = 'CTX-XXXXXXXXXX'
>> AzureVirtualNetworkResourceGroupName = 'CTX-XXXXXXXXXX'
>> AzureVirtualNetworkName = 'TACG-XXXXXXXXXX-vnet'
>> AzureVirtualNetworkSubnetName = 'default'
>> }
PS C:\TACG> Start-IpsAzurePrepareJob @PrepareParams -Verbose | Wait-IpsJob

VERBOSE: Initialize default drives.
VERBOSE: Creating a new drive.
VERBOSE: Creating a new drive.
Logging to C:\TACG\PrepareAzure.log Verbose=True
Interactively authenticating for Citrix customer uzyo2lp7eh7j.
VERBOSE: __AllParameterSets VERBOSE: Get-XDAuthentication: Enter VERBOSE: invoking Get-XDAuthenticationEx: VERBOSE: Get-XDAuthentication: Exit Authenticated for Citrix customer XXXXXXXXXX. Starting prepare workflow ***** Call Method: PrepareImageJob ***** geo EU api url https://api.eu.layering.cloud.com/ VERBOSE: Cmdlet "Find-Package" is exported. VERBOSE: Cmdlet "Get-Package" is exported. VERBOSE: Cmdlet "Get-PackageProvider" is exported. VERBOSE: Cmdlet "Get-PackageSource" is exported. VERBOSE: Cmdlet "Install-Package" is exported. VERBOSE: Cmdlet "Import-PackageProvider" is exported. VERBOSE: Cmdlet "Find-PackageProvider" is exported. VERBOSE: Cmdlet "Install-PackageProvider" is exported. VERBOSE: Cmdlet "Register-PackageSource" is exported. VERBOSE: Cmdlet "Save-Package" is exported. VERBOSE: Cmdlet "Set-PackageSource" is exported. VERBOSE: Cmdlet "Uninstall-Package" is exported. VERBOSE: Cmdlet "Unregister-PackageSource" is exported. VERBOSE: Cmdlet "Find-Package" is imported. VERBOSE: Cmdlet "Find-PackageProvider" is imported. VERBOSE: Cmdlet "Get-Package" is imported. VERBOSE: Cmdlet "Get-PackageProvider" is imported. VERBOSE: Cmdlet "Get-PackageSource" is imported. VERBOSE: Cmdlet "Import-PackageProvider" is imported. VERBOSE: Cmdlet "Install-Package" is imported. VERBOSE: Cmdlet "Install-PackageProvider" is imported. VERBOSE: Cmdlet "Register-PackageSource" is imported. VERBOSE: Cmdlet "Save-Package" is imported. VERBOSE: Cmdlet "Set-PackageSource" is imported. VERBOSE: Cmdlet "Uninstall-Package" is imported. VERBOSE: Cmdlet "Unregister-PackageSource" is imported. VERBOSE: POST with -1-byte payload VERBOSE: received 327-byte response of content type application/json; charset=utf-8 Image Prepare started with id 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX Logging to C:\TACG\PrepareAzure.log Verbose=False Interactively authenticating for Citrix customer XXXXXXXXXX. Authenticated for Citrix customer XXXXXXXXXX. 
geo EU api url https://api.eu.layering.cloud.com/ Job 33743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 21 parameters @{name=currentStep; value=CreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 21 parameters @{name=currentStep; value=CreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 21 parameters @{name=currentStep; value=CreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 42 parameters @{name=currentStep; value=UploadAssetsToCe} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 42 parameters @{name=currentStep; value=UploadAssetsToCe} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 53 parameters @{name=currentStep; value=WaitForPrepareImage} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 58 parameters @{name=currentStep; value=GetCeLogs} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 68 parameters @{name=currentStep; value=WaitForTargetBoot} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 68 parameters @{name=currentStep; value=WaitForTargetBoot} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 68 parameters @{name=currentStep; value=WaitForTargetBoot} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 89 parameters @{name=currentStep; value=DeleteCeVm} geo EU api url 
https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 89 parameters @{name=currentStep; value=DeleteCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 89 parameters @{name=currentStep; value=DeleteCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: complete percent done: 100 parameters Job 3743XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX final status: complete Job profile @{deleted compositing engine=0:03:17.912184, reconfigure VDA=00:02:45.0545592, created compositing engine=0:02:56.629924, installed windowsAzureVmAgent=00:00:56.9625675, prepared to create compositing engine=0:00:08.829692, uploaded prepare assets to compositing engine=00:00:58.2018150}) Artifacts Status LogFileDir LogFileName --------- ------ ---------- ----------- {output, tags, temporary} complete PrepareAzure.log The preparation of the Image has been completed successfully. Part 3: Preparing the Image on Azure using REST-API An important prerequisite is a correct App registration in Azure and its IAM settings and a correct Azure-based credential. The needed permissions can be found in the IPS documentation. Create the credential in the Citrix Credential Wallet using the PowerShell script or REST-API. 
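All REST-API calls in this workflow carry the same CWSAuth authentication headers shown in the examples above. As an illustrative sketch only (not part of the official IPS tooling; cws_headers is a hypothetical helper), the header set can be assembled like this:

```python
def cws_headers(bearer_token: str, customer_id: str) -> dict:
    """Build the common headers used by the IPS REST-API calls in this guide.

    The CWSAuth scheme and the Citrix-CustomerId header follow the call
    examples shown above; the bearer token itself must first be obtained
    from the Citrix Cloud trust service.
    """
    return {
        "Authorization": f"CWSAuth bearer={bearer_token}",
        "Citrix-CustomerId": customer_id,
        "Accept": "application/json",
        "Content-Type": "application/json",
    }

# Example: headers for a subsequent prepare or publish call
headers = cws_headers("tokenplaceholder", "uzXXXXXXXXXXXX")
```

In a real client these headers would accompany every POST and GET against https://api.eu.layering.cloud.com/.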
Before we can call the REST-API to prepare the Image, at least the following parameters must be obtained:

platform - Azure
platformCredentialId - Azure credentials stored in the Credential Wallet
resourceLocationId - ID of the Citrix Cloud Resource location
SubscriptionID - ID of the Azure subscription
targetDiskResourceGroupName - Resource group where the prepared disk is placed
virtualNetworkResourceGroupName - Resource group for placing the Compositing engine
virtualNetworkName - Network for placing the Compositing engine
virtualNetworkSubnetName - Subnet for placing the Compositing engine
provisioningType - Deployment with MCS or PVS
domainUnjoin - Remove Image from Domain
installMisa - Install the Machine Identity Service Agent from the VDA
forceMisa - Install the latest Machine Identity Service Agent
installUpl - Install User Personal Layer
defrag - Run DEFRAG
chkdsk - Run CHKDSK
targetDiskName - Name of the disk to be prepared
outputDiskName - Name of the prepared disk
XdReconfigure: ParameterName "controllers", ParameterValue - FQDN of the Cloud Controllers

JSON-Call to prepare the Image:

POST https://api.eu.layering.cloud.com/Images/$prepare?async=true
Authorization: CWSAuth bearer= {{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}
Accept: application/json
Content-Type: application/json

Body (Raw):
{
  "platform": "Azure",
  "platformCredentialId": "azadmXXXXXXX",
  "resourceLocationId": "7691XXXX-XXXX-XXXX-XXXX-XXXXXXXXXX",
  "SubscriptionID": "58daXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX",
  "azureRegion": "eastus",
  "targetDiskResourceGroupName": "CTX-XXXXXXXXX",
  "virtualNetworkResourceGroupName": "CTX-XXXXXXXXX",
  "virtualNetworkName": "TACG-XXXXXXXXXXXXX",
  "virtualNetworkSubnetName": "default",
  "provisioningType": "Mcs",
  "domainUnjoin": true,
  "installMisa": false,
  "forceMisa": false,
  "installUpl": false,
  "defrag": false,
  "chkdsk": false,
  "ceVmSku": "Standard_D2s_v5",
  "targetDiskName": "diskromXXXXXXXX",
  "outputDiskName": "procdiskXXXXXXXXX",
  "XdReconfigure": [{
"ParameterName":"controllers", "ParameterValue":"TACGXXXXXXXX.hib.the-austrian-citrix-guy.at" } ] } Response: { "id": "6267XXXX-XXXX-XXXX-XXXX-XXXXXXXXXX", "type": "prepareImage", "overallProgressPercent": 0, "isCancellable": true, "parameters": [], "status": "notStarted", "resultLocation": null, "additionalInfo": null, "warnings": [], "error": [], "createdAt": "2023-11-15T19:07:09Z", "updatedAt": "2023-11-15T19:07:09Z", "startedAt": "2023-11-15T19:07:09Z" } As the job runs asynchronously, we need another REST-API-call to get the status of the export job using the id parameter from the response: Example: 42% progress: Example: 100% progress, preparation successful: JSON-Call to get the status of the preparation job: GET https://api.eu.layering.cloud.com/jobs/6267XXXX-XXXX-XXXX-XXXX-XXXXXXXXX Authorization: CWSAuth bearer= {{Bearer-Token-Value}} Citrix-CustomerId: {{Citrix-CustomerID}} Accept: application/json Content-Type: application/json Body (Raw): { } Response - this example shows 100% progress and successful preparation { "id": "6267XXXX-XXXX-XXXX-XXXX-XXXXXXXXX ", "type": "prepareImage", "overallProgressPercent": 100, "isCancellable": true, "parameters": [], "status": "complete", "resultLocation": null, "additionalInfo": { "artifacts": { "tags": { "ctx-job-id": "6267XXXX-XXXX-XXXX-XXXX-XXXXXXXXXX", "component": "compositing" }, "output": [ { "description": "Managed disk containing the prepared Image", "name": "procdiskfromipsss", "resourceGroup": "CTX-XXXXXXX" } ], "temporary": [ { "description": "Temporary resource group", "name": "ctx-ce-50d0e27d" }, { "description": "Temporary Compositing Engine OS disk", "name": "ce-50d0e27d-os-disk", "resourceGroup": "ctx-ce-50d0e27d" }, { "description": "Temporary Compositing Engine NIC", "name": "ce-50d0e27d-nic", "resourceGroup": "ctx-ce-50d0e27d" }, { "description": "Temporary Compositing Engine network security group", "name": "ce-50d0e27d-nic-nsg", "resourceGroup": "ctx-ce-50d0e27d" }, { "description": "Temporary 
Compositing Engine VM", "name": "ce-50d0e27d", "resourceGroup": "ctx-ce-50d0e27d", "encryptionAtHost": false } ] }, "profile": { "prepared to create compositing engine": "0:00:09.071793", "created compositing engine": "0:02:58.483518", "uploaded prepare assets to compositing engine": "00:00:54.2715950", "installed windowsAzureVmAgent": "00:01:02.6609307", "reconfigure VDA": "00:03:54.8636235", "deleted compositing engine": "0:03:18.144080" }, "warnings": [], "errors": [], "telemetry": { "diskSize": "85899345920", "freeSpace": "1048576", "windowsVersion": "@{CurrentVersion=@{value = 6.3, type = REG_SZ}; UBR=@{value = 0x7c8, type = REG_DWORD}; BuildLab=@{value = 22621.ni_release.220506-1250, type = REG_SZ}; RegisteredOwner=@{value = admin, type = REG_SZ}; CurrentBuild=@{value = 22621, type = REG_SZ}; SystemRoot=@{value = C:\\Windows, type = REG_SZ}; DigitalProductId4=@{value = F804000004000000300033003600310032002D00300033003300310031002D003000300030002D003000300030003000300031002D00300033002D0031003000330031002D00320032003600320031002E0030003000300030002D003200380033003200300....000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000, type = REG_BINARY}; ProductId=@{value = 00331-10000-00001-AA757, type = REG_SZ}; DigitalProductId=@{value = A40000000300000030303333312D31303030302D30303030312D414137353700EF0C00005B54485D5831392D3938373935000000EF0C10000000343DC5394EBD6E2F090000000000D51D25653D149BCE030000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000050F96851, type = REG_BINARY}; BuildBranch=@{value = ni_release, type = REG_SZ}; SoftwareType=@{value = System, type = REG_SZ}; CurrentType=@{value = Multiprocessor Free, type = REG_SZ}; DisplayVersion=@{value = 22H2, type = REG_SZ}; InstallDate=@{value = 0x65251dd7, type = REG_DWORD}; InstallTime=@{value = 0x1d9fb5edf5aef50, type = REG_QWORD}; 
InstallationType=@{value = Client, type = REG_SZ}; BuildLabEx=@{value = 22621.1.amd64fre.ni_release.220506-1250, type = REG_SZ}; ProductName=@{value = Windows 10 Pro, type = REG_SZ}; CompositionEditionID=@{value = Enterprise, type = REG_SZ}; CurrentBuildNumber=@{value = 22621, type = REG_SZ}; ReleaseId=@{value = 2009, type = REG_SZ}; PathName=@{value = C:\\Windows, type = REG_SZ}; EditionID=@{value = Professional, type = REG_SZ}; BuildGUID=@{value = ffffffff-ffff-ffff-ffff-ffffffffffff, type = REG_SZ}; CurrentMajorVersionNumber=@{value = 0xa, type = REG_DWORD}; BaseBuildRevisionNumber=@{value = 0x1, type = REG_DWORD}; PendingInstall=@{value = 0x0, type = REG_DWORD}; CurrentMinorVersionNumber=@{value = 0x0, type = REG_DWORD}}", "vdaVersions": "@{AppProtectionVC=@{value = 23.8.0.2, type = REG_SZ}; Citrix HDX Audio x64=@{value = 7.39.0.33, type = REG_SZ}; WmiSdk=@{value = 7.39.0.4, type = REG_SZ}; Citrix HDX Graphics x64=@{value = 7.39.0.36, type = REG_SZ}; WMI-Maschinenverwaltungsprovider=@{value = 7.39.0.4, type = REG_SZ}; (Default)=@{value = 7.39.0.0, type = REG_SZ}; Citrix Start Menu Disconnect Button=@{value = 7.39.0.41, type = REG_SZ}; Citrix Identity Assertion VDA Plugin=@{value = 10.15.0.4, type = REG_SZ}; Citrix Director VDA Plugin=@{value = 7.39.0.4, type = REG_SZ}; Citrix Überwachungsdienst-VDA-Plug-In=@{value = 7.39.0.10, type = REG_SZ}; Citrix WMI Proxy Plugin=@{value = 7.39.0.4, type = REG_SZ}; Citrix Browser Content Redirection=@{value = 15.45.0.12, type = REG_SZ}; Citrix Gruppenrichtlinie - clientseitige Erweiterung 7.39.0.13=@{value = 7.39.0.13, type = REG_SZ}; Citrix HDX Devices x64=@{value = 7.39.0.31, type = REG_SZ}; Machine Identity Service Agent=@{value = 7.39.0.4, type = REG_SZ}; Citrix HDX App Experience x64=@{value = 7.39.0.41, type = REG_SZ}; Citrix CDF Capture Service - x64=@{value = 7.39.0.4, type = REG_SZ}; Citrix HDX Printing x64=@{value = 7.39.0.26, type = REG_SZ}; UpmVDAPlugin=@{value = 23.8.0.7, type = REG_SZ}; Citrix HDX WS 
x64=@{value = 15.45.0.12, type = REG_SZ}; Citrix HDX IcaManagement x64=@{value = 7.39.0.47, type = REG_SZ}; Citrix Diagnostics Facility=@{value = 7.2.1.6, type = REG_SZ}; Citrix Telemetry Service - x64=@{value = 3.24.0.1, type = REG_SZ}; Citrix Universeller Druckclient=@{value = 7.39.0.26, type = REG_SZ}; Citrix Virtual Desktop Agent - x64=@{value = 7.39.0.11, type = REG_SZ}}", "partitionStyle": "GPT", "bootMode": "UEFI" } }, "warnings": [], "error": [], "createdAt": "2023-11-15T19:07:09Z", "updatedAt": "2023-11-15T19:37:29Z", "startedAt": "2023-11-15T19:07:09Z", "endedAt": "2023-11-15T19:37:29Z" } The preparation of the Image has been completed successfully. Part 3: Preparing the Image on Azure using the Windows .NET Application The Windows .NET application encapsulates REST-API calls to prepare the Image on Azure. An important prerequisite is a correct App registration in Azure and its IAM settings and a correct Azure-based credential. Before the application can prepare the Image, the following parameters must be obtained: platform Azure platformCredentialId Azure credentials stored in the Credential Wallet resourceLocationId ID of the Citrix Cloud Resource location SubscriptionID Azure region where the prepared disk is placed targetDiskResourceGroupName Resource group where the prepared disk is placed virtualNetworkResourceGroupName Resource group for placing the Compositing engine virtualNetworkName Network for placing the Compositing engine virtualNetworkSubnetName Subnet for placing the Compositing engine provisioningType Deployment with MCS or PVS domainUnjoin Remove Image from Domain installMisa Install the Machine Identity Service Agent from VDA forceMisa Install the latest Machine Identity Service Agent installUpl Install User Personal Layer defrag Run DEFRAG chkdsk Run CHKDSK targetDiskName Name of the disk to be prepared outputDiskName Name of the prepared disk XdReconfigure: ParameterName "controllers" ParameterValue FQDN of the Cloud Controllers These 
parameters are set in a JSON file. When all parameters are set, the preparation of the uploaded Image can be started by pressing the “Prepare VHD...” button. In this example, the preparation process took about 35 minutes. The application refreshes the job status every 30 seconds via a REST-API call. The preparation of the Image has been completed successfully.

Part 4: Publishing the Image - Prerequisites

In the Publish phase, the prepared Image is published and ready to be streamed with Citrix Provisioning Server (PVS). The Image Portability Service provides a REST interface and PowerShell scripts to publish the migrated disk directly into the PVS vDisk store.

Part 4: Publishing the Image on Azure to an SMB share for PVS using PowerShell

An important prerequisite is a correct App registration in Azure and its IAM settings. Before we can call the PowerShell script to publish the Image, at least the following parameters must be obtained:

CustomerID - Citrix Customer-ID
SecureClientID - ID of the API client
SecureSecret - Secret of the API client
SmbHost - Hostname of the SMB server where the exported disk will be stored
SmbShare - SMB server share name
SmbPath - Share path to the disk
SmbDiskName - Name of the disk with no extension
SmbDiskFormat - Type of disk
SmbCwId - SMB Credential Wallet ID
ResourceLocationID - ID of the Resource location
AzureSubscriptionID - ID of the Azure subscription
CloudCWSecretID - ID of the Cloud credential in the Credential Wallet
AzureLocation - Azure location
TargetResourceGroup - Resource group for placing the Compositing engine
AzureVirtualNetworkResourceGroupName - Resource group of the Network for placing the Compositing engine
AzureVirtualNetworkName - Network for placing the Compositing engine
AzureVirtualNetworkSubnetName - Subnet for placing the Compositing engine
CloudDiskName - Name of the disk to be transferred to the SMB share
AzureVmResourceGroup - Resource group where the VM will be placed
Timeout - Timespan before the process times out
LogFileName - Name of the log file

When all values are set, we can call the PowerShell script to publish the Image:

PS C:\TACG> $PublishParams = @{
>> CustomerId = "uzXXXXXXXXXXXX"
>> SecureClientId = "26ecXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
>> SecureSecret = "vhOXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
>> SmbHost = "10.0.0.4"
>> SmbShare = "_PVS"
>> SmbDiskName = "ReadyForPVSPS"
>> SmbDiskFormat = "VhdDiskFormat"
>> SmbCwId = "azhibsmb"
>> ResourceLocationId = "7691XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
>> AzureSubscriptionId = "58daXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
>> CloudCwSecretId = "azadminjson"
>> AzureLocation = "eastus"
>> TargetResourceGroup = "CTX-Hibtest2"
>> AzureVirtualNetworkResourceGroupName = "CTX-Hibtest2"
>> AzureVirtualNetworkName = "TACG-HibTest-DCCC-vnet"
>> AzureVirtualNetworkSubnetName = "default"
>> CloudDiskName = "procdiskfromipsss"
>> AzureVmResourceGroup = "CTX-Hibtest2"
>> Timeout = "36000"
>> LogFileName = "AzurePublish.log"
>> }
PS C:\TACG> Start-IpsAzurePublishJob @PublishParams -Verbose | Wait-IpsJob
VERBOSE: Initialize default drives.
VERBOSE: Creating a new drive.
VERBOSE: Creating a new drive.
Logging to C:\TACG\AzurePublish.log Verbose=True
Authenticating for Citrix customer uzXXXXXXXXXXXXXX using API key 26ecXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.
VERBOSE: apikey
VERBOSE: __AllParameterSets
VERBOSE: Get-XDAuthentication: Enter
VERBOSE: invoking Get-XDAuthenticationEx:
VERBOSE: Get-XDAuthentication: Exit
Authenticated for Citrix customer uzyXXXXXXXXXX.
Starting export workflow
***** Call Method: ExportImageJob overwrite: False *****
geo EU api url https://api.eu.layering.cloud.com/
VERBOSE: Cmdlet "Find-Package" is exported.
VERBOSE: Cmdlet "Get-Package" is exported.
VERBOSE: Cmdlet "Get-PackageProvider" is exported.
VERBOSE: Cmdlet "Get-PackageSource" is exported.
VERBOSE: Cmdlet "Install-Package" is exported.
VERBOSE: Cmdlet "Import-PackageProvider" is exported.
VERBOSE: Cmdlet "Find-PackageProvider" is exported.
VERBOSE: Cmdlet "Install-PackageProvider" is exported. VERBOSE: Cmdlet "Register-PackageSource" is exported. VERBOSE: Cmdlet "Save-Package" is exported. VERBOSE: Cmdlet "Set-PackageSource" is exported. VERBOSE: Cmdlet "Uninstall-Package" is exported. VERBOSE: Cmdlet "Unregister-PackageSource" is exported. VERBOSE: Cmdlet "Find-Package" is imported. VERBOSE: Cmdlet "Find-PackageProvider" is imported. VERBOSE: Cmdlet "Get-Package" is imported. VERBOSE: Cmdlet "Get-PackageProvider" is imported. VERBOSE: Cmdlet "Get-PackageSource" is imported. VERBOSE: Cmdlet "Import-PackageProvider" is imported. VERBOSE: Cmdlet "Install-Package" is imported. VERBOSE: Cmdlet "Install-PackageProvider" is imported. VERBOSE: Cmdlet "Register-PackageSource" is imported. VERBOSE: Cmdlet "Save-Package" is imported. VERBOSE: Cmdlet "Set-PackageSource" is imported. VERBOSE: Cmdlet "Uninstall-Package" is imported. VERBOSE: Cmdlet "Unregister-PackageSource" is imported. VERBOSE: POST with -1-byte payload VERBOSE: received 326-byte response of content type application/json; charset=utf-8 Image Export started with id 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX Logging to C:\TACG\AzurePublish.log Verbose=False Interactively authenticating for Citrix customer uzyXXXXXXXXXX. Authenticated for Citrix customer uzyXXXXXXXXXX. 
geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 20 parameters @{name=currentStep; value=PrepareForCreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 20 parameters @{name=currentStep; value=PrepareForCreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 27 parameters @{name=currentStep; value=CreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 27 parameters @{name=currentStep; value=CreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 27 parameters @{name=currentStep; value=CreateCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 67 parameters @{name=currentStep; value=WaitForExportImage} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 67 parameters @{name=currentStep; value=WaitForExportImage} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 67 parameters @{name=currentStep; value=WaitForExportImage} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 87 parameters @{name=currentStep; value=DeleteCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: inProgress percent done: 87 parameters @{name=currentStep; value=DeleteCeVm} geo EU api url https://api.eu.layering.cloud.com/ Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX status: complete percent done: 100 parameters Job 467XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXXfinal status: 
complete
Job profile @{created compositing engine=0:03:01.698686, prepared to create compositing engine=0:00:07.768624, deleted compositing engine=0:01:35.971565, uploaded export assets to compositing engine=00:00:03.2609860})

Artifacts Status LogFileDir LogFileName
--------- ------ ---------- -----------
{output, tags, disk, temporary} complete AzurePublish.log

PS C:\TACG>

The publishing of the Image has been completed successfully. All steps of the IPS workflow have completed successfully. The next step is deploying the Image using Citrix Provisioning Services (PVS). More information about Citrix Provisioning Services (PVS) and how to deploy Images/machines using PVS can be found here: https://docs.citrix.com/en-us/tech-zone/build/deployment-guides/citrix-azure-hibernation-api#creating-a-hibernation-capable-machine-catalog-using-rest-api and https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/citrix-provisioning.

Part 4: Publishing the Image on Azure to an SMB share for PVS using REST-API

An important prerequisite is a correct App registration in Azure and its IAM settings.
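Both Wait-IpsJob in the PowerShell workflow and the repeated job-status REST calls poll the job resource until it completes or fails. The polling loop can be sketched in Python as follows; this is an illustrative sketch only, wait_for_job and fetch_status are hypothetical names and not part of the IPS tooling:

```python
import time


def wait_for_job(fetch_status, poll_interval=30, timeout=36000, sleep=time.sleep):
    """Poll an IPS job until it leaves the 'notStarted'/'inProgress' states.

    fetch_status is any callable returning the job JSON as a dict with
    'status' and 'overallProgressPercent' fields, matching the responses
    shown in this guide. Returns the final job document, or raises
    TimeoutError if the job does not finish within the timeout.
    """
    waited = 0
    while waited <= timeout:
        job = fetch_status()
        if job["status"] not in ("notStarted", "inProgress"):
            return job  # e.g. 'complete', or a failure state
        sleep(poll_interval)
        waited += poll_interval
    raise TimeoutError("IPS job did not finish within the timeout")
```

In a real client, fetch_status would issue GET https://api.eu.layering.cloud.com/jobs/{id} with the CWSAuth headers and parse the JSON response.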
Before we can call the REST-API to publish the Image, at least the following parameters must be obtained:

platform - Azure
platformCredentialId - Azure credentials stored in the Credential Wallet
resourceLocationId - ID of the Citrix Cloud Resource location
SubscriptionID - ID of the Azure subscription
targetDiskResourceGroupName - Resource group where the prepared disk is placed
virtualNetworkResourceGroupName - Resource group for placing the Compositing engine
virtualNetworkName - Network for placing the Compositing engine
virtualNetworkSubnetName - Subnet for placing the Compositing engine
targetDiskName - Name of the disk to be prepared
outputImageFilename - Name of the published disk
resourceGroup - Resource group where the process will run
timeoutInSeconds - Timespan before the process will time out
outputDiskName - Name of the prepared disk
outputStorageLocation:
  type - "SMB" as we publish to a share
  credentialId - Credential Wallet ID of the SMB credential
  host - FQDN/IP of the server hosting the SMB share
  sharePath - Name of the share

When all values are set, we can call the REST-API to publish the Image.
JSON-Call to start the publishing job:

POST https://api.eu.layering.cloud.com/Images/$publish?async=true
Authorization: CWSAuth bearer= {{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}
Accept: application/json
Content-Type: application/json

Body (Raw):
{
  "platform": "Azure",
  "platformCredentialId": "azadminjson",
  "subscriptionId": "58dXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "resourceGroup": "CTX-Hibtest2",
  "virtualNetworkResourceGroupName": "CTX-Hibtest2",
  "virtualNetworkName": "TACG-HibTest-DCCC-vnet",
  "virtualNetworkSubnetName": "default",
  "timeoutInSeconds": 86400,
  "targetDiskResourceGroupName": "CTX-Hibtest2",
  "targetDiskName": "procdiskfromipsss",
  "resourceLocationId": "769XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "outputStorageLocation": {
    "type": "SMB",
    "credentialId": "azhibsmb",
    "host": "10.0.0.4",
    "sharePath": "_PVS"
  },
  "outputImageFilename": "ReadyForPVS"
}

Response:
{
  "id": "d44XXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "type": "exportImage",
  "overallProgressPercent": 0,
  "isCancellable": true,
  "parameters": [],
  "status": "notStarted",
  "resultLocation": null,
  "additionalInfo": null,
  "warnings": [],
  "error": [],
  "createdAt": "2023-11-23T13:31:48Z",
  "updatedAt": "2023-11-23T13:31:48Z",
  "startedAt": "2023-11-23T13:31:48Z"
}

As the job runs asynchronously, we need another REST-API call to get the status of the export job using the id parameter from the response.

Example: 100% progress, publishing successful:

JSON-Call to get the status of the publishing job:

GET https://api.eu.layering.cloud.com/jobs/d443XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Authorization: CWSAuth bearer= {{Bearer-Token-Value}}
Citrix-CustomerId: {{Citrix-CustomerID}}
Accept: application/json
Content-Type: application/json

Body (Raw):
{ }

Response - this example shows 100% progress and successful publishing:
{
  "id": "d443XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "type": "exportImage",
  "overallProgressPercent": 100,
  "isCancellable": true,
  "parameters": [],
  "status": "complete",
  "resultLocation": null,
  "additionalInfo": {
    "artifacts": {
      "tags": {
        "ctx-job-id": "d443XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
        "component": "compositing"
      },
      "output": [
        {
          "description": "Published disk",
          "path": "//10.0.0.4/_PVS/ReadyForPVS.vhd"
        }
      ],
      "temporary": [
        {
          "description": "Temporary Compositing Engine OS disk",
          "name": "ce-f32672a0-os-disk",
          "resourceGroup": "CTX-Hibtest2"
        },
        {
          "description": "Temporary Compositing Engine NIC",
          "name": "ce-f32672a0-nic",
          "resourceGroup": "CTX-Hibtest2"
        },
        {
          "description": "Temporary Compositing Engine network security group",
          "name": "ce-f32672a0-nic-nsg",
          "resourceGroup": "CTX-Hibtest2"
        },
        {
          "description": "Temporary Compositing Engine VM",
          "name": "ce-f32672a0",
          "resourceGroup": "CTX-Hibtest2",
          "encryptionAtHost": false
        },
        {
          "description": "Temporary copy of the target disk",
          "name": "procdiskfromipsss-f5276f68",
          "resourceGroup": "CTX-Hibtest2"
        }
      ],
      "disk": {
        "diskPath": "\\\\10.0.0.4\\_PVS\\ReadyForPVS.vhd",
        "md5Hash": "f96519ee184ba47ba33f03d09384188b"
      }
    },
    "profile": {
      "prepared to create compositing engine": "0:00:07.449354",
      "created compositing engine": "0:03:01.894559",
      "uploaded export assets to compositing engine": "00:00:04.7360280",
      "deleted compositing engine": "0:01:40.349551"
    },
    "warnings": [],
    "errors": []
  },
  "warnings": [],
  "error": [],
  "createdAt": "2023-11-23T13:31:48Z",
  "updatedAt": "2023-11-23T14:39:54Z",
  "startedAt": "2023-11-23T13:31:48Z",
  "endedAt": "2023-11-23T14:39:54Z"
}

The publishing of the Image has been completed successfully. All steps of the IPS workflow have completed successfully. The next step is deploying the Image using Citrix Provisioning Services (PVS). More information about Citrix Provisioning Services (PVS) and how to deploy Images/machines using PVS can be found here: https://docs.citrix.com/en-us/tech-zone/build/deployment-guides/citrix-azure-hibernation-api#creating-a-hibernation-capable-machine-catalog-using-rest-api and https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/citrix-provisioning.
Part 4: Publishing the Image on Azure to an SMB share for PVS using the .NET application

The Windows .NET application encapsulates REST-API calls to publish the Image on Azure. An important prerequisite is a correct App registration in Azure and its IAM settings. Before we can use the application to publish the Image, at least the following parameters must be obtained:

platform - Azure
platformCredentialId - Azure credentials stored in the Credential Wallet
resourceLocationId - ID of the Citrix Cloud Resource location
SubscriptionID - ID of the Azure subscription
targetDiskResourceGroupName - Resource group where the prepared disk is placed
virtualNetworkResourceGroupName - Resource group for placing the Compositing engine
virtualNetworkName - Network for placing the Compositing engine
virtualNetworkSubnetName - Subnet for placing the Compositing engine
targetDiskName - Name of the disk to be prepared
timeoutInSeconds - Timespan before the process will time out
SmbHost - FQDN/IP of the server hosting the SMB share
SmbShare - Name of the SMB share
SmbDiskName - Name of the published disk on the SMB share
SmbDiskFormat - Disk format

These parameters are set in a JSON file. When all parameters are set, the publishing of the uploaded Image can be started by pressing the “Publish VHD...” button. In this example, the publishing process took about 65 minutes. The publishing of the Image has been completed successfully. The next step is deploying the Image using Citrix Provisioning Services (PVS). More information about Citrix Provisioning Services (PVS) and how to deploy Images/machines using PVS can be found here: https://docs.citrix.com/en-us/tech-zone/build/deployment-guides/citrix-azure-hibernation-api#creating-a-hibernation-capable-machine-catalog-using-rest-api and https://docs.citrix.com/en-us/tech-zone/learn/tech-briefs/citrix-provisioning.
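Since the .NET application reads its publish parameters from a JSON file, a simple pre-flight check can catch missing keys before a long-running job is started. The sketch below is illustrative only: it assumes the parameter names listed above, and REQUIRED_PUBLISH_KEYS and missing_publish_keys are hypothetical names, as the application's exact file layout is not shown in this guide:

```python
import json

# Keys the publish step needs, taken from the parameter list in this guide.
REQUIRED_PUBLISH_KEYS = {
    "platform", "platformCredentialId", "resourceLocationId",
    "SubscriptionID", "targetDiskResourceGroupName",
    "virtualNetworkResourceGroupName", "virtualNetworkName",
    "virtualNetworkSubnetName", "targetDiskName", "timeoutInSeconds",
    "SmbHost", "SmbShare", "SmbDiskName", "SmbDiskFormat",
}


def missing_publish_keys(json_text: str) -> set:
    """Parse a publish parameter file and return the required keys it lacks.

    An empty set means the file at least names every required parameter;
    it does not validate the values themselves.
    """
    params = json.loads(json_text)
    return REQUIRED_PUBLISH_KEYS - params.keys()
```

Running this check before pressing “Publish VHD...” avoids discovering a missing SMB parameter only after the Compositing Engine has already been deployed.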
Appendix

Code Snippets

Example of Objects:

Public Class CCAPI_Token
    Public Property CustomerID As String
    Public Property ClientID As String
    Public Property ClientSecret As String
    Public Property GrantType As String
End Class

Public Class CCAPI_ExportParameters
    'Nested object
    Public Property platform As String
    Public Property platformCredentialId As String
    Public Property vCenterHost As String
    Public Property vCenterPort As Integer
    Public Property vCenterSslNoCheckHostname As String
    Public Property datacenter As String
    Public Property datastore As String
    Public Property cluster As String
    Public Property network As String
    Public Property prefix As String
    Public Property resourceLocationId As String
    Public Property outputStorageLocation As CCAPI_OutputStorageLocation
    Public Property outputImageFilename As String
    Public Property timeoutInSeconds As Integer
    Public Property sourceDiskName As String
End Class

Public Class CCAPI_OutputStorageLocation
    Public Property type As String
    Public Property credentialId As String
    Public Property host As String
    Public Property sharePath As String
End Class

Example of JSON-based variables - CCAPI-Token.json, read into Class CCAPI_Token at startup:

{
  "CustomerID": "uzXXXXXXXXXXXXX",
  "ClientID": "5075XXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  "ClientSecret": "8MXXXXXXXXXXXXXXXXXXXXX==",
  "GrantType": "client_credentials"
}

Obtain a Bearer token from Citrix Cloud:

Private Sub FetchBearerToken(ByVal obj_CCAPI_Token As CCAPI_Token)
    'Create POST request to Citrix Cloud-API to retrieve BearerToken
    Dim rest As New Chilkat.Rest
    ' Connect to the CCAPI server.
    Dim bTls As Boolean = True
    Dim port As Integer = 443
    Dim bAutoReconnect As Boolean = True
    Dim success As Boolean = rest.Connect("api-eu.cloud.com", port, bTls, bAutoReconnect)
    If (success <> True) Then
        Debug.WriteLine(rest.LastErrorText)
        Exit Sub
    End If

    success = rest.AddQueryParam("grant_type", obj_CCAPI_Token.GrantType)
    success = rest.AddQueryParam("client_id", obj_CCAPI_Token.ClientID)
    success = rest.AddQueryParam("client_secret", obj_CCAPI_Token.ClientSecret)

    Dim APIPostCallPath As String = "/cctrustoauth2/" & obj_CCAPI_Token.CustomerID & "/tokens/clients"
    Dim s_Response As String = Nothing
    obj_BearerToken = New CCAPI_BearerToken

    Try
        s_Response = rest.FullRequestFormUrlEncoded("POST", APIPostCallPath)
        obj_BearerToken = JsonConvert.DeserializeObject(Of CCAPI_BearerToken)(s_Response, JSONSettings)
    Catch exc As Exception
        Console.WriteLine(exc.ToString)
    End Try

    If s_Response <> "" Then
        Dim json As JObject = JObject.Parse(s_Response)
        obj_BearerToken.TokenType = json.SelectToken("token_type")
        obj_BearerToken.Expiry = json.SelectToken("expires_in")
        obj_BearerToken.AccessToken = json.SelectToken("access_token")
        txtParams(0) = txt_status
        txtParams(1) = "Successfully obtained Bearer-Token..." & vbCrLf
        Me.Invoke(New WriteTextDelegate(AddressOf WriteText), txtParams)
        txt_bearertoken.Text = obj_BearerToken.AccessToken.ToString
        txtParams(0) = txt_status
        txtParams(1) = "-------------------------------" & vbCrLf & vbCrLf
        Me.Invoke(New WriteTextDelegate(AddressOf WriteText), txtParams)
    End If
End Sub

Set all Export parameters and export Image from vSphere

Private Sub SetAllExportParameters(ByVal obj_CCAPI_Token As CCAPI_Token)
    s_APIBody = New Chilkat.StringBuilder
    s_APIBody.AppendLine("{", True)
    obj_AllExportParameters = New CCAPI_ExportParameters
    obj_AllExportParameters.outputStorageLocation = New CCAPI_OutputStorageLocation

    obj_AllExportParameters.vCenterPort = 443
    s_APIBody.AppendLine((Chr(34) & "vCenterPort" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.vCenterPort & Chr(34) & ","), True)
    obj_AllExportParameters.vCenterHost = obj_Export.vCenterHost
    s_APIBody.AppendLine((Chr(34) & "vCenterHost" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.vCenterHost & Chr(34) & ","), True)
    obj_AllExportParameters.vCenterSslNoCheckHostname = True
    s_APIBody.AppendLine((Chr(34) & "vCenterSslNoCheckHostname" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.vCenterSslNoCheckHostname & Chr(34) & ","), True)
    obj_AllExportParameters.platformCredentialId = s_VSPCredID
    s_APIBody.AppendLine((Chr(34) & "platformCredentialId" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.platformCredentialId & Chr(34) & ","), True)
    obj_AllExportParameters.datacenter = obj_Export.datacenter
    s_APIBody.AppendLine((Chr(34) & "datacenter" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.datacenter & Chr(34) & ","), True)
    obj_AllExportParameters.network = obj_Export.network
    s_APIBody.AppendLine((Chr(34) & "network" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.network & Chr(34) & ","), True)
    obj_AllExportParameters.datastore = obj_Export.datastore
    s_APIBody.AppendLine((Chr(34) & "datastore" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.datastore & Chr(34) & ","), True)
    obj_AllExportParameters.cluster = obj_Export.cluster
    s_APIBody.AppendLine((Chr(34) & "cluster" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.cluster & Chr(34) & ","), True)
    obj_AllExportParameters.outputImageFilename = obj_Export.outputImageFilename
    s_APIBody.AppendLine((Chr(34) & "outputImageFilename" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.outputImageFilename & Chr(34) & ","), True)

    s_APIBody.AppendLine((Chr(34) & "outputStorageLocation" & Chr(34) & ": {"), True)
    obj_AllExportParameters.outputStorageLocation.credentialId = s_SMBCredID
    s_APIBody.AppendLine((Chr(34) & "credentialId" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.outputStorageLocation.credentialId & Chr(34) & ","), True)
    obj_AllExportParameters.outputStorageLocation.type = obj_Export.smb.type
    s_APIBody.AppendLine((Chr(34) & "type" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.outputStorageLocation.type & Chr(34) & ","), True)
    obj_AllExportParameters.outputStorageLocation.host = obj_Export.smb.host
    s_APIBody.AppendLine((Chr(34) & "host" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.outputStorageLocation.host & Chr(34) & ","), True)
    obj_AllExportParameters.outputStorageLocation.sharePath = obj_Export.smb.sharePath
    s_APIBody.AppendLine((Chr(34) & "sharePath" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.outputStorageLocation.sharePath & Chr(34)), True)
    s_APIBody.AppendLine(("},"), True)

    obj_AllExportParameters.platform = "vSphere"
    s_APIBody.AppendLine((Chr(34) & "platform" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.platform & Chr(34) & ","), True)
    obj_AllExportParameters.prefix = obj_Export.prefix
    s_APIBody.AppendLine((Chr(34) & "prefix" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.prefix & Chr(34) & ","), True)
    obj_AllExportParameters.resourceLocationId = s_RLID
    s_APIBody.AppendLine((Chr(34) & "resourceLocationId" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.resourceLocationId & Chr(34) & ","), True)
    obj_AllExportParameters.sourceDiskName = obj_Export.sourceDiskName
    s_APIBody.AppendLine((Chr(34) & "sourceDiskName" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.sourceDiskName & Chr(34) & ","), True)
    obj_AllExportParameters.timeoutInSeconds = obj_Export.timeoutInSeconds
    s_APIBody.AppendLine((Chr(34) & "timeoutInSeconds" & Chr(34) & ": " & Chr(34) & obj_AllExportParameters.timeoutInSeconds & Chr(34)), True)
    s_APIBody.AppendLine(("}"), True)
End Sub

Private Sub but_export_Click(sender As Object, e As EventArgs) Handles but_export.Click
    Dim rest As New Chilkat.Rest
    ' Connect to the CCAPI server.
    Dim bTls As Boolean = True
    Dim port As Integer = 443
    Dim bAutoReconnect As Boolean = True
    Dim success As Boolean = rest.Connect("api.eu.layering.cloud.com", port, bTls, bAutoReconnect)
    If (success <> True) Then
        Debug.WriteLine(rest.LastErrorText)
        Exit Sub
    End If

    success = rest.AddHeader("Citrix-CustomerId", obj_CCAPI_Token.CustomerID)
    success = rest.AddHeader("Authorization", "CWSAuth bearer=" & obj_BearerToken.AccessToken.ToString)
    success = rest.AddHeader("Accept", "application/json")
    success = rest.AddHeader("Content-Type", "application/json")

    Dim APIPostCallPath As String = "/Images/$export?async=true"
    Dim s_Response As String = Nothing
    s_APIResponse = New Chilkat.StringBuilder
    Dim s_REsp As String

    Try
        success = rest.FullRequestSb("POST", APIPostCallPath, s_APIBody, s_APIResponse)
        s_REsp = s_APIResponse.GetAsString
        obj_ExportResponse = New CCAPI_ExportResponse
        obj_ExportResponse = JsonConvert.DeserializeObject(Of CCAPI_ExportResponse)(s_REsp, JSONSettings)
        'Enable timer to get progress of asynchronous process
        timer_progress.Enabled = True
    Catch exc As Exception
        Console.WriteLine(exc.ToString)
    End Try
End Sub

Invoke PowerShell to upload Image to Azure

Private Sub Invoke_PowerShellToUpload()
    Dim sessionState = InitialSessionState.CreateDefault()
    sessionState.ExecutionPolicy = Microsoft.PowerShell.ExecutionPolicy.Unrestricted
    Using powershell As PowerShell = PowerShell.Create(sessionState)
        powershell.AddCommand("Copy-DiskToAzure")
        powershell.AddParameter("Filename", obj_AllUploadParams.Filename)
        powershell.AddParameter("ManagedDiskName", obj_AllUploadParams.ManagedDiskname)
        powershell.AddParameter("SubscriptionId", obj_AllUploadParams.SubscriptionID)
        powershell.AddParameter("Location", obj_AllUploadParams.Location)
        powershell.AddParameter("ResourceGroup", obj_AllUploadParams.ResourceGroup)
        powershell.AddParameter("Timeout", obj_AllUploadParams.Timeout)
        lbl_upload.Text = "Uploading..."
        pb_upload.Value = 25
        '***** Snippet must be completed - sync to GitHub failed *****
    End Using
End Sub

Start preparation of Image

Private Sub but_prepareVHD_Click(sender As Object, e As EventArgs) Handles but_prepareVHD.Click
    Dim rest As New Chilkat.Rest
    ' Connect to the CCAPI server.
    Dim bTls As Boolean = True
    Dim port As Integer = 443
    Dim bAutoReconnect As Boolean = True
    Dim success As Boolean = rest.Connect("api.eu.layering.cloud.com", port, bTls, bAutoReconnect)
    If (success <> True) Then
        Debug.WriteLine(rest.LastErrorText)
        Exit Sub
    End If

    success = rest.AddHeader("Citrix-CustomerId", obj_CCAPI_Token.CustomerID)
    success = rest.AddHeader("Authorization", "CWSAuth bearer=" & obj_BearerToken.AccessToken.ToString)
    success = rest.AddHeader("Accept", "application/json")
    success = rest.AddHeader("Content-Type", "application/json")

    Dim APIPostCallPath As String = "/images/$prepare?async=true"
    Dim s_Response As String = Nothing
    txtParams(0) = txt_status
    txtParams(1) = "Sending REST-Call to prepare Image to SMB share..." & vbCrLf
    Me.Invoke(New WriteTextDelegate(AddressOf WriteText), txtParams)
    s_APIResponse = New Chilkat.StringBuilder
    Dim s_REsp As String

    Try
        success = rest.FullRequestSb("POST", APIPostCallPath, s_APIBody, s_APIResponse)
        s_REsp = s_APIResponse.GetAsString
        obj_PrepareResponse = New CCAPI_PrepareResponse
        obj_PrepareResponse = JsonConvert.DeserializeObject(Of CCAPI_PrepareResponse)(s_REsp, JSONSettings)
        timer_prepare.Enabled = True
    Catch exc As Exception
        Console.WriteLine(exc.ToString)
    End Try
End Sub

Public Sub GetPrepareProgress(ByVal obj_CCAPI_Token As CCAPI_Token, ByVal obj_PrepareResponse As CCAPI_PrepareResponse)
    pb_prepare.Value = 5
    Dim rest As New Chilkat.Rest
    ' Connect to the CCAPI server.
    Dim bTls As Boolean = True
    Dim port As Integer = 443
    Dim bAutoReconnect As Boolean = True
    Dim success As Boolean = rest.Connect("api.eu.layering.cloud.com", port, bTls, bAutoReconnect)
    If (success <> True) Then
        Debug.WriteLine(rest.LastErrorText)
        Exit Sub
    End If

    success = rest.AddHeader("Citrix-CustomerId", obj_CCAPI_Token.CustomerID)
    success = rest.AddHeader("Authorization", "CWSAuth bearer=" & obj_BearerToken.AccessToken.ToString)
    success = rest.AddHeader("Accept", "application/json")
    success = rest.AddHeader("Content-Type", "application/json")

    Dim APIPostCallPath As String = "/jobs/" & obj_PrepareResponse.id
    Dim s_Response As String = Nothing
    Dim i_Progress As Integer

    Try
        s_Response = rest.FullRequestFormUrlEncoded("GET", APIPostCallPath)
        's_REsp = s_APIResponse.GetAsString
        obj_JobProgressResponse = New CCAPI_JobProgressResponse
        obj_JobProgressResponse = JsonConvert.DeserializeObject(Of CCAPI_JobProgressResponse)(s_Response, JSONSettings)
        'GetExportProgress(obj_CCAPI_Token, obj_ExportResponse)
        i_OverallProgress = CInt(obj_JobProgressResponse.overallProgressPercent)
        If i_OverallProgress <> 0 Then
            pb_prepare.Value = i_OverallProgress
        End If
        s_ProgressStatus = obj_JobProgressResponse.status
        lbl_progressstatus.Text = "Progress-Status: " & s_ProgressStatus
        If obj_JobProgressResponse.error.Length <> 0 Then
            Dim s_error As String
            s_error = obj_JobProgressResponse.error(0).ToString
            pb_prepare.BackColor = Color.Red
            pb_prepare.ForeColor = Color.Red
            pb_prepare.Value = 100
            lbl_progressstatus.Text = "Error - see logs!"
            timer_progress.Enabled = False
            txtParams(0) = txt_status
            txtParams(1) = "Export failed - see logs!..." & vbCrLf
            Me.Invoke(New WriteTextDelegate(AddressOf WriteText), txtParams)
            txtParams(0) = txt_status
            txtParams(1) = "-------------------------------" & vbCrLf & vbCrLf
            Me.Invoke(New WriteTextDelegate(AddressOf WriteText), txtParams)
        End If
        If s_ProgressStatus = "Complete" Then
            pb_prepare.Value = 100
            lbl_progressstatus.Text = s_ProgressStatus
            timer_progress.Enabled = False
            txtParams(0) = txt_status
            txtParams(1) = "Export successful!" & vbCrLf
            Me.Invoke(New WriteTextDelegate(AddressOf WriteText), txtParams)
            txtParams(0) = txt_status
            txtParams(1) = "-------------------------------" & vbCrLf & vbCrLf
            Me.Invoke(New WriteTextDelegate(AddressOf WriteText), txtParams)
        End If
        ' s_ProgressError = obj_JobProgressResponse.error
    Catch exc As Exception
        Console.WriteLine(exc.ToString)
    End Try
End Sub
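The token call in the appendix reduces to a single form-encoded POST against the path used above. For readers who want to script the same step outside .NET, here is a minimal Python sketch; the endpoint host and path come from the appendix, while the credentials shown are hypothetical placeholders and the actual POST (commented) is left to the caller:

```python
from urllib.parse import urlencode

def build_token_request(customer_id, client_id, client_secret):
    """Build the URL path and form body for the Citrix Cloud token call,
    mirroring FetchBearerToken in the appendix above."""
    path = f"/cctrustoauth2/{customer_id}/tokens/clients"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return path, body

# The request itself would be POSTed to https://api-eu.cloud.com<path>
# with Content-Type application/x-www-form-urlencoded; the JSON response
# carries token_type, expires_in, and access_token, as parsed in the VB code.
path, body = build_token_request("uz12345", "client-guid", "secret==")
print(path)   # /cctrustoauth2/uz12345/tokens/clients
```

Separating request construction from transport like this makes the credential handling easy to unit-test without touching the live API.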
26. Overview

This document serves as a guide to prepare an IT organization for successfully evaluating Unified Communications (UC) in desktop and application virtualization environments using Microsoft Teams. Over 500,000 organizations, including 91 of the Fortune 100 (as of March 2019), use Teams in 44 languages across 181 markets. Without proper consideration and design for optimization, virtual desktop and virtual application users are likely to find the Microsoft Teams experience subpar. Citrix provides technologies to optimize this experience and make Teams more responsive, with crisp video and audio, even when working remotely in a virtual desktop. However, with multiple combinations of Teams infrastructures, clients, endpoint types, and user locations, one must find the right “recipe” to deliver Teams optimally.

The Citrix® HDX™ Optimization for Microsoft® Teams offers clear, crisp 720p high-definition video calls at 30 fps in an optimized architecture. Users can seamlessly participate in audio-video or audio-only calls to and from other Teams users, optimized Teams users, and other standards-based video desktop and conference room systems. Support for screen sharing is also available. This document guides administrators in evaluating the Teams delivery solution in their Citrix environment. It contains best practices, tips, and tricks to ensure that the deployment is as robust as possible.

Optimized versus Generic delivery of Microsoft Teams

This choice is often what causes the most confusion about delivering a Microsoft Teams experience in a Citrix environment. The main reason is that without optimization the media must “hairpin” from your client to the server in the data center and then back to the endpoint. This additional traffic can put significant load on the server (especially for video) and can cause delays and an overall degraded experience, especially if the other party in a Teams call is a user in a similar virtualized environment.
This method of delivering a Microsoft Teams experience is referred to as “Generic” delivery. The preferred method is “Optimized” delivery, in which the architect or administrator uses Optimization for Microsoft Teams in their environment. The “Optimized” method is like splitting the Teams client in two, as illustrated in the following comparison diagram. The user interface lives inside the virtual host and is seen completely in the virtual desktop or application display, while the media rendering, or media engine, is separated off to run on the endpoint. This method allows for excellent rendering of audio and video and a great desktop sharing experience.

Choosing the right Teams optimization for your environment

Optimization for Teams is not a “one size fits all” technology. For the Teams desktop app, with Windows, Linux, Mac, and ChromeOS clients, the Citrix HDX Optimization for Microsoft Teams with Citrix Workspace app is the way to go. For web-based Teams, with Windows and Linux clients using a Chrome browser, the Citrix HDX Optimization for Microsoft Teams with Browser Content Redirection is the right solution. Optimization for mobile OSs is not available right now. Typically, mobile users who want access to Teams on their devices use the native Teams apps from the appropriate app store.
Pros of using Citrix HDX Optimization for Microsoft Teams

Richest experience, all media rendered on the endpoint
No hair-pinning effect; media communications go point to point between clients and the Teams conferencing service homed in Office 365
Less resource impact on the Citrix Virtual Apps and Desktops hosts
Less HDX bandwidth consumed than with the “generic” approach
Supports delivery with Citrix Virtual Apps using Windows Server OSs
Simple installation on client devices, minimal prerequisites
Can be used remotely from the enterprise network with Office 365
Support for Windows, Mac, Linux, and ChromeOS endpoints
Wide choice of supported HDX Premium thin client devices (see Citrix Ready list)
Support provided by both Microsoft and Citrix support
No requirement for both sides of the optimized architecture to authenticate to the back end
Requires no modification to the Teams back end

Citrix HDX Optimization for Microsoft Teams

These components are bundled by default into Citrix Workspace app and the Virtual Delivery Agent (VDA).

Conceptual Architecture

Call Flow

Launch Microsoft Teams.
Teams authenticates to O365. Tenant policies are pushed down to the Teams client, and relevant TURN and signaling channel information is relayed to the app.
Teams detects that it is running in a VDA and makes API calls to the Citrix JavaScript API.
Citrix JavaScript in Teams opens a secure WebSocket connection to WebSocketService.exe running on the VDA (127.0.0.1:9002). WebSocketService.exe runs as a Local System account on session 0.
WebSocketService.exe performs TLS termination and user session mapping, and spawns WebSocketAgent.exe, which now runs inside the user session.
WebSocketAgent.exe instantiates a generic virtual channel by calling into the Citrix HDX Browser Redirection Service (CtxSvcHost.exe).
Citrix Workspace app’s wfica32.exe (HDX engine) spawns a new process called HdxRtcEngine.exe, which is the new WebRTC engine used for Teams optimization.
HdxRtcEngine.exe and Teams.exe have a two-way virtual channel path and can start processing multimedia requests.

User calls:

Peer A clicks the call button. Teams.exe communicates with the Teams services in Azure, establishing an end-to-end signaling path with Peer B.
Teams asks HdxRtcEngine for a series of supported call parameters (codecs, resolutions, and so forth), known as a Session Description Protocol (SDP) offer. These call parameters are then relayed over the signaling path to the Teams services in Azure and from there to the other peer.
The SDP offer/answer (single-pass negotiation) and the Interactive Connectivity Establishment (ICE) connectivity checks (NAT and firewall traversal using Session Traversal Utilities for NAT (STUN) bind requests) complete. Then, Secure Real-time Transport Protocol (SRTP) media flows directly between HdxRtcEngine.exe and the other peer (or O365 conference servers if it is a Meeting).

For a detailed list of system requirements, check the Microsoft Teams article in eDocs.

Supported Teams headsets and handsets

See the list of devices that are supported by Microsoft for Teams and Skype for Business.

Installation Steps

Prerequisites

Download the latest Citrix Virtual Apps and Desktops VDA installer. On Citrix.com, select the Downloads tab, select Citrix Virtual Apps and Desktops as the product, and select Product Software as the download type. Select Citrix Virtual Apps and Desktops 1906 or later; it is listed under Components.
Ensure that the Teams service is reachable from the client in addition to the VDA.
Ensure the latest Microsoft Teams client version is installed on the Virtual Delivery Agent hosts or base image, on the Citrix Virtual Apps servers that are used to deliver Microsoft Teams, or on both. See the installation instructions below.
Download the latest Citrix Workspace app from here.
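The VDA-side handshake in the call flow above hinges on a plain loopback connection: Teams connects to a local service (WebSocketService.exe) on 127.0.0.1:9002. The generic pattern can be sketched with ordinary sockets; this is an illustration of a loopback service and client, not the actual Citrix protocol, and it uses an ephemeral port rather than 9002:

```python
import socket
import threading

def loopback_service(state):
    # Stand-in for a local service such as WebSocketService.exe:
    # listen on loopback only, accept one client, send a greeting.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # ephemeral port instead of 9002
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()                # tell the client which port to use
    conn, _ = srv.accept()
    conn.sendall(b"hello from service")
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
t = threading.Thread(target=loopback_service, args=(state,))
t.start()
state["ready"].wait()

# Stand-in for the Teams-side client opening the loopback connection.
cli = socket.create_connection(("127.0.0.1", state["port"]))
greeting = cli.recv(1024)
cli.close()
t.join()
print(greeting.decode())
```

Because the traffic never leaves 127.0.0.1, the handshake works even on clients with restrictive inbound firewall rules, which is the same property the VDA-side service relies on.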
The installation procedures are simple.

Citrix Virtual Apps and Desktops VDA install on the host virtual machines

The HDX Optimization for Teams is bundled as part of the VDA in Citrix Virtual Apps and Desktops. It is installed on the host or base image of the catalog and on the Citrix Virtual Apps servers that may be used to deliver Teams.

Application requirements

The VDA installer automatically installs the following items, which are available on the Citrix installation media in the Support folders:

Microsoft .NET Framework 4.7.1 or later, if it is not already installed
Microsoft Visual C++ 2013 and 2015 Runtimes, 32-bit and 64-bit
BCR_x64.msi - the MSI that contains the Microsoft Teams optimization code and starts automatically from the GUI. If you are using the command-line interface for the VDA installation, do not exclude it.
For Windows Server, if you did not install and enable the Remote Desktop Services roles, the installer automatically installs and enables those roles.
3 GB of free disk space for each user profile (recommended by Microsoft)

Ensure that the Microsoft Teams client application is installed in per-machine mode on the VDA. Install the Citrix Virtual Delivery Agent on the host or base image, following the instructions here. Using this image, create the appropriate machine catalogs and delivery groups in the Citrix Studio / Citrix Cloud Manage tab before trying to establish sessions and access the Teams client.

Microsoft Teams install

Note: Install Microsoft Teams after the VDA is installed. The Teams installer has detection logic for underlying VDAs, which is critical for optimization. The installation must be done on the golden image of your catalog or in the office layer (if you are using App Layering). We recommend you follow the Microsoft Teams installation guidelines. Avoid installing Teams under AppData (unless you are using dedicated/assigned virtual desktops).
Instead, install it in C:\Program Files by using the ALLUSER=1 flag, which is the recommended mode for pooled VDI / Windows Server / Windows 10 multiuser. For more information, see Install Microsoft Teams using MSI.

If Teams was previously installed in user mode on the image:

Users from EXE installer: have all users in the environment manually uninstall it from Control Panel > Programs & Features.
Admin from MSI: the admin uninstalls it in the normal way. All users in the environment must sign in for the uninstallation to be completed.
Admin from Office Pro Plus: the admin may need to uninstall as if the MSI were directly installed (above). Office Pro Plus must be configured not to include Teams.

Windows client device - Citrix Workspace app for Windows install (latest Current Release recommended)

The Citrix Workspace app for Windows has the optimization components built into it. When you install the application on your client, the components are already present.

System Requirements

Approximately 1.8-2.0 GHz quad-core CPU required for 720p HD resolution during a peer-to-peer video conference call. Quad-core CPUs with lower speeds (~1.5 GHz) but equipped with Intel Turbo Boost or AMD Turbo Core that can boost up to 2.0 GHz are also supported.
Citrix Workspace app requires a minimum of 600 MB free disk space and 1 GB RAM.
Microsoft .NET Framework version 4.6.2 or later is installed automatically, if it is not already installed.

Follow the instructions to install the Citrix Workspace app for Windows here.

Policy Settings

To enable optimization, ensure the Microsoft Teams redirection Studio policy is set to Allowed. The policy is enabled by default.

Note: In addition to this policy being enabled, HDX checks to verify that the version of Citrix Workspace app is equal to or greater than the minimum required version. If both conditions are met, the following registry key is set to 1 on the VDA.
The Microsoft Teams application reads the key to load in VDI mode.

Key: HKEY_CURRENT_USER\Software\Citrix\HDXMediaStream
Name: MSTeamsRedirSupport
Value: DWORD (1 - on, 0 - off)

Network Requirements

Microsoft Teams relies on Media Processor servers in Microsoft Azure for meetings or multiparty calls. It relies on Azure Transport Relays for scenarios where two peers in a point-to-point call do not have direct connectivity, or where a participant does not have direct connectivity to the Media Processor. Therefore, the network health between the peer and the Office 365 cloud determines the performance of the call. We recommend evaluating your environment to identify any risks and requirements that can influence your overall cloud voice and video deployment. Use the Prepare your organization’s network for Microsoft Teams page to evaluate whether your network is ready for Microsoft Teams.

Port / Firewall settings

Teams traffic flows via Transport Relay on UDP 3478-3481 and TCP 443 (fallback), and the clients need access to these address ranges: 13.107.64.0/18, 52.112.0.0/14, 52.120.0.0/14. Optimized traffic for peer-to-peer connections is routed at random on higher ports (UDP 1-65535), if they are open; for more details, refer to Microsoft's documentation. Be sure that all computers running the Workspace app client with Teams optimization can resolve external DNS queries to discover the TURN/STUN services provided by Microsoft 365 (for example, worldaz.turn.teams.microsoft.com) and that your firewalls are not preventing access. For support information, see the Support section of our documentation.

Summary of key network recommendations for Real-time Transport Protocol (RTP) traffic

Connect to the Office 365 network as directly as possible from the branch office.
Bypass proxy servers, network SSL intercept, deep packet inspection devices, and VPN hairpins (use split tunneling if possible) at the branch office. If you must use them, make sure that RTP/UDP Teams traffic is unhindered.
Plan for and provide enough bandwidth. Check each branch office for network connectivity and quality. The WebRTC media engine in the Workspace app (HdxRtcEngine.exe) uses the Secure RTP protocol for multimedia streams that are offloaded to the client. The following metrics are recommended for guaranteeing a great user experience:

Latency (one way): < 50 milliseconds
Latency (RTT): < 100 milliseconds
Packet loss: < 1% during any 15-second interval
Packet inter-arrival jitter: < 30 ms during any 15-second interval

In terms of bandwidth requirements, optimization for Microsoft Teams can use a wide variety of codecs for audio (OPUS/G.722/PCM/G.711) and video (H264/VP9). The peers negotiate these codecs during the call establishment process using the Session Description Protocol (SDP) offer/answer. Citrix minimum recommendations for bandwidth and codecs for specific types of content are:

Audio (each way): ~90 kbps using G.722
Audio (each way): ~60 kbps using Opus*
Video (each way): ~700 kbps using H264 360p @ 30 fps, 16:9
Video (each way): ~2500 kbps using H264 720p @ 30 fps, 16:9
Screen sharing: ~300 kbps using H264 1080p @ 15 fps

(*) Opus supports constant and variable bitrate encoding from 6 kbps up to 510 kbps, and it is the preferred codec for peer-to-peer calls between two VDI users.

Common deployment related tips and questions

Teams Tips

To update the Teams desktop client, uninstall the currently installed version, then install the new version.
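Two of the network checks above lend themselves to quick scripting: verifying that an observed media endpoint falls inside the published Office 365 address ranges, and sizing branch bandwidth from the per-stream figures. The subnets and per-stream rates below are the ones quoted in the port/firewall and bandwidth sections; the per-branch sizing itself is an illustrative simplification (real planning would also factor in concurrency ratios and overhead):

```python
import ipaddress

# Office 365 media address ranges quoted in the port/firewall section.
TEAMS_RANGES = [ipaddress.ip_network(n) for n in
                ("13.107.64.0/18", "52.112.0.0/14", "52.120.0.0/14")]

def is_teams_media_ip(addr):
    """True if addr falls inside one of the published Teams media ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in TEAMS_RANGES)

# Per-stream rates (kbps, each way) from the recommendations above.
RATES_KBPS = {"audio_g722": 90, "audio_opus": 60,
              "video_360p": 700, "video_720p": 2500, "screen_share": 300}

def branch_bandwidth_kbps(concurrent_calls, rate_key):
    """Illustrative sizing: concurrent calls times the per-stream rate."""
    return concurrent_calls * RATES_KBPS[rate_key]

print(is_teams_media_ip("52.112.0.5"))          # True (inside 52.112.0.0/14)
print(branch_bandwidth_kbps(20, "video_720p"))  # 50000 (20 HD calls, kbps)
```

A check like this against firewall logs quickly shows whether media is reaching the Transport Relays directly or being forced through a fallback path.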
To uninstall the Teams desktop client MSI, if it was first installed using per-machine mode, use one of the following commands:

msiexec /passive /x Teams_windows_x64.msi /l*v msi_uninstall_x64.log
msiexec /passive /x Teams_windows.msi /l*v msi_uninstall.log

Troubleshooting

Here are a few ways to resolve issues users may face:

Symptom: Installation failure
Cause: Inconsistent state of Citrix redirection services
Resolution: Validate the following:

Teams automatically launches for all users after sign-in to Windows.
Existence of directories and files:
  The Microsoft\Teams\current folder (under Program Files (x86) or Program Files) with Teams.exe, which is the main application
  The Teams Installer folder with Teams.exe, which is an EXE installer (do not ever run this manually!)
  %LOCALAPPDATA%\Microsoft\Teams is either absent or mostly empty (only a couple of files)
Existence of shortcuts: a Teams desktop client shortcut, pointing to Program Files…, in the following places:
  On the desktop
  In the Start menu
Existence of Windows Registry information: a value named Teams, of type REG_SZ, in one of the following key paths:
  Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Run
  Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run

Symptom: Failure while placing an audio/video call, and the connected audio/video devices cannot be found
Cause: Inconsistent state of Citrix redirection services
Resolution: Validate that the HdxRtcEngine.exe process is running on the client machine.
If the process is not running, restart the Citrix redirection services. Do the following, in this order, to check whether HdxRtcEngine.exe is launched:

Exit Teams on the VDA.
Start services.msc on the VDA.
Stop "Citrix HDX Teams Redirection Service".
Disconnect the HDX session.
Reconnect to the HDX session.
Start "Citrix HDX Teams Redirection Service".
Restart "Citrix HDX HTML5 Video Redirection Service".
Launch Teams on the VDA.

Symptom: No incoming ring notification tone in a Citrix session
Cause: Audio being played on the VDA host
Resolution: No audio devices on the Citrix session / incorrect local default audio device. Make sure that a remote audio device is present in the Citrix session. Make sure that the Citrix redirection service is running on the remote host; restarting it solves most problems. If multiple audio sources are available, make sure that the default playback device on the client machine is set to the device where the user expects to hear the ring notification.

Summary

Citrix supports Microsoft Teams infrastructures, whether on-premises or Office 365 (cloud), as long as the configuration allows for successful internal and external client communication. We have walked through how to evaluate the Citrix Optimization for Teams and pointed you to the resources for deploying the rest. The Optimization for Microsoft Teams greatly increases server scalability while offering no degradation in audio-video quality and optimal network bandwidth efficiency. It is the Microsoft-recommended solution for a VDI deployment.
27. Overview

Citrix Analytics for Performance allows you to track, aggregate, and visualize key performance indicators of your Citrix Virtual Apps and Desktops environment. It quantifies user experience and gives customers end-to-end visibility into the root causes of end-user experience issues. It also provides multi-site aggregation and reporting: customers who have multiple sites can consume data from a single pane of glass instead of having to log into multiple consoles. More information on Citrix Analytics for Performance can be found here, and videos demonstrating it can be found here.

Pre-requisites

Review the following Analytics onboarding guide before moving into your POC build.

On-premises Citrix Virtual Apps and Desktops Sites:
Delivery Controller and Director 1909 or later
Citrix Cloud account with Citrix Analytics entitlements
Your Citrix Cloud account is an administrator account with rights to the Product Registration Experience. Full details on prerequisites can be found here.

On-premises StoreFront:
Your StoreFront version must be 1906 or later.
The StoreFront deployment must be able to connect to the following addresses: https://*.cloud.com and https://api.analytics.cloud.com
The StoreFront deployment must have port 443 open for outbound internet connections. Any proxy servers on the network must allow this communication with Citrix Analytics.
Deployment Steps

Connecting Citrix Analytics with on-premises Citrix Virtual Apps and Desktops sites

Log into Director as a full administrator and pick the Site you want to configure with Performance Analytics.
Click the Analytics tab.
Accept the terms and click Get Started.
Click Connect Site.
Click Copy Code and then click Register on Citrix Cloud.
Log in to Citrix Cloud.
Paste the 8-digit code you copied in Director and click Continue.

Connecting to on-premises StoreFront

Log into Citrix Cloud and click Manage under the Analytics console from your StoreFront server.
Click Manage.
Click Settings and then click Data Sources.
Click the ellipsis next to Virtual Apps and Desktops and select Connect to StoreFront Deployment.
Click Download File.
Open PowerShell and run the following command: Import-STFCasConfiguration -Path "configuration file path"
You can now see that the StoreFront deployment has been added.

Citrix Desktop-as-a-Service (DaaS)

Citrix Analytics automatically detects Citrix DaaS sites, and no action by the administrator is needed.

Relevant Data

Once data is flowing into the environment, some things to keep in mind:

Multi-site aggregation: allows you to get insights into multiple sites regardless of whether they are hosted on-premises, in the cloud, or hybrid.

User Experience Score: the User Experience score is a comprehensive measurement of the quality of the sessions established by users. Click the number of poor users to drill in. The User Experience score is based on:
Session availability
Session responsiveness
Session logon duration
Session resiliency
You can click each factor to get the corresponding subfactors and insights into what might be causing an issue, and proactively resolve it.

Failure Insights: Failure Insights provide insights into the root causes of session failures in your environment. Click the number of machines to see which machines are black-hole machines. Click a specific machine to see further details on the black-hole machine.
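To make the factor model above concrete, the sketch below shows one hypothetical way a composite score could be rolled up from the four factors. To be clear, the weights, bands, and formula here are invented for illustration; Citrix's actual User Experience scoring algorithm is not described in this document:

```python
# Hypothetical roll-up of the four User Experience factors listed above.
# Weights and banding thresholds are invented for illustration only.
WEIGHTS = {"availability": 0.25, "responsiveness": 0.25,
           "logon_duration": 0.25, "resiliency": 0.25}

def ux_score(factors):
    """Weighted average of per-factor scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[k] * v for k, v in factors.items())

def classify(score):
    # Example banding into poor / fair / excellent.
    if score < 40:
        return "poor"
    if score < 70:
        return "fair"
    return "excellent"

session = {"availability": 100, "responsiveness": 60,
           "logon_duration": 40, "resiliency": 80}
s = ux_score(session)
print(s, classify(s))   # 70.0 excellent
```

The point of the sketch is the drill-down shape: a poor composite score is only actionable once you look at which factor (and then which subfactor) dragged it down, which is exactly the workflow the console encourages.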
  28. Overview

This Proof of Concept guide is designed to help you quickly configure Citrix DaaS to include Remote PC Access in your environment. At the end of this guide, you can give users who are working remotely access to their on-premises physical desktops using Citrix DaaS. Your users can access their on-premises workstations on any device of their choice without having to connect over a VPN.

Conceptual Architecture

Scope

In this Proof of Concept guide, you take the role of a Citrix administrator and create a connection between your organization's on-premises deployment of physical desktops and Citrix DaaS. You then provide access to those on-premises workstations to an end user through Citrix Workspace. This guide showcases how to perform the following actions:
Create a Citrix Cloud account (if you don't have one already)
Obtain a Citrix DaaS account
Create a new Resource Location (your office) and install the Citrix Cloud Connectors in it
Install the Citrix Virtual Delivery Agent on the Remote PC Access hosts
Create a Machine Catalog in Citrix DaaS
Create a Delivery Group
Launch a session from Citrix Workspace

Prerequisites

Host machine requirements
The in-office workstations that your users connect to must be Windows single-session operating system machines joined to a Windows Active Directory (AD) domain.

Citrix Cloud Connector
To install the Citrix Cloud Connectors in your environment, you require at least two Windows Server 2012 R2 or later server machines/VMs with static IPs. Windows installation and domain join of these machines must be done in advance. The system requirements for the Cloud Connectors are here. Review the guidance on Cloud Connector installation here. The machine the Citrix Cloud Connector runs on must have network access to all the physical machines that are to be made available on the internet via Citrix Workspace.
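A prospective Cloud Connector machine can be spot-checked against the runtime prerequisites from PowerShell before you run the installer, which performs its own authoritative checks. The sketch below is a hedged example: the DigiCert host name is illustrative, and the registry value 461808 corresponds to .NET Framework 4.7.2.

```powershell
# Run on each prospective Cloud Connector machine before installation.

# Outbound TCP 80 to *.digicert.com (used for X.509 certificate validation);
# the host name below is one representative example.
Test-NetConnection -ComputerName "ocsp.digicert.com" -Port 80 |
    Select-Object ComputerName, TcpTestSucceeded

# .NET Framework 4.7.2 or later: the Release DWORD must be 461808 or higher.
$release = (Get-ItemProperty `
    'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
if ($release -ge 461808) {
    ".NET Framework OK (Release $release)"
} else {
    ".NET Framework too old (Release $release) - upgrade before installing"
}

# Confirm the machine is synchronizing time (clock should track UTC).
w32tm /query /status
```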
Some requirements for Citrix Cloud Connector installation (the installer performs checks for these) are:
The Citrix Cloud Connector machine must have outbound Internet access on port 443, and on port 80 to *.digicert.com only. The port 80 requirement is for X.509 certificate validation. See more info here.
Microsoft .NET Framework 4.7.2 or later must be pre-installed on the machine.
Time on the machine must be synced with UTC.

This guide provides detailed instructions on how to configure your environment, including the office workstations, and on connecting your on-premises setup to Citrix Cloud. As a Citrix Cloud administrator, you enable your users to connect to their office workstations remotely with Citrix DaaS.

Create a Citrix Cloud Account

If you are an existing Citrix Cloud customer, skip to the next section: Request a Citrix DaaS Trial. Ensure that you have an active Citrix Cloud account. If your account has expired, contact your account manager to enable it. If you need to sign up for a new Citrix Cloud account, follow the step-by-step instructions here: Signing up for Citrix Cloud.

Request a Citrix DaaS Trial
Sign in to your Citrix Cloud account.
From the management console, select Request Trial for the service you want to trial, in this case Citrix DaaS.

Note: For some services you must request a demo from a Citrix sales representative before you can try out the service. Requesting a demo allows you to discuss your organization's cloud service needs with a Citrix sales representative, and the sales representative ensures you have all the information needed to use the service successfully. When your trial is approved and ready to use, Citrix sends you an email notification.

Create a new Resource Location

While the service is being provisioned, we can keep going. Return to the Citrix Cloud administration page.
Scroll up; under Resource Locations, click Edit or Add New.
Click Add a Resource Location or + Resource Location (if there is already a resource location).
Click the ellipsis at the top right of the new resource location and click Manage Resource Location.
Enter a name for the new Resource Location and click Confirm.
Under the newly created resource location, click + Cloud Connectors.
Click Download, then click Run once the download completes. A "Citrix Cloud connectivity test successful" message is displayed. Click Close.
Note: If the test fails, check the following link to resolve the issue.
Click Sign In and sign in to Citrix Cloud to authenticate the Citrix Cloud Connector.
From the drop-down lists, select the appropriate Customer and Resource Location (the Resource Location drop-down list is not displayed if there is only one resource location). Click Install.
Once the installation completes, a service connectivity test runs. Let it complete and you again see a successful result. Click Close.
Click Refresh all to refresh the Resource Location page in Citrix Cloud.
Click Cloud Connectors. The newly added Cloud Connector is listed.
Repeat the last 8 steps to install another Cloud Connector in the Resource Location on the second Windows server machine that you prepared.

Install the Citrix Virtual Delivery Agent on the Remote PC Access hosts

We now install the Citrix Virtual Apps and Desktops Virtual Delivery Agent (VDA) on the physical machines that we are going to give users access to. If you want to install the VDA using scripts or a deployment tool like SCCM, follow the appropriate links, and make sure to use the install command-line parameters shown in the following instructions.
Connect to the physical machine via RDP as a local admin.
Open Citrix.com in your browser.
Hover over Sign In and click My Citrix account.
Sign in with your username and password.
Click Downloads.
From the Select a product...
drop-down list, select Citrix Virtual Apps and Desktops.
In the page that opens, select the latest version of Citrix Virtual Apps and Desktops 7 (without the .x at the end).
Scroll down to Components that are on the product ISO but also packaged separately and click the chevron to expand the section.
Click Download File under the Single-session OS Virtual Delivery Agent version.
Check the "I have read and certify that I comply with the above Export Control Laws" checkbox, if you agree, and click Accept. The download begins. Save the file. When the download completes, move to the next step.
Search for PowerShell from the Start menu search bar and click Run as administrator.
Change to the directory that you downloaded the installer into and run the following command (replace the name of the executable with the one you downloaded, and "cloudconnectorFQDN" with your Cloud Connector FQDN):

VDAWorkstationSetup_version.exe /quiet /remotepc /includeadditional "Citrix User Profile Manager","Citrix User Profile Manager WMI Plugin" /controllers "cloudconnectorFQDN" /enable_hdx_ports /noresume /noreboot

Note: The Citrix Profile Management and Citrix Profile Management WMI plug-in components are essential for monitoring and for Citrix Analytics to collect data from the endpoint, so that logon duration, session resiliency, and the UX score can be reported.

Wait for the installation to complete, then reboot the physical machine.
Repeat the procedure for all the physical hosts that you want to make available remotely.

Create a machine catalog in Citrix DaaS

Use Citrix DaaS to create a catalog of the physical machines.
Once the trial is approved, log in to Citrix Cloud from your local machine.
Scroll to My Services, locate the DaaS service tile, and click Manage. The service overview page is displayed.
In the left menu, click Machine Catalogs.
Click Create Machine Catalog.
Select Remote PC Access. Click Next.
Select I want users to connect to the same (static) desktop each time they log in.
Click Next.
Click Add Machine Accounts or click Add OUs, based on whether you want to add machines or OUs (all the physical machines in the OU). In our example we are adding a machine.
In the Select Computers pop-up, enter the first few characters of the machine hostname you want to add. Click Check Names.
If the search returns more than one machine name, choose the ones you want to add (hold down the CTRL key to choose more than one). Once you have selected all the machines, click OK.
Repeat the last 2 steps to add all the machines you want to add to the catalog, then click Save in the Select Computers dialog.
From the Select the Zone and minimum functional level for this catalog drop-down list, select 1811 (or newer). Click Next.
Leave the default selection on Scopes. Click Next.
Leave the default selection on WEM. Click Next.
Do not select Enable VDA upgrade. Click Next.
Enter a name for the machine catalog. Click Finish. You are returned to the Machine Catalogs page.

Create a Delivery Group

From the left side menu, click Delivery Groups to start creating your delivery group.
From the Actions menu (right side), click Create Delivery Group.
Select the catalog you created earlier. Click Next.
Specify which users can access these desktops. In our example we assign the desktops to a group of users with a 1:1 mapping to the machines in the delivery group, for enhanced security. Click the Restrict use of this Delivery Group to the following users radio button. Click Add.
Add the domain users/groups that you want to have access to the delivery group. You can check their names by clicking Check Names. Once you are done, click OK.
If the search returns more than one user name, choose the ones you want to add (hold down the CTRL key to choose more than one). Once you have selected all the users that you want to add, click OK.
Repeat the last 2 steps for all the users that you want to add to the delivery group.
Then click Save in the Select Users or Groups dialog, and click Next in the Create Delivery Group dialog.
Click Add in the Add Desktops Assignment Rule dialog.
Enter a Display Name for the delivery group. Click Add and add the same users you chose earlier (or a subset of them) again. Ensure the Enable desktop assignment rule checkbox is checked. Click OK.
Click Next. Click Next again.
Select the appropriate License Type. Click Next.
Enter a Delivery Group name. Click Finish.
Once the delivery group is created, the Delivery Group Manage link looks like this.
Click the Desktops tab in the Details section.
Click x machine(s) is/are not assigned to a user.
Select the machine you want to assign to a user. Click Change User from the Action menu.
Click Add. Search for the user you want to assign to the machine using the Check Names button. Once found, click OK. Click Save.
Repeat the steps for the rest of the machines to assign each user to their physical machine.

Note: The last 4 steps are needed only if you want to assign specific users to specific desktops. Otherwise, users are auto-assigned to the next available desktop in the delivery group, or you can use PowerShell scripts to perform the assignment.

Launch the session from Citrix Workspace

Open the Workspace URL you saved earlier (from Citrix Cloud) to access Citrix Workspace.
Log in as a domain user to whom you have assigned a remote desktop.
If this is the first time you are launching a session from the browser, you may get a pop-up. Ensure Citrix Workspace app is installed and click Detect Workspace.
Click View All Desktops.
Click the Remote PC Access delivery group. The session launches, giving the user access to the remote physical PC.

Summary

This guide walked you through connecting the physical desktops in your office to Citrix DaaS, so that users can access them remotely. You learned how to use Citrix DaaS to allow users to access their desktops on any device, from any location.
The process included installing Citrix Cloud Connectors in your on-premises office location, installing Citrix Virtual Delivery Agents on the desktop machines, creating a machine catalog from those machines and then a delivery group, and assigning users to their machines so that they can connect to those desktops using the Citrix Workspace app. To learn more about Citrix solutions for Business Continuity, read the Tech Brief.
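The note above mentions that user-to-desktop assignment can also be scripted. A hedged sketch using the Citrix DaaS Remote PowerShell SDK is shown below; the machine and user names are placeholders, and the snap-in and authentication calls assume the SDK is installed on the admin machine.

```powershell
# Hedged sketch: assign a specific user to a specific Remote PC Access machine
# with the Citrix DaaS Remote PowerShell SDK. Names are illustrative placeholders.
Add-PSSnapin Citrix*            # load the Broker snap-ins from the SDK
Get-XdAuthentication            # interactive authentication to Citrix Cloud

# Assign DOMAIN\jdoe to the machine DOMAIN\WKS-001
Add-BrokerUser -Name "DOMAIN\jdoe" -Machine "DOMAIN\WKS-001"

# Confirm the assignment took effect
Get-BrokerMachine -MachineName "DOMAIN\WKS-001" |
    Select-Object MachineName, AssociatedUserNames
```

Looping this over a CSV of user/machine pairs is a common way to handle the 1:1 mapping at scale instead of assigning each machine in the Manage console.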