Citrix DaaS

Local Host Cache

Tip:

In Full Configuration > Home, the service health alerts feature gives you proactive alerts to make sure that your Local Host Cache and zones are configured correctly, so that Local Host Cache functions during an outage and your users are not impacted. Alerts come at two levels: site-wide alerts shown in Home (flag icon) and zone-related alerts shown on the Troubleshoot tab of each zone. For more information, see Zones.

Local Host Cache enables connection brokering operations in a Citrix DaaS (formerly Citrix Virtual Apps and Desktops service) deployment to continue when a Cloud Connector cannot communicate with Citrix Cloud. Local Host Cache engages when the network connection is lost for 60 seconds.

With Local Host Cache, users who are connected when an outage occurs can continue working uninterrupted. Reconnections and new connections experience minimal connection delays.

Important:

If using an on-premises StoreFront deployment, you must add all Cloud Connectors that have (or can have) VDAs registered with them to the StoreFront as Delivery Controllers. A Cloud Connector that is not added to the StoreFront cannot transition to outage mode, which might result in user launch failures.

For deployments with no on-premises StoreFront, use the service continuity Citrix Workspace platform feature to allow users to connect to resources during outages. For more information, see Service continuity.

Data content

Local Host Cache includes the following information, which is a subset of the information in the main database:

  • Identities of users and groups who are assigned rights to resources published from the site.
  • Identities of users who are currently using, or who have recently used, published resources from the site.
  • Identities of VDA machines (including Remote PC Access machines) configured in the site.
  • Identities (names and IP addresses) of client Citrix Workspace app machines being actively used to connect to published resources.

It also contains information for currently active connections that were established while the main database was unavailable:

  • Results of any client machine endpoint analysis performed by Citrix Workspace app.
  • Identities of infrastructure machines (such as Citrix Gateway and StoreFront servers) involved with the site.
  • Dates, times, and types of recent activity by users.

How it works

During normal operations

Figure: Local Host Cache during normal operations

  • The Brokering Principal (also known as the Citrix Remote Broker Provider Service) on a Cloud Connector accepts connection requests from StoreFront. The Brokering Principal communicates with Citrix Cloud to connect users with VDAs that are registered with the Cloud Connector.
  • The Citrix Config Synchronizer Service (CSS) checks with the broker in Citrix Cloud approximately every 5 minutes to see if any configuration changes were made. Those changes can be administrator-initiated (such as changing a delivery group property) or system actions (such as machine assignments).
  • If a configuration change occurred since the previous check, the CSS synchronizes (copies) information to a secondary broker on the Cloud Connector. (The secondary broker is also known as the High Availability Service, or HA broker, as shown in the preceding figure.)

    All configuration data is copied, not just items that changed since the previous check. The CSS imports the configuration data into a Microsoft SQL Server Express LocalDB database on the Cloud Connector. This database is referred to as the Local Host Cache database. The CSS ensures that the information in the Local Host Cache database matches the information in the site database in Citrix Cloud. The Local Host Cache database is re-created each time synchronization occurs.

    Microsoft SQL Server Express LocalDB (used by the Local Host Cache database) is installed automatically when you install a Cloud Connector. The Local Host Cache database cannot be shared across Cloud Connectors. You do not need to back up the Local Host Cache database. It is recreated every time a configuration change is detected.

  • If no changes occurred since the last check, the configuration data is not copied.

During an outage

Figure: Local Host Cache during an outage

When an outage begins:

  • The secondary broker starts listening for and processing connection requests.
  • When the outage begins, the secondary broker does not have current VDA registration data, but when a VDA communicates with it, a registration process is triggered. During that process, the secondary broker also gets current session information about that VDA.
  • While the secondary broker is handling connections, the Brokering Principal continues to monitor the connection to Citrix Cloud. When the connection is restored, the Brokering Principal instructs the secondary broker to stop listening for connection information, and the Brokering Principal resumes brokering operations. The next time a VDA communicates with the Brokering Principal, a registration process is triggered. The secondary broker removes any remaining VDA registrations from the previous outage. The CSS resumes synchronizing information when it learns that configuration changes have occurred in Citrix Cloud.

In the unlikely event that an outage begins during a synchronization, the current import is discarded and the last known configuration is used.

The event log indicates when synchronizations and outages occur.

There is no time limit imposed for operating in outage mode.

You can also intentionally trigger an outage. See Force an outage for details about why and how to do this.

Resource locations with multiple Cloud Connectors

Among its other tasks, the CSS routinely provides the secondary broker with information about all Cloud Connectors in the resource location. Having that information, each secondary broker knows about all peer secondary brokers running on other Cloud Connectors in the resource location.

The secondary brokers communicate with each other on a separate channel. They use an alphabetical list of the FQDNs of the machines they’re running on to determine (elect) which secondary broker handles brokering operations in the zone if an outage occurs. During the outage, all VDAs re-register with the elected secondary broker. The non-elected secondary brokers in the zone actively reject incoming connection and VDA registration requests.

Important:

Connectors within a resource location must be able to reach each other at http://<FQDN_OF_PEER_CONNECTOR>:80/Citrix/CdsController/ISecondaryBrokerElection. If Connectors cannot communicate at this address, multiple brokers may be elected and intermittent launch failures may occur during a Local Host Cache event.
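
You can check this connectivity from each Cloud Connector with a quick TCP test. The sketch below verifies only that port 80 on a peer is reachable, not the HTTP path itself; replace the placeholder FQDN before running, and repeat from every Connector against every peer.

```powershell
# Test TCP reachability of a peer Connector's election endpoint port.
# <FQDN_OF_PEER_CONNECTOR> is a placeholder; substitute the peer's actual FQDN.
Test-NetConnection -ComputerName '<FQDN_OF_PEER_CONNECTOR>' -Port 80 |
    Select-Object ComputerName, TcpTestSucceeded
```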

If an elected secondary broker fails during an outage, another secondary broker is elected to take over, and VDAs register with the newly elected secondary broker.

During an outage, if a Cloud Connector is restarted:

  • If that Cloud Connector is not the elected broker, the restart has no impact.
  • If that Cloud Connector is the elected broker, a different Cloud Connector is elected, causing VDAs to register. After the restarted Cloud Connector powers on, it automatically takes over brokering, which causes VDAs to register again. In this scenario, performance can be affected during the registrations.

The event log provides information about elections.

What is unavailable during an outage, and other differences

There is no time limit imposed for operating in outage mode. However, if the outage is due to loss of Citrix Cloud connectivity from the resource location, Citrix recommends restoring connectivity from the resource location as quickly as possible.

During an outage:

  • You cannot use the Manage interfaces.
  • You have limited access to the Remote PowerShell SDK.

    • You must first:
      • Add a registry key EnableCssTestMode with a value of 1: New-ItemProperty -Path HKLM:\SOFTWARE\Citrix\DesktopServer\LHC -Name EnableCssTestMode -PropertyType DWORD -Value 1
      • Set the SDK auth to OnPrem so that the SDK proxy does not try to redirect the cmdlet calls: $XDSDKAuth="OnPrem"
      • Use port 89: Get-BrokerMachine -AdminAddress localhost:89 | Select MachineName, ControllerDNSName, DesktopGroupName, RegistrationState
    • After running those commands, you can access:
      • All Get-Broker* cmdlets.
  • Monitoring data is not sent to Citrix Cloud during an outage. So, the Monitor functions do not show activity from an outage interval.
  • Hypervisor credentials cannot be obtained from the Host Service. All machines are in the unknown power state, and no power operations can be issued. However, VMs on the host that are powered-on can be used for connection requests.
  • An assigned machine can be used only if the assignment occurred during normal operations. New assignments cannot be made during an outage.
  • Automatic enrollment and configuration of Remote PC Access machines is not possible. However, machines that were enrolled and configured during normal operation are usable.
  • Server-hosted applications and desktop users might use more sessions than their configured session limits, if the resources are in different zones.
  • Users can launch applications and desktops only from registered VDAs in the zone containing the currently active/elected broker. Launches across zones (from a broker in one zone to a VDA in a different zone) are not supported during an outage.
  • If a site database outage occurs before a scheduled restart begins for VDAs in a delivery group, the restarts begin when the outage ends. This scenario can have unintended results. For more information, see Scheduled restarts delayed due to database outage.
  • Zone preference cannot be configured. If configured, preferences are not considered for session launch.
  • Tag restrictions where tags are used to designate resource locations are not supported for session launches. When such tag restrictions are configured, and a StoreFront store’s advanced health check option is enabled, sessions might intermittently fail to launch.
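
Taken together, the Remote PowerShell SDK steps above amount to the following sequence, run on a Cloud Connector during an outage (a sketch based on the commands shown earlier; the output depends on your site):

```powershell
# 1. Enable LHC test mode via the registry (one-time setup).
New-ItemProperty -Path HKLM:\SOFTWARE\Citrix\DesktopServer\LHC `
    -Name EnableCssTestMode -PropertyType DWORD -Value 1

# 2. Keep the SDK local so the proxy does not redirect cmdlet calls to Citrix Cloud.
$XDSDKAuth = "OnPrem"

# 3. Query the LHC broker on port 89; only Get-Broker* cmdlets are available.
Get-BrokerMachine -AdminAddress localhost:89 |
    Select-Object MachineName, ControllerDNSName, DesktopGroupName, RegistrationState
```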

StoreFront requirement

If using an on-premises StoreFront deployment, you must add all Cloud Connectors that have (or can have) VDAs registered with them to the StoreFront as Delivery Controllers. A Cloud Connector that is not added to the StoreFront cannot transition to outage mode, which might result in user launch failures.

Resource availability

You can ensure the availability of resources (apps and desktops) during an outage in two ways:

  • Publish the resources in every resource location in your deployment.
  • If you are using StoreFront 1912 CU4 or later, publish the resources to at least one resource location and turn on advanced health check on all StoreFront servers. For versions earlier than StoreFront 2308, the advanced health check is off by default, and must be enabled by an administrator. For StoreFront version 2308 and later, this feature is enabled by default. For more information and instructions on turning on the advanced health check, see advanced health check.

Application and desktop support

Local Host Cache supports server-hosted applications and desktops, and static (assigned) desktops.

Local Host Cache supports desktop (single-session) VDAs in pooled delivery groups, as follows.

  • By default, power-managed desktop VDAs in pooled delivery groups (created by MCS or Citrix Provisioning) that have the ShutdownDesktopsAfterUse property enabled are not available for new connections during a Local Host Cache event. You can change this default to allow those desktops to be used during a Local Host Cache event.

    However, you cannot necessarily rely on the power management during the outage. (Power management resumes after normal operations resume.) Also, those desktops might contain data from the previous user, because they have not been restarted.

  • To override the default behavior, enable the feature both site-wide and for each affected delivery group, using PowerShell commands.

    For site-wide, run the following command:

    Set-BrokerSite -ReuseMachinesWithoutShutdownInOutageAllowed $true

    By default, this feature is not enabled for any delivery group. There are two options to enable it at the delivery group level:

    • Enable for selected delivery groups: For each affected delivery group, run the following command.

      Set-BrokerDesktopGroup -Name "name" -ReuseMachinesWithoutShutdownInOutage $true

    • Enable for all delivery groups: To enable the delivery group level setting by default, run the following command. This setting applies to all newly created delivery groups (that is, all delivery groups you create after enabling the setting).

      Set-BrokerSite -DefaultReuseMachinesWithoutShutdownInOutage $true

      To enable this for existing delivery groups, run the command noted previously (Set-BrokerDesktopGroup -Name "name" -ReuseMachinesWithoutShutdownInOutage $true).

    Enabling this feature in the site and the delivery groups does not affect how the configured ShutdownDesktopsAfterUse property works during normal operations.
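
After running the commands above, you can confirm that the settings took effect. This sketch assumes the corresponding Get-Broker* cmdlets expose properties matching the Set-Broker* parameters shown earlier:

```powershell
# Check the site-wide overrides.
Get-BrokerSite | Select-Object ReuseMachinesWithoutShutdownInOutageAllowed,
    DefaultReuseMachinesWithoutShutdownInOutage

# Check a specific delivery group ("name" is a placeholder for your group's name).
Get-BrokerDesktopGroup -Name "name" |
    Select-Object Name, ReuseMachinesWithoutShutdownInOutage
```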

Important:

Without enabling ReuseMachinesWithoutShutdownInOutageAllowed at the Site level and ReuseMachinesWithoutShutdownInOutage at the delivery group level, all session launch attempts to power-managed desktop VDAs in pooled delivery groups will fail during a Local Host Cache event.

Verify that Local Host Cache is working

To verify that Local Host Cache is set up and working correctly:

  • If using StoreFront, verify that the local StoreFront deployment points to all of the Cloud Connectors in that resource location.
  • Ensure that synchronization imports complete successfully. Check the event logs.
  • Ensure that the Local Host Cache database was created on each Cloud Connector. This confirms that the High Availability Service can take over, if needed.
    • On the Cloud Connector server, browse to c:\Windows\ServiceProfiles\NetworkService.
    • Verify that HaDatabaseName.mdf and HaDatabaseName_log.ldf are created.
  • Force an outage on all Cloud Connectors in the resource location. After you’ve verified that Local Host Cache works, remember to place all the Cloud Connectors back into normal mode. Returning to normal mode can take approximately 15 minutes.
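
The database file check above can be scripted so that it is easy to run on every Cloud Connector. A minimal sketch, using the path and file names from the steps above:

```powershell
# Verify that the LHC database files exist on this Cloud Connector.
$dbFolder = 'C:\Windows\ServiceProfiles\NetworkService'

foreach ($file in 'HaDatabaseName.mdf', 'HaDatabaseName_log.ldf') {
    $path = Join-Path $dbFolder $file
    if (Test-Path $path) {
        Write-Output "Found: $path"
    } else {
        Write-Output "MISSING: $path (check the CSS event log for import failures)"
    }
}
```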

Event logs

Event logs indicate when synchronizations and outages occur. In event viewer logs, outage mode is referred to as HA mode.

Config Synchronizer Service

During normal operations, the following events can occur when the CSS imports the configuration data into the Local Host Cache database using the Local Host Cache broker.

  • 503: The Citrix Config Sync Service received an updated configuration. This event occurs each time an updated configuration is received from Citrix Cloud. It indicates the start of the synchronization process.
  • 504: The Citrix Config Sync Service imported an updated configuration. The configuration import completed successfully.
  • 505: The Citrix Config Sync Service failed an import. The configuration import did not complete successfully. If a previous successful configuration is available, it is used if an outage occurs. However, it will be out-of-date from the current configuration. If there is no previous configuration available, the service cannot participate in session brokering during an outage. In this case, see the Troubleshoot section, and contact Citrix Support.
  • 507: The Citrix Config Sync Service abandoned an import because the system is in outage mode and the Local Host Cache broker is being used for brokering. The service received a new configuration, but the import was abandoned because an outage occurred. This is expected behavior.
  • 510: No Configuration Service configuration data received from primary Configuration Service.
  • 517: There was a problem communicating with the primary Broker.
  • 518: Config Sync script aborted because the secondary Broker (High Availability Service) is not running.

High Availability Service

This service is also known as the Local Host Cache broker.

  • 3502: An outage occurred and the Local Host Cache broker is performing broker operations.
  • 3503: An outage was resolved and normal operations have resumed.
  • 3504: Indicates which Local Host Cache broker is elected, plus other Local Host Cache brokers involved in the election.
  • 3507: Provides a status update of Local Host Cache every 2 minutes, indicating that Local Host Cache mode is active on the elected broker. Contains a summary of the outage, including outage duration, VDA registration, and session information.
  • 3508: Announces Local Host Cache is no longer active on the elected broker and normal operations have been restored. Contains a summary of the outage including outage duration, number of machines that registered during the Local Host Cache event, and number of successful launches during the LHC event.
  • 3509: Notifies that Local Host Cache is active on the non-elected broker(s). Contains an outage duration every 2 minutes and indicates the elected broker.
  • 3510: Announces Local Host Cache is no longer active on the non-elected broker(s). Contains the outage duration and indicates the elected broker.
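
To watch for these events, you can query the Windows event logs on the Cloud Connector. This is a sketch: the log name and the absence of a provider filter are assumptions, so adjust the filter to match where these events appear in your environment.

```powershell
# List recent Local Host Cache events by ID (IDs taken from the lists above).
Get-WinEvent -FilterHashtable @{
    LogName = 'Application'
    Id      = 503, 504, 505, 3502, 3503, 3504
} -MaxEvents 50 -ErrorAction SilentlyContinue |
    Select-Object TimeCreated, Id, ProviderName, Message
```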

Force an outage

You might want to deliberately force an outage.

  • If your network is going up and down repeatedly, forcing an outage until the network issues are resolved prevents continuous transitions between normal and outage modes (and the resulting frequent VDA registration storms).
  • To test a disaster recovery plan.
  • To help ensure that Local Host Cache is working correctly.

Although a Cloud Connector can be updated during a forced outage, unforeseen issues can occur. We recommend you set a schedule for Cloud Connector updates that avoids forced outage mode intervals.

To force an outage, edit the registry of each Cloud Connector server. In HKLM\Software\Citrix\DesktopServer\LHC, create and set OutageModeForced as REG_DWORD to 1. This setting instructs the Local Host Cache broker to enter outage mode, regardless of the state of the connection to Citrix Cloud. Setting the value to 0 takes the Local Host Cache broker out of outage mode.
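
The registry edit described above can be applied with PowerShell, mirroring the New-ItemProperty syntax used elsewhere in this article. Run it on each Cloud Connector in the resource location:

```powershell
# Force the Local Host Cache broker into outage mode.
New-ItemProperty -Path HKLM:\Software\Citrix\DesktopServer\LHC `
    -Name OutageModeForced -PropertyType DWORD -Value 1 -Force

# Later, return the broker to normal mode.
Set-ItemProperty -Path HKLM:\Software\Citrix\DesktopServer\LHC `
    -Name OutageModeForced -Value 0
```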

To verify events, monitor the Current_HighAvailabilityService log file in C:\ProgramData\Citrix\workspaceCloud\Logs\Plugins\HighAvailabilityService.

Troubleshoot

Several troubleshooting tools are available when a synchronization import to the Local Host Cache database fails and a 505 event is posted.

CDF tracing: Contains options for the ConfigSyncServer and BrokerLHC modules. Those options, along with other broker modules, can identify the problem.

Report: If a synchronization import fails, you can generate a report. This report stops at the object causing the error. This report feature affects synchronization speed, so Citrix recommends disabling it when not in use.

To enable and produce a CSS trace report, enter the following command:

New-ItemProperty -Path HKLM:\SOFTWARE\Citrix\DesktopServer\LHC -Name EnableCssTraceMode -PropertyType DWORD -Value 1

The HTML report is posted at: C:\Windows\ServiceProfiles\NetworkService\AppData\Local\Temp\CitrixBrokerConfigSyncReport.html

After the report is generated, enter the following command to disable the reporting feature:

Set-ItemProperty -Path HKLM:\SOFTWARE\Citrix\DesktopServer\LHC -Name EnableCssTraceMode -Value 0

Local Host Cache PowerShell commands

You can manage Local Host Cache on your Cloud Connectors using PowerShell commands.

The PowerShell module is at the following location on the Cloud Connectors:

C:\Program Files\Citrix\Broker\Service\ControlScripts

Important:

Run this module only on the Cloud Connectors.

Import PowerShell module

To import the module, run the following on your Cloud Connector:

cd "C:\Program Files\Citrix\Broker\Service\ControlScripts"
Import-Module .\HighAvailabilityServiceControl.psm1

PowerShell commands to manage LHC

The following cmdlets help you to activate and manage the LHC mode on the Cloud Connectors.

  • Enable-LhcForcedOutageMode: Places the Broker in LHC mode. The Local Host Cache database files must have been successfully created by the Config Synchronizer Service for this cmdlet to function properly. This cmdlet forces LHC only on the Cloud Connector where it is run. For LHC to become active, run it on all Cloud Connectors within the resource location.
  • Disable-LhcForcedOutageMode: Takes the Broker out of LHC mode. This cmdlet disables LHC mode only on the Cloud Connector where it is run. Run it on all Cloud Connectors within the resource location.
  • Set-LhcConfigSyncIntervalOverride: Sets the interval at which the Citrix Config Synchronizer Service (CSS) checks for configuration changes within the Citrix DaaS site. The interval can range from 60 seconds (one minute) to 3600 seconds (one hour). This setting applies only to the Cloud Connector on which it is run. For consistency across the Cloud Connectors, run this cmdlet on each Cloud Connector. For example: Set-LhcConfigSyncIntervalOverride -Seconds 1200
  • Clear-LhcConfigSyncIntervalOverride: Resets the interval at which the CSS checks for configuration changes to the default value of 300 seconds (five minutes). This setting applies only to the Cloud Connector on which it is run. For consistency across the Cloud Connectors, run this cmdlet on each Cloud Connector.
  • Enable-LhcHighAvailabilitySDK: Enables access to all Get-Broker* cmdlets on the Cloud Connector where it is run.
  • Disable-LhcHighAvailabilitySDK: Disables access to the Broker PowerShell commands on the Cloud Connector where it is run.

Note:

  • Use port 89 when running the Get-Broker* cmdlets on the Cloud Connector. For example:
    • Get-BrokerMachine -AdminAddress localhost:89
  • When not in LHC mode, the LHC Broker on the Cloud Connector only holds configuration information.
  • During LHC mode, the LHC Broker on the elected Cloud Connector holds the following information:
    • Resource states
    • Session details
    • VDA registrations
    • Configuration information

More information

See Scale and size considerations for Local Host Cache for information about:

  • Testing methodologies and results
  • RAM size considerations
  • CPU core and socket configuration considerations
  • Storage considerations