Product Documentation

Examples of highly available multi-site store configurations

Oct 08, 2015

StoreFront enables you to configure load balancing and failover between the deployments providing resources for stores, map users to deployments, and designate specific disaster recovery deployments for increased resiliency. To illustrate how you can configure StoreFront deployments distributed over multiple sites to provide high availability for your stores, consider the example configuration below.

The figure shows an example highly available multi-site configuration.


The example consists of two main locations, each with separate, load-balanced groups of StoreFront servers, providing desktops and applications for users. A third location provides disaster recovery resources that are only intended to be used in the event that all resources provided by both the other locations are unavailable. Location 1 contains a group of identical deployments (XenDesktop sites, XenApp farms, or VDI-in-a-Box grids) providing exactly the same desktops and applications. Location 2 consists of a similar group of identical deployments delivering largely the same resources provided in Location 1, but with a few differences. Some specific resources that are not available in Location 2 are provided by a separate unique deployment in Location 1.

Example — Load balancing and failover

In this example, you want users at both locations to be able to log on to their local StoreFront servers and access desktops and applications provided locally, where possible. In the event that local resources are not available, either due to a failure or capacity issues, users must be automatically and silently redirected to resources delivered from the other location. If all resources provided by both locations are unavailable, users must be able to continue working with a subset of the most business-critical desktops and applications.

To achieve this user experience, you configure the store in Location 1 as shown below.

<resourcesWingConfigurations> 
  <resourcesWingConfiguration name="Default" wingName="Default"> 
    <userFarmMappings> 
      <clear /> 
      <userFarmMapping name="user_mapping"> 
        <groups> 
          <group name="Everyone" sid="everyone" /> 
        </groups> 
        <equivalentFarmSets> 
          <equivalentFarmSet name="Location1" loadBalanceMode="LoadBalanced"  
           aggregationGroup="AggregationGroup1"> 
            <primaryFarmRefs> 
              <farm name="Location1Deployment1" /> 
              <farm name="Location1Deployment2" /> 
              <farm name="Location1Deployment3" /> 
            </primaryFarmRefs> 
            <backupFarmRefs> 
              <farm name="DisasterRecoveryDeployment" /> 
            </backupFarmRefs> 
          </equivalentFarmSet> 
          <equivalentFarmSet name="Location2" loadBalanceMode="Failover"  
           aggregationGroup="AggregationGroup1"> 
            <primaryFarmRefs> 
              <farm name="Location2Deployment1" /> 
              <farm name="Location2Deployment2" /> 
              <farm name="Location2Deployment3" /> 
            </primaryFarmRefs> 
            <backupFarmRefs> 
              <farm name="DisasterRecoveryDeployment" /> 
            </backupFarmRefs> 
          </equivalentFarmSet> 
          <equivalentFarmSet name="Location1Unique"  
           loadBalanceMode="LoadBalanced" aggregationGroup=""> 
            <primaryFarmRefs> 
              <farm name="Location1UniqueDeployment" /> 
            </primaryFarmRefs> 
            <backupFarmRefs> 
            </backupFarmRefs> 
          </equivalentFarmSet> 
        </equivalentFarmSets> 
      </userFarmMapping> 
    </userFarmMappings> 
  </resourcesWingConfiguration> 
</resourcesWingConfigurations> 

There is a single mapping available to all users, listing the Location 1 deployments first and the Location 2 deployments second. In both cases, the disaster recovery deployment is configured as the backup and all the deployments are assigned to the same aggregation group. The configuration of the store in Location 2 is almost identical, differing only in that the deployments are listed in the reverse order, with the Location 2 deployments first. In both stores, the deployment providing the Location 1 unique resources is listed last, with no backup deployment or aggregation group defined.

When users at Location 1 log on to their local store, StoreFront contacts a Location 1 deployment to enumerate the desktops and applications available. Because the loadBalanceMode attribute is set to LoadBalanced, the exact deployment contacted is selected randomly to evenly distribute requests across the available deployments. If the selected Location 1 deployment is unavailable, StoreFront randomly selects another Location 1 deployment to contact.

In the case of the Location 2 deployments, the loadBalanceMode attribute is set to Failover. This means that StoreFront always contacts the deployments in the specified order. As a result, resources are enumerated from Location 2 Deployment 1 for every user request until Deployment 1 stops responding. Subsequent requests are then routed to Deployment 2 until Deployment 1 becomes available again. This minimizes the number of deployments in use at Location 2 at any given time.
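The two loadBalanceMode behaviors described above can be sketched in Python. The pick_deployment helper is a simplified model of the behavior, not part of StoreFront, and the deployment names are taken from the example configuration:

```python
import random

def pick_deployment(farms, load_balance_mode, available):
    """Choose the deployment StoreFront would contact from one
    <equivalentFarmSet>: a random available farm when LoadBalanced,
    the first available farm in listed order when Failover.
    Illustrative model only, not the product API."""
    candidates = [farm for farm in farms if farm in available]
    if not candidates:
        return None  # no primary responds; fall through to backupFarmRefs
    if load_balance_mode == "LoadBalanced":
        return random.choice(candidates)  # spread requests evenly
    return candidates[0]  # Failover: strict listed order

loc2 = ["Location2Deployment1", "Location2Deployment2", "Location2Deployment3"]
# With Deployment 1 down, every Failover request goes to Deployment 2.
print(pick_deployment(loc2, "Failover",
                      {"Location2Deployment2", "Location2Deployment3"}))
```

When Deployment 1 becomes available again, the same ordered scan selects it once more, which is what keeps the number of deployments in use at Location 2 to a minimum.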

When a response is received from a Location 1 deployment, StoreFront does not contact any further Location 1 deployments. Including all the Location 1 deployments in a single <equivalentFarmSet> element specifies that these deployments provide exactly the same resources. Similar behavior also occurs during enumeration of the Location 2 resources. Finally, the Location 1 unique deployment is contacted, although since there is no alternative in this case, the unique resources are not enumerated if the deployment is unavailable.

Where a desktop or application with the same name and path on the server is available from both Location 1 and Location 2, StoreFront aggregates these resources and presents users with a single icon. This behavior is a result of setting the aggregationGroup attribute to AggregationGroup1 for both the Location 1 and Location 2 deployments. Users clicking on an aggregated icon are typically connected to the resource in their location, where available. However, if a user already has an active session on another deployment that supports session reuse, the user is preferentially connected to the resource on that deployment to minimize the number of sessions used.

Because an aggregation group is not specified for the Location 1 unique resources, users see separate icons for each of the unique resources. In this example, none of the unique resources are available on the other deployments. However, if a desktop or application with the same name and path on the server were available from another deployment, users would see two icons with the same name.
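The aggregation rules in the two paragraphs above can be modeled with a short Python sketch. The build_icons helper and the resource names are illustrative only; the key point is that a shared, non-empty aggregationGroup collapses same-named resources into one icon, while an empty aggregationGroup keeps them separate:

```python
def build_icons(resources):
    """Group enumerated resources into the icons users see. Resources
    with the same name and server path whose deployments share a
    non-empty aggregationGroup collapse into a single icon; an empty
    aggregationGroup keeps each deployment's resource as its own icon.
    Illustrative model only, not the product API."""
    icons = {}
    for name, path, deployment, group in resources:
        # Aggregated resources share a key; un-aggregated resources get
        # a key that is unique per deployment.
        key = (name, path, group) if group else (name, path, deployment)
        icons.setdefault(key, []).append(deployment)
    return icons

resources = [
    ("Notepad", "/Apps/Notepad", "Location1Deployment1", "AggregationGroup1"),
    ("Notepad", "/Apps/Notepad", "Location2Deployment1", "AggregationGroup1"),
    ("Payroll", "/Apps/Payroll", "Location1UniqueDeployment", ""),
]
icons = build_icons(resources)
print(len(icons))  # one aggregated Notepad icon plus the separate Payroll icon
```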

Only when resources cannot be enumerated from any of the Location 1 or Location 2 deployments does StoreFront contact the disaster recovery deployment. Because the same disaster recovery deployment is configured for both Location 1 and Location 2, all of these deployments must be unavailable before StoreFront will attempt to enumerate the disaster recovery resources. In this example, a disaster recovery alternative is not configured for the Location 1 unique deployment, so the availability of the unique deployment does not affect this determination.
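The shared-backup behavior can be sketched as follows. This is a simplified Python model of the rule described above, not the product implementation: a disaster recovery deployment listed as the backup of several farm sets is contacted only when the primary farms of every one of those sets are unavailable.

```python
from collections import defaultdict

def disaster_recovery_farms(farm_sets, available):
    """Return the backup farms StoreFront would contact.
    farm_sets: list of (primary_farms, backup_farms) tuples, one per
    <equivalentFarmSet>. A shared backup is contacted only when the
    primaries of every set that lists it are all down."""
    sets_using = defaultdict(list)
    for primaries, backups in farm_sets:
        for backup in backups:
            sets_using[backup].append(primaries)
    contacted = set()
    for backup, primary_lists in sets_using.items():
        all_down = all(
            not any(farm in available for farm in primaries)
            for primaries in primary_lists
        )
        if all_down and backup in available:
            contacted.add(backup)
    return contacted

sets = [
    (["Location1Deployment1", "Location1Deployment2"], ["DisasterRecoveryDeployment"]),
    (["Location2Deployment1", "Location2Deployment2"], ["DisasterRecoveryDeployment"]),
]
# Location 1 down but Location 2 still up: no disaster recovery enumeration.
print(disaster_recovery_farms(sets, {"Location2Deployment1", "DisasterRecoveryDeployment"}))
# Both locations down: the disaster recovery deployment is contacted.
print(disaster_recovery_farms(sets, {"DisasterRecoveryDeployment"}))
```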

Example — User mapping

In this example, you want to provide different mixtures of resources for different users on the basis of their membership of Microsoft Active Directory user groups. Standard users in Location 1 and Location 2 only need access to the desktops and applications provided locally. These users do not need to access resources in the other locations. You also have a group of power users for whom you want to provide access to all the available resources, including the Location 1 unique resources, with high availability and disaster recovery. For this example, it is assumed that Location 1 and Location 2 share a common Active Directory domain.

To achieve this user experience, you configure the stores in both locations as shown below.

<resourcesWingConfigurations> 
  <resourcesWingConfiguration name="Default" wingName="Default"> 
    <userFarmMappings> 
      <clear /> 
      <userFarmMapping name="UserMapping1"> 
        <groups> 
          <group name="Location1Users"  
           sid="S-1-5-21-1004336348-1177238915-682003330-1001" /> 
        </groups> 
        <equivalentFarmSets> 
          <equivalentFarmSet name="Location1" loadBalanceMode="LoadBalanced"  
           aggregationGroup="AggregationGroup1"> 
            <primaryFarmRefs> 
              <farm name="Location1Deployment1" /> 
              <farm name="Location1Deployment2" /> 
              <farm name="Location1Deployment3" /> 
            </primaryFarmRefs> 
            <backupFarmRefs> 
              <farm name="DisasterRecoveryDeployment" /> 
            </backupFarmRefs> 
          </equivalentFarmSet> 
        </equivalentFarmSets> 
      </userFarmMapping> 
      <userFarmMapping name="UserMapping2"> 
        <groups> 
          <group name="Location2Users"  
           sid="S-1-5-21-1004336348-1177238915-682003330-1002" /> 
        </groups> 
        <equivalentFarmSets> 
          <equivalentFarmSet name="Location2" loadBalanceMode="Failover"  
           aggregationGroup="AggregationGroup1"> 
            <primaryFarmRefs> 
              <farm name="Location2Deployment1" /> 
              <farm name="Location2Deployment2" /> 
              <farm name="Location2Deployment3" /> 
            </primaryFarmRefs> 
            <backupFarmRefs> 
              <farm name="DisasterRecoveryDeployment" /> 
            </backupFarmRefs> 
          </equivalentFarmSet> 
        </equivalentFarmSets> 
      </userFarmMapping> 
      <userFarmMapping name="UserMapping3"> 
        <groups> 
          <group name="Location1Users"  
           sid="S-1-5-21-1004336348-1177238915-682003330-1001" /> 
          <group name="Location2Users"  
           sid="S-1-5-21-1004336348-1177238915-682003330-1002" /> 
        </groups> 
        <equivalentFarmSets> 
          <equivalentFarmSet name="Location1Unique"  
           loadBalanceMode="LoadBalanced" aggregationGroup=""> 
            <primaryFarmRefs> 
              <farm name="Location1UniqueDeployment" /> 
            </primaryFarmRefs> 
            <backupFarmRefs> 
            </backupFarmRefs> 
          </equivalentFarmSet> 
        </equivalentFarmSets> 
      </userFarmMapping> 
    </userFarmMappings> 
  </resourcesWingConfiguration> 
</resourcesWingConfigurations> 

Instead of creating a mapping that applies to all users, as in the load balancing and failover example, you create mappings for specific user groups. The main Location 1 deployments are mapped to the domain user group for Location 1 users. Similarly, the Location 2 deployments are mapped to the Location 2 user group. The mapping for the Location 1 unique resources specifies both user groups, which means that users must be members of both groups to access the unique resources.

Users who are members of the Location 1 user group see only resources from Location 1 when they log on to a store, even if that store is in Location 2. Likewise, Location 2 user group members are only presented with resources from Location 2. Neither group has access to the Location 1 unique resources. Domain users who are not members of either group can log on to the store, but do not see any desktops or applications.

To give your power users access to all the resources, including the unique resources, you add them to both user groups. When users who are members of both the Location 1 and Location 2 user groups log on to the store, they see an aggregate of the resources available from both locations, plus the Location 1 unique resources. As in the load balancing and failover example, the Location 1 and Location 2 deployments are assigned to the same aggregation group. The resource aggregation process functions in exactly the same way as described for the load balancing and failover example.
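The group-matching rule can be sketched in Python. The farm_sets_for_user helper is an illustrative model of the behavior described above, not the product API: a mapping that lists several groups applies only to users who are members of every listed group.

```python
def farm_sets_for_user(user_groups, mappings):
    """Return the farm sets whose mappings apply to a user.
    mappings: list of (required_groups, farm_set_names) tuples, one per
    <userFarmMapping>. A mapping applies only when the user belongs to
    all of its required groups. Illustrative model only."""
    applicable = []
    for required_groups, farm_sets in mappings:
        if required_groups.issubset(user_groups):
            applicable.extend(farm_sets)
    return applicable

mappings = [
    ({"Location1Users"}, ["Location1"]),
    ({"Location2Users"}, ["Location2"]),
    ({"Location1Users", "Location2Users"}, ["Location1Unique"]),
]
# Standard Location 1 users see only the Location 1 farm set.
print(farm_sets_for_user({"Location1Users"}, mappings))
# Power users in both groups see everything, including the unique set.
print(farm_sets_for_user({"Location1Users", "Location2Users"}, mappings))
```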

Disaster recovery also operates as described in the load balancing and failover example. Users only see the disaster recovery resources when all the Location 1 and Location 2 deployments are unavailable. Unfortunately, this means that there are some scenarios when standard users are not able to access any desktops or applications. For example, if all the deployments in Location 1 are unavailable, but the Location 2 deployments are still accessible, StoreFront does not enumerate the disaster recovery resources. So, users who are not members of the Location 2 user group do not see any resources in the store.

To resolve this issue, you would need to configure separate disaster recovery deployments for the Location 1 and Location 2 mappings. You would then add the disaster recovery deployments to the same aggregation group to aggregate the disaster recovery resources for your power users.
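The effect of giving each mapping its own backup can be sketched with a per-mapping model. The deployment names Location1DRDeployment and Location2DRDeployment are hypothetical, introduced here purely for illustration:

```python
def enumerate_for_mapping(primaries, backups, available):
    """With a separate disaster recovery deployment per mapping, each
    mapping falls back to its own backup as soon as its own primary
    farms are all down, independent of the other location.
    Illustrative model only; the DR deployment names are hypothetical."""
    live = [farm for farm in primaries if farm in available]
    return live if live else [farm for farm in backups if farm in available]

# Location 1 primaries are down while Location 2 is still up; Location 1
# users now fall back to their own disaster recovery deployment, and
# Location 2 users continue to use their primary deployment.
available = {"Location2Deployment1", "Location1DRDeployment", "Location2DRDeployment"}
print(enumerate_for_mapping(["Location1Deployment1"], ["Location1DRDeployment"], available))
print(enumerate_for_mapping(["Location2Deployment1"], ["Location2DRDeployment"], available))
```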

Example — Subscription synchronization

In the load balancing and failover and user mapping examples, users moving between Location 1 and Location 2 would benefit from synchronization of their application subscriptions between the two deployments. For example, a user based in Location 1 could log on to the StoreFront deployment in Location 1, access the store, and subscribe to some applications. If the same user then traveled to Location 2 and accessed the similar store provided by the Location 2 StoreFront deployment, the user would need to resubscribe to all the applications to access them from Location 2. By default, StoreFront deployments in each location maintain details of users' application subscriptions separately.

To ensure that users need to subscribe only to applications in one location, you can configure subscription synchronization between the stores of the two StoreFront deployments.

Important: The StoreFront and PowerShell consoles cannot be open at the same time. Always close the StoreFront admin console before using the PowerShell console to administer your StoreFront configuration. Likewise, close all instances of PowerShell before opening the StoreFront console.

Delivery Controller names are case sensitive. Failing to duplicate the Delivery Controller names exactly may lead to inaccurate resource IDs across subscription synchronization stores.

 

  1. Ensure that both stores have the same name in both deployment locations. For example, a store named GlobalStore in Location 1 – London must also be named GlobalStore in Location 2 – New York.

  2. Ensure the Delivery Controllers configured within the "GlobalStore" store have exactly the same names in both locations; Delivery Controller names are case sensitive.

  3. Start a new PowerShell session on a StoreFront server in Location 1 – London and run these commands:
    # Import the required StoreFront modules 
    Import-Module "C:\Program Files\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1" 
     
    # Add the New York cluster as one to synchronize from. 
    # The clusterName is used to identify the cluster. 
    # The clusterAddress is either the address of the single 
    # StoreFront server when not in a group or the loadbalanced address when it is. 
    # The storeFriendlyNames is the display name of the store. 
     
    Add-DSSubscriptionsRemoteSyncClusterAndStores -clusterName "NewYork"  
    -clusterAddress "newyork.citrix.com" -storeFriendlyNames @("GlobalStore") 
     
    # Add the servers from the New York deployment on the 
    # XenDesktop domain to the Windows permissions group on the 
    # London1 server. 
     
    Add-DSLocalGroupMember -GroupName "CitrixSubscriptionsSyncUsers"  
    -AccountName "my.xendesktop.com/newyork1$" 
    Add-DSLocalGroupMember -GroupName "CitrixSubscriptionsSyncUsers"  
    -AccountName "my.xendesktop.com/newyork2$" 
     
    # Add a schedule to pull subscription data from New York to London starting at 18:00  
    # repeating every 24 hours. 
     
    Add-DSSubscriptionsSyncReoccuringSchedule -scheduleName "SyncFromNewYork" -startTime "18:00:00"  
    -repeatMinutes 1440 
     
    # Restart the synchronization service and propagate settings to the other servers in the  
    # London deployment. 
     
    Restart-Service "CitrixSubscriptionsStore" 
    Start-DSConfigurationReplicationClusterUpdate 
    Get-DSSubscriptionsRemoteClusterSyncSummary 
    Get-DSSubscriptionsSyncScheduleSummary  
     
    
  4. Close the PowerShell session.
  5. Start a PowerShell session on a server in Location 2 – New York and run the following commands:
    # Import the required StoreFront modules 
    Import-Module "C:\Program Files\Citrix\Receiver StoreFront\Scripts\ImportModules.ps1" 
     
    # Add the London cluster as one to synchronize from. 
    # The clusterName is used to identify the cluster. 
    # The clusterAddress is either the address of the single 
    # StoreFront server when not in a group or the loadbalanced address when it is. 
    # The storeFriendlyNames is the display name of the store. 
    Add-DSSubscriptionsRemoteSyncClusterAndStores -clusterName "London"  
    -clusterAddress "london.citrix.com" -storeFriendlyNames @("GlobalStore") 
     
    # Add the servers from the London deployment on the 
    # XenDesktop domain to the Windows permissions group on the NewYork1 server. 
     
    Add-DSLocalGroupMember -GroupName "CitrixSubscriptionsSyncUsers"  
    -AccountName "my.xendesktop.com/london1$" 
    Add-DSLocalGroupMember -GroupName "CitrixSubscriptionsSyncUsers"  
    -AccountName "my.xendesktop.com/london2$" 
     
    # Add a schedule to pull subscription data from London to New York starting at 20:00  
    # repeating every 24 hours. 
     
    Add-DSSubscriptionsSyncReoccuringSchedule -scheduleName "SyncFromLondon" -startTime "20:00:00"  
    -repeatMinutes 1440 
     
    # Restart the synchronization service and propagate settings to the other servers in  
    # the New York deployment. 
     
    Restart-Service "CitrixSubscriptionsStore" 
    Start-DSConfigurationReplicationClusterUpdate 
    Get-DSSubscriptionsRemoteClusterSyncSummary 
    Get-DSSubscriptionsSyncScheduleSummary  
    

Example — Optimal NetScaler Gateway routing

In this example, you want to configure separate NetScaler Gateway appliances in Location 1 and Location 2. Because Location 1 resources are available to users in Location 2, you want to ensure that user connections to Location 1 resources are always routed through the NetScaler Gateway appliance in Location 1, regardless of the way in which users access the store. A similar configuration is required for Location 2.

In the case of the Location 1 unique resources, you have made these desktops and applications accessible only to local users on the internal network. However, you still require users to authenticate to NetScaler Gateway to access a store. So, you want to ensure that user connections to Location 1 unique resources are not routed through NetScaler Gateway, despite the fact that users connect to the stores through NetScaler Gateway.

To achieve this user experience, you configure the stores in both locations as shown below.

<optimalGatewayForFarmsCollection> 
  <optimalGatewayForFarms enabledOnDirectAccess="true"> 
    <farms> 
      <farm name="Location1Deployment1" /> 
      <farm name="Location1Deployment2" /> 
      <farm name="Location1Deployment3" /> 
    </farms> 
    <optimalGateway key="_" name="Location1Appliance" stasUseLoadBalancing="false" 
     stasBypassDuration="02:00:00" enableSessionReliability="true" 
     useTwoTickets="false"> 
      <hostnames> 
        <add hostname="location1appliance.example.com" /> 
      </hostnames> 
      <staUrls> 
        <add staUrl="https://location1appliance.example.com/scripts/ctxsta.dll" /> 
      </staUrls> 
    </optimalGateway> 
  </optimalGatewayForFarms> 
  <optimalGatewayForFarms enabledOnDirectAccess="true"> 
    <farms> 
      <farm name="Location2Deployment1" /> 
      <farm name="Location2Deployment2" /> 
      <farm name="Location2Deployment3" /> 
    </farms> 
    <optimalGateway key="_" name="Location2Appliance" stasUseLoadBalancing="false" 
     stasBypassDuration="02:00:00" enableSessionReliability="true" 
     useTwoTickets="false"> 
      <hostnames> 
        <add hostname="location2appliance.example.com" /> 
      </hostnames> 
      <staUrls> 
        <add staUrl="https://location2appliance.example.com/scripts/ctxsta.dll" /> 
      </staUrls> 
    </optimalGateway> 
  </optimalGatewayForFarms> 
  <optimalGatewayForFarms enabledOnDirectAccess="false"> 
    <farms> 
      <farm name="Location1UniqueDeployment" /> 
    </farms> 
  </optimalGatewayForFarms> 
</optimalGatewayForFarmsCollection> 

You map the main Location 1 deployments to the NetScaler Gateway appliance in Location 1. This configuration ensures that users always connect to Location 1 resources through the NetScaler Gateway appliance in that location, even for users who logged on to the store through the appliance in Location 2. A similar mapping is configured for Location 2. For both mappings, you set the value of the enabledOnDirectAccess attribute to true to route all connections to resources through the optimal appliance specified for the deployment, even for local users on the internal network who log on to StoreFront directly. As a result, the responsiveness of remote desktops and applications is improved for local users because data does not traverse the corporate WAN.

For the Location 1 unique resources, you configure a mapping for the deployment but do not specify a NetScaler Gateway appliance. This configuration ensures that connections to Location 1 unique resources are not routed through NetScaler Gateway, even for users who logged on to the store through NetScaler Gateway. As a result, only local users on the internal network can access these desktops and applications.
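The routing decisions in the two paragraphs above can be summarized with a short Python sketch. The route_for_farm helper is an illustrative model of the optimalGatewayForFarms behavior, not the product API:

```python
def route_for_farm(farm, gateway_mappings, logon_gateway=None):
    """Decide how a launch against a farm is routed. gateway_mappings
    maps a farm name to (optimal gateway hostname or None,
    enabledOnDirectAccess). Returns the gateway hostname to route the
    connection through, or None for a direct connection.
    Illustrative model only."""
    if farm not in gateway_mappings:
        # Unmapped farm: connections keep using the gateway the user
        # logged on through, if any.
        return logon_gateway
    gateway, enabled_on_direct_access = gateway_mappings[farm]
    if gateway is None:
        return None  # mapping with no gateway: always connect directly
    if logon_gateway is not None or enabled_on_direct_access:
        return gateway
    return None

gateways = {
    "Location1Deployment1": ("location1appliance.example.com", True),
    "Location2Deployment1": ("location2appliance.example.com", True),
    "Location1UniqueDeployment": (None, False),
}
# A user who logged on through the Location 2 appliance still reaches
# Location 1 resources through the Location 1 appliance...
print(route_for_farm("Location1Deployment1", gateways, "location2appliance.example.com"))
# ...while connections to the unique deployment are never gatewayed.
print(route_for_farm("Location1UniqueDeployment", gateways, "location2appliance.example.com"))
```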

You must also configure a specific internal virtual server IP address for the appliance and an inaccessible internal beacon point. Making the internal beacon point inaccessible to local users prompts Citrix Receiver to access stores through NetScaler Gateway from devices connected to the internal network. This enables you, for example, to apply NetScaler Gateway endpoint analysis to local users on the internal network without the overhead of routing all user connections to resources through the appliance.