StoreFront employs Microsoft .NET technology running on Microsoft Internet Information Services (IIS) to provide enterprise app stores that aggregate resources and make them available to users. StoreFront integrates with your XenDesktop, XenApp, and App Controller deployments, providing users with a single, self-service access point for their desktops and applications.
StoreFront can be configured either on a single server or as a multiple server deployment. Multiple server deployments provide not only additional capacity, but also greater availability. The modular architecture of StoreFront ensures that configuration information and details of users' application subscriptions are stored on and replicated between all the servers in a server group. This means that if a StoreFront server becomes unavailable for any reason, users can continue to access their stores using the remaining servers. When a failed server reconnects to the server group, its subscription data is updated automatically, but any configuration changes the server missed while offline must be propagated manually. In the event of a hardware failure that requires replacement of the server, you can install StoreFront on a new server and add it to the existing server group. The new server is automatically configured and updated with users' application subscriptions when it joins the server group.
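Configuration changes are propagated with the StoreFront PowerShell SDK. As a sketch, assuming the SDK is loaded, run the following from the server on which the configuration changes were made so that the other servers in the group receive the updated configuration:

Publish-STFServerGroupConfiguration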
For multiple server deployments, external load balancing through, for example, NetScaler or Windows Network Load Balancing is required. Configure the load balancing environment for failover between servers to provide a fault-tolerant deployment. For more information about load balancing with NetScaler, see Load Balancing. For more information about Windows Network Load Balancing, see http://technet.microsoft.com/en-us/library/hh831698.aspx.
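As an illustration, Windows Network Load Balancing can be set up with the NetworkLoadBalancingClusters PowerShell module. This is a sketch only; the host names, interface name, cluster name, and cluster IP address below are placeholders for your environment:

# Create an NLB cluster on the first StoreFront server
New-NlbCluster -HostName STF01 -InterfaceName "Ethernet" -ClusterName StoreFrontNLB -ClusterPrimaryIP 192.0.2.10
# Add a second StoreFront server to the cluster
Add-NlbClusterNode -NewNodeName STF02 -NewNodeInterface "Ethernet"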
Active load balancing of requests sent from StoreFront to XenDesktop sites and XenApp farms is recommended for deployments with thousands of users or where high loads occur, such as when a large number of users log on over a short period of time. Use a load balancer with built-in XML monitors and session persistency, such as NetScaler.
If you deploy an SSL-terminating load balancer, or if you need to troubleshoot, you can use the PowerShell cmdlet Set-STFWebReceiverCommunication to control how Citrix Receiver for Web communicates with StoreFront Services.
Set-STFWebReceiverCommunication [-WebReceiverService] <WebReceiverService> [[-Loopback] <On | Off | OnUsingHttp>] [[-LoopbackPortUsingHttp] <Int32>]
The valid values for the -Loopback parameter are:
- On - This is the default value for new Citrix Receiver for Web sites. Citrix Receiver for Web uses the scheme (HTTPS or HTTP) and port number from the base URL but replaces the host with the loopback IP address to communicate with StoreFront Services. This works for single server deployments and deployments with a non-SSL-terminating load balancer.
- OnUsingHttp - Citrix Receiver for Web uses HTTP and the loopback IP address to communicate with StoreFront Services. If you are using an SSL-terminating load balancer, select this value. You must also specify the HTTP port if it is not the default port 80.
- Off - This turns off loopback and Citrix Receiver for Web uses the StoreFront base URL to communicate with StoreFront Services. If you perform an in-place upgrade, this is the default value to avoid disruption to your existing deployment.
For example, if you are using an SSL-terminating load balancer, your IIS is configured to use port 81 for HTTP and the path of your Citrix Receiver for Web site is /Citrix/StoreWeb, you can run the following command to configure the Citrix Receiver for Web site:
$wr = Get-STFWebReceiverService -VirtualPath /Citrix/StoreWeb
Set-STFWebReceiverCommunication -WebReceiverService $wr -Loopback OnUsingHttp -LoopbackPortUsingHttp 81
Note that you must turn off loopback to use a web proxy tool such as Fiddler to capture the network traffic between Citrix Receiver for Web and StoreFront Services.
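For example, to turn off loopback for a Citrix Receiver for Web site at the path /Citrix/StoreWeb so that a web proxy tool can capture the traffic:

$wr = Get-STFWebReceiverService -VirtualPath /Citrix/StoreWeb
Set-STFWebReceiverCommunication -WebReceiverService $wr -Loopback Off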
Active Directory considerations
For single server deployments, you can install StoreFront on a non-domain-joined server, although certain functionality will be unavailable. Otherwise, StoreFront servers must reside either within the Active Directory domain containing your users' accounts or within a domain that has a trust relationship with the user accounts domain, unless you enable delegation of authentication to the XenApp/XenDesktop sites/farms. All the StoreFront servers in a group must reside within the same domain.
In a production environment, Citrix recommends using HTTPS to secure communications between StoreFront and users' devices. To use HTTPS, StoreFront requires that the IIS instance hosting the authentication service and associated stores is configured for HTTPS. In the absence of the appropriate IIS configuration, StoreFront uses HTTP for communications. You can change from HTTP to HTTPS at any time, provided the appropriate IIS configuration is in place.
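As an illustrative sketch, assuming a server certificate is already installed in the computer certificate store, an HTTPS binding can be added to the default IIS website using the WebAdministration PowerShell module. The site name and port here are assumptions for a default installation, and attaching the certificate to the binding is a separate step:

Import-Module WebAdministration
# Add an HTTPS binding on port 443 to the site hosting StoreFront
New-WebBinding -Name "Default Web Site" -Protocol https -Port 443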
If you plan to enable access to StoreFront from outside the corporate network, NetScaler Gateway is required to provide secure connections for remote users. Deploy NetScaler Gateway outside the corporate network, with firewalls separating NetScaler Gateway from both the public and internal networks. Ensure that NetScaler Gateway is able to access the Active Directory forest containing the StoreFront servers.
The number of Citrix Receiver users supported by a StoreFront server group depends on the hardware you use and on the level of user activity. Based on simulated activity where users log on, enumerate 100 published applications, and start one resource, expect a single StoreFront server with the minimum recommended specification of two virtual CPUs, running on an underlying dual Intel Xeon L5520 2.27 GHz processor server, to enable up to 30,000 user connections per hour.
Expect a server group of similarly configured servers to scale as follows:
- Two servers: up to 60,000 user connections per hour
- Three servers: up to 90,000 connections per hour
- Four servers: up to 120,000 connections per hour
- Five servers: up to 150,000 connections per hour
- Six servers: up to 175,000 connections per hour
The throughput of a single StoreFront server can also be increased by assigning more virtual CPUs to the system, with four virtual CPUs enabling up to 55,000 user connections per hour and eight virtual CPUs enabling 80,000 connections per hour.
The minimum recommended memory allocation for each server is 4 GB. In addition to this base allocation, allow an extra 700 bytes per resource, per user, whether users access their stores through Citrix Receiver or Citrix Receiver for Web.
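As a worked example of this sizing rule, the extra memory for 30,000 users who each see 100 resources is 30,000 × 100 × 700 bytes, roughly 2 GB on top of the 4 GB base. The arithmetic in PowerShell, with illustrative user and resource counts:

$baseGB = 4
$users = 30000
$resourcesPerUser = 100
# 700 bytes per resource, per user, in addition to the base allocation
$extraBytes = $users * $resourcesPerUser * 700
$totalGB = $baseGB + ($extraBytes / 1GB)    # approximately 6 GB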
Because your usage patterns might differ from those simulated above, your servers might support more or fewer user connections per hour.
Important: All servers in a server group must reside in the same location. StoreFront server groups containing mixtures of operating system versions and locales are not supported.
Occasionally, network issues or other problems can occur between a StoreFront store and the servers that it contacts, causing delays or failures for users. You can use the timeout settings for a store to tune this behavior. If you specify a short timeout setting, StoreFront quickly abandons a server and tries another one. This is useful if, for example, you have configured multiple servers for failover purposes.
If you specify a longer timeout, StoreFront waits longer for a response from a single server. This is beneficial in environments where network or server reliability is uncertain and delays are common.
Citrix Receiver for Web also has a timeout setting, which controls how long a Citrix Receiver for Web site waits for a response from the store. Set this timeout setting to a value at least as long as the store timeout. A longer timeout setting allows for better fault tolerance, but users might experience long delays. A shorter timeout setting reduces delays for users, but they might experience more failures.
For information about setting timeouts, see Communication time-out duration and server retry attempts and Communication time-out duration and retry attempts.