You can install
multiple instances of the App Controller virtual machine (VM) to create a
cluster. One App Controller VM acts as the cluster head. This App Controller is
considered to be the host and, as such, hosts the database for all of the VMs
in the cluster.
Note: The App
Controller cluster head requires 4 vCPUs.
All other App
Controller VMs in the cluster are called service nodes. Each service node has a
local database that is used by the service node only. Updating user information
from the service node to the cluster head requires writing to the database. A
service node connects to the database on the cluster head by using a secure
tunnel.
App Controller VMs
deployed as service nodes obtain their configuration from the App Controller
that acts as the cluster head. Citrix recommends deploying two App Controller
VMs in a high availability pair. With high availability, one of the paired VMs
serves as the primary node, acting as the cluster head, while the other is the
secondary node, monitoring the primary. If the primary fails, the secondary
assumes the role of the cluster head.
You use the
command-line console to configure App Controller clustering. You can create a
cluster, join an App Controller VM to a cluster, and remove a VM from the
cluster.
When you add an App
Controller service node to the cluster and then log on by using the management
console, only the Dashboard and the home page appear.
Load Balancing for an App Controller Cluster
As shown in the
following figure, App Controller works with NetScaler to provide load balancing
to all of the service nodes in the cluster. All VMs run behind a load balancer
that is responsible for terminating SSL connections from Citrix Receiver. You
install certificates for App Controller on the load balancer.
Figure 1. Deploying
App Controller with NetScaler in a Cluster
Citrix recommends that you configure the following three virtual servers on
NetScaler for load balancing:
- One virtual server as a
Content Switching load balancer
- One virtual server for
rule-based load balancing
- One virtual server for
custom serverID persistence load balancing
You configure the
Content Switching load balancer to route requests that include a session ID in
the URL through the custom serverID load balancer. All other requests go
through the rule-based load balancer. You need to configure cookie (LB1) and
serverID (LB2) persistency policies on the virtual servers. If the session ID
parameter is not in the request URL, the request is sent to LB1 for cookie
persistency. If the session ID is present, the request is sent to LB2 for
serverID persistency.
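As a sketch, the three virtual servers might be defined on NetScaler along the following lines. All names, the IP address, and the session ID parameter name are placeholder assumptions, and the exact policy expressions and the serverID extraction rule for your deployment should be taken from the Citrix documentation:

```
# Rule-based load balancer with cookie persistence (LB1)
add lb vserver lb1_appc HTTP 0.0.0.0 0 -persistenceType COOKIEINSERT

# Load balancer with custom serverID persistence (LB2)
add lb vserver lb2_appc HTTP 0.0.0.0 0 -persistenceType CUSTOMSERVERID

# Content Switching virtual server that terminates SSL from Citrix Receiver
add cs vserver cs_appc SSL 203.0.113.10 443

# Route requests that carry a session ID in the URL to LB2;
# everything else falls through to LB1 as the default.
add cs policy pol_sessionid -rule "HTTP.REQ.URL.QUERY.CONTAINS(\"sessionid\")"
bind cs vserver cs_appc -policyName pol_sessionid -targetLBVserver lb2_appc -priority 100
bind cs vserver cs_appc -lbvserver lb1_appc
```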
The load balancer
queries the cluster nodes to determine whether the cluster is running, to
obtain information about the load, and to perform health checks so that user
requests are sent only to App Controller VMs that are working correctly. Health
checks provide greater application availability by ensuring that user requests
are directed only to correctly behaving servers.
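On NetScaler, such health checks are usually implemented by binding a monitor to each App Controller service. A minimal sketch, assuming an HTTP probe and placeholder service names (the actual probe endpoint that App Controller exposes should be confirmed in the Citrix documentation):

```
# HTTP monitor that considers a node healthy when it returns 200
add lb monitor mon_appc HTTP -respCode 200

# Bind the monitor to the service for each App Controller service node
bind service svc_appc_node1 -monitorName mon_appc
bind service svc_appc_node2 -monitorName mon_appc
```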
Although user connections go through the load balancer to an App Controller
service node, within the cluster the connections use HTTP. You can use HTTPS
if you want additional security.
Note: App Controller clustering uses TCP port 9737.
For more information about load balancing, see
Load Balancing in
the NetScaler documentation.
When you create a
cluster in the command-line console, the cluster node identifier, the current
App Controller role, and a prompt to enter the shared key appear. You create
the shared key on the cluster head and then use the same shared key for each VM
in the cluster.
You must configure
the host name on the cluster head. This information is replicated to the
service nodes automatically as part of configuration sharing. The host name
appears in the request URLs that are distributed in the cluster. Each service node
in the cluster must also have the same shared key to establish secure tunnels.
If the shared key on the cluster head and service node do not match, the two
VMs cannot communicate.
The shared key
must be eight characters long and contain at least one uppercase letter, one
lowercase letter, one number, and one special character from the following
set: [!#$&].
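The key policy above can be checked mechanically before you enter it on each VM. A minimal sketch in Python, assuming the length requirement means exactly eight characters (the function name is illustrative, not part of App Controller):

```python
import re

def is_valid_shared_key(key: str) -> bool:
    """Check a candidate shared key against the stated policy: eight
    characters, with at least one uppercase letter, one lowercase letter,
    one number, and one of the special characters ! # $ &."""
    return (
        len(key) == 8                               # assumed: exactly eight
        and re.search(r"[A-Z]", key) is not None    # uppercase letter
        and re.search(r"[a-z]", key) is not None    # lowercase letter
        and re.search(r"[0-9]", key) is not None    # number
        and re.search(r"[!#$&]", key) is not None   # special character
    )

print(is_valid_shared_key("Pass1wd!"))  # True: meets all four requirements
print(is_valid_shared_key("password"))  # False: no uppercase, number, or symbol
```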
After you enter
the shared key, you restart App Controller. After App Controller restarts and
you log on again to the command-line console, a message appears that states
that the App Controller VM is now the cluster head.
If you have two
App Controller VMs in a high availability pair, use the virtual IP address when
you configure the service nodes. Either VM can then function as the cluster head. For
example, if the primary VM in the high availability pair fails for any reason,
the secondary VM takes over and serves user requests.
Adding App Controller Nodes to the Cluster
When you join an
App Controller VM to a cluster, you provide the IP address and shared key of
the cluster head. After you enter this information, you restart App Controller.
When App Controller restarts, you can use the command-line console to show the
following cluster information:
- Cluster node ID
- Current role as the service node
- Cluster head IP address
- Shared key
Removing App Controller Nodes in a Cluster
To join App
Controller virtual machines (VMs) together in a cluster, you first designate
one VM as the cluster head. To do so, you use the command-line console to
create the cluster. Then, you can add other VMs to the cluster. These VMs are
designated as service nodes. You can log on to the command-line console in one
of the following ways:
- Hyper-V PowerShell
- Secure Shell (SSH)
connection, such as from PuTTY
You can remove
an App Controller VM from the cluster at any time.
To remove a node
from the cluster
- Log on to
the App Controller command-line console.
- In the main
menu, type 2 and then press ENTER.
- Type 5 and
then press ENTER.
- If you want to leave the cluster, press y and then press ENTER.
- When prompted, type y to restart App Controller.