Product Documentation

Configuring High Availability on Device Manager

Jan 31, 2011

You can deploy two instances of Device Manager to create a high availability pair, also known as a cluster. The primary server, or Apple Push Notification Service (APNS) master, for a cluster is elected through a broadcast: the server that has been running for the longest time is elected as the primary server. If the current primary server in a cluster stops responding for any reason, a new election takes place and the surviving node becomes the new APNS master.
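
The following Python sketch is not part of the product; it only illustrates the election rule described above, assuming each node can report how long it has been running. The node names and uptimes are hypothetical.

    # Illustrative only: the election picks the longest-running node as APNS master.
    def elect_apns_master(uptimes):
        """Return the node that has been running the longest (uptimes in seconds)."""
        return max(uptimes, key=uptimes.get)

    # Example: node1 has been up longer, so it is elected.
    uptimes = {"node1": 86400, "node2": 3600}
    print(elect_apns_master(uptimes))   # node1

    # If node1 stops responding, the surviving node re-runs the election.
    del uptimes["node1"]
    print(elect_apns_master(uptimes))   # node2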

Before creating a cluster, ensure that the following prerequisites are in place.

  • A separate server running Windows Server 2008 R2 is required for each Device Manager instance.
  • A network load balancer is required to create the high availability pair as well as to distribute the load between the Device Manager servers.
  • Configure a virtual IP address or host name on the load balancer. Device Manager uses this information to route user requests.
  • Configure SSL session persistence for ports 443 and 8443 on the load balancer.
  • If you plan to use an existing Microsoft SQL Server database, ensure that it is accessible from every Device Manager node as each node connects to the same database. Credentials for an account with permissions to connect to the database are required. The PostgreSQL database can also be used for high availability.
  • A Network Time Protocol (NTP) server is required to synchronize time for all the nodes and the database server. A sketch for checking clock skew against an NTP server follows this list.
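
Clock skew between the nodes and the database server can cause clustering problems. The following Python sketch shows one way to check a machine's clock offset against an NTP server; it assumes the third-party ntplib package is installed, and pool.ntp.org is only a placeholder for your own NTP server.

    # Illustrative only: report the local clock offset against an NTP server.
    # Requires the third-party "ntplib" package; run on each node and on the database server.
    import ntplib

    NTP_SERVER = "pool.ntp.org"   # placeholder: use your internal NTP server
    MAX_OFFSET_SECONDS = 1.0      # example tolerance, not a product requirement

    response = ntplib.NTPClient().request(NTP_SERVER, version=3, timeout=5)
    print("Clock offset: %.3f seconds" % response.offset)
    if abs(response.offset) > MAX_OFFSET_SECONDS:
        print("Warning: clock skew exceeds the tolerance; check NTP configuration.")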

To install and configure a Device Manager cluster, complete the following steps in order.

  1. Install Device Manager on the cluster nodes following the steps in To install Device Manager on the cluster nodes.
  2. Configure Apache Tomcat clustering on the Device Manager nodes following the steps in To configure a Device Manager Tomcat cluster.

    Tomcat clustering is used to replicate session information on all cluster nodes so that device connections can fail over from one node to another.

  3. Configure clustering and Tomcat properties on the Device Manager nodes following the steps in To configure the Device Manager server.
  4. If you are using a PostgreSQL database, run the administrator utility pgadmin3.exe located in the \postgres\bin\ directory of the installation and connect to the database instance. Using the Query tool, import and then run the file update-hilo.sql located in the \tomcat\webapps\zdm\sql-scripts\sql_update\PostgreSQL\ directory of the installation.
  5. Back up and then copy the following certificates from the \tomcat\conf\ directory of the installation on cluster node 1 to the corresponding directory on cluster node 2, overwriting the existing files.
    • cacerts.pem
    • cacerts.pem.jks
    • certchain.pem
    • https.crt.pem
    • https.p12.pem
  6. Start the Device Manager Windows service on both nodes and verify that each individual instance is running by browsing to http://127.0.0.1/zdm using a web browser. Then, create a test user on one of the Device Manager instances.
  7. Check that the virtual IP address configured on the load balancer is accessible and then verify that ports 80, 443, and 8443 are open for the virtual IP address.

    You can test connectivity by using a Telnet client to connect to the IP address and port, by using a port scanner utility, or by running a short script such as the sketch shown after this procedure.

  8. Using a web browser, browse to http://IPaddress/zdm and https://IPaddress/zdm, where IPaddress is the load balancer virtual IP address.

    In both cases you should be redirected to the Device Manager web console on one of the nodes.
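
As an alternative to Telnet or a port scanner in step 7, the following Python sketch opens a TCP connection to each required port on the load balancer virtual IP address. The IP address shown is a placeholder.

    # Illustrative only: confirm that the load balancer virtual IP address
    # accepts TCP connections on the ports Device Manager requires.
    import socket

    VIRTUAL_IP = "192.168.1.250"   # placeholder: use your load balancer virtual IP address
    PORTS = [80, 443, 8443]

    for port in PORTS:
        try:
            with socket.create_connection((VIRTUAL_IP, port), timeout=5):
                print("Port %d: open" % port)
        except OSError as err:
            print("Port %d: not reachable (%s)" % (port, err))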

To install Device Manager on the cluster nodes

  1. On a machine running Windows Server 2008 R2, install Device Manager following the steps in Installing Device Manager.

    This server is cluster node 1.

  2. If you plan to use an existing Microsoft SQL Server database, on the Choose Components page of the installer, clear the Database server check box to disable installation of the PostgreSQL database. On the Configure database connection page, create a Device Manager database on SQL Server.
  3. When creating the certificates, use the public virtual IP address or fully qualified domain name (FQDN) of the host name configured on the load balancer.
  4. When the installation is complete, using a local web browser on cluster node 1, browse to http://localhost/zdm and verify that you are redirected to the Device Manager web console. Then, stop the Device Manager Windows service.
  5. On a second machine running Windows Server 2008 R2, install Device Manager following the steps in Installing Device Manager.

    This server is cluster node 2.

  6. On the Choose Components page of the installer, clear the Database install check box. Enter details of the database you created when you installed Device Manager on cluster node 1.
  7. When prompted, do not create new certificates. Instead, back up and then import the following certificates from the \tomcat\conf\ directory of the installation on cluster node 1 to the corresponding directory on cluster node 2. Enter the passwords with which the certificates were created when you installed Device Manager on cluster node 1. A way to confirm that the copied files are identical on both nodes is shown after this procedure.
    • https.p12
    • pki-ca-devices.p12
    • pki-ca-root.p12
    • pki-ca-servers.p12
    • pki-ca-root.crt.pem
  8. When prompted, enter the same keystore password used when you installed Device Manager on cluster node 1.
  9. When the installation is complete, using a local web browser on cluster node 2, browse to http://localhost/zdm and verify that you are redirected to the Device Manager web console. Then, stop the Device Manager Windows service.
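
Whenever certificate files are copied between nodes, as in step 7 above and in step 5 of the procedure for installing and configuring the cluster, you can confirm that the copies are identical by comparing file hashes. The following Python sketch prints a SHA-256 hash for each file; run it on both nodes and compare the output. The installation path is a placeholder, and the file list should match the certificates you copied.

    # Illustrative only: print SHA-256 hashes of the copied certificate files.
    # Matching output on both nodes means the copies are identical.
    import hashlib
    import os

    CONF_DIR = r"C:\path\to\installation\tomcat\conf"   # placeholder: adjust to your installation
    FILES = ["https.p12", "pki-ca-devices.p12", "pki-ca-root.p12",
             "pki-ca-servers.p12", "pki-ca-root.crt.pem"]

    for name in FILES:
        with open(os.path.join(CONF_DIR, name), "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        print("%-22s %s" % (name, digest))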

To configure a Device Manager Tomcat cluster

Apache Tomcat clustering is used to replicate session information on all the cluster nodes. In the event of a Tomcat server becoming unavailable on a cluster node, device connections can fail over to servers on other nodes because session state is preserved across all the nodes in the cluster.

  1. On each cluster node, use a text editor to open the server.xml file in the \tomcat\conf\ directory of the installation.
  2. Locate the following element in the file.
    <Engine name="Catalina" defaultHost="localhost">
  3. Add a Cluster section within the Engine container as shown below.
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
      <Manager className="org.apache.catalina.ha.session.DeltaManager"
               expireSessionsOnShutdown="false"
               notifyListenersOnReplication="true"/>
      <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.8"
                    port="45560"
                    frequency="500"
                    dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto"
                  port="4000"
                  autoBind="100"
                  selectorTimeout="5000"
                  minThreads="3"
                  maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
          <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
      </Channel>
      <!--
      <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                tempDir="${catalina.base}/war-temp"
                deployDir="${catalina.base}/war-deploy"
                watchDir="${catalina.base}/war-listen"
                watchEnabled="true"/>
      -->
    </Cluster>

    The Membership element identifies the nodes in the cluster. Set the value of the multicast address attribute to 228.0.0.8 and the port attribute to 45560. The combination of the multicast address and port determines the cluster membership. Set the value of the frequency attribute to 500, which specifies the time period in milliseconds between heartbeats. Ensure that this value is lower than the dropTime value. Set the value of the dropTime attribute to 3000, which specifies the time in milliseconds that can elapse without a heartbeat before a cluster node is timed out. A short script for checking that multicast traffic reaches each node is shown after this procedure.

    The Receiver element controls listening for Tomcat session replication messages. Set the value of the listening address attribute to auto and the port attribute to 4000. To avoid port conflicts, set the value of the autoBind attribute to 100. This configuration means that a server socket will be opened up on the first available port in the range 4000–4099. Set the value of the selectorTimeout attribute to 5000, which specifies the listening timeout period in milliseconds. Set the value of the minThreads attribute to 3 and the maxThreads attribute to 6, which specify the minimum number of threads created at startup and the maximum number of threads in the pool, respectively.

  4. Ensure that the settings above are the same in the server.xml files on all the nodes.
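
Tomcat session replication depends on the nodes being able to exchange multicast traffic on 228.0.0.8:45560. If the nodes do not form a cluster, a generic check such as the following Python listener can confirm whether multicast datagrams sent from one node arrive on another; this is a network troubleshooting aid, not part of Device Manager. Stop the Device Manager Windows service first so that the port is free.

    # Illustrative only: listen for multicast datagrams on the Tomcat membership
    # address and port (228.0.0.8:45560) to verify multicast reachability.
    # From another node, send a test datagram, for example:
    #   python -c "import socket; s = socket.socket(2, 2); s.sendto(b'hello', ('228.0.0.8', 45560))"
    import socket
    import struct

    MCAST_ADDR = "228.0.0.8"
    MCAST_PORT = 45560

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))

    # Join the multicast group on all interfaces.
    mreq = struct.pack("4s4s", socket.inet_aton(MCAST_ADDR), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    print("Listening on %s:%d ..." % (MCAST_ADDR, MCAST_PORT))
    data, sender = sock.recvfrom(4096)
    print("Received %d bytes from %s" % (len(data), sender[0]))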

To configure the Device Manager server

Complete the following steps on each cluster node.

  1. Use a text editor to open the ew-config.properties file in the \tomcat\webapps\zdm\WEB-INF\classes\ directory of the installation.
  2. Locate the following line in the file and set the value to true.
    cluster.everywan.enabled=false
  3. Immediately below, add the following line.
    cluster.hibernate.cache-provider=com.opensymphony.oscache.hibernate.OSCacheProvider
    Your cluster configuration should now read:
     
    cluster.everywan.enabled=true 
    cluster.hibernate.cache-provider=com.opensymphony.oscache.hibernate.OSCacheProvider
  4. Verify that the appropriate property for your database exists in the DAO configuration. Add the property if it is missing.
    • For Microsoft SQL Server databases:
      dao.configLocation=classpath:com/sparus/nps/dao/hibernate-native.cfg.xml
    • For MySQL databases:
      dao.configLocation=classpath:com/sparus/nps/dao/hibernate-mysql-hilo.cfg.xml
    • For all other databases:
      dao.configLocation=classpath:com/sparus/nps/dao/hibernate-hilo.cfg.xml
  5. Add the following lines to the ew-config.properties file.
     
    # Everywan cluster shared secret for application connection 
    everywan.secret=everywan 
     
    # Everywan node name (used on load balancer front end) 
    cluster.everywan.nodeName=nodex 
     
    # Everywan direct IP access (ex. used by remote support) 
    cluster.everywan.directAccess=[auto | ip:192.168.1.251 | eth1 | lo] 
     
    # Everywan broadcast 
    cluster.everywan.broadcast.address=228.0.0.8 
    cluster.everywan.broadcast.port=45561 
    

    Set the value of the cluster.everywan.nodeName parameter to nodex, where x is the node number for the server on which you are editing the ew-config.properties file. That is, node1, node2, and so on.

    When the value of the cluster.everywan.directAccess parameter is set to auto, Device Manager searches for the first IP address of the first network interface. To assign a specific IP address, set the value to ip:192.168.1.251. If the node has two or more network interface controllers, you might need to specify the IP address of the node to enable Remote Support to function correctly. Setting the value of the cluster.everywan.directAccess parameter to eth1 causes Device Manager to use the first IP address of the eth1 interface. If you want to use the first IP address of the lo interface (127.0.0.1), set the value to lo.

    Set the value of the cluster.everywan.broadcast.address parameter to 228.0.0.8 and the cluster.everywan.broadcast.port parameter to 45561. Ensure that this combination of multicast address and port, that is 228.0.0.8:45561, is different from the combination used by Apache Tomcat in the server.xml file (228.0.0.8:45560). Because the address is the same, the ports must be different. The sketch after this procedure shows one way to list all of the configured address and port pairs.

  6. Use a text editor to open the oscache.properties file in the \tomcat\webapps\zdm\WEB-INF\classes\ directory of the installation.
  7. Locate the following lines in the JGroups configuration.
    cache.cluster.properties=UDP(mcast_addr=228.0.0.8;mcast_port=45566; 
    diagnostics_addr=228.0.0.8;diagnostics_port=45567;mcast_send_buf_size=150000; 
    mcast_recv_buf_size=80000)... 
    cache.cluster.multicast.ip=228.0.0.8

    Ensure that the value of the mcast_addr parameter is set to 228.0.0.8 and the mcast_port parameter is set to 45566. Verify that the value of the diagnostics_addr parameter is set to 228.0.0.8 and the diagnostics_port parameter is set to 45567. These four parameters are used to check the Hibernate cache consistency among the cluster nodes and must have the same values on all the nodes.

    Check that the value of the cache.cluster.multicast.ip parameter is set to 228.0.0.8. This IP address must be the same as that for the mcast_addr parameter.

  8. Use a text editor to open the applicationContext.xml file in the \tomcat\webapps\zdm\WEB-INF\ directory of the installation.
  9. Verify that the following element is present in the file.
     
    <import resource="classpath:cluster_configuration.xml" />
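
The multicast address and port pairs configured in this article must not collide: Tomcat clustering uses 228.0.0.8:45560 in server.xml, Device Manager clustering uses 228.0.0.8:45561 in ew-config.properties, and the Hibernate cache uses 228.0.0.8:45566 (with diagnostics on 45567) in oscache.properties. The following Python sketch, which assumes the default file locations described above and uses a placeholder installation path, reads the three files and prints the configured pairs so you can confirm at a glance that the ports differ.

    # Illustrative only: list the multicast address:port pairs configured for
    # Tomcat clustering, Device Manager clustering, and the Hibernate cache.
    import re
    import xml.etree.ElementTree as ET

    INSTALL_DIR = r"C:\path\to\installation"   # placeholder: adjust to your installation

    # Tomcat Membership element in server.xml
    member = ET.parse(INSTALL_DIR + r"\tomcat\conf\server.xml").getroot().find(".//Membership")
    pairs = {"Tomcat (server.xml)": (member.get("address"), member.get("port"))}

    # Device Manager broadcast settings in ew-config.properties
    props = {}
    with open(INSTALL_DIR + r"\tomcat\webapps\zdm\WEB-INF\classes\ew-config.properties") as f:
        for line in f:
            if "=" in line and not line.lstrip().startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    pairs["Device Manager (ew-config.properties)"] = (
        props.get("cluster.everywan.broadcast.address"),
        props.get("cluster.everywan.broadcast.port"))

    # JGroups settings in oscache.properties
    with open(INSTALL_DIR + r"\tomcat\webapps\zdm\WEB-INF\classes\oscache.properties") as f:
        text = f.read()
    addr = re.search(r"mcast_addr=([\d.]+)", text)
    port = re.search(r"mcast_port=(\d+)", text)
    pairs["Hibernate cache (oscache.properties)"] = (
        addr.group(1) if addr else None, port.group(1) if port else None)

    for name, (a, p) in pairs.items():
        print("%-40s %s:%s" % (name, a, p))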