Product Documentation

Configuring High Availability on Device Manager

Mar 31, 2015

You can deploy up to three instances of Device Manager to create a high availability deployment, also called a cluster; the most common deployment is a pair of two instances. You configure one Device Manager instance in the primary role and the other Device Manager instances in the secondary role. In this deployment, the primary Device Manager listens for and serves user requests, while each secondary Device Manager synchronizes its data with the data on the primary. The instances of Device Manager work in an active-passive configuration, in which only one instance of Device Manager is active at a time.

If the current primary Device Manager stops responding for any reason, the current secondary Device Manager takes over and becomes the primary. The new primary Device Manager begins to serve user requests.

Device Manager in a cluster configuration requires a network load balancer, both to provide high availability and to distribute the load between the Device Manager servers.

You need to configure the following:

  • Windows Server 2008 R2. Install each Device Manager instance on a separate Windows server.
  • Configure the Windows servers as a cluster.
  • Virtual IP address or host name on the load balancer. Device Manager uses this information to route user requests.
  • SSL session persistence for ports 443 and 8443 on the load balancer.
  • SQL Server database accessible from the Device Manager node(s) and user credentials to connect to the database. Each node connects to the same database.
  • Network Time Protocol (NTP) server to synchronize time for all nodes and the SQL database server.

After you install Device Manager and configure the initial settings, there are some additional configuration steps. These include:

  • Editing an XML file so that session information is replicated on all nodes in the Tomcat cluster.
  • Enabling clustering on Device Manager.
  • Configuring properties on the Tomcat server.
  • Copying certificates from cluster node 1 to cluster node 2.
  • Stopping and starting the Device Manager Windows service.

You can also use the PostgreSQL database for high availability. If you use this database, you need to run a utility to import database information into Device Manager.

Installing Device Manager on cluster node 1

  1. Start the Device Manager installation on Cluster Node 1.
  2. Clear the Database server check box if there is already an MS SQL server in your network.
  3. On the Configure database connection screen, create a Device Manager database on your MS SQL server.
  4. On the certificate creation screen, use the public virtual IP address or FQDN of the host name configured in the virtual server configuration.

After Device Manager is successfully installed, open a web browser on the same host (cluster node 1), go to http://localhost/zdm, and verify that the Device Manager web console appears. Then stop the Device Manager Windows service.

Installing Device Manager on cluster node 2

  1. Install Device Manager on Cluster Node 2 and clear the database installation option. Use the same database name as for Cluster Node 1.
  2. Copy the following files from <installation_dir>\tomcat\conf on Cluster Node 1 to the same location on Cluster Node 2.
    • https.p12
    • pki-ca-devices.p12
    • pki-ca-root.p12
    • pki-ca-servers.p12
    • pki-ca-root.crt.pem
  3. Import the certificates; do not create new certificates. The installer prompts you for the passwords with which the certificates were created during the installation of cluster node 1. Only the Keystore password text box appears.
  4. On each of these certificate screens, enter the same keystore password that was used on cluster node 1.

After Device Manager is successfully installed, open a web browser on the same host (cluster node 2), go to http://localhost/zdm, and verify that the Device Manager web console appears. Then stop the Device Manager Windows service.

Configuring a Device Manager Tomcat Cluster

Tomcat clustering is used to replicate session information on all cluster nodes. If the Tomcat server on one cluster node becomes unavailable, device connections can fail over to the Tomcat servers on the other cluster nodes because session state is preserved across all nodes in the cluster.
Note: Make sure to apply these configuration changes on all cluster nodes.
  1. Open the file <installation_dir>\tomcat\conf\server.xml in WordPad and add a <Cluster> section after the following element: <Engine name="Catalina" defaultHost="localhost">:
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
      <Manager className="org.apache.catalina.ha.session.DeltaManager"
               expireSessionsOnShutdown="false"
               notifyListenersOnReplication="true"/>

      <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.8"
                    port="45560"
                    frequency="500"
                    dropTime="3000"/>

        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto"
                  port="4000"
                  autoBind="100"
                  selectorTimeout="5000"
                  minThreads="3"
                  maxThreads="6"/>

        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
          <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>

        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
      </Channel>

      <!--
      <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                tempDir="${catalina.base}/war-temp"
                deployDir="${catalina.base}/war-deploy"
                watchDir="${catalina.base}/war-listen"
                watchEnabled="true"/>
      -->
    </Cluster>
  2. After copying the above contents, check the following elements in server.xml:
    • Membership. Determines cluster membership.
      • address. 228.0.0.8 (multicast address; together with the port, it determines cluster membership)
      • port. 45560 (multicast port)
      • frequency. 500 (broadcast ping frequency in milliseconds; must be smaller than dropTime)
      • dropTime. 3000 (time in milliseconds without a heartbeat before a member is dropped)
    • Receiver. Responsible for listening for Tomcat session replication messages.
      • address. auto (listening address)
      • port. 4000 (port number used to listen for session replication messages)
      • autoBind. 100 (number of ports to try: 4000 to 4099)
      • selectorTimeout. 5000 (timeout, in milliseconds, of the select operation)
      • minThreads. 3 (minimum size of the worker thread pool)
      • maxThreads. 6 (maximum size of the worker thread pool)

Configuring the Device Manager Server

  1. Edit the ew-config.properties file (<installation_dir>\tomcat\webapps\zdm\WEB-INF\classes).
  2. Change the following line from false to true:
    ######################################################## 
    # 
    #  CLUSTERING 
    # 
    ######################################################## 
    cluster.everywan.enabled=false 
     
    To 
     
    cluster.everywan.enabled=true
  3. Add the following line: cluster.hibernate.cache-provider=com.opensymphony.oscache.hibernate.OSCacheProvider Your cluster configuration should look like the following example:
    ######################################################## 
    # 
    #  CLUSTERING 
    # 
    ######################################################## 
    cluster.everywan.enabled=true 
    cluster.hibernate.cache-provider=com.opensymphony.oscache.hibernate.OSCacheProvider
  4. For the DAO configuration, verify that the property for your database exists. If it does not exist, add it.
    • For MS SQL. dao.configLocation=classpath:com/sparus/nps/dao/hibernate-native.cfg.xml
    • For MySQL. dao.configLocation=classpath:com/sparus/nps/dao/hibernate-mysql-hilo.cfg.xml
    • For other databases. dao.configLocation=classpath:com/sparus/nps/dao/hibernate-hilo.cfg.xml
  5. Add the following properties in ew-config.properties:
     
    # Everywan cluster shared secret for application connection 
     
    everywan.secret=everywan 
     
    # Everywan node name (used on load balancer front end) 
     
    cluster.everywan.nodeName=auto 
     
    # Everywan direct IP access (ex. used by remote support) 
     
    cluster.everywan.directAccess=auto 
     
    # Everywan broadcast 
     
    cluster.everywan.broadcast.address=228.0.0.8 
     
    cluster.everywan.broadcast.port=45561 
    
    Note: It is recommended that you change cluster.everywan.nodeName=auto to node1, node2, and so on, rather than leave it as auto; see the example after the parameter descriptions below.

    The following parameters are used:

    • cluster.everywan.nodeName. "node1" (or node2, node3, and so on).
    • cluster.everywan.directAccess. "auto" (use the first IP address of the first network interface). If you want to assign a specific IP address, use "ip:192.168.1.251".
    • cluster.everywan.broadcast.address. "228.0.0.8" (UDP broadcast address).
    • cluster.everywan.broadcast.port. "45561" (UDP broadcast port).
      Important: This broadcast address and port combination, 228.0.0.8:45561, must be different from the one used by the Tomcat cluster in server.xml (228.0.0.8:45560).
    For cluster.everywan.directAccess, you can use the following values:
    Important: If the node has two or more NICs, you might need to specify the node's IP address here for Remote Support to work.
    • eth1. Use the first IP address of the eth1 interface.
    • ip:192.168.1.128. Use the specified IP address.
    • lo. Use the first IP address of the lo interface (127.0.0.1).
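
    For example, with the recommended node-specific settings, the cluster properties on cluster node 1 might look like the following sketch. The node name and the IP address 192.168.1.251 are illustrative values taken from the descriptions above; substitute the values for your environment.

    # Everywan cluster shared secret for application connection
    everywan.secret=everywan
    # Everywan node name (used on load balancer front end)
    cluster.everywan.nodeName=node1
    # Everywan direct IP access (ex. used by remote support)
    cluster.everywan.directAccess=ip:192.168.1.251
    # Everywan broadcast
    cluster.everywan.broadcast.address=228.0.0.8
    cluster.everywan.broadcast.port=45561

    On cluster node 2, use cluster.everywan.nodeName=node2 and set cluster.everywan.directAccess to that node's own IP address.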

Configuring Tomcat oscache.properties

File oscache.properties is located under <installation_dir>\tomcat\webapps\zdm\WEB-INF\classes.
  1. Use WordPad to open the file. At the end of the file, look for the JGroups configuration. It looks like the following example:
    cache.cluster.properties=UDP(mcast_addr=228.0.0.8;mcast_port=45566;diagnostics_addr=228.0.0.8;diagnostics_port=45567;mcast_send_buf_size=150000;mcast_recv_buf_size=80000):PING(timeout=1500;num_initial_members=2):MERGE2(min_interval=5000;max_interval=10000):FD_SOCK:VERIFY_SUSPECT(timeout=1000):pbcast.NAKACK(gc_lag=50;retransmit_timeout=300,600,1200,2400,4800;max_xmit_size=8192):UNICAST(timeout=300,600,1200,2400):pbcast.STABLE(desired_avg_gossip=20000):FRAG(frag_size=8096;down_thread=false;up_thread=false):pbcast.GMS(join_timeout=5000;join_retry_timeout=2000;shun=false;print_local_addr=true) 
    cache.cluster.multicast.ip=228.0.0.8
  2. Check the following parameters:
    • mcast_addr=228.0.0.8
    • mcast_port=45566
    • diagnostics_addr=228.0.0.8
    • diagnostics_port=45567
    • cache.cluster.multicast.ip=228.0.0.8
      • mcast_addr and mcast_port, diagnostics_addr and diagnostics_port are used to check the Hibernate cache consistency among the cluster nodes. They must have the same values on all the cluster nodes.
      • cache.cluster.multicast.ip must have the same address as mcast_addr.

Configuring the Tomcat applicationContext.xml file

  1. Open the applicationContext.xml file under <installation_dir>\tomcat\webapps\zdm\WEB-INF\ and verify that the following import entries are present:
    <import resource="classpath:push_services.xml" /> 
    <import resource="classpath:ios_configuration.xml" /> 
      
    <import resource="classpath:cluster_configuration.xml" /> 
      
    <import resource="classpath:deploy-scheduler.xml" />

Running update-hilo.sql on all databases besides MS SQL

Only for the PostgreSQL database, run the PostgreSQL administration utility pgadmin3.exe, located under <installation_dir>\postgres\bin.
  1. Select File > Add Server and then connect to the PostgreSQL database name/instance.
  2. Open the query tool, import update-hilo.sql, located under <installation_dir>\tomcat\webapps\zdm\sql-scripts\sql_update\PostgreSQL, and then execute it.
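
As an alternative to the pgAdmin interface, you can run the same script from a command prompt with the psql.exe client, if it is present under <installation_dir>\postgres\bin (whether it is bundled depends on your PostgreSQL installation). The database name and user shown here are placeholders:

  psql.exe -U <database_user> -d <device_manager_database> -f "<installation_dir>\tomcat\webapps\zdm\sql-scripts\sql_update\PostgreSQL\update-hilo.sql"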

Overwriting the .pem files

  1. Back up the following files:
    • cacerts.pem
    • cacerts.pem.jks
    • certchain.pem
    • https.crt.pem
    • https.p12.pem
  2. Copy the files from <installation_dir>\tomcat\conf on Cluster Node 1 to <installation_dir>\tomcat\conf on Cluster Node 2, overwriting the existing files.

Starting the Device Manager Windows service

  1. Start the Device Manager Windows service on both nodes.
  2. Verify that each individual instance is working; for example, open a browser and go to http://127.0.0.1/zdm.
  3. Create a test user on any Device Manager instance.

Testing the cluster setup

  1. Verify that the virtual server IP address (in this example, 172.30.1.221) is reachable.
  2. Verify that ports 80, 443, and 8443 are open on the virtual server IP address. You can telnet to the virtual server IP address on ports 80, 443, and 8443, or use a port scanner utility; see the example after these steps.
  3. Open a browser and go to http://172.30.1.221/zdm. This should redirect to one of the cluster nodes and eventually open the Device Manager web console.
  4. Open a browser and go to https://172.30.1.221/zdm. This should redirect to one of the cluster nodes and eventually open the Device Manager web console.
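
For example, to check the ports with telnet from a command prompt (assuming the virtual server IP address 172.30.1.221 used above, and that the Telnet client is installed on the machine you are testing from):

  telnet 172.30.1.221 80
  telnet 172.30.1.221 443
  telnet 172.30.1.221 8443

If a port is open, a blank telnet session opens; if it is closed, the connection attempt fails.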