vxconfigd fails to start on some nodes after full InfoScale upgrade on CVR or VVR configuration (4056958)

After a full upgrade of InfoScale on an existing CVR or VVR configuration, the vxconfigd service fails to start on some of the cluster nodes. This issue occurs on Solaris because the required /dev/vx directory is not created on the node.
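
To confirm the symptom on an affected node, you can check whether the /dev/vx directory exists and query the state of vxconfigd; these commands are only a quick check and are not part of the documented workaround.

  # ls -ld /dev/vx

  # vxdctl mode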

Workaround: Perform the following tasks on the nodes where the service has failed to start.

  1. Check whether the .aslapm-configured and the .vxvm-configured files are present.

    # ls -la /etc/vx/reconfig.d/state.d/

  2. Remove these files, if they are present.

    # cd /etc/vx/reconfig.d/state.d/

    # rm -rf .vxvm-configured

    # rm -rf .aslapm-configured

  3. Reboot the node.
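
    For example, on Solaris the node can be rebooted with:

    # shutdown -y -g0 -i6
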
  4. Set the appropriate cluster protocol version.

    # vxdctl setversion 260
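
    If you want to confirm the setting, the current cluster protocol version can typically be displayed as follows (the output format may vary by release):

    # vxdctl protocolversion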

  5. Verify that the cluster has been formed successfully.

    # /opt/VRTS/bin/hastatus -sum
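
    You can also confirm the CVM membership and the role of the node; for example:

    # vxdctl -c mode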

  6. Optionally, check whether the cvm_clus resource in a CVM configuration has failed. This issue occurs when the protocol version is set but the change is not yet reflected on the cluster. Perform the following steps sequentially to recover.

    • Clear the FAULTED state from the resource.

      # /opt/VRTS/bin/hagrp -clear cvm

    • Bring the CVM configuration online.

      # /opt/VRTS/bin/hagrp -online cvm -any
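
    • Optionally, verify that the resource is online again; for example, its state can be queried with:

      # /opt/VRTS/bin/hares -state cvm_clus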
