3.2.8. Adding a new Cluster/Dataservice

Warning

The procedures in this section are designed for the pre-v6.x Multisite/Multimaster topology ONLY. Do NOT use these procedures with version 6.x Multisite Clusters.

For version 6.x Multisite Clustering, please refer to Deploying Composite Multimaster Clustering.

To add an entirely new cluster (dataservice) to the mesh, follow the simple procedure below.

Note

There is no need to set the Replicator starting points, and no downtime/maintenance window is required!

  1. Choose a cluster to take a node backup from:

    • Choose a cluster, and a slave node within it, to take the backup from.

    • Enable maintenance mode for the cluster:

      shell> cctrl
      cctrl> set policy maintenance
    • Shun the selected slave node and stop both local and cross-site replicator services:

      shell> cctrl
      cctrl> datasource {slave_hostname_here} shun
      slave shell> trepctl offline
      slave shell> replicator stop
      slave shell> mm_trepctl offline
      slave shell> mm_replicator stop
    • Take a backup of the shunned node, then copy it to, and restore it on, all nodes in the new cluster (a hedged example appears at the end of this step).

    • Recover the slave node and put the cluster back into automatic mode:

      slave shell> replicator start
      slave shell> trepctl online
      slave shell> mm_replicator start
      slave shell> mm_trepctl online
      shell> cctrl
      cctrl> datasource {slave_hostname_here} online
      cctrl> set policy automatic
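    A hedged sketch of one possible backup/restore flow for the backup bullet above, run while the slave is still shunned and its replicators are stopped. It assumes mysqldump is practical for the dataset size and that MySQL is already running on the new cluster nodes; hostnames, credentials and paths are placeholders only, and a physical backup tool may be preferable for large datasets.

      shunned slave shell> mysqldump --all-databases --routines --triggers --events > /tmp/seed.sql
      shunned slave shell> scp /tmp/seed.sql tungsten@new-node1:/tmp/seed.sql
      new node shell> mysql < /tmp/seed.sql
      # Repeat the copy and restore on every node in the new cluster.
      # The dump deliberately includes the tungsten_* tracking schemas,
      # which the cross-site replicators rely on later (see the Note below).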
  2. On ALL nodes in all three (3) clusters, ensure that /etc/tungsten/tungsten.ini defines all three clusters and all of the correct cross-site service combinations, as in the sketch below.
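
    For illustration only, a minimal sketch of the expected INI layout is shown below. The service names (alpha, beta, gamma — with gamma as the new cluster), hostnames, passwords and THL ports are placeholders; the section and option names follow the standard pre-v6 Multisite/Multimaster INI layout, so verify them against your existing working file before making changes.

      [defaults]
      user=tungsten
      home-directory=/opt/continuent
      replication-user=tungsten
      replication-password=secret
      start-and-report=true

      [defaults.replicator]
      home-directory=/opt/replicator
      rmi-port=10002
      executable-prefix=mm

      # The three clusters, including the new one (gamma)
      [alpha]
      topology=clustered
      master=alpha1
      members=alpha1,alpha2,alpha3
      connectors=alpha1,alpha2,alpha3

      [beta]
      topology=clustered
      master=beta1
      members=beta1,beta2,beta3
      connectors=beta1,beta2,beta3

      [gamma]
      topology=clustered
      master=gamma1
      members=gamma1,gamma2,gamma3
      connectors=gamma1,gamma2,gamma3

      # Cross-site replicator services: one section per source/target pair,
      # six in total for three clusters (alpha_from_beta, alpha_from_gamma,
      # beta_from_alpha, beta_from_gamma, gamma_from_alpha, gamma_from_beta).
      # Two are shown here as a pattern.
      [alpha_from_beta]
      topology=cluster-slave
      master-dataservice=beta
      slave-dataservice=alpha
      thl-port=2113

      [alpha_from_gamma]
      topology=cluster-slave
      master-dataservice=gamma
      slave-dataservice=alpha
      thl-port=2114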

  3. Install the Tungsten Clustering software on the new cluster nodes to create a single standalone cluster, then use cctrl to confirm that the new cluster is fully online, as in the example below.
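
    For example (hedged: the staging directory is a placeholder and any site-specific tpm options are omitted):

      new node shell> cd {clustering_staging_directory}
      new node shell> tools/tpm install
      new node shell> cctrl
      cctrl> ls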

  4. Install the Tungsten Replicator software on all of the new cluster nodes and start it (see the sketch below).

    Replication will now be flowing INTO the new cluster from the original two.
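
    A hedged sketch of that installation, using the same placeholder convention; the mm_ prefix assumes executable-prefix=mm is set in the INI, as in the sketch under step 2:

      new node shell> cd {replicator_staging_directory}
      new node shell> tools/tpm install
      new node shell> mm_replicator start
      new node shell> mm_trepctl services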

  5. On the two original clusters, run tools/tpm update from the cross-site replicator's software staging path:

    shell> mm_tpm query staging
    shell> cd {replicator_staging_directory}
    shell> tools/tpm update --replace-release
    shell> mm_trepctl online
    shell> mm_trepctl services

    Check the output from the mm_trepctl services command above to confirm that the new service appears and is online.
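
    To inspect the new service in more detail, the per-service status call can also be used; the service name here is a placeholder for whatever the new cluster's dataservice is called:

    shell> mm_trepctl -service {new_cluster_service_name} status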

Note

There is no need to set the cross-site replicators to a starting position because:

  • Replicator feeds from the new cluster to the old clusters start at seqno 0.

  • The tungsten_olda and tungsten_oldb database schemas will contain the correct starting points for the INBOUND feeds into the new cluster, so when the cross-site replicators are started and brought online, they will read from the tracking table and continue correctly from the stored position.
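
If you wish to verify the stored position before bringing the cross-site replicators online, the tracking table can be inspected directly. This is a hedged example: the schema name follows the tungsten_{service} pattern used above, and trep_commit_seqno is the replicator's standard tracking table.

  shell> mysql -e "SELECT seqno, source_id, eventid FROM tungsten_olda.trep_commit_seqno\G"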