3.4.8. Adding a Cluster to a Composite Active/Active Topology

This procedure explains how to add additional clusters to an existing v6.x (or newer) Composite Active/Active configuration.

The example in this procedure adds a new three-node cluster, consisting of nodes db7, db8 and db9, as a service called Tokyo. The existing installation contains two dataservices, NYC and London, made up of nodes db1, db2, db3 and db4, db5, db6 respectively.

3.4.8.1. Pre-Requisites

Ensure the new nodes have all the necessary pre-requisites in place, specifically paying attention to the following:

  • MySQL auto_increment parameters set appropriately on existing and new clusters (see the sketch after this list)

  • All new nodes have full connectivity to the existing nodes and the hosts file contains correct hostnames

  • All existing nodes have full connectivity to the new nodes and the hosts file contains correct hostnames
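
For example, a minimal my.cnf sketch for db7, assuming all nine nodes across the three clusters are given distinct offsets; the exact values are an assumption and depend on your environment:

# Each host that can become a Primary needs a distinct auto_increment_offset,
# and auto_increment_increment must be at least the number of offsets in use.
auto_increment_increment = 9
auto_increment_offset    = 7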

3.4.8.2. Backup and Restore

We need to provision all the new nodes in the new cluster with a backup taken from one node in any of the existing clusters. In this example we are using db6 in the London dataservice as the source for the backup.

  1. Shun and stop the services on the node used for the backup:

    db6-shell> cctrl
    cctrl> datasource db6 shun
    cctrl> replicator db6 offline
    cctrl> exit
    db6-shell> stopall
    db6-shell> sudo service mysqld stop
  2. Next, use whichever method you wish to copy the MySQL data files from db6 to all the nodes in the new cluster (scp, rsync, xtrabackup, etc.). Ensure ALL database files are copied.
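
    For example, a minimal copy using rsync; the /var/lib/mysql datadir path is an assumption and should be adjusted for your environment:

    db6-shell> rsync -avz /var/lib/mysql/ db7:/var/lib/mysql/
    db6-shell> rsync -avz /var/lib/mysql/ db8:/var/lib/mysql/
    db6-shell> rsync -avz /var/lib/mysql/ db9:/var/lib/mysql/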

  3. Once the backup has been copied across, restart the services on db6:

    db6-shell> sudo service mysqld start
    db6-shell> startall
    db6-shell> cctrl
    cctrl> datasource db6 recover
    cctrl> exit
  4. Ensure all files copied to the target nodes have the correct file ownership
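
    For example, assuming the default mysql user and group and a /var/lib/mysql datadir (repeat on db8 and db9):

    db7-shell> sudo chown -R mysql:mysql /var/lib/mysql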

  5. Start mysql on the new nodes
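
    For example, on each of the new nodes:

    db7-shell> sudo service mysqld start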

3.4.8.3. Update Existing Configuration

Next we need to change the configuration on the existing hosts to include the configuration of the new cluster.

You need to add a new service block that includes the new nodes, and append the new service to the composite-datasources parameter in the composite dataservice section, all within /etc/tungsten/tungsten.ini.

Example of a new service block and the composite-datasources change added to the existing hosts' configuration:

[tokyo]
topology=clustered
master=db7
members=db7,db8,db9
connectors=db7,db8,db9

[global]
topology=composite-multi-master
composite-datasources=nyc,london,tokyo

3.4.8.4. New Host Configuration

To avoid any differences in configuration, once the changes have been made to the tungsten.ini on the existing hosts, copy this file from one of the nodes to all the nodes in the new cluster.
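
For example, from one of the existing nodes (the target path matches the example configuration; adjust if your installation differs):

shell> scp /etc/tungsten/tungsten.ini db7:/etc/tungsten/
shell> scp /etc/tungsten/tungsten.ini db8:/etc/tungsten/
shell> scp /etc/tungsten/tungsten.ini db9:/etc/tungsten/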

Ensure start-and-report is false or not set in the config.
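
A minimal sketch of the relevant setting, assuming it is kept in the [defaults] section of tungsten.ini:

[defaults]
# other existing [defaults] settings remain unchanged
start-and-report=false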

3.4.8.5. Install on new nodes

On the three new nodes, validate the software:

shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm validate

This may produce warnings that the tracking schemas for the existing clusters already exist; these are expected and can be ignored. Assuming no other unexpected errors are reported, go ahead and install the software:

shell> tools/tpm install

3.4.8.6. Update existing nodes

Before we start the new cluster, we need to update the existing clusters.

  1. Put the entire cluster into MAINTENANCE:

    shell> cctrl
    cctrl> use {composite-dataservice}
    cctrl> set policy maintenance
    cctrl> ls
    COORDINATOR[db3:MAINTENANCE:ONLINE]
       london:COORDINATOR[db4:MAINTENANCE:ONLINE]
       nyc:COORDINATOR[db3:MAINTENANCE:ONLINE]
    cctrl> exit
  2. Update the software on each node. This needs to be executed from the software staging directory using the --replace-release option, as this ensures that the new cross-site dataservices are set up correctly. Update the Primaries first, followed by the Replicas, cluster by cluster:

    shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
    shell> tools/tpm update --replace-release

3.4.8.7. Start the new cluster

On all the nodes in the new cluster, start the software:

shell> startall

3.4.8.8. Validate and check

Using cctrl, check that the new cluster appears and that all services are correctly showing online. It may take a few moments for the cluster to settle down and for everything to start:

shell> cctrl
cctrl> use {composite-dataservice}
cctrl> ls
cctrl> exit

Check the output of trepctl and ensure that all replicators are online and that the new cross-site services appear in the pre-existing clusters:

shell> trepctl -service {service} status
shell> trepctl services
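
For example, on a node in the nyc cluster; the cross-site sub-service name shown (nyc_from_tokyo) is an assumption based on this procedure's service names and should match the naming already used by your existing cross-site services:

shell> trepctl -service nyc_from_tokyo status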

Place the entire cluster back into AUTOMATIC:

shell> cctrl
cctrl> use {composite-dataservice}
cctrl> set policy automatic
cctrl> ls
COORDINATOR[db2:AUTOMATIC:ONLINE]
   london:COORDINATOR[db5:AUTOMATIC:ONLINE]
   nyc:COORDINATOR[db2:AUTOMATIC:ONLINE]
cctrl> exit