3.6.5. Adding a remote Composite Cluster

Adding an entire new cluster provides a significant increase in availability and capacity. The nodes that form the new cluster will be fully aware of the original cluster(s) and will communicate with the existing managers and datasources within the composite cluster.

SUMMARY: On the staging host, update the configuration to include the new hosts and services, and optionally the new composite service you are adding. For example, you could convert a simple non-composite service into a composite service, or add another cluster to an existing composite dataservice.

To add an additional cluster to an existing composite cluster:

  1. On the new host(s), ensure that the prerequisites described in Appendix C, Prerequisites have been followed.

  2. Let's assume that we have a composite cluster dataservice called global with two clusters, east and west, with three nodes each, defined as follows:

    shell> ./tools/tpm configure defaults \
    --application-password=secret \
    --application-port=3306 \
    --application-user=app_user \
    --install-directory=/opt/continuent \
    --replication-password=secret \
    --replication-user=tungsten \
    --skip-validation-check=MySQLPermissionsCheck \
    --start-and-report=true \
    --user=tungsten
    
    shell> ./tools/tpm configure east \
    --connectors=db1,db2,db3 \
    --master=db1 \
    --slaves=db2,db3 \
    --topology=clustered
    
    shell> ./tools/tpm configure west \
    --connectors=db4,db5,db6 \
    --relay-source=east \
    --relay=db4 \
    --slaves=db5,db6 \
    --topology=clustered
    
    shell> ./tools/tpm configure global \
    --composite-datasources=east,west
  3. Locate the staging host and directory, then log in to the staging host and change to the staging directory:

    shell> tpm query staging
    shell> ssh {staging_host_from_above}
    shell> cd {staging_directory_from_above}
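
    For example, if the staging host is db1 and the staging directory is /home/tungsten/continuent-tungsten-4.0.5-3594890 (hypothetical values used here for illustration), you would run:

    shell> ssh db1
    shell> cd /home/tungsten/continuent-tungsten-4.0.5-3594890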
  4. Set the cluster to maintenance mode using cctrl:

    shell> cctrl
    [LOGICAL] / > set policy maintenance
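
    To confirm that the policy change has taken effect, run ls within cctrl; the coordinator line should report MAINTENANCE, for example (output abbreviated):

    [LOGICAL] / > ls

    COORDINATOR[db1:MAINTENANCE:ONLINE]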
  5. Add the definition for the new slave cluster service north:

    shell> ./tools/tpm configure north \
    --connectors=db7,db8,db9 \
    --relay-source=east \
    --relay=db7 \
    --slaves=db8,db9 \
    --topology=clustered
  6. Finally, add the definition for new cluster north to the composite cluster service global:

    shell> ./tools/tpm configure global \
    --composite-datasources=east,west,north
  7. Update the configuration, which will install Tungsten on any new nodes:

    shell> ./tools/tpm update --no-connectors -i

    Using the --no-connectors option updates the current deployment without restarting the existing connectors.
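
    If your release of tpm supports it, you can optionally run a validation pass first, which checks the new configuration against every host without changing anything (the availability of this command in your version is an assumption; check the tpm documentation):

    shell> ./tools/tpm validate-update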

  8. On every node in the original clusters, make sure all replicators are online:

    shell> trepctl online; trepctl services
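
    To check all of the original nodes from a single location, a simple loop such as the following can be used; this assumes passwordless SSH as the tungsten user and that trepctl is on the PATH on each host:

    shell> for h in db1 db2 db3 db4 db5 db6; do echo "== $h =="; ssh $h "trepctl services | grep state"; done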
  9. Update the original cluster via cctrl to recognize the new cluster (i.e. north):

    shell> cctrl -multi
    cctrl> use global
    cctrl> create composite datasource north
    cctrl> ls
    tungsten@db1:~  $ cctrl -multi
    Continuent Tungsten 4.0.5 build 3594890
    east: session established
    
    [LOGICAL] / > set policy maintenance
    
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:MAINTENANCE:ONLINE]
    
    DATASOURCES:
    +----------------------------------------------------------------------------+
    |east(composite master:ONLINE)                                               |
    |STATUS [OK] [2017/01/07 01:44:35 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |west(composite slave:ONLINE)                                                |
    |STATUS [OK] [2017/01/07 01:48:53 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    
    [LOGICAL] /global > create composite datasource north
    CREATED COMPOSITE DATA SOURCE 'north' AS A SLAVE
    
    
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:MAINTENANCE:ONLINE]
    
    DATASOURCES:
    +----------------------------------------------------------------------------+
    |east(composite master:ONLINE)                                               |
    |STATUS [OK] [2017/01/07 01:44:35 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |north(composite slave:ONLINE)                                               |
    |STATUS [OK] [2017/01/07 01:50:32 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |west(composite slave:ONLINE)                                                |
    |STATUS [OK] [2017/01/07 01:48:53 PM UTC]                                    |
    +----------------------------------------------------------------------------+
  10. Go to the relay (master) node of the new cluster (i.e. db7) and provision it from a slave of the original cluster (i.e. db2):

    shell> tungsten_provision_slave --source db2
    tungsten@db7:~  $ tungsten_provision_slave --source db2
    NOTE  >> Put north replication service offline
    NOTE  >> Create a backup of db2 in /opt/continuent/backups/provision_xtrabackup_2017-01-07_13-53_58
    NOTE  >> db2 >> Run innobackupex sending the output to db7:/opt/continuent/backups/provision_xtrabackup_2017-01-07_13-53_58
    NOTE  >> db2 >> Transfer extra files to db7:/opt/continuent/backups/provision_xtrabackup_2017-01-07_13-53_58
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the north replication service online
    NOTE  >> Clear THL and relay logs for the north replication service
  11. Go to a slave node of the new cluster (i.e. db8) and provision it from the relay node of the new cluster (i.e. db7):

    shell> tungsten_provision_slave --source db7
    tungsten@db8:~  $  tungsten_provision_slave --source db7
    NOTE  >> Put north replication service offline
    NOTE  >> Create a backup of db7 in /opt/continuent/backups/provision_xtrabackup_2017-01-07_13-54_71
    NOTE  >> db7 >> Run innobackupex sending the output to db8:/opt/continuent/backups/provision_xtrabackup_2017-01-07_13-54_71
    NOTE  >> db7 >> Transfer extra files to db8:/opt/continuent/backups/provision_xtrabackup_2017-01-07_13-54_71
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the north replication service online
    NOTE  >> Clear THL and relay logs for the north replication service
  12. Go to a slave node (i.e. db9) of the new cluster and provision it from the newly-provisioned slave node of the new cluster (i.e. db8):

    shell> tungsten_provision_slave --source db8
    tungsten@db9:~  $  tungsten_provision_slave --source db8
    NOTE  >> Put north replication service offline
    NOTE  >> Create a backup of db8 in /opt/continuent/backups/provision_xtrabackup_2017-01-07_13-55_78
    NOTE  >> db8 >> Run innobackupex sending the output to db9:/opt/continuent/backups/provision_xtrabackup_2017-01-07_13-55_78
    NOTE  >> db8 >> Transfer extra files to db9:/opt/continuent/backups/provision_xtrabackup_2017-01-07_13-55_78
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the north replication service online
    NOTE  >> Clear THL and relay logs for the north replication service
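
    Before moving on, confirm on each of the new nodes (db7, db8 and db9) that the replicator is ONLINE and that latency is dropping; state and appliedLatency are standard fields in the trepctl status output:

    shell> trepctl status | grep -E 'state|appliedLatency'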
  13. Set the composite cluster to automatic mode using cctrl:

    shell> cctrl -multi
    [LOGICAL] / > set policy automatic
  14. During a period when it is safe to restart the connectors:

    shell> ./tools/tpm promote-connector
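
    After the connectors have been promoted, a quick smoke test is to connect through one of the connectors in the new cluster using the application credentials defined in the configuration above (app_user/secret on port 3306) and ask for the backend hostname; the host returned depends on how the connector routes the session:

    shell> mysql -h db7 -P 3306 -u app_user -psecret -e "SELECT @@hostname;"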

To convert from a single cluster to a composite cluster:

  1. On the new host(s), ensure that the prerequisites described in Appendix C, Prerequisites have been followed.

  2. Let's assume that we have a single cluster dataservice called east with three nodes, defined as follows:

    shell> ./tools/tpm configure defaults \
    --application-password=secret \
    --application-port=3306 \
    --application-user=app_user \
    --install-directory=/opt/continuent \
    --replication-password=secret \
    --replication-user=tungsten \
    --skip-validation-check=MySQLPermissionsCheck \
    --start-and-report=true \
    --user=tungsten
    
    shell> ./tools/tpm configure east \
    --connectors=db1,db2,db3 \
    --master=db1 \
    --slaves=db2,db3 \
    --topology=clustered
  3. Locate the staging host and directory, then log in to the staging host and change to the staging directory:

    shell> tpm query staging
    shell> ssh {staging_host_from_above}
    shell> cd {staging_directory_from_above}
  4. Set the cluster to maintenance mode using cctrl:

    shell> cctrl
    [LOGICAL] / > set policy maintenance
  5. Add the definition for the new slave cluster service west:

    shell> ./tools/tpm configure west \
    --connectors=db4,db5,db6 \
    --relay-source=east \
    --relay=db4 \
    --slaves=db5,db6 \
    --topology=clustered
  6. Finally, add the definition for the composite cluster service global:

    shell> ./tools/tpm configure global \
    --composite-datasources=east,west
  7. Update the configuration, which will install Tungsten on any new nodes:

    shell> ./tools/tpm update --no-connectors -i

    Using the --no-connectors option updates the current deployment without restarting the existing connectors.

  8. On every node in the original cluster, make sure all replicators are online:

    shell> trepctl online; trepctl services
  9. Update the original cluster via cctrl to create the new composite cluster (i.e. global):

    shell> cctrl -multi
    cctrl> create composite dataservice global
    cctrl> use global
    cctrl> create composite datasource east
    cctrl> create composite datasource west
    cctrl> ls
    tungsten@db1:~  $ cctrl -multi
    Continuent Tungsten 4.0.5 build 3594890
    east: session established
    
    [LOGICAL] / > set policy maintenance
    
    [LOGICAL] / > ls
    +----------------------------------------------------------------------------+
    |DATA SERVICES:                                                              |
    +----------------------------------------------------------------------------+
    east
    west
    
    
    [LOGICAL] / > create composite dataservice global
    west: session established
    composite data service 'global' was created
    
    
    [LOGICAL] / > ls
    +----------------------------------------------------------------------------+
    |DATA SERVICES:                                                              |
    +----------------------------------------------------------------------------+
    east
    global
    west
    
    [LOGICAL] / > use global 
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:AUTOMATIC:ONLINE]
    
    DATASOURCES:
    
    
    
    [LOGICAL] /global > create composite datasource east   
    CREATED COMPOSITE DATA SOURCE 'east' AS A MASTER
    
    [LOGICAL] /global > create composite datasource west
    CREATED COMPOSITE DATA SOURCE 'west' AS A SLAVE
    
    
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:AUTOMATIC:ONLINE]
    
    DATASOURCES:
    +----------------------------------------------------------------------------+
    |east(composite master:ONLINE)                                               |
    |STATUS [OK] [2017/01/06 09:52:14 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |west(composite slave:ONLINE)                                                |
    |STATUS [OK] [2017/01/06 09:52:21 PM UTC]                                    |
    +----------------------------------------------------------------------------+
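
    As an additional check from the staging directory, tpm can list the configured dataservices and their types; the global service should now be reported as a composite dataservice (the query dataservices command is assumed to be available in your release):

    shell> ./tools/tpm query dataservices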
  10. Go to the relay (master) node of the new cluster (i.e. db4) and provision it from a slave of the original cluster (i.e. db2):

    shell> tungsten_provision_slave --source db2
    tungsten@db4:~  $ tungsten_provision_slave --source db2
    NOTE  >> Put west replication service offline
    NOTE  >> Create a backup of db2 in /opt/continuent/backups/provision_xtrabackup_2017-01-06_21-47_55
    NOTE  >> db2 >> Run innobackupex sending the output to db4:/opt/continuent/backups/provision_xtrabackup_2017-01-06_21-47_55
    NOTE  >> db2 >> Transfer extra files to db4:/opt/continuent/backups/provision_xtrabackup_2017-01-06_21-47_55
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the west replication service online
    NOTE  >> Clear THL and relay logs for the west replication service
  11. Go to a slave node of the new cluster (i.e. db5) and provision it from the relay node of the new cluster (i.e. db4):

    shell> tungsten_provision_slave --source db4
    tungsten@db5:~  $  tungsten_provision_slave --source db4
    NOTE  >> Put west replication service offline
    NOTE  >> Create a backup of db4 in /opt/continuent/backups/provision_xtrabackup_2017-01-06_21-54_94
    NOTE  >> db4 >> Run innobackupex sending the output to db5:/opt/continuent/backups/provision_xtrabackup_2017-01-06_21-54_94
    NOTE  >> db4 >> Transfer extra files to db5:/opt/continuent/backups/provision_xtrabackup_2017-01-06_21-54_94
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the west replication service online
    NOTE  >> Clear THL and relay logs for the west replication service
  12. Go to a slave node (i.e. db6) of the new cluster and provision it from the newly-provisioned slave node of the new cluster (i.e. db5):

    shell> tungsten_provision_slave --source db5
    tungsten@db6:~  $ tungsten_provision_slave --source db5
    NOTE  >> Put west replication service offline
    NOTE  >> Create a backup of db5 in /opt/continuent/backups/provision_xtrabackup_2017-01-06_21-56_41
    NOTE  >> db5 >> Run innobackupex sending the output to db6:/opt/continuent/backups/provision_xtrabackup_2017-01-06_21-56_41
    NOTE  >> db5 >> Transfer extra files to db6:/opt/continuent/backups/provision_xtrabackup_2017-01-06_21-56_41
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the west replication service online
    NOTE  >> Clear THL and relay logs for the west replication service
  13. Set the composite cluster to automatic mode using cctrl:

    shell> cctrl -multi
    [LOGICAL] / > set policy automatic
  14. During a period when it is safe to restart the connectors:

    shell> ./tools/tpm promote-connector