3.5.5. Adding a remote Composite Cluster

Adding an entire new cluster provides a significant increase in availability and capacity. The nodes that form the new cluster will be fully aware of the original cluster(s) and will communicate with the existing managers and datasources within the cluster.

The following steps guide you through updating the configuration to include the new hosts and services you are adding.

  1. On the new host(s), ensure that the steps in Appendix B, Prerequisites have been followed.

  2. Let's assume that we have a composite cluster dataservice called global, consisting of two clusters, east and west, each with three nodes.

    In this worked example, we show how to add an additional cluster service called north with three new nodes.

  3. Set the cluster to maintenance mode using cctrl:

    shell> cctrl
    [LOGICAL] / > set policy maintenance
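    You can verify the policy change from within cctrl; the COORDINATOR line in the ls output should now report MAINTENANCE, for example:

    shell> cctrl
    [LOGICAL] / > ls
    
    COORDINATOR[db1:MAINTENANCE:ONLINE]
    ...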
  4. Using the following as an example, update the configuration to include the new cluster and the additional composite service block.

    For a Staging-method deployment:

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> echo The staging USER is `tpm query staging| cut -d: -f1 | cut -d@ -f1`
    The staging USER is tungsten
    
    shell> echo The staging HOST is `tpm query staging| cut -d: -f1 | cut -d@ -f2`
    The staging HOST is db1
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> ssh {STAGING_USER}@{STAGING_HOST}
    shell> cd {STAGING_DIRECTORY}
    shell> ./tools/tpm configure north \
        --connectors=db7,db8,db9 \
        --relay-source=east \
        --relay=db7 \
        --slaves=db8,db9 \
        --topology=clustered
    
    shell> ./tools/tpm configure global \
        --composite-datasources=east,west,north
    

    Run the tpm command to update the software with the Staging-based configuration:

    shell> ./tools/tpm update --no-connectors

    For information about making updates when using a Staging-method deployment, please see Section 9.3.7, “Configuration Changes from a Staging Directory”.

    For an INI-method deployment, edit the INI file instead:

    shell> vi /etc/tungsten/tungsten.ini
    [north]
    ...
    connectors=db7,db8,db9
    relay-source=east
    relay=db7
    slaves=db8,db9
    topology=clustered
    
    [global]
    ...
    composite-datasources=east,west,north
    

    Run the tpm command to update the software with the INI-based configuration:

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> cd {STAGING_DIRECTORY}
    
    shell> ./tools/tpm update --no-connectors

    For information about making updates when using an INI file, please see Section 9.4.4, “Configuration Changes with an INI file”.
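    Whichever method you used, you can optionally confirm that the north service and the updated composite service are now part of the active configuration. One way, shown here as a sketch with output abbreviated, is tpm reverse, which prints the current configuration as the equivalent tpm commands:

    shell> tpm reverse
    ...
    tools/tpm configure north \
    ...
    tools/tpm configure global \
    ...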

  5. Using the --no-connectors option updates the current deployment without restarting the existing connectors; they will be restarted in the final step, when it is safe to do so.

  6. On every node in the original clusters, make sure all replicators are online:

    shell> trepctl online; trepctl services
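    To check all of the original nodes in one pass rather than logging in to each host, a simple ssh loop can be used; a minimal sketch, assuming passwordless ssh between the hosts and that db1 through db6 are the original cluster nodes (hostnames illustrative):

    shell> for h in db1 db2 db3 db4 db5 db6; do ssh $h 'trepctl online; trepctl services'; done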
  7. Update the original clusters via cctrl so that they recognize the new cluster (in this example, north):

    shell> cctrl -multi
    cctrl> use global
    cctrl> create composite datasource north
    cctrl> ls
    tungsten@db1:~  $ cctrl -multi
    Continuent Tungsten 6.0.5 build 41
    east: session established
    
    [LOGICAL] / > set policy maintenance
    
    [LOGICAL] / > use global
    
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:MAINTENANCE:ONLINE]
    
    DATASOURCES:
    +----------------------------------------------------------------------------+
    |east(composite master:ONLINE)                                               |
    |STATUS [OK] [2019/01/07 01:44:35 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |west(composite slave:ONLINE)                                                |
    |STATUS [OK] [2019/01/07 01:48:53 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    
    [LOGICAL] /global > create composite datasource north
    CREATED COMPOSITE DATA SOURCE 'north' AS A SLAVE
    
    
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:MAINTENANCE:ONLINE]
    
    DATASOURCES:
    +----------------------------------------------------------------------------+
    |east(composite master:ONLINE)                                               |
    |STATUS [OK] [2019/01/07 01:44:35 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |north(composite slave:ONLINE)                                               |
    |STATUS [OK] [2019/01/07 01:50:32 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |west(composite slave:ONLINE)                                                |
    |STATUS [OK] [2019/01/07 01:48:53 PM UTC]                                    |
    +----------------------------------------------------------------------------+
  8. Go to the relay (master) node of the new cluster (in this example, db7) and provision it from a slave of the original cluster (db2):

    shell> tungsten_provision_slave --source db2
    tungsten@db7:~  $ tungsten_provision_slave --source db2
    NOTE  >> Put north replication service offline
    NOTE  >> Create a backup of db2 in /opt/continuent/backups/provision_xtrabackup_2019-01-07_13-53_58
    NOTE  >> db2 >> Run innobackupex sending the output to db7:/opt/continuent/backups/provision_xtrabackup_2019-01-07_13-53_58
    NOTE  >> db2 >> Transfer extra files to db7:/opt/continuent/backups/provision_xtrabackup_2019-01-07_13-53_58
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the north replication service online
    NOTE  >> Clear THL and relay logs for the north replication service
  9. Go to a slave node of the new cluster (in this example, db8) and provision it from the relay node of the new cluster (db7):

    shell> tungsten_provision_slave --source db7
    tungsten@db8:~  $  tungsten_provision_slave --source db7
    NOTE  >> Put north replication service offline
    NOTE  >> Create a backup of db7 in /opt/continuent/backups/provision_xtrabackup_2019-01-07_13-54_71
    NOTE  >> db7 >> Run innobackupex sending the output to db8:/opt/continuent/backups/provision_xtrabackup_2019-01-07_13-54_71
    NOTE  >> db7 >> Transfer extra files to db8:/opt/continuent/backups/provision_xtrabackup_2019-01-07_13-54_71
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the north replication service online
    NOTE  >> Clear THL and relay logs for the north replication service
  10. Go to the remaining slave node of the new cluster (in this example, db9) and provision it from the newly-provisioned slave node (db8):

    shell> tungsten_provision_slave --source db8
    tungsten@db9:~  $  tungsten_provision_slave --source db8
    NOTE  >> Put north replication service offline
    NOTE  >> Create a backup of db8 in /opt/continuent/backups/provision_xtrabackup_2019-01-07_13-55_78
    NOTE  >> db8 >> Run innobackupex sending the output to db9:/opt/continuent/backups/provision_xtrabackup_2019-01-07_13-55_78
    NOTE  >> db8 >> Transfer extra files to db9:/opt/continuent/backups/provision_xtrabackup_2019-01-07_13-55_78
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the north replication service online
    NOTE  >> Clear THL and relay logs for the north replication service
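    With all three nodes provisioned, it is worth confirming that the new cluster is healthy before returning to automatic policy; for example, from any north node (a sketch; prompts and output abbreviated and may vary by version):

    shell> trepctl services
    ...
    shell> cctrl
    [LOGICAL] /north > ls
    ...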
  11. Set the composite cluster to automatic mode using cctrl:

    shell> cctrl -multi
    [LOGICAL] / > set policy automatic
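    As with the earlier policy change, you can confirm via the ls output in cctrl; the COORDINATOR line should now report AUTOMATIC, for example:

    [LOGICAL] / > ls
    
    COORDINATOR[db1:AUTOMATIC:ONLINE]
    ...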
  12. During a period when it is safe to restart the connectors, promote them to the new configuration:

    shell> ./tools/tpm promote-connector
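    The promote-connector operation restarts each connector process with the updated configuration. To confirm that a connector came back up, its control script can be queried on any connector host; a minimal sketch, assuming the connector script is in the path:

    shell> connector status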