3.5.6. Converting from a single cluster to a composite cluster

There are two possible scenarios for converting a single standalone cluster to a composite cluster. The following two sections walk through an example of each.

3.5.6.1. Convert and add new nodes as a new service

The following steps guide you through updating the configuration to include the new hosts as a new service and converting the deployment to a composite cluster.

  1. On the new host(s), ensure that the steps in Appendix B, Prerequisites have been followed.

  2. Let's assume that we have a single cluster dataservice called east with three nodes, db1, db2, and db3, with db1 as the master.

  3. Set the cluster to maintenance mode using cctrl:

    shell> cctrl
    [LOGICAL] / > set policy maintenance
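
    If you prefer to script this step, cctrl can also be driven non-interactively; a minimal sketch, assuming cctrl accepts commands on standard input:

    shell> echo "set policy maintenance" | cctrl   # assumes cctrl reads piped input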
  4. Add the definition for the new slave cluster service west and create the composite service global:

    The following examples show the same configuration change made first via the Staging method and then via an INI file.

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> echo The staging USER is `tpm query staging| cut -d: -f1 | cut -d@ -f1`
    The staging USER is tungsten
    
    shell> echo The staging HOST is `tpm query staging| cut -d: -f1 | cut -d@ -f2`
    The staging HOST is db1
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> ssh {STAGING_USER}@{STAGING_HOST}
    shell> cd {STAGING_DIRECTORY}
    shell> ./tools/tpm configure west \
        --connectors=db4,db5,db6 \
        --relay-source=east \
        --relay=db4 \
        --slaves=db5,db6 \
        --topology=clustered
    
    shell> ./tools/tpm configure global \
        --composite-datasources=east,west
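
    To sanity-check the staged configuration before applying it, tpm reverse prints the tpm commands that would recreate the current settings:

    shell> ./tools/tpm reverse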
    

    Run the tpm command to update the software with the Staging-based configuration:

    shell> ./tools/tpm update --no-connectors

    For information about making updates when using a Staging-method deployment, please see Section 9.3.7, “Configuration Changes from a Staging Directory”.

    For an INI-based deployment, update the configuration on every node:

    shell> vi /etc/tungsten/tungsten.ini
    [west]
    ...
    connectors=db4,db5,db6
    relay-source=east
    relay=db4
    slaves=db5,db6
    topology=clustered
    
    [global]
    ...
    composite-datasources=east,west
    

    Run the tpm command to update the software with the INI-based configuration:

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> cd {STAGING_DIRECTORY}
    
    shell> ./tools/tpm update --no-connectors

    For information about making updates when using an INI file, please see Section 9.4.4, “Configuration Changes with an INI file”.

  5. The --no-connectors option used in the previous step updates the current deployment without restarting the existing connectors; they will be restarted later, in the final step, once it is safe to do so.
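
    To verify that the existing connectors were indeed left running, list the cluster state from any node and check that the connector (router) entries still show ONLINE; a sketch, assuming cctrl accepts piped input:

    shell> echo "ls" | cctrl   # assumes cctrl reads piped input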

  6. On every node in the original cluster, make sure all replicators are online:

    shell> trepctl online; trepctl services
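
    Rather than logging in to each node in turn, the same check can be run from a single host; a sketch using this example's host names, assuming passwordless SSH as the tungsten user:

    shell> for h in db1 db2 db3; do ssh $h 'trepctl services | grep -E "serviceName|state"'; done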
  7. Update the old cluster via cctrl to create the new composite cluster (i.e. global):

    shell> cctrl -multi
    cctrl> create composite dataservice global
    cctrl> use global
    cctrl> create composite datasource east
    cctrl> create composite datasource west
    cctrl> ls
    Below is a sample cctrl session:

    tungsten@db1:~  $ cctrl -multi
    Continuent Tungsten 6.0.5 build 41
    east: session established
    
    [LOGICAL] / > set policy maintenance
    
    [LOGICAL] / > ls
    +----------------------------------------------------------------------------+
    |DATA SERVICES:                                                              |
    +----------------------------------------------------------------------------+
    east
    west
    
    
    [LOGICAL] / > create composite dataservice global
    west: session established
    composite data service 'global' was created
    
    
    [LOGICAL] / > ls
    +----------------------------------------------------------------------------+
    |DATA SERVICES:                                                              |
    +----------------------------------------------------------------------------+
    east
    global
    west
    
    [LOGICAL] / > use global
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:AUTOMATIC:ONLINE]
    
    DATASOURCES:
    
    [LOGICAL] /global > create composite datasource east
    CREATED COMPOSITE DATA SOURCE 'east' AS A MASTER
    
    [LOGICAL] /global > create composite datasource west
    CREATED COMPOSITE DATA SOURCE 'west' AS A SLAVE
    
    
    [LOGICAL] /global > ls
    
    COORDINATOR[db1:AUTOMATIC:ONLINE]
    
    DATASOURCES:
    +----------------------------------------------------------------------------+
    |east(composite master:ONLINE)                                               |
    |STATUS [OK] [2019/01/06 09:52:14 PM UTC]                                    |
    +----------------------------------------------------------------------------+
    
    +----------------------------------------------------------------------------+
    |west(composite slave:ONLINE)                                                |
    |STATUS [OK] [2019/01/06 09:52:21 PM UTC]                                    |
    +----------------------------------------------------------------------------+
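
    If the step needs to be repeatable, the same session can be scripted with a here-document; a sketch, assuming cctrl accepts commands on standard input:

    shell> cctrl -multi << 'EOF'   # assumes cctrl reads piped input
    set policy maintenance
    create composite dataservice global
    use global
    create composite datasource east
    create composite datasource west
    ls
    EOF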
  8. Go to the relay (master) node of the new cluster (i.e. db4) and provision it from a slave of the original cluster (i.e. db2):

    shell> tungsten_provision_slave --source db2
    tungsten@db4:~  $ tungsten_provision_slave --source db2
    NOTE  >> Put west replication service offline
    NOTE  >> Create a backup of db2 in /opt/continuent/backups/provision_xtrabackup_2019-01-06_21-47_55
    NOTE  >> db2 >> Run innobackupex sending the output to db4:/opt/continuent/backups/provision_xtrabackup_2019-01-06_21-47_55
    NOTE  >> db2 >> Transfer extra files to db4:/opt/continuent/backups/provision_xtrabackup_2019-01-06_21-47_55
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the west replication service online
    NOTE  >> Clear THL and relay logs for the west replication service
  9. Go to a slave node of the new cluster (i.e. db5) and provision it from the relay node of the new cluster (i.e. db4):

    shell> tungsten_provision_slave --source db4
    tungsten@db5:~  $  tungsten_provision_slave --source db4
    NOTE  >> Put west replication service offline
    NOTE  >> Create a backup of db4 in /opt/continuent/backups/provision_xtrabackup_2019-01-06_21-54_94
    NOTE  >> db4 >> Run innobackupex sending the output to db5:/opt/continuent/backups/provision_xtrabackup_2019-01-06_21-54_94
    NOTE  >> db4 >> Transfer extra files to db5:/opt/continuent/backups/provision_xtrabackup_2019-01-06_21-54_94
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the west replication service online
    NOTE  >> Clear THL and relay logs for the west replication service
  10. Go to a slave node (i.e. db6) of the new cluster and provision it from the newly-provisioned slave node of the new cluster (i.e. db5):

    shell> tungsten_provision_slave --source db5
    tungsten@db6:~  $ tungsten_provision_slave --source db5
    NOTE  >> Put west replication service offline
    NOTE  >> Create a backup of db5 in /opt/continuent/backups/provision_xtrabackup_2019-01-06_21-56_41
    NOTE  >> db5 >> Run innobackupex sending the output to db6:/opt/continuent/backups/provision_xtrabackup_2019-01-06_21-56_41
    NOTE  >> db5 >> Transfer extra files to db6:/opt/continuent/backups/provision_xtrabackup_2019-01-06_21-56_41
    NOTE  >> Prepare the files for MySQL to run
    NOTE  >> Stop MySQL and empty all data directories
    NOTE  >> Stop the MySQL service
    NOTE  >> Transfer data files to the MySQL data directory
    NOTE  >> Start the MySQL service
    NOTE  >> Backup and restore complete
    NOTE  >> Put the west replication service online
    NOTE  >> Clear THL and relay logs for the west replication service
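
    Before returning the cluster to automatic mode, confirm that all three new nodes report an ONLINE replicator for the west service; a sketch using this example's host names:

    shell> for h in db4 db5 db6; do ssh $h 'trepctl services | grep -E "serviceName|state"'; done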
  11. Set the composite cluster to automatic mode using cctrl:

    shell> cctrl -multi
    [LOGICAL] / > set policy automatic
  12. During a period when it is safe to restart the connectors, run the following from the staging directory:

    shell> ./tools/tpm promote-connector

3.5.6.2. Convert and move nodes to a new service

Our example starting cluster has 5 nodes (1 master and 4 slaves) and uses service name alpha. Our target cluster will have 6 nodes (3 per cluster) in 2 member clusters alpha_east and alpha_west in composite service alpha.

This means that we will reuse the existing service name alpha as the name of the new composite service, and create two new service names, one for each cluster (alpha_east and alpha_west).

To convert the above configuration, follow the steps below:

  1. First, you must stop all services on all existing nodes:

    shell> stopall
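
    stopall must be run on every node; from a single host, a loop can save time. A sketch, assuming passwordless SSH as the tungsten user and that stopall is on each remote PATH:

    shell> for h in db1 db2 db3 db4 db5 db6; do ssh $h stopall; done   # assumes stopall is on the remote PATH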
  2. Update tungsten.ini on all nodes.

    Create the two new services and put the correct information into all three stanzas.

    For example, below is an INI file extract example for our target composite cluster with 6 nodes, showing the two new service stanzas and the change required to the original alpha stanza:

    shell> vi /etc/tungsten/tungsten.ini
    [alpha_east]
    topology=clustered
    master=db1
    members=db1,db2,db3
    connectors=db1,db2,db3
    
    [alpha_west]
    topology=clustered
    relay=db4
    members=db4,db5,db6
    connectors=db4,db5,db6
    relay-source=alpha_east
    
    [alpha]
    composite-datasources=alpha_east,alpha_west
    
  3. Invoke the conversion using the tpm command from the software extraction directory. The final rm command removes the old cached datasource definitions so that the managers will re-create them for the new services:

    shell> tpm query staging
    shell> cd {software_staging_dir_from_tpm_query}
    shell> ./tools/tpm update --replace-release
    shell> rm /opt/continuent/tungsten/cluster-home/conf/cluster/*/datasource/*
  4. Finally, start all services on all existing nodes.

    shell> startall
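
    Once all services are back up, confirm the new composite structure from any node; the top-level listing should now show alpha, alpha_east, and alpha_west. A sketch, assuming cctrl accepts piped input:

    shell> echo "ls" | cctrl -multi   # assumes cctrl reads piped input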