3.5.6. Converting from a single cluster to a composite cluster

There are two possible scenarios for converting from a single standalone cluster to a composite cluster. The following two sections guide you through an example of each.

3.5.6.1. Convert and add new nodes as a new service

The following steps guide you through updating the configuration to include the new hosts as a new service and convert to a Composite Cluster.

For the purpose of this worked example, we have a single cluster dataservice called east with three nodes, defined as db1, db2 and db3 with db1 as the master.

Our goal is to create a new cluster dataservice called west with three nodes, defined as db4, db5 and db6 with db4 as the relay.

We will configure a new composite dataservice called global.

  1. On the new host(s), ensure the prerequisites outlined in Appendix B, Prerequisites have been followed.

    If configuring via the Staging Installation method, skip straight to Step 4.

  2. On the new host(s), ensure the tungsten.ini contains the correct service blocks for both the existing cluster and the new cluster.
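
    As an illustrative sketch (the east values shown here are assumptions based on this example's topology; always copy the real east block from an existing host), the tungsten.ini on the new hosts would contain stanzas for all three services:

```ini
# Sketch only: the [east] values are assumed from this example's topology.
[east]
topology=clustered
master=db1
members=db1,db2,db3
connectors=db1,db2,db3

[west]
topology=clustered
connectors=db4,db5,db6
relay-source=east
relay=db4
slaves=db5,db6

[global]
composite-datasources=east,west
```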

  3. On the new host(s), install the proper version of clustering software, ensuring that the version being installed matches the version currently installed on the existing hosts.

    Important

    Ensure --start-and-report is set to false in the configuration for the new hosts.
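
    In an INI-based installation, this corresponds to the following entry (shown under [defaults] here as one possible placement):

```ini
[defaults]
...
start-and-report=false
```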

  4. Set the existing cluster to maintenance mode using cctrl:

    shell> cctrl
    [LOGICAL] / > set policy maintenance
  5. Add the definition for the new slave cluster service west and the composite service global to the existing configuration on the existing host(s):

    If configuring via the Staging method, update the configuration from the staging directory as follows:

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> echo The staging USER is `tpm query staging| cut -d: -f1 | cut -d@ -f1`
    The staging USER is tungsten
    
    shell> echo The staging HOST is `tpm query staging| cut -d: -f1 | cut -d@ -f2`
    The staging HOST is db1
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> ssh {STAGING_USER}@{STAGING_HOST}
    shell> cd {STAGING_DIRECTORY}
    shell> ./tools/tpm configure west \
        --connectors=db4,db5,db6 \
        --relay-source=east \
        --relay=db4 \
        --slaves=db5,db6 \
        --topology=clustered
    
    shell> ./tools/tpm configure global \
        --composite-datasources=east,west
    

    Run the tpm command to update the software with the Staging-based configuration:

    shell> ./tools/tpm update --no-connectors --replace-release

    For information about making updates when using a Staging-method deployment, please see Section 9.3.7, “Configuration Changes from a Staging Directory”.

    If configuring via the INI method, add the new service blocks to the configuration:

    shell> vi /etc/tungsten/tungsten.ini
    [west]
    ...
    connectors=db4,db5,db6
    relay-source=east
    relay=db4
    slaves=db5,db6
    topology=clustered
    
    [global]
    ...
    composite-datasources=east,west
    

    Run the tpm command to update the software with the INI-based configuration:

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.0.5-41
    
    shell> cd {STAGING_DIRECTORY}
    
    shell> ./tools/tpm update --no-connectors --replace-release

    For information about making updates when using an INI file, please see Section 9.4.4, “Configuration Changes with an INI file”.

    Note

    Using the optional --no-connectors option updates the current deployment without restarting the existing connectors.

    Note

    Using the --replace-release option ensures the metadata files for the cluster are correctly rebuilt. This parameter MUST be supplied.

  6. On every node in the original cluster, make sure all replicators are online:

    shell> trepctl online; trepctl services
  7. On all the new hosts in the new cluster, start the manager processes only:

    shell> manager start
  8. From the original cluster, use cctrl to check that the new dataservice and composite dataservice have been created, and place the new dataservice into maintenance mode:

    shell> cctrl -multi
    cctrl> ls
    cctrl> use global
    cctrl> ls
    cctrl> datasource east online
    cctrl> set policy maintenance
    tungsten@db1:~  $ cctrl -multi
    Tungsten Clustering 6.1.1 build 129
    east: session established, encryption=false, authentication=false
    
    [LOGICAL] / > ls
    global
      east
      west
    
    [LOGICAL] / > use global
    [LOGICAL] /global > ls
    COORDINATOR[db3:MIXED:ONLINE]
       east:COORDINATOR[db3:MAINTENANCE:ONLINE]
       west:COORDINATOR[db5:AUTOMATIC:ONLINE]
    
    ROUTERS:
    +---------------------------------------------------------------------------------+
    |connector@db1[9493](ONLINE, created=0, active=0)                                 |
    |connector@db2[9341](ONLINE, created=0, active=0)                                 |
    |connector@db3[10675](ONLINE, created=0, active=0)                                |
    +---------------------------------------------------------------------------------+
    
    DATASOURCES:
    +---------------------------------------------------------------------------------+
    |east(composite master:OFFLINE)                                                   |
    |STATUS [OK] [2019/12/09 11:04:17 AM UTC]                                         |
    +---------------------------------------------------------------------------------+
    +---------------------------------------------------------------------------------+
    |west(composite slave:OFFLINE)                                                    |
    |STATUS [OK] [2019/12/09 11:04:17 AM UTC]                                         |
    +---------------------------------------------------------------------------------+
    
    REASON FOR MAINTENANCE MODE: MANUAL OPERATION
    
    [LOGICAL] /global > datasource east online
    composite data source 'east@global' is now ONLINE
    
    [LOGICAL] /global > set policy maintenance
    policy mode is now MAINTENANCE
  9. Start the replicators in the new cluster ensuring they start as OFFLINE:

    shell> replicator start offline
  10. Go to the relay (master) node of the new cluster (i.e. db4) and provision it from a slave of the original cluster (i.e. db2):

    db4-shell> prov-sl.sh -s db2 -m xtrabackup
    source = db2
    method = xtrabackup
    parallel threads = 4
    port = 22
    help =
    Topology = COMPOSITE
  11. Go to each slave node of the new cluster and provision from the relay node of the new cluster (i.e. db4):

    db5-shell> prov-sl.sh -s db4
    source = db4
    method = mysqldump
    parallel threads = 4
    port = 22
    help =
    Topology = COMPOSITE
    db6-shell> prov-sl.sh -s db4
    source = db4
    method = mysqldump
    parallel threads = 4
    port = 22
    help =
    Topology = COMPOSITE
  12. Bring the replicators in the new cluster online:

    shell> trepctl online
  13. From a node in the original cluster (e.g. db1), use cctrl to bring the west datasource online within the composite service, then return the clusters to automatic mode:

    shell> cctrl -multi
    [LOGICAL] / > use global
    [LOGICAL] /global > datasource west online
    [LOGICAL] /global > set policy automatic
  14. Start the connectors associated with the new cluster hosts in west:

    shell> connector start
  15. If --no-connectors was issued during the update, then during a period when it is safe, restart the connectors associated with the original cluster:

    shell> ./tools/tpm promote-connector

3.5.6.2. Convert and move nodes to a new service

This method of conversion is a little more complicated, and the only safe way to accomplish it requires downtime for replication on all nodes.

To achieve this without downtime to your applications, it is recommended that all application activity be isolated to the master host only. Following the conversion, all activity will then be replicated to the slave nodes.

Our example starting cluster has 5 nodes (1 master and 4 slaves) and uses service name alpha. Our target cluster will have 6 nodes (3 per cluster) in 2 member clusters alpha_east and alpha_west in composite service alpha.

This means that we will reuse the existing service name alpha as the name of the new composite service, and create two new service names, one for each cluster (alpha_east and alpha_west).

To convert the above configuration, follow the steps below:

  1. On the new host, ensure the prerequisites outlined in Appendix B, Prerequisites have been followed.

  2. Ensure the cluster is in MAINTENANCE mode. This will prevent the managers from performing any unexpected recovery or failovers during the process.

    cctrl> set policy maintenance
  3. Next, you must stop all services on all existing nodes.

    shell> stopall
  4. If configuring via the INI Installation Method, update tungsten.ini on all original 5 nodes, then copy the file to the new node.

    You will need to create two new services, one for each cluster, and change the original service stanza to represent the composite service. Examples of the complete configuration for both the Staging and INI methods are shown below.

    shell> ./tools/tpm configure defaults \
        --reset \
        --user=tungsten \
        --install-directory=/opt/continuent \
        --profile-script=~/.bash_profile \
        --replication-user=tungsten \
        --replication-password=secret \
        --replication-port=13306 \
        --application-user=app_user \
        --application-password=secret \
        --application-port=3306
    
    shell> ./tools/tpm configure alpha_east \
        --topology=clustered \
        --master=db1 \
        --members=db1,db2,db3 \
        --connectors=db1,db2,db3
    
    shell> ./tools/tpm configure alpha_west \
        --topology=clustered \
        --relay=db4 \
        --members=db4,db5,db6 \
        --connectors=db4,db5,db6 \
        --relay-source=alpha_east
    
    shell> ./tools/tpm configure alpha \
        --composite-datasources=alpha_east,alpha_west
    
    If configuring via the INI method, the complete tungsten.ini would be:

    shell> vi /etc/tungsten/tungsten.ini
    [defaults]
    user=tungsten
    install-directory=/opt/continuent
    profile-script=~/.bash_profile
    replication-user=tungsten
    replication-password=secret
    replication-port=13306
    application-user=app_user
    application-password=secret
    application-port=3306
    
    [alpha_east]
    topology=clustered
    master=db1
    members=db1,db2,db3
    connectors=db1,db2,db3
    
    [alpha_west]
    topology=clustered
    relay=db4
    members=db4,db5,db6
    connectors=db4,db5,db6
    relay-source=alpha_east
    
    [alpha]
    composite-datasources=alpha_east,alpha_west
    
  5. Using your preferred backup/restore method, take a backup of the MySQL database on one of the original nodes and restore it to the new node.

    If preferred, this step can be skipped and the new node provisioned using the supplied provisioning scripts, as explained in Step 10 below.
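
    As one possible sketch only (hostnames, credentials and tool choice are illustrative; xtrabackup or any other consistent backup method works equally well), a mysqldump-based copy might look like:

```shell
# Sketch only -- substitute your own hosts, credentials and backup tool.
# On an original node (e.g. db5), take a consistent dump:
mysqldump --all-databases --single-transaction \
    --user=tungsten --password --port=13306 > backup.sql

# Copy the dump to the new node (db6 in this example) and restore it:
scp backup.sql db6:/tmp/backup.sql
ssh db6 "mysql --user=tungsten --password --port=13306 < /tmp/backup.sql"
```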

  6. Invoke the conversion using the tpm command from the software extraction directory.

    If the installation was configured via the INI method, this command should be run on all 5 original nodes. If configured via the Staging method, it should be run on the staging host only.

    shell> tpm query staging
    shell> cd {software_staging_dir_from_tpm_query}
    shell> ./tools/tpm update --replace-release --force
    shell> rm /opt/continuent/tungsten/cluster-home/conf/cluster/*/datasource/*

    Note

    The use of the --force option is required to force the override of the old properties.

  7. Only if the installation was configured via the INI method, proceed to install the software using the tpm command from the software extraction directory on the new node:

    shell> cd {software_staging_dir}
    shell> ./tools/tpm install

    Note

    Ensure you install the same version of software on the new node, matching exactly the version on the existing 5 nodes.
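
    A simple guard (a hypothetical helper, not part of the Tungsten tooling) makes any mismatch explicit before installing; the version strings would typically come from the release directory name (e.g. tungsten-clustering-6.0.5-41) on each host:

```shell
# Hypothetical helper: compare two version strings, fail on mismatch.
check_versions() {
    if [ "$1" != "$2" ]; then
        echo "version mismatch: existing=$1 new=$2" >&2
        return 1
    fi
    echo "versions match: $1"
}

check_versions "6.0.5-41" "6.0.5-41"
```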

  8. Start all services on all existing nodes.

    shell> startall
  9. Bring the clusters back into AUTOMATIC mode:

    shell> cctrl -multi
    cctrl> use alpha
    cctrl> set policy automatic
    cctrl> exit
  10. If you skipped the backup/restore step above, you now need to provision the database on the new node. To do this, use the prov-sl.sh script to provision the database from one of the existing nodes, for example db5:

    shell> prov-sl.sh -s db5