4.4.3. Upgrade/Convert: From Multi-Site/Active-Active (MSAA) to Composite Active/Passive (CAP)

These steps are designed to guide you in the safe conversion of an existing Multi-Site/Active-Active (MSAA) topology to a Composite Active/Passive (CAP) topology, based on an INI installation.

For details of the difference between these two topologies, please review the following pages:

Warning

It is very important to follow all of the steps below and to ensure full backups are taken when instructed. These steps can be destructive and, without proper care and attention, data loss, data corruption or a split-brain scenario can occur.

Warning

Parallel apply MUST be disabled before starting your upgrade. You may re-enable it once the upgrade has been fully completed. See Section 4.1.5.3, “How to Disable Parallel Replication Safely” and Section 4.1.2, “Enabling Parallel Apply During Install” for more information.

Note

The examples in this section are based on three clusters named 'nyc', 'london' and 'tokyo'.

Each cluster has two dedicated connectors on separate hosts.

The converted cluster will consist of a Composite Service named 'global' and the 'nyc' cluster will be the Active cluster, with 'london' and 'tokyo' as Passive clusters.

If you do not have exactly three clusters, please adjust this procedure to match your environment.

Examples of before and after tungsten.ini files can be downloaded here:

If you are currently installed using a staging-based installation, you must convert to an INI-based installation for this process to be completed with minimal risk and minimal interruption. For notes on how to perform the staging to INI file conversion using the translatetoini.pl script, please visit Section 10.4.6, “Using the translatetoini.pl Script”.

4.4.3.1. Conversion Prerequisites

Warning

Parallel apply MUST be disabled before starting your upgrade. You may re-enable it once the upgrade has been fully completed. See Section 4.1.5.3, “How to Disable Parallel Replication Safely” and Section 4.1.2, “Enabling Parallel Apply During Install” for more information.

  • Obtain the latest Tungsten Cluster software build and place it within /opt/continuent/software

    If you are not upgrading, just converting, then this step is not required since you will already have the extracted software bundle available.

  • Extract the package (see the sketch after this list)

  • The examples below refer to the tungsten_prep_upgrade script; this can be located in the extracted software package within the tools directory.
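
As a rough sketch of the placement and extraction steps above, assuming the downloaded bundle is named tungsten-clustering-7.0.3-141.tar.gz (the exact file name will depend on the build you obtained):

shell> cp tungsten-clustering-7.0.3-141.tar.gz /opt/continuent/software/
shell> cd /opt/continuent/software
shell> tar zxf tungsten-clustering-7.0.3-141.tar.gz
shell> ls tungsten-clustering-7.0.3-141/tools/tungsten_prep_upgrade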

4.4.3.2. Step 1: Backups

Take a full and complete backup of one node - this can be a Replica - which should preferably be performed using one of the following methods (a hedged example follows this list):

  • Percona XtraBackup whilst the database is open

  • A manual backup of all data files after stopping the database instance
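
For example, a minimal Percona XtraBackup run might look like the following. This is only a sketch: it assumes XtraBackup is already installed, that /backups/full exists and is empty, and that the user and password shown are placeholders for your own backup credentials.

shell> xtrabackup --backup --user=backup_user --password=secret --target-dir=/backups/full
shell> xtrabackup --prepare --target-dir=/backups/full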

4.4.3.3. Step 2: Redirect Client Connections

A big difference between Multi-Site/Active-Active (MSAA) and Composite Active/Passive (CAP) is that with MSAA, clients can write into all clusters, whereas with CAP, clients can only write into a single cluster.

To be able to complete this conversion process with minimal interruption and risk, it is essential that clients are redirected so they are only able to write into a single cluster. This cluster will become the ACTIVE cluster after the conversion. For the purpose of this procedure, we will use the 'nyc' cluster for this role.

After redirecting your client applications to connect through the connectors associated with the 'nyc' cluster, stop the connectors associated with the remaining clusters as an extra safeguard against writes happening.

On every connector node associated with london and tokyo:

shell> connector stop

4.4.3.4. Step 3: Enter Maintenance Mode

Enable Maintenance mode on all clusters using the cctrl command:

shell> cctrl
cctrl> set policy maintenance

4.4.3.5. Step 4: Stop the Cross-site Replicators

Important

Typically the cross-site replicators will be installed within /opt/replicator; if you have installed them in a different location, you will need to pass this to the script in the examples below using the --path option.
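
For example, a non-default install location would be passed as shown below. This is only a sketch; /opt/replicator-alt is a hypothetical path used purely for illustration:

shell> ./tungsten_prep_upgrade --path /opt/replicator-alt --service london --offline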

  1. The following commands tell the replicators to go offline at a specific point, in this case when they receive an explicit heartbeat. This ensures that all the replicators stop at the same sequence number and binary log position. The replicators will NOT go offline until the explicit heartbeat has been issued later in this step.

    • On every nyc node:

      shell> ./tungsten_prep_upgrade -o 
      ~or~
      shell> ./tungsten_prep_upgrade --service london --offline
      shell> ./tungsten_prep_upgrade --service tokyo --offline
    • On every london node:

      shell> ./tungsten_prep_upgrade -o 
      ~or~
      shell> ./tungsten_prep_upgrade --service nyc --offline
      shell> ./tungsten_prep_upgrade --service tokyo --offline
    • On every tokyo node:

      shell> ./tungsten_prep_upgrade -o 
      ~or~
      shell> ./tungsten_prep_upgrade --service london --offline
      shell> ./tungsten_prep_upgrade --service nyc --offline
  2. Next, on the Primary host within each cluster, issue the heartbeat by executing the following with the cluster-specific trepctl, typically installed in /opt/continuent:

    shell> trepctl heartbeat -name offline_for_upg

    Ensure that every cross-site replicator on every node is now in the OFFLINE:NORMAL state:

    shell> mmtrepctl status
    ~or~
    shell> mmtrepctl --service {servicename} status
  3. Capture the position of the cross-site replicators on all nodes in all clusters.

    The service name provided should be the name of the remote service(s) for this cluster, so for example in the london cluster you get the positions for nyc and tokyo, and in nyc you get the position for london and tokyo, etc.

    • On every london node:

      shell> ./tungsten_prep_upgrade -g
      ~or~
      shell> ./tungsten_prep_upgrade --service nyc --get
      (NOTE: saves to ~/position-nyc-YYYYMMDDHHMMSS.txt)
      shell> ./tungsten_prep_upgrade --service tokyo --get
      (NOTE: saves to ~/position-tokyo-YYYYMMDDHHMMSS.txt)
    • On every nyc node:

      shell> ./tungsten_prep_upgrade -g
      ~or~
      shell> ./tungsten_prep_upgrade --service london --get
      (NOTE: saves to ~/position-london-YYYYMMDDHHMMSS.txt)
      shell> ./tungsten_prep_upgrade --service tokyo --get
      (NOTE: saves to ~/position-tokyo-YYYYMMDDHHMMSS.txt)
    • On every tokyo node:

      shell> ./tungsten_prep_upgrade -g
      ~or~
      shell> ./tungsten_prep_upgrade --service london --get
      (NOTE: saves to ~/position-london-YYYYMMDDHHMMSS.txt)
      shell> ./tungsten_prep_upgrade --service nyc --get
      (NOTE: saves to ~/position-nyc-YYYYMMDDHHMMSS.txt)
  4. Finally, to complete this step, stop the cross-site replicators on all nodes:

    shell> ./tungsten_prep_upgrade --stop 

4.4.3.6. Step 5: Export the tracking schema databases

On every node in each intended Passive cluster (london and tokyo), export the tracking schema associated with the intended Active cluster (nyc).

Note that the generated dump file is called tungsten_global.dmp; 'global' refers to the name of the intended Composite Cluster service, so if you choose a different service name, change this accordingly. A quick sanity-check sketch follows the list below.

  • On every london node:

    shell> mysqldump --opt --single-transaction tungsten_nyc > ~/tungsten_global.dmp
  • On every tokyo node:

    shell> mysqldump --opt --single-transaction tungsten_nyc > ~/tungsten_global.dmp
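
As a quick sanity check on each node, confirm that the dump file exists and is not empty; this is only a sketch:

shell> ls -lh ~/tungsten_global.dmp
shell> grep -c 'CREATE TABLE' ~/tungsten_global.dmp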

4.4.3.7. Step 6: Uninstall the Cross-site Replicators

To uninstall the cross-site replicators, execute the following on every node:

shell> cd {replicator software path}
shell> tools/tpm uninstall --i-am-sure

4.4.3.8. Step 7: Create Composite Tracking Schema

In this step, we pre-create the database for the composite service tracking schema. We are using global as the service name in this example; if you choose a different Composite service name, adjust this accordingly.

On every node in all clusters:

shell> mysql -e 'set session sql_log_bin=0; create database tungsten_global'

4.4.3.9. Step 8: Reload the tracking schema for Passive clusters

This step reloads the tracking schema associated with the intended Active cluster (nyc) into the tungsten_global database we created in the previous step. This should ONLY be carried out within the intended Passive clusters at this stage.

We DO NOT want the reloading of this schema to appear in the binary logs on the Primary; therefore, the reload needs to be performed on each node individually (a verification sketch follows the list below):

  • On every london node:

    shell> mysql -e 'set session sql_log_bin=0; use tungsten_global; source ~/tungsten_global.dmp;'
  • On every tokyo node:

    shell> mysql -e 'set session sql_log_bin=0; use tungsten_global; source ~/tungsten_global.dmp;'
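
As an optional sanity check after the reload, the tracking position can be inspected directly. This is only a sketch and assumes the standard trep_commit_seqno tracking table; the values reported should match the position of the nyc service at the time the dump was taken:

shell> mysql -e 'select seqno, source_id, update_timestamp from tungsten_global.trep_commit_seqno'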

4.4.3.10. Step 9: Stop local cluster Replicators

On every node in every cluster:

shell> replicator stop

Warning

The effect of this step will now mean that only the Primary node in the Active cluster will be up to date with ongoing data changes. You must ensure that your applications handle this accordingly until the replicators are restarted in Step 15 and Step 17.

4.4.3.11. Step 10: Remove THL

Warning

This step, if not followed correctly, could be destructive to the entire conversion. It is CRITICAL that this step is NOT performed on the intended Active cluster (nyc).

By default, THL files will be located within /opt/continuent/thl; if you have configured this in a different location, you will need to adjust the paths below accordingly. A quick check sketch follows the list below.

  • On every london node:

    shell> cd /opt/continuent/thl
    shell> rm */thl*
  • On every tokyo node:

    shell> cd /opt/continuent/thl
    shell> rm */thl*
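
As a quick check that the THL has actually been cleared on these nodes, list the contents of the per-service directories; this is only a sketch, using the default THL location:

shell> ls -l /opt/continuent/thl/*/

The per-service directories should no longer contain any thl files.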

4.4.3.12. Step 11: Export the tracking schema database on Active cluster

On every node within the intended Active cluster (nyc), export the tracking schema associated with the local service.

Note that the generated dump file is called tungsten_global.dmp; 'global' refers to the name of the intended Composite Cluster service, so if you choose a different service name, change this accordingly.

  • On every nyc node:

    shell> mysqldump --opt --single-transaction tungsten_nyc > ~/tungsten_global.dmp

4.4.3.13. Step 12: Reload the tracking schema for Active cluster

This step reloads the tracking schema associated with the intended Active cluster (nyc) into the tungsten_global database we created in Step 7.

We DO NOT want the reloading of this schema to appear in the binary logs on the Primary; therefore, the reload needs to be performed on each node individually:

  • On every nyc node:

    shell> mysql -e 'set session sql_log_bin=0; use tungsten_global; source ~/tungsten_global.dmp;'

4.4.3.14. Step 13: Update Configuration

Update /etc/tungsten/tungsten.ini to a valid Composite Active/Passive config. An example of a valid config is as follows; a sample can also be downloaded from Section 4.4.3.1, “Conversion Prerequisites” above:

Important

Within a Composite Active/Passive topology, the ini file must be identical on EVERY node, including Connector nodes. A consistency-check sketch follows the example config below.

[defaults]
user=tungsten
home-directory=/opt/continuent
application-user=app_user
application-password=secret
application-port=3306
profile-script=~/.bash_profile
replication-user=tungsten
replication-password=secret
mysql-allow-intensive-checks=true
skip-validation-check=THLSchemaChangeCheck

[nyc]
topology=clustered
master=db1
slaves=db2,db3
connectors=nyc-conn1,nyc-conn2

[london]
topology=clustered
master=db4
slaves=db5,db6
connectors=ldn-conn1,ldn-conn2
relay-source=nyc

[tokyo]
topology=clustered
master=db7
slaves=db8,db9
connectors=tky-conn1,tky-conn2
relay-source=nyc

[global]
composite-datasources=nyc,london,tokyo
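
Because the file must match on every node, a simple consistency check is to compare checksums across all hosts before proceeding. This is only a sketch, assuming password-less ssh as the tungsten user and the example host names used in the config above:

shell> for h in db1 db2 db3 db4 db5 db6 db7 db8 db9 \
         nyc-conn1 nyc-conn2 ldn-conn1 ldn-conn2 tky-conn1 tky-conn2; \
       do ssh $h md5sum /etc/tungsten/tungsten.ini; done

All hosts should report the same checksum.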

4.4.3.15. Step 14: Install the Software on Active Cluster

Validate and install the new release on all nodes in the Active (nyc) cluster only:

shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm validate-update

If validation shows no errors, run the install:

shell> tools/tpm update --replace-release

4.4.3.16. Step 15: Start Local Replicators on Active cluster

After the installation is complete on all nodes in the Active cluster, restart the replicator services:

shell> replicator start

After restarting, check the status of the replicator using trepctl and confirm that all replicators are ONLINE (a quick filtering sketch follows the command below):

shell> trepctl status
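
As a quick way to confirm the state of every local service on a node, the output can be filtered or summarised; this is only a sketch:

shell> trepctl status | grep state
shell> trepctl services

Each service should report a state of ONLINE before you continue.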

4.4.3.17. Step 16: Install the Software on remaining Clusters

Validate and install the new release on all nodes in the remaining Passive clusters (london and tokyo):

Important

The update should be performed on the Primary nodes within each cluster first; validation will report an error that the roles conflict (Primary vs Relay). This is expected, and to override this warning the -f option should be used on the Primary nodes only.

shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm validate-update

If validation shows no errors, run the install:

On Primary Nodes:
shell> tools/tpm update --replace-release -f

On Replica Nodes:
shell> tools/tpm update --replace-release

4.4.3.18. Step 17: Start Local Replicators on remaining clusters

After the installation is complete on all nodes in the remaining Passive clusters, restart the replicator services:

shell> replicator start

After restarting, check the status of the replicator using trepctl and check that all replicators are ONLINE:

shell> trepctl status

4.4.3.19. Step 18: Convert Datasource roles for Passive clusters

Following the upgrades, there are a number of "clean-up" steps that we need to perform within cctrl to ensure the datasource roles have been converted from the previous "master" roles to "relay" roles.

The following steps can be performed in a single cctrl session initiated from any node within any cluster; an optional verification sketch follows the session commands below.

shell> cctrl

Connect to Active cluster
cctrl> use nyc

Check Status and verify all nodes online
cctrl> ls

Connect to COMPOSITE service
cctrl> use global

Place Active service online
cctrl> datasource nyc online

Connect to london Passive service
cctrl> use london

Convert old Primary to relay
cctrl> set force true
cctrl> datasource oldPrimaryhost offline
cctrl> datasource oldPrimaryhost relay

Repeat on tokyo Passive service
cctrl> use tokyo
cctrl> set force true
cctrl> datasource oldPrimaryhost offline
cctrl> datasource oldPrimaryhost relay

Connect to COMPOSITE service
cctrl> use global

Place Passive services online
cctrl> datasource london online
cctrl> datasource tokyo online

Place all clusters into AUTOMATIC
cctrl> set policy automatic
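
As an optional final verification, the composite service can be checked from the same cctrl session; the commands below are the same ones used earlier in this step, and all datasources should now report as online:

cctrl> use global
cctrl> ls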

4.4.3.20. Step 19: Upgrade the Software on Connectors

Validate and install the new release on all connector nodes:

shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm validate-update

If validation shows no errors, run the install:

shell> tools/tpm update --replace-release

After upgrading previously stopped connectors, you will need to restart the process:

shell> connector restart

Warning

Upgrading a running connector will initiate a restart of the connector services; this will result in any active connections being terminated. Therefore, care should be taken with this process, and client redirection should be handled accordingly prior to any connector upgrade/restart.