These steps are designed to guide you in the safe conversion of an existing Multi-Site/Active-Active (MSAA) topology to a Composite Active/Passive (CAP) topology, based on an INI installation.
For details of the difference between these two topologies, please review the following pages:
It is very important to follow all of the steps below and ensure full backups are taken when instructed. These steps can be destructive, and without proper care and attention, data loss, data corruption or a split-brain scenario can occur.
Parallel apply MUST be disabled before starting your upgrade. You may re-enable it once the upgrade has been fully completed. See Section 4.1.5.3, “How to Disable Parallel Replication Safely” and Section 4.1.2, “Enabling Parallel Apply During Install” for more information.
The examples in this section are based on three clusters named 'nyc', 'london' and 'tokyo'.
Each cluster has two dedicated connectors on separate hosts.
The converted cluster will consist of a Composite Service named 'global' and the 'nyc' cluster will be the Active cluster, with 'london' and 'tokyo' as Passive clusters.
If you do not have exactly three clusters, please adjust this procedure to match your environment.
Examples of before and after tungsten.ini files can be downloaded here:
If you are currently installed using a staging-based installation, you must convert to an INI-based installation for this process to be completed with minimal risk and minimal interruption. For notes on how to perform the staging to INI file conversion using the translatetoini.pl script, please visit Section 10.4.6, “Using the translatetoini.pl Script”.
Parallel apply MUST be disabled before starting your upgrade. You may re-enable it once the upgrade has been fully completed. See Section 4.1.5.3, “How to Disable Parallel Replication Safely” and Section 4.1.2, “Enabling Parallel Apply During Install” for more information.
Obtain the latest Tungsten Cluster software build and place it within /opt/continuent/software
If you are not upgrading, just converting, then this step is not required since you will already have the extracted software bundle available.
Extract the package
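As an illustration, assuming the downloaded bundle matches the tungsten-clustering-7.1.4-10 release referenced later in this procedure and has been placed in /opt/continuent/software, the extraction might look like the following (adjust the file name to match your actual download):
shell> cd /opt/continuent/software
shell> tar zxvf tungsten-clustering-7.1.4-10.tar.gz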
The examples below refer to the tungsten_prep_upgrade script, which can be found in the tools directory of the extracted software package.
Take a full and complete backup of one node - this can be a Replica - and it should preferably be performed using one of the following methods (an illustrative xtrabackup sketch follows this list):
Percona XtraBackup whilst the database is open
Manual backup of all datafiles after stopping the database instance
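As a sketch only, a hot backup with Percona XtraBackup might look like the following; the target directory and credentials shown here are assumptions for the example and should be replaced with values appropriate to your environment:
shell> xtrabackup --backup --user=backup_user --password=secret --target-dir=/backups/pre-conversion
shell> xtrabackup --prepare --target-dir=/backups/pre-conversion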
A big difference between Multi-Site/Active-Active (MSAA) and Composite Active/Passive (CAP) is that with MSAA, clients can write into all clusters, whereas with CAP clients can only write into a single cluster.
To be able to complete this conversion process with minimal interruption and risk, it is essential that clients are redirected and only able to write into a single cluster. This cluster will become the ACTIVE cluster after the conversion. For the purpose of this procedure, we will use the 'nyc' cluster for this role.
After redirecting your client applications to connect through the connectors associated with the 'nyc' cluster, stop the connectors associated with the remaining clusters as an extra safeguard against writes happening.
On every connector node associated with london and tokyo:
shell> connector stop
Enable Maintenance mode on all clusters using the cctrl command:
shell> cctrl
cctrl> set policy maintenance
Typically the cross-site replicators will be installed within /opt/replicator. If you have installed them in a different location, you will need to pass this to the script in the examples using the --path option.
The following commands tell the replicators to go offline at a specific point, in this case when they receive an explicit heartbeat. This ensures that all the replicators stop at the same sequence number and binary log position. The replicators will NOT be offline until the explicit heartbeat has been issued later in this step.
On every nyc node:
shell> ./tungsten_prep_upgrade -o
~or~
shell> ./tungsten_prep_upgrade --service london --offline
shell> ./tungsten_prep_upgrade --service tokyo --offline
On every london node:
shell> ./tungsten_prep_upgrade -o
~or~
shell> ./tungsten_prep_upgrade --service nyc --offline
shell> ./tungsten_prep_upgrade --service tokyo --offline
On every tokyo node:
shell> ./tungsten_prep_upgrade -o
~or~
shell> ./tungsten_prep_upgrade --service london --offline
shell> ./tungsten_prep_upgrade --service nyc --offline
Next, on the Primary hosts within each cluster, issue the heartbeat. Execute the following using the cluster-specific trepctl, typically located in /opt/continuent:
shell> trepctl heartbeat -name offline_for_upg
Ensure that every cross-site replicator on every node is now in the OFFLINE:NORMAL state:
shell> mmtrepctl status
~or~
shell> mmtrepctl --service {servicename} status
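If you only want to see the state for each service, and assuming the output follows the standard trepctl-style status listing, the state field can be filtered out with a simple grep, for example:
shell> mmtrepctl status | grep state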
Capture the position of the cross-site replicators on all nodes in all clusters.
The service name provided should be the name of the remote service(s) for this cluster, so for example in the london cluster you get the positions for nyc and tokyo, and in nyc you get the position for london and tokyo, etc.
On every london node:
shell> ./tungsten_prep_upgrade -g
~or~
shell> ./tungsten_prep_upgrade --service nyc --get
(NOTE: saves to ~/position-nyc-YYYYMMDDHHMMSS.txt)
shell> ./tungsten_prep_upgrade --service tokyo --get
(NOTE: saves to ~/position-tokyo-YYYYMMDDHHMMSS.txt)
On every nyc node:
shell> ./tungsten_prep_upgrade -g
~or~
shell> ./tungsten_prep_upgrade --service london --get
(NOTE: saves to ~/position-london-YYYYMMDDHHMMSS.txt)
shell> ./tungsten_prep_upgrade --service tokyo --get
(NOTE: saves to ~/position-tokyo-YYYYMMDDHHMMSS.txt)
On every tokyo node:
shell> ./tungsten_prep_upgrade -g
~or~
shell> ./tungsten_prep_upgrade --service london --get
(NOTE: saves to ~/position-london-YYYYMMDDHHMMSS.txt)
shell> ./tungsten_prep_upgrade --service nyc --get
(NOTE: saves to ~/position-nyc-YYYYMMDDHHMMSS.txt)
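Before moving on, it is worth confirming that the position files were written on each node; a simple listing of the filenames noted above is sufficient, for example:
shell> ls -l ~/position-*.txt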
Finally, to complete this step, stop the cross-site replicators on all nodes:
shell> ./tungsten_prep_upgrade --stop
On every node in each intended Passive cluster (london and tokyo), export the tracking schema associated with the intended Active cluster (nyc).
Note the generated dump file is called tungsten_global.dmp. global refers to the name of the intended Composite Cluster service; if you choose a different service name, change this accordingly.
On every london node:
shell> mysqldump --opt --single-transaction tungsten_nyc > ~/tungsten_global.dmp
On every tokyo node:
shell> mysqldump --opt --single-transaction tungsten_nyc > ~/tungsten_global.dmp
To uninstall the cross-site replicators, execute the following on every node:
shell> cd {replicator software path}
shell> tools/tpm uninstall --i-am-sure
In this step, we pre-create the database for the composite service tracking schema. We are using global as the service name in this example; if you choose a different Composite service name, adjust this accordingly.
On every node in all clusters:
shell> mysql -e 'set session sql_log_bin=0; create database tungsten_global'
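If you wish to confirm the schema was created on each node, a quick read-only check such as the following can be used (illustrative only):
shell> mysql -e "show databases like 'tungsten_global'"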
This step reloads the tracking schema associated with the intended Active cluster (nyc) into the tracking schema database we created in the previous step. This should ONLY be carried out within the intended Passive clusters at this stage.
We DO NOT want the reloading of this schema to appear in the binary logs on the Primary; therefore, the reload needs to be performed on each node individually:
On every london node:
shell> mysql -e 'set session sql_log_bin=0; use tungsten_global; source ~/tungsten_global.dmp;'
On every tokyo node:
shell> mysql -e 'set session sql_log_bin=0; use tungsten_global; source ~/tungsten_global.dmp;'
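To verify the reload on each Passive node, you can list the tables now present in the composite tracking schema, for example:
shell> mysql -e "show tables in tungsten_global"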
On every node in every cluster:
shell> replicator stop
The effect of this step is that only the Primary node in the Active cluster will now be up to date with ongoing data changes. You must ensure that your applications handle this accordingly until the replicators are restarted at Step 14.
This step, if not followed correctly, could be destructive to the entire conversion. It is CRITICAL that this step is NOT performed on the intended Active cluster (nyc).
By default, THL files will be located within /opt/continuent/thl. If you have configured this in a different location, you will need to adjust the path below accordingly.
On every london node:
shell> cd /opt/continuent/thl
shell> rm */thl*
On every tokyo node:
shell> cd /opt/continuent/thl
shell> rm */thl*
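As a quick sanity check (illustrative only), you can list the THL service directories on the london and tokyo nodes to confirm no thl files remain, adjusting the path if you have relocated THL:
shell> ls -l /opt/continuent/thl/*/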
On every node within the intended Active cluster (nyc), export the tracking schema associated with the local service.
Note the generated dump file is called tungsten_global.dmp. global refers to the name of the intended Composite Cluster service; if you choose a different service name, change this accordingly.
On every nyc node:
shell> mysqldump --opt --single-transaction tungsten_nyc > ~/tungsten_global.dmp
This step reloads the tracking schema associated with the intended Active cluster (nyc) into the tracking schema database we created in the earlier step.
We DO NOT want the reloading of this schema to appear in the binary logs on the Primary; therefore, the reload needs to be performed on each node individually:
On every nyc node:
shell> mysql -e 'set session sql_log_bin=0; use tungsten_global; source ~/tungsten_global.dmp;'
Update /etc/tungsten/tungsten.ini to a valid Composite Active/Passive config. An example of a valid config is as follows; a sample can also be downloaded from Section 4.5.3.1, “Conversion Prerequisites” above:
Within a Composite Active/Passive topology, the ini file must be identical on EVERY node, including Connector nodes.
[defaults]
user=tungsten
home-directory=/opt/continuent
application-user=app_user
application-password=secret
application-port=3306
profile-script=~/.bash_profile
replication-user=tungsten
replication-password=secret
mysql-allow-intensive-checks=true
skip-validation-check=THLSchemaChangeCheck

[nyc]
topology=clustered
master=db1
slaves=db2,db3
connectors=nyc-conn1,nyc-conn2

[london]
topology=clustered
master=db4
slaves=db5,db6
connectors=ldn-conn1,ldn-conn2
relay-source=nyc

[tokyo]
topology=clustered
master=db7
slaves=db8,db9
connectors=tky-conn1,tky-conn2
relay-source=nyc

[global]
composite-datasources=nyc,london,tokyo
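Because the ini file must be identical on every node, it can be worth comparing a checksum of the file across all database and connector hosts before proceeding; any mismatch indicates a host that was missed. For example:
shell> md5sum /etc/tungsten/tungsten.ini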
Validate and install the new release on all nodes in the Active (nyc) cluster only:
shell> cd /opt/continuent/software/tungsten-clustering-7.1.4-10
shell> tools/tpm validate-update
If validation shows no errors, run the install:
shell> tools/tpm update --replace-release
After the installation is complete on all nodes in the Active cluster, restart the replicator services:
shell> replicator start
After restarting, check the status of the replicator using trepctl and confirm that all replicators are ONLINE:
shell> trepctl status
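If you prefer a condensed summary of each replication service rather than the full status output, trepctl also provides a brief per-service listing which can be easier to scan:
shell> trepctl services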
Validate and install the new release on all nodes in the remaining Passive clusters (london and tokyo):
The update should be performed on the Primary nodes within each cluster first. Validation will report an error stating that the roles conflict (Primary vs Relay). This is expected; to override this warning, the -f option should be used on the Primary nodes only.
shell> cd /opt/continuent/software/tungsten-clustering-7.1.4-10
shell> tools/tpm validate-update
If validation shows no errors, run the install:
On Primary Nodes:
shell> tools/tpm update --replace-release -f
On Replica Nodes:
shell> tools/tpm update --replace-release
After the installation is complete on all nodes in the Passive clusters, restart the replicator services:
shell> replicator start
After restarting, check the status of the replicator using trepctl and confirm that all replicators are ONLINE:
shell> trepctl status
Following the upgrades, there are a number of "clean-up" steps that we need to perform within cctrl to ensure the datasource roles have been converted from the previous "master" roles to "relay" roles.
The following steps can be performed in a single cctrl session initiated from any node within any cluster:
shell> cctrl

Connect to the Active cluster:
cctrl> use nyc

Check status and verify all nodes are online:
cctrl> ls

Connect to the COMPOSITE service:
cctrl> use global

Place the Active service online:
cctrl> datasource nyc online

Connect to the london Passive service:
cctrl> use london

Convert the old Primary to a relay:
cctrl> set force true
cctrl> datasource oldPrimaryhost offline
cctrl> datasource oldPrimaryhost relay

Repeat on the tokyo Passive service:
cctrl> use tokyo
cctrl> set force true
cctrl> datasource oldPrimaryhost offline
cctrl> datasource oldPrimaryhost relay

Connect to the COMPOSITE service:
cctrl> use global

Place the Passive services online:
cctrl> datasource london online
cctrl> datasource tokyo online

Place all clusters into AUTOMATIC:
cctrl> set policy automatic
Validate and install the new release on all connector nodes:
shell> cd /opt/continuent/software/tungsten-clustering-7.1.4-10
shell> tools/tpm validate-update
If validation shows no errors, run the install:
shell> tools/tpm update --replace-release
After upgrading the previously stopped connectors, you will need to restart the connector process:
shell> connector restart
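Once restarted, a quick check that each connector process is running again can be performed with the connector wrapper script, for example:
shell> connector status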
Upgrading a running connector will initiate a restart of the connector services. This will result in any active connections being terminated; therefore, care should be taken with this process, and client redirection should be handled accordingly prior to any connector upgrade/restart.