These steps are specifically for the safe and successful upgrade (or conversion) of an existing Multi-Site/Active-Active (MSAA) topology to a Composite Active/Active (CAA) topology.
It is very important to follow all of the steps below and to take full backups when instructed. These steps can be destructive, and without proper care and attention they can result in data loss, data corruption, or a split-brain scenario.
Parallel apply MUST be disabled before starting your upgrade/conversion. You may re-enable it once the process has been fully completed. See Section 4.1.5.3, “How to Disable Parallel Replication Safely” and Section 4.1.2, “Enabling Parallel Apply During Install” for more information.
The examples in this section are based on three clusters named 'nyc', 'london' and 'tokyo'.
If you do not have exactly three clusters, adjust this procedure to match your environment.
If you are currently running a staging-based installation, you must convert to an INI-based installation, since INI-based installation is the only option supported for Composite Active/Active deployments. For notes on how to perform the staging-to-INI conversion using the translatetoini.pl script, please visit Section 10.4.6, “Using the translatetoini.pl Script”.
Parallel apply MUST be disabled before starting your upgrade. You may re-enable it once the upgrade has been fully completed. See Section 4.1.5.3, “How to Disable Parallel Replication Safely” and Section 4.1.2, “Enabling Parallel Apply During Install” for more information.
Obtain the latest v6 (or greater) Tungsten Cluster software build and place it within /opt/continuent/software
If you are converting rather than upgrading, this step is not required since you will already have the extracted software bundle available. However, you must be running v6 or greater of Tungsten Cluster to deploy a CAA topology.
Extract the package
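For example, assuming the downloaded build is named tungsten-clustering-6.1.0-xx.tar.gz (a placeholder; your actual filename and version will differ), the extraction might look like:
shell> cd /opt/continuent/software
shell> tar zxvf tungsten-clustering-6.1.0-xx.tar.gz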
The examples below refer to the tungsten_prep_upgrade script, which can be found in the tools directory of the extracted software package.
Take a full and complete backup of one node. This can be a Replica, and the backup should preferably be performed using one of the following methods:
Percona XtraBackup whilst the database is open (see the example below)
A manual backup of all datafiles after stopping the database instance
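As an illustration only, a hot backup using Percona XtraBackup might look like the following; the credentials and target directory shown here are placeholders and should be adapted to your environment and backup policy:
shell> xtrabackup --backup --user=backup_user --password=secret --target-dir=/backups/pre-caa-upgrade
shell> xtrabackup --prepare --target-dir=/backups/pre-caa-upgrade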
Typically the cross-site replicators will be installed within /opt/replicator. If you have installed them in a different location, you will need to pass this to the script in the examples below using the --path option.
The following commands tell the replicators to go offline at a specific point, in this case when they receive an explicit heartbeat. This ensures that all the replicators stop at the same sequence number and binary log position. The replicators will NOT be offline until the explicit heartbeat has been issued later in this step.
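For reference, the script's --offline action does not stop the replicator immediately; it defers the replicator to go offline at the named heartbeat. A roughly equivalent manual command (shown for illustration only, run against the cross-site replicator installation and assuming the heartbeat name used later in this step) would be:
shell> trepctl -service london offline-deferred -at-heartbeat offline_for_upg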
On every nyc node:
shell> ./tungsten_prep_upgrade -o
~or~
shell> ./tungsten_prep_upgrade --service london --offline
shell> ./tungsten_prep_upgrade --service tokyo --offline
On every london node:
shell> ./tungsten_prep_upgrade -o
~or~
shell> ./tungsten_prep_upgrade --service nyc --offline
shell> ./tungsten_prep_upgrade --service tokyo --offline
On every tokyo node:
shell> ./tungsten_prep_upgrade -o
~or~
shell> ./tungsten_prep_upgrade --service london --offline
shell> ./tungsten_prep_upgrade --service nyc --offline
Next, on the Primary host within each cluster, issue the heartbeat by executing the following using the cluster-specific trepctl, typically installed in /opt/continuent:
shell> trepctl heartbeat -name offline_for_upg
Ensure that every cross-site replicator on every node is now in the OFFLINE:NORMAL state:
shell> mmtrepctl status
~or~
shell> mmtrepctl --service {servicename} status
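To quickly confirm the state without reading the full status output, you can, for example, filter for the state line:
shell> mmtrepctl status | grep state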
Capture the position of the cross-site replicators on all nodes in all clusters.
The service name provided should be the name of the remote service(s) for this cluster; for example, in the london cluster you capture the positions for nyc and tokyo, in nyc you capture the positions for london and tokyo, and so on.
On every london node:
shell> ./tungsten_prep_upgrade -g
~or~
shell> ./tungsten_prep_upgrade --service nyc --get
(NOTE: saves to ~/position-nyc-YYYYMMDDHHMMSS.txt)
shell> ./tungsten_prep_upgrade --service tokyo --get
(NOTE: saves to ~/position-tokyo-YYYYMMDDHHMMSS.txt)
On every nyc node:
shell> ./tungsten_prep_upgrade -g
~or~
shell> ./tungsten_prep_upgrade --service london --get
(NOTE: saves to ~/position-london-YYYYMMDDHHMMSS.txt)
shell> ./tungsten_prep_upgrade --service tokyo --get
(NOTE: saves to ~/position-tokyo-YYYYMMDDHHMMSS.txt)
On every tokyo node:
shell> ./tungsten_prep_upgrade -g
~or~
shell> ./tungsten_prep_upgrade --service london --get
(NOTE: saves to ~/position-london-YYYYMMDDHHMMSS.txt)
shell> ./tungsten_prep_upgrade --service nyc --get
(NOTE: saves to ~/position-nyc-YYYYMMDDHHMMSS.txt)
Finally, to complete this step, stop the replicators on all nodes:
shell> ./tungsten_prep_upgrade --stop
On every node in each cluster, export the tracking schema for the cross-site replicator.
As in step 2 above, where you captured the cross-site positions, the same pattern applies here: in london you export/back up nyc and tokyo, in nyc you export/back up london and tokyo, and in tokyo you export/back up nyc and london.
On every london node:
shell> ./tungsten_prep_upgrade -d --alldb
~or~
shell> ./tungsten_prep_upgrade --service nyc --dump
shell> ./tungsten_prep_upgrade --service tokyo --dump
On every nyc node:
shell> ./tungsten_prep_upgrade -d --alldb
~or~
shell> ./tungsten_prep_upgrade --service london --dump
shell> ./tungsten_prep_upgrade --service tokyo --dump
On every tokyo node:
shell> ./tungsten_prep_upgrade -d --alldb
~or~
shell> ./tungsten_prep_upgrade --service london --dump
shell> ./tungsten_prep_upgrade --service nyc --dump
To uninstall the cross-site replicators, execute the following on every node:
shell> cd {replicator software path}
shell> tools/tpm uninstall --i-am-sure
We DO NOT want the reloading of this schema to appear in the binary logs on the Primary; therefore, the reload needs to be performed on each node individually:
On every london node:
shell> ./tungsten_prep_upgrade -s nyc -u tungsten -w secret -r
shell> ./tungsten_prep_upgrade -s tokyo -u tungsten -w secret -r
~or~
shell> ./tungsten_prep_upgrade --service nyc --user tungsten --password secret --restore
shell> ./tungsten_prep_upgrade --service tokyo --user tungsten --password secret --restore
On every tokyo node:
shell> ./tungsten_prep_upgrade -s london -u tungsten -w secret -r
shell> ./tungsten_prep_upgrade -s nyc -u tungsten -w secret -r
~or~
shell> ./tungsten_prep_upgrade --service london --user tungsten --password secret --restore
shell> ./tungsten_prep_upgrade --service nyc --user tungsten --password secret --restore
On every nyc node:
shell> ./tungsten_prep_upgrade -s london -u tungsten -w secret -r
shell> ./tungsten_prep_upgrade -s tokyo -u tungsten -w secret -r
~or~
shell> ./tungsten_prep_upgrade --service london --user tungsten --password secret --restore
shell> ./tungsten_prep_upgrade --service tokyo --user tungsten --password secret --restore
Update /etc/tungsten/tungsten.ini to a valid v6 CAA configuration. An example of a valid configuration is as follows:
[defaults]
user=tungsten
home-directory=/opt/continuent
application-user=app_user
application-password=secret
application-port=3306
profile-script=~/.bash_profile
replication-user=tungsten
replication-password=secret
mysql-allow-intensive-checks=true
skip-validation-check=THLSchemaChangeCheck
start-and-report=true

[nyc]
topology=clustered
master=db1
members=db1,db2,db3
connectors=db1,db2,db3

[london]
topology=clustered
master=db4
members=db4,db5,db6
connectors=db4,db5,db6

[tokyo]
topology=clustered
master=db7
members=db7,db8,db9
connectors=db7,db8,db9

[global]
topology=composite-multi-master
composite-datasources=nyc,london,tokyo
It is critical that you ensure each master= entry in the configuration matches the current, live Primary host in the corresponding cluster for the purpose of this process.
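If you need to confirm which node is currently the live Primary in a cluster before editing the INI, one simple check (shown here as an example) is to review the cluster status from any node in that cluster; the Primary is the datasource reported with the master role:
shell> cctrl
cctrl> ls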
Enable Maintenance mode on all clusters using the cctrl command:
shell> cctrl
cctrl> set policy maintenance
Run the update as follows:
shell> tools/tpm update --replace-release
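Note that tools/tpm is relative to the extracted software staging directory, so the update is typically run from there; for example (the directory name below is a placeholder for your actual extracted build):
shell> cd /opt/continuent/software/tungsten-clustering-6.1.0-xx
shell> tools/tpm update --replace-release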
If you had start-and-report=false, you may need to restart the manager services.
Until all nodes have been updated, the output from cctrl may show services in an OFFLINE, STOPPED, or UNKNOWN state. This is to be expected until all the new v6 managers are online.
After the installation is complete on all nodes, start the manager services:
shell> manager start
Return all clusters to Automatic mode using the cctrl command:
shell> cctrl
cctrl> set policy automatic
Identify the cross-site service name(s):
shell> trepctl services
In our example, the local cluster service will be one of london, nyc or tokyo, depending on the node you are on. The cross-site replication services would be:
(within the london cluster)
london_from_nyc
london_from_tokyo
(within the nyc cluster)
nyc_from_london
nyc_from_tokyo
(within the tokyo cluster)
tokyo_from_london
tokyo_from_nyc
Upon installation, the new cross-site replicators will come online; however, it is possible that they may be in an OFFLINE:ERROR state due to a change in epoch numbers. Check this on the Primary in each cluster by looking at the output from the trepctl command.
Check each service as needed based on the status seen above:
shell> trepctl -service london_from_nyc status
shell> trepctl -service london_from_tokyo status
~or~
shell> trepctl -service nyc_from_london status
shell> trepctl -service nyc_from_tokyo status
~or~
shell> trepctl -service tokyo_from_london status
shell> trepctl -service tokyo_from_nyc status
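To review all local replication services at once rather than checking each service individually, you can, for example, filter the services listing:
shell> trepctl services | grep -E "serviceName|state"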
If the replicator is in an error state due to an epoch difference, you will see an error similar to the following:
pendingErrorSeqno : -1
pendingExceptionMessage: Client handshake failure: Client response validation failed: Log epoch numbers do not match: master source ID=db1 client source ID=db4 seqno=4 server epoch number=0 client epoch number=4
pipelineSource : UNKNOWN
The above error is due to the epoch numbers changing as a result of the replicators being restarted, and the new replicators being installed.
To resolve, simply force the replicator online as follows:
shell> trepctl -service london_from_nyc online -force
shell> trepctl -service london_from_tokyo online -force
~or~
shell> trepctl -service nyc_from_london online -force
shell> trepctl -service nyc_from_tokyo online -force
~or~
shell> trepctl -service tokyo_from_london online -force
shell> trepctl -service tokyo_from_nyc online -force
If the replicator shows an error state similar to the following:
pendingErrorSeqno : -1
pendingExceptionMessage: Client handshake failure: Client response validation failed: Master log does not contain requested transaction: master source ID=db1 client source ID=db2 requested seqno=1237 client epoch number=0 master min seqno=5 master max seqno=7
pipelineSource : UNKNOWN
The above error is possible if, during the install, the Replica replicators came online before the Primary.
Provided the steps above have been followed, simply bringing the replicator online should be enough for it to retry and carry on successfully:
shell> trepctl -service london_from_nyc online
shell> trepctl -service london_from_tokyo online
~or~
shell> trepctl -service nyc_from_london online
shell> trepctl -service nyc_from_tokyo online
~or~
shell> trepctl -service tokyo_from_london online
shell> trepctl -service tokyo_from_nyc online
Known Issue (CT-569)
During an upgrade, the tpm process will incorrectly create additional, empty tracking schemas based on the service names of the auto-generated cross-site services.
For example, if your cluster has service names east and west, you should only have tracking schemas for tungsten_east and tungsten_west.
In some cases, you will also see tungsten_east_from_west and/or tungsten_west_from_east.
These tungsten_x_from_y tracking schemas will be empty and unused. They can be safely removed by issuing DROP DATABASE tungsten_x_from_y on a Primary node, or they can be safely ignored.
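For example, using the east/west service names from above and assuming you have first confirmed that the schema contains no tables, the optional cleanup on a Primary node might look like:
shell> mysql -u tungsten -p -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'tungsten_east_from_west';"
shell> mysql -u tungsten -p -e "DROP DATABASE tungsten_east_from_west;"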