There are two possible scenarios for converting a single standalone cluster to a composite cluster. The following two sections walk through an example of each.
The following steps guide you through updating the configuration to include the new hosts as a new service and converting to a Composite Cluster.
For the purpose of this worked example, we have a single cluster dataservice called east with three nodes, defined as db1, db2 and db3, with db1 as the Primary.
Our goal is to create a new cluster dataservice called west with three nodes, defined as db4, db5 and db6, with db4 as the relay.
We will configure a new composite dataservice called global.
The steps show two alternative approaches: to create west as a Passive cluster (Composite Active/Passive), or to create the west cluster as a second active cluster (Composite Active/Active).
On the new host(s), ensure the steps in Appendix B, Prerequisites have been followed.
If configuring via the Staging Installation method, skip straight to Step 4.
The staging method CANNOT be used if converting to an Active/Active cluster.
On the new host(s), ensure the tungsten.ini contains the correct service blocks for both the existing cluster and the new cluster.
On the new host(s), install the proper version of clustering software, ensuring that the version being installed matches the version currently installed on the existing hosts.
shell> cd /opt/continuent/software
shell> tar zxvf tungsten-clustering-7.1.4-10.tar.gz
shell> cd tungsten-clustering-7.1.4-10
shell> ./tools/tpm install
Ensure --start-and-report is set to false in the configuration for the new hosts.
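For INI-based installs, the equivalent setting lives in the configuration file on the new hosts; a minimal sketch, with the surrounding options omitted:

[defaults]
...
start-and-report=false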
Set the existing cluster to maintenance mode using cctrl:
shell> cctrl
[LOGICAL] / > set policy maintenance
Add the definition for the new cluster service west and the composite service global to the existing configuration on the existing host(s):
For Composite Active/Passive
shell> tpm query staging
tungsten@db1:/opt/continuent/software/tungsten-clustering-7.1.4-10

shell> echo The staging USER is `tpm query staging| cut -d: -f1 | cut -d@ -f1`
The staging USER is tungsten

shell> echo The staging HOST is `tpm query staging| cut -d: -f1 | cut -d@ -f2`
The staging HOST is db1

shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-7.1.4-10

shell> ssh {STAGING_USER}@{STAGING_HOST}
shell> cd {STAGING_DIRECTORY}

shell> ./tools/tpm configure west \
    --connectors=db4,db5,db6 \
    --relay-source=east \
    --relay=db4 \
    --slaves=db5,db6 \
    --topology=clustered

shell> ./tools/tpm configure global \
    --composite-datasources=east,west
Run the tpm command to update the software with the Staging-based configuration:
shell> ./tools/tpm update --no-connectors --replace-release
For information about making updates when using a Staging-method deployment, please see Section 10.3.7, “Configuration Changes from a Staging Directory”.
shell> vi /etc/tungsten/tungsten.ini

[west]
...
connectors=db4,db5,db6
relay-source=east
relay=db4
slaves=db5,db6
topology=clustered

[global]
...
composite-datasources=east,west
Run the tpm command to update the software with the INI-based configuration:
shell> tpm query staging
tungsten@db1:/opt/continuent/software/tungsten-clustering-7.1.4-10

shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-7.1.4-10

shell> cd {STAGING_DIRECTORY}
shell> ./tools/tpm update --no-connectors --replace-release
For information about making updates when using an INI file, please see Section 10.4.4, “Configuration Changes with an INI file”.
For Composite Active/Active
shell> vi /etc/tungsten/tungsten.ini
[west]
topology=clustered
connectors=db4,db5,db6
master=db4
members=db4,db5,db6

[global]
topology=composite-multi-master
composite-datasources=east,west
shell> tpm query staging
tungsten@db1:/opt/continuent/software/tungsten-clustering-7.1.4-10

shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-7.1.4-10

shell> cd {STAGING_DIRECTORY}
shell> ./tools/tpm update --no-connectors --replace-release
Using the optional --no-connectors option updates the current deployment without restarting the existing connectors.
Using the --replace-release option ensures the metadata files for the cluster are correctly rebuilt. This parameter MUST be supplied.
On every node in the original EAST cluster, make sure all replicators are online:
shell> trepctl services
shell> trepctl -all-services online
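For a compact view of each service's state, the trepctl services output can be filtered; a quick sketch using the standard serviceName and state fields:

shell> trepctl services | grep -E 'serviceName|state'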
On all the new hosts in the new cluster, start the manager processes ONLY:
shell> manager start
From the original cluster, use cctrl to check that the new dataservice and composite dataservice have been created, and place the new dataservice into maintenance mode:
shell> cctrl
cctrl> cd /
cctrl> ls
cctrl> use global
cctrl> ls
cctrl> datasource east online
cctrl> set policy maintenance
Example from a Composite Active/Passive Cluster
tungsten@db1:~ $ cctrl
Tungsten Clustering 7.1.4 build 10
east: session established, encryption=false, authentication=false
[LOGICAL] /east > cd /
[LOGICAL] / > ls
global
east
west
[LOGICAL] / > use global
[LOGICAL] /global > ls
COORDINATOR[db3:MIXED:ONLINE]
east:COORDINATOR[db3:MAINTENANCE:ONLINE]
west:COORDINATOR[db5:AUTOMATIC:ONLINE]

ROUTERS:
+---------------------------------------------------------------------------------+
|connector@db1[9493](ONLINE, created=0, active=0)                                 |
|connector@db2[9341](ONLINE, created=0, active=0)                                 |
|connector@db3[10675](ONLINE, created=0, active=0)                                |
+---------------------------------------------------------------------------------+

DATASOURCES:
+---------------------------------------------------------------------------------+
|east(composite master:OFFLINE)                                                   |
|STATUS [OK] [2019/12/09 11:04:17 AM UTC]                                         |
+---------------------------------------------------------------------------------+
+---------------------------------------------------------------------------------+
|west(composite slave:OFFLINE)                                                    |
|STATUS [OK] [2019/12/09 11:04:17 AM UTC]                                         |
+---------------------------------------------------------------------------------+

REASON FOR MAINTENANCE MODE: MANUAL OPERATION

[LOGICAL] /global > datasource east online
composite data source 'east@global' is now ONLINE
[LOGICAL] /global > set policy maintenance
policy mode is now MAINTENANCE
Example from a Composite Active/Active Cluster
tungsten@db1:~ $ cctrl
Tungsten Clustering 7.1.4 build 10
east: session established, encryption=false, authentication=false
[LOGICAL] /east > cd /
[LOGICAL] / > ls
global
east
east_from_west
west
west_from_east
[LOGICAL] / > use global
[LOGICAL] /global > ls
COORDINATOR[db3:MIXED:ONLINE]
east:COORDINATOR[db3:MAINTENANCE:ONLINE]
west:COORDINATOR[db4:AUTOMATIC:ONLINE]

ROUTERS:
+---------------------------------------------------------------------------------+
|connector@db1[23431](ONLINE, created=0, active=0)                                |
|connector@db2[25535](ONLINE, created=0, active=0)                                |
|connector@db3[15353](ONLINE, created=0, active=0)                                |
+---------------------------------------------------------------------------------+

DATASOURCES:
+---------------------------------------------------------------------------------+
|east(composite master:OFFLINE, global progress=10, max latency=1.043)            |
|STATUS [OK] [2024/08/13 11:05:01 AM UTC]                                         |
+---------------------------------------------------------------------------------+
|  east(master:ONLINE, progress=10, max latency=1.043)                            |
|  east_from_west(UNKNOWN:UNKNOWN, progress=-1, max latency=-1.000)               |
+---------------------------------------------------------------------------------+

+---------------------------------------------------------------------------------+
|west(composite master:ONLINE, global progress=-1, max latency=-1.000)            |
|STATUS [OK] [2024/08/13 11:07:56 AM UTC]                                         |
+---------------------------------------------------------------------------------+
|  west(UNKNOWN:UNKNOWN, progress=-1, max latency=-1.000)                         |
|  west_from_east(UNKNOWN:UNKNOWN, progress=-1, max latency=-1.000)               |
+---------------------------------------------------------------------------------+

REASON FOR MAINTENANCE MODE: MANUAL OPERATION

[LOGICAL] /global > datasource east online
composite data source 'east@global' is now ONLINE
[LOGICAL] /global > set policy maintenance
policy mode is now MAINTENANCE
Start the replicators in the new cluster ensuring they start as OFFLINE:
shell> replicator start offline
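At this point each new replicator should report an OFFLINE:NORMAL state; a quick way to confirm, using the state field from standard trepctl status output:

shell> trepctl status | grep state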
Go to the relay (or Primary) node of the new cluster (i.e. db4) and provision it from a Replica of the original cluster (i.e. db2):
Provision the new relay in a Composite Active/Passive Cluster
db4-shell> tprovision -s db2
Provision the new primary in a Composite Active/Active Cluster
db4-shell> tprovision -s db2 -c
Go to each Replica node of the new cluster and provision from the relay node of the new cluster (i.e. db4):
db5-shell> tprovision -s db4
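Repeat on the remaining new Replica, again sourcing from the relay:

db6-shell> tprovision -s db4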
Bring the replicators in the new cluster online, if not already:
shell> trepctl -all-services online
From a node in the original cluster (e.g. db1), using cctrl, set the composite cluster online, if not already, and return to automatic:
shell> cctrl
[LOGICAL] / > use global
[LOGICAL] /global > datasource west online
[LOGICAL] /global > set policy automatic
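You can confirm that both east and west now report ONLINE within the composite service from the same cctrl session:

[LOGICAL] /global > ls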
Start the connectors associated with the new cluster hosts in west:
shell> connector start
Depending on the mode in which the connectors are running, you may need to configure the user.map file. If this is in use on the old cluster, then we recommend that you take a copy of this file and place it on the new connectors associated with the new cluster, and then adjust any affinity settings that are required. Additionally, the user.map may need adjustments on the original cluster. For more details on the user.map file, review the relevant sections of the Connector documentation related to the mode your connectors are operating in. These can be found at Section 7.6.1, “user.map File Format”.
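As an illustration only (app_user and its password are placeholders, following the standard user.map entry format of user, password, dataservice and optional affinity), an entry routing an application user through the global composite service with affinity to west might look like this:

# user password dataservice [affinity]
app_user secret global west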
If --no-connectors was issued during the update, then during a period when it is safe, restart the connectors associated with the original cluster:
shell> ./tools/tpm promote-connector
This method of conversion is a little more complicated, and the only safe way to accomplish it requires downtime for the replication on all nodes.
To achieve this without downtime to your applications, it is recommended that all application activity be isolated to the Primary host only. Following the conversion, all activity will then be replicated to the Replica nodes.
Our example starting cluster has 5 nodes (1 Primary and 4 Replicas) and uses service name alpha. Our target cluster will have 6 nodes (3 per cluster) in 2 member clusters, alpha_east and alpha_west, in composite service alpha.
This means that we will reuse the existing service name alpha as the name of the new composite service, and create two new service names, one for each cluster (alpha_east and alpha_west).
To convert the above configuration, follow the steps below:
On the new host, ensure the Appendix B, Prerequisites have been followed.
Ensure the cluster is in MAINTENANCE mode. This will prevent the managers from performing any unexpected recovery or failovers during the process.
cctrl> set policy maintenance
Next, you must stop all services on all existing nodes.
shell> stopall
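A simple, generic way to confirm that no Tungsten processes remain before continuing (the bracketed pattern keeps grep from matching itself):

shell> ps -ef | grep [t]ungsten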
If configuring via the INI Installation Method, update tungsten.ini on all original 5 nodes, then copy the file to the new node.
You will need to create two new services, one for each cluster, and change the original service stanza to represent the composite service. Examples of the complete configuration for both the Staging and INI methods are shown below.
shell> ./tools/tpm configure defaults \
    --reset \
    --user=tungsten \
    --install-directory=/opt/continuent \
    --profile-script=~/.bash_profile \
    --replication-user=tungsten \
    --replication-password=secret \
    --replication-port=13306 \
    --application-user=app_user \
    --application-password=secret \
    --application-port=3306 \
    --rest-api-admin-user=apiuser \
    --rest-api-admin-pass=secret

shell> ./tools/tpm configure alpha_east \
    --topology=clustered \
    --master=db1 \
    --members=db1,db2,db3 \
    --connectors=db1,db2,db3

shell> ./tools/tpm configure alpha_west \
    --topology=clustered \
    --relay=db4 \
    --members=db4,db5,db6 \
    --connectors=db4,db5,db6 \
    --relay-source=alpha_east

shell> ./tools/tpm configure alpha \
    --composite-datasources=alpha_east,alpha_west
shell> vi /etc/tungsten/tungsten.ini

[defaults]
user=tungsten
install-directory=/opt/continuent
profile-script=~/.bash_profile
replication-user=tungsten
replication-password=secret
replication-port=13306
application-user=app_user
application-password=secret
application-port=3306
rest-api-admin-user=apiuser
rest-api-admin-pass=secret

[alpha_east]
topology=clustered
master=db1
members=db1,db2,db3
connectors=db1,db2,db3

[alpha_west]
topology=clustered
relay=db4
members=db4,db5,db6
connectors=db4,db5,db6
relay-source=alpha_east

[alpha]
composite-datasources=alpha_east,alpha_west
Using your preferred backup/restore method, take a backup of the MySQL database on one of the original nodes and restore it to the new node.
If preferred, this step can be skipped, and the provision of the new node completed via the use of the supplied provisioning scripts, explained in Step 10 below.
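As one possible approach for the backup/restore, a sketch using mysqldump (the credentials and port follow the example configuration above; any consistent backup tool such as xtrabackup would work equally well):

db5-shell> mysqldump -utungsten -psecret -P13306 -h127.0.0.1 \
    --all-databases --single-transaction > /tmp/alpha_backup.sql
db6-shell> mysql -utungsten -psecret -P13306 -h127.0.0.1 < /tmp/alpha_backup.sql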
Invoke the conversion using the tpm command from the software extraction directory.
If the installation was configured via the INI method, this command should be run on all 5 original nodes. If configured via the Staging method, it should be run on the staging host only.
shell> tpm query staging
shell> cd {software_staging_dir_from_tpm_query}
shell> ./tools/tpm update --replace-release --force
shell> rm /opt/continuent/tungsten/cluster-home/conf/cluster/*/datasource/*
The use of the --force option is required to force the override of the old properties.
Only if the installation was configured via the INI method, proceed to install the software using the tpm command from the software extraction directory on the new node:
shell> cd {software_staging_dir}
shell> ./tools/tpm install
Ensure you install the same version of the software on the new node, matching exactly the version on the existing 5 nodes.
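To double-check, the installed version on an existing node can be queried and compared against the release being extracted on the new node:

db1-shell> tpm query version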
Start all services on all existing nodes.
shell> startall
Bring the clusters back into AUTOMATIC mode:
shell> cctrl -multi
cctrl> use alpha
cctrl> set policy automatic
cctrl> exit
If you skipped the backup/restore step above, you now need to provision the database on the new node. To do this, use the tungsten_provision_slave script to provision the database from one of the existing nodes, for example db5:
shell> tungsten_provision_slave --source db5
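Once provisioning completes, the new node's replicator state and latency can be checked to confirm it has caught up (standard trepctl status fields):

db6-shell> trepctl status | grep -E 'state|appliedLatency'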