This section explains how to convert an Active Witness into a full cluster node. The process can be used either to convert the existing witness host in place or to replace the witness with a new host.
First, place the cluster into MAINTENANCE mode.
shell> cctrl
cctrl> set policy maintenance
Stop the software on the existing Witness node:
shell> stopall
Whether you are converting the existing host or adding a new one, ensure that any additional prerequisites for a full cluster node are in place, for example that MySQL has been installed.
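A quick sanity check on the target host can confirm the basics before continuing; a minimal sketch (the service name may differ by distribution):
```
shell> mysql --version
shell> systemctl status mysqld
```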
INI Install
If you are using an INI file for configuration, update the INI file on all nodes (including connectors): remove the witness properties and add the new host as part of the cluster configuration, as in the example below. For a staging-based installation, skip ahead to Staging Install further down.
Before:
[defaults]
user=tungsten
home-directory=/opt/continuent
application-user=app_user
application-password=secret
application-port=3306
profile-script=~/.bash_profile
replication-user=tungsten
replication-password=secret
replication-port=13306
mysql-allow-intensive-checks=true
[nyc]
enable-active-witnesses=true
topology=clustered
master=db1
members=db1,db2,db3
witnesses=db3
connectors=db1,db2,db3
After:
[defaults]
user=tungsten
home-directory=/opt/continuent
application-user=app_user
application-password=secret
application-port=3306
profile-script=~/.bash_profile
replication-user=tungsten
replication-password=secret
replication-port=13306
mysql-allow-intensive-checks=true
[nyc]
topology=clustered
master=db1
members=db1,db2,db3
connectors=db1,db2,db3
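After editing, a quick grep can confirm that no witness settings remain in the INI on each node. This is only an illustrative sketch: the demo builds a sample file standing in for the real INI (typically /etc/tungsten/tungsten.ini) so the check can be seen end to end.

```shell
# Illustrative check for leftover witness settings in an INI file.
# A generated sample stands in for the real /etc/tungsten/tungsten.ini here.
INI=$(mktemp)
printf '[nyc]\ntopology=clustered\nmaster=db1\nmembers=db1,db2,db3\n' > "$INI"
if grep -qE 'enable-active-witnesses|^witnesses=' "$INI"; then
  echo "witness settings still present"
else
  echo "no witness settings found"
fi
rm -f "$INI"
```

Run against the real INI on each node, the first branch firing means the file still needs editing.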
Update the software on the existing cluster nodes and connector nodes (if separate). Include --no-connectors if connectors co-exist on the database nodes and you want to restart them manually when convenient.
shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm update --replace-release
Either install on the new host or update on the previous Witness host:
shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm install
or:
shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm update --replace-release -f
Staging Install
If you are using a staging configuration, update the configuration from the staging host, example below:
shell> cd {STAGING_DIRECTORY}
./tools/tpm configure defaults \
--reset \
--user=tungsten \
--home-directory=/opt/continuent \
--application-user=app_user \
--application-password=secret \
--application-port=3306 \
--profile-script=~/.bash_profile \
--replication-user=tungsten \
--replication-password=secret \
--replication-port=13306 \
--mysql-allow-intensive-checks=true
./tools/tpm configure nyc \
--topology=clustered \
--master=db1 \
--members=db1,db2,db3 \
--connectors=db1,db2,db3
Update the software on the existing cluster nodes. Include --no-connectors if connectors co-exist on database nodes and you want to restart them manually when convenient.
shell> cd {STAGING_DIRECTORY}
shell> tools/tpm update --replace-release --hosts=db1,db2
Either install on the new host or update on the previous Witness host:
shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm install --hosts=db3
or:
shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm update --replace-release -f --hosts=db3
Once the software has been installed, you need to put a copy of the database onto the node: either restore an existing backup, create and restore a fresh backup, or use tprovision to provision the database on the host.
Start the software on the new node (or the former Witness node):
shell> startall
If you issued --no-connectors during the update, restart the connectors when convenient:
shell> connector restart
From cctrl on one of the existing database nodes, check that the status returns the expected output. If it does, return the cluster to AUTOMATIC and the process is complete. If the output is not correct, this is usually due to metadata files not updating; in that case, issue the following on every node:
shell> tungsten_reset_manager
This cleans the metadata files and stops the manager process. Once the script has completed on all nodes, restart the manager process on each node one by one, starting with the Primary node, followed by the Replicas:
shell> manager start
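Where passwordless SSH is available, the one-by-one order can be scripted; a minimal sketch with example hostnames (db1 assumed to be the Primary), using echo as a stand-in for the real remote command:

```shell
# One-by-one restart order: Primary first, then the Replicas.
# db1/db2/db3 are example hostnames; replace the echo with the real
# command, e.g.: ssh "$host" manager start
for host in db1 db2 db3; do   # db1 is the Primary in this example
  echo "manager start on $host"
done
```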
Finally, return the cluster to AUTOMATIC. If the reset process above was performed, it may take a minute or two for the ls output of cctrl to update whilst the metadata files are refreshed.
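The final check and policy change follow the same cctrl session style used at the start of the procedure:
```
shell> cctrl
cctrl> ls
cctrl> set policy automatic
```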