Adding an entire new cluster provides a significant increase in availability and capacity. The nodes that form the new cluster will be fully aware of the original cluster(s) and will communicate with the existing managers and datasources within the cluster.
The following steps guide you through updating the configuration to include the new hosts and services you are adding.
On the new host(s), ensure the steps in Appendix B, Prerequisites have been followed.
Let's assume that we have a composite cluster dataservice called global with two clusters, east and west, each with three nodes. In this worked example, we show how to add an additional cluster service called north with three new nodes.
Set the cluster to maintenance mode using cctrl:
shell> cctrl
[LOGICAL] / > use global
[LOGICAL] /global > set policy maintenance
Using the following as an example, update the configuration to include the new cluster and the additional composite service block. If using an INI installation, copy the INI file to all the new nodes in the new cluster.
shell> tpm query staging
tungsten@db1:/opt/continuent/software/tungsten-clustering-7.0.3-141

shell> echo The staging USER is `tpm query staging| cut -d: -f1 | cut -d@ -f1`
The staging USER is tungsten

shell> echo The staging HOST is `tpm query staging| cut -d: -f1 | cut -d@ -f2`
The staging HOST is db1

shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-7.0.3-141

shell> ssh {STAGING_USER}@{STAGING_HOST}
shell> cd {STAGING_DIRECTORY}

shell> ./tools/tpm configure north \
    --connectors=db7,db8,db9 \
    --relay-source=east \
    --relay=db7 \
    --slaves=db8,db9 \
    --topology=clustered

shell> ./tools/tpm configure global \
    --composite-datasources=east,west,north
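The backtick pipelines above split the `tpm query staging` output, which has the form user@host:directory. A minimal standalone sketch of the same cut pipelines, using a hard-coded sample string rather than a live tpm installation:

```shell
# Standalone sketch of the cut pipelines above; the staging string is a
# hard-coded sample (a live install would use: staging=$(tpm query staging))
staging="tungsten@db1:/opt/continuent/software/tungsten-clustering-7.0.3-141"

user=$(echo "$staging" | cut -d: -f1 | cut -d@ -f1)   # text before '@'
host=$(echo "$staging" | cut -d: -f1 | cut -d@ -f2)   # text between '@' and ':'
dir=$(echo "$staging"  | cut -d: -f2)                 # text after ':'

echo "USER=$user HOST=$host DIRECTORY=$dir"
# → USER=tungsten HOST=db1 DIRECTORY=/opt/continuent/software/tungsten-clustering-7.0.3-141
```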
Run the tpm command to update the software with the Staging-based configuration:
shell> ./tools/tpm update --no-connectors --replace-release
For information about making updates when using a Staging-method deployment, please see Section 10.3.7, “Configuration Changes from a Staging Directory”.
shell> vi /etc/tungsten/tungsten.ini

[north]
...
connectors=db7,db8,db9
relay-source=east
relay=db7
slaves=db8,db9
topology=clustered

[global]
...
composite-datasources=east,west,north
Run the tpm command to update the software with the INI-based configuration:
shell> tpm query staging
tungsten@db1:/opt/continuent/software/tungsten-clustering-7.0.3-141

shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-7.0.3-141

shell> cd {STAGING_DIRECTORY}
shell> ./tools/tpm update --no-connectors --replace-release
For information about making updates when using an INI file, please see Section 10.4.4, “Configuration Changes with an INI file”.
Using the --no-connectors option updates the current deployment without restarting the existing connectors.
If installed via INI, on all nodes in the new cluster, download and unpack the software, and install:
shell> cd /opt/continuent/software
shell> tar zxvf tungsten-clustering-7.0.3-141.tar.gz
shell> cd /opt/continuent/software/tungsten-clustering-7.0.3-141
shell> tools/tpm install
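Since the same unpack-and-install sequence is needed on every new node (db7, db8 and db9 in this example), it can be scripted. The sketch below is a dry run: the echo is left in so nothing executes remotely until you have reviewed the commands and removed it.

```shell
# Dry-run sketch: print the install sequence for each new-cluster node.
# db7-db9 are this example's hostnames; remove "echo" to actually run.
for node in db7 db8 db9; do
  echo "ssh $node 'cd /opt/continuent/software && tar zxvf tungsten-clustering-7.0.3-141.tar.gz && cd tungsten-clustering-7.0.3-141 && tools/tpm install'"
done
```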
On every node in the original clusters, make sure all replicators are online:
shell> trepctl online; trepctl services
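Rather than logging in to each original-cluster node by hand, the check can be looped. A dry-run sketch, assuming db1 through db6 are the east/west hosts in this example; the echo is left in so nothing executes until you remove it:

```shell
# Dry-run sketch: print the replicator check for every original-cluster node.
# db1-db6 are assumed hostnames; remove "echo" to actually run.
for node in db1 db2 db3 db4 db5 db6; do
  echo "ssh $node 'trepctl online; trepctl services'"
done
```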
On all nodes in the new cluster, start the software:
shell> startall
The next steps provision the new cluster nodes. As an alternative to this method, you could take a backup of a Replica in the existing cluster and manually restore it to ALL nodes in the new cluster PRIOR to issuing the install step above. If you take this approach, you can skip the next two re-provision steps.
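The backup/restore alternative can be sketched as follows. This is a hypothetical dry run only: the backup tool (mysqldump here), file path, and host names are illustrative, and any consistent backup method you already use (e.g. xtrabackup) works equally well. The echo is left in so nothing executes.

```shell
# Hypothetical dry run of the backup/restore alternative.
# Tool choice (mysqldump), /tmp path, and hostnames are illustrative only.
echo "mysqldump --all-databases --single-transaction > /tmp/east.sql  # on Replica db2"
for node in db7 db8 db9; do
  echo "scp /tmp/east.sql $node:/tmp/ && ssh $node 'mysql < /tmp/east.sql'"
done
```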
Go to the relay (Primary) node of the new cluster (i.e. db7) and provision it from any Replica in the original cluster (i.e. db2):
shell> tungsten_provision_slave --source db2
Go to a Replica node of the new cluster (i.e. db8) and provision it from the relay node of the new cluster (i.e. db7):
shell> tungsten_provision_slave --source db7
Repeat the process for the remaining Replica nodes in the new cluster.
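The per-Replica provisioning can be looped. A dry-run sketch, assuming db8 and db9 are the new cluster's Replicas and db7 its relay as in this example; the echo is left in so nothing executes until you remove it:

```shell
# Dry-run sketch: provision each new-cluster Replica from the new relay.
# db8/db9 (Replicas) and db7 (relay) are this example's hostnames;
# remove "echo" to actually run.
for node in db8 db9; do
  echo "ssh $node 'tungsten_provision_slave --source db7'"
done
```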
Set the composite cluster to automatic mode using cctrl:
shell> cctrl
[LOGICAL] / > use global
[LOGICAL] /global > set policy automatic
During a period when it is safe to restart the connectors:
shell> ./tools/tpm promote-connector