There are three different ways to remove a service from a composite cluster (CAP or CAA):
Using the tools/tpm update --replace-release command
Using the tpm delete-service command
Manually using individual commands for each required step
See Section 4.4.1.3, “Removing a Composite Datasource/Cluster from an Existing Deployment Manually”
Internal Operations Reference
For Reference Only
Below are the internal operations performed during script processing, presented for clarity.
Do NOT perform any step below by hand.
For CAP, tpm update and tpm delete-service both run the Manager Cleanup Steps on each node
For CAA, tpm update and tpm delete-service both run the Manager Cleanup Steps and the Replicator Cleanup Steps on each node
Manager Cleanup Steps
Manager Cleanup Steps performed on a node in a service not being deleted
Delete the dataservice from the Manager layer using cctrl
Remove the matching service line from
cluster-home/conf/dataservices.properties
Manager Cleanup Steps performed on a node in the service to be removed
Stop all Tungsten services
Run the tpm uninstall command
Replicator Cleanup Steps
Replicator Cleanup Steps performed on a node in a service not being deleted
Offline all Tungsten Replicator services
Remove the two tungsten/tungsten-replicator/conf/static-*_from_{service}.properties* files
Restart the Tungsten Replicator process
Online all Tungsten Replicator services
Delete the service-specific thl and relay subdirs
Replicator Cleanup Steps performed on a node in the service to be removed
Offline all Tungsten Replicator services
Remove the two tungsten/tungsten-replicator/conf/static-*_from_{service}.properties* files
To remove an entire composite datasource (cluster) from an existing deployment using the tpm update command, you must perform the following steps on every node in the composite cluster, including the ones to be removed.
Edit the tungsten.ini file on every node in the composite cluster you are keeping
Delete the entire stanza for the cluster service you wish to remove
Delete the cluster name from the composite service definition
Save and exit the tungsten.ini file
Execute the ./tools/tpm update --replace-release command from the staging directory.
For more information about running tpm update, please see Section 10.2.4, “Configuration Changes with an INI file”.
Once the procedure is complete on all nodes, the service will no longer be visible in cctrl.
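The tungsten.ini edits described in the procedure above can be sketched as follows. This is an illustration only: the file below is a simplified sample (hypothetical clusters east and west inside a composite service global, with abbreviated stanza contents), not a complete working configuration, and on a real node you would edit /etc/tungsten/tungsten.ini by hand rather than with a script.

```shell
# Illustration only: delete the [west] stanza and drop "west" from the
# composite service definition in a simplified sample tungsten.ini.
cat > tungsten.ini <<'EOF'
[east]
topology=clustered
members=db1,db2,db3

[west]
topology=clustered
members=db4,db5,db6

[global]
composite-datasources=east,west
EOF

# Remove the entire stanza for the cluster service being deleted.
awk -v svc='west' '
  /^\[/ { skip = ($0 == "[" svc "]") }   # re-evaluate at each stanza header
  !skip                                  # print lines outside that stanza
' tungsten.ini > tungsten.ini.new && mv tungsten.ini.new tungsten.ini

# Remove the cluster name from the composite service definition.
sed -i 's/^composite-datasources=east,west$/composite-datasources=east/' tungsten.ini
```

After these two edits, only the east stanza and the trimmed composite definition remain, matching the state the procedure asks you to reach before running tpm update.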
To remove an entire composite datasource (cluster) from an existing deployment using the tpm delete-service command, you must perform the following steps on every node in the composite cluster, including the ones to be removed.
Execute the tpm delete-service {service_name_here} command
During the execution of the tpm command on every node in the composite cluster that you are keeping, you will be prompted to edit the tungsten.ini file
This step is skipped if the node is due to be decommissioned (i.e., the service has been deleted).
Delete the entire stanza for the cluster service you wish to remove
Delete the cluster name from the composite service definition
Save and exit the tungsten.ini file
tpm will complete the service deletion based on your edits. Once this is done on all of the nodes, the service will no longer be visible in cctrl.
While it is possible to manually remove a service from a Composite Cluster, there are many steps involved. Continuent recommends using either tpm update or tpm delete-service to do so.
For tpm update, see Section 4.4.1.1, “Removing a Composite Datasource/Cluster from an Existing Deployment Using tpm update”
For tpm delete-service, see Section 4.4.1.2, “Removing a Composite Datasource/Cluster from an Existing Deployment Using tpm delete-service”
To manually remove an entire composite datasource (cluster) from an existing deployment, there are two primary stages: removing the child service from the composite parent cluster inside cctrl, and then removing the Tungsten software from the nodes to be decommissioned in the cluster service being deleted.
For CAA clusters, additional manual steps at the replicator layer will also be required.
For example, to remove cluster west from a composite dataservice:
Check the current service state:
shell> cctrl -multi
[LOGICAL] / > ls

+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east global west

[LOGICAL] / > use global
[LOGICAL] /global > ls

COORDINATOR[db1:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|east(composite master:ONLINE)                                               |
|STATUS [OK] [2017/05/16 01:25:31 PM UTC]                                    |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|west(composite slave:ONLINE)                                                |
|STATUS [OK] [2017/05/16 01:25:30 PM UTC]                                    |
+----------------------------------------------------------------------------+
Switch to MAINTENANCE policy mode:
[LOGICAL] /global > set policy maintenance
policy mode is now MAINTENANCE
Remove the composite member cluster from the composite service using the drop command.
[LOGICAL] /global > drop composite datasource west
COMPOSITE DATA SOURCE 'west@global' WAS DROPPED

[LOGICAL] /global > ls

COORDINATOR[db1:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|east(composite master:ONLINE)                                               |
|STATUS [OK] [2017/05/16 01:25:31 PM UTC]                                    |
+----------------------------------------------------------------------------+

[LOGICAL] /global > cd /
[LOGICAL] / > ls

+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east global
If the removed composite datasource still appears in the top-level listing, then you will need to clean up by hand. For example:
[LOGICAL] /global > cd /
[LOGICAL] / > ls

+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east global west
Stop all managers on all nodes at the same time:
shell> manager stop
Edit the dataservices.properties file and remove the line for the dropped service:
shell> vim $CONTINUENT_HOME/cluster-home/conf/dataservices.properties
Before:
east=db1,db2,db3
west=db4,db5,db6
After:
east=db1,db2,db3
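The edit above amounts to deleting the single line for the dropped service. A minimal sketch, using a local sample file rather than the live $CONTINUENT_HOME/cluster-home/conf copy:

```shell
# Sketch of the dataservices.properties edit above, shown on a local sample
# file rather than the live $CONTINUENT_HOME/cluster-home/conf copy.
cat > dataservices.properties <<'EOF'
east=db1,db2,db3
west=db4,db5,db6
EOF

# Delete the line for the dropped cluster service.
sed -i '/^west=/d' dataservices.properties

cat dataservices.properties
# east=db1,db2,db3
```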
Start all managers one-by-one, starting with the current Primary:
shell> manager start
Once all managers are running, check the list again:
shell> cctrl -multi
[LOGICAL] / > ls

+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east global
Switch to AUTOMATIC policy mode:
[LOGICAL] / > set policy automatic
policy mode is now AUTOMATIC
Now that the cluster has been removed from the composite dataservice, the services on the old nodes must be stopped and then removed from the configuration.
Stop the running services on all nodes in the removed cluster:
shell> stopall
Now you must remove the node from the configuration, although the exact method depends on which installation method was used with tpm:
If you are using the staging directory method with tpm:
Change to the staging directory. The current staging directory can be located using tpm query staging:
shell> tpm query staging
tungsten@host1:/home/tungsten/tungsten-clustering-7.0.3-141

shell> cd /home/tungsten/tungsten-clustering-7.0.3-141
Update the configuration, omitting the cluster datasource name from the list of members of the dataservice:
shell> tpm update global --composite-datasources=east
If you are using the INI file method with tpm:
Remove the INI configuration file:
shell> rm /etc/tungsten/tungsten.ini
Stop the replicator/manager from being started again.
If all of the services on this node (replicator, manager, and connector) are being removed, remove the Tungsten Cluster installation entirely:
Remove the startup scripts from your server:
shell> sudo /opt/continuent/tungsten/cluster-home/bin/undeployall
Remove the installation directory:
shell> rm -rf /opt/continuent
If the replicator/manager has been installed on a host but the connector is not being removed, remove the start scripts to prevent the services from being automatically started:
shell> rm /etc/init.d/tmanager
shell> rm /etc/init.d/treplicator
Replicator Cleanup Steps for CAA Clusters
Replicator Cleanup Steps performed on a node in a service not being deleted
Offline all Tungsten Replicator services
Remove the two tungsten/tungsten-replicator/conf/static-*_from_{service}.properties* files
Restart the Tungsten Replicator process
Online all Tungsten Replicator services
Delete the service-specific thl and relay subdirs
Replicator Cleanup Steps performed on a node in the service to be removed
Offline all Tungsten Replicator services
Remove the two tungsten/tungsten-replicator/conf/static-*_from_{service}.properties* files
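The file removal in the steps above uses a glob over the per-service static properties files. A minimal sketch, assuming the service being removed is named west, demonstrated in a scratch directory rather than a live install (on a real node these files live under the Tungsten install, e.g. /opt/continuent/tungsten/tungsten-replicator/conf):

```shell
# Sketch only: demonstrate the static-*_from_{service}.properties* glob in a
# scratch directory. The example file names assume a surviving cluster "east"
# replicating from the dropped cluster "west".
mkdir -p tungsten/tungsten-replicator/conf
cd tungsten/tungsten-replicator/conf
touch static-east_from_west.properties \
      static-east_from_west.properties.orig \
      static-east_from_east.properties

# Remove the per-service files for the dropped service "west"; the trailing
# wildcard also catches the .orig backup copy.
rm static-*_from_west.properties*

ls
# static-east_from_east.properties
```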