To remove an entire composite datasource (cluster) from an existing deployment, there are two primary stages: removing it from the active service, and then removing it from the active configuration.
For example, to remove the cluster west from a composite dataservice:
Check the current service state:
shell> cctrl -multi
[LOGICAL] / > ls
+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east
global
west

[LOGICAL] / > use global
[LOGICAL] /global > ls

COORDINATOR[db1:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|east(composite master:ONLINE)                                               |
|STATUS [OK] [2017/05/16 01:25:31 PM UTC]                                    |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|west(composite slave:ONLINE)                                                |
|STATUS [OK] [2017/05/16 01:25:30 PM UTC]                                    |
+----------------------------------------------------------------------------+
Switch to MAINTENANCE policy mode:
[LOGICAL] /global > set policy maintenance
policy mode is now MAINTENANCE
Remove the composite member cluster from the composite service using the drop command.
[LOGICAL] /global > drop composite datasource west
COMPOSITE DATA SOURCE 'west@global' WAS DROPPED

[LOGICAL] /global > ls

COORDINATOR[db1:AUTOMATIC:ONLINE]

DATASOURCES:
+----------------------------------------------------------------------------+
|east(composite master:ONLINE)                                               |
|STATUS [OK] [2017/05/16 01:25:31 PM UTC]                                    |
+----------------------------------------------------------------------------+

[LOGICAL] /global > cd /
[LOGICAL] / > ls
+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east
global
If the removed composite datasource still appears in the top-level listing, then you will need to clean up by hand. For example:
[LOGICAL] /global > cd /
[LOGICAL] / > ls
+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east
global
west
Stop all managers on all nodes at the same time:
[LOGICAL] /global > use west
[LOGICAL] /west > manager * stop
shell > vim $CONTINUENT_HOME/cluster-home/conf/dataservices.properties
Before:
east=db1,db2,db3
west=db4,db5,db6
After:
east=db1,db2,db3
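The manual edit above can also be scripted. The following is a minimal sketch, demonstrated against a scratch copy of the file; on a real node, point CONF at $CONTINUENT_HOME/cluster-home/conf/dataservices.properties instead, and note that the west= pattern is specific to this example:

```shell
# Sketch only: delete the entry for the removed cluster from
# dataservices.properties. Demonstrated on a temporary copy; on a real
# node set CONF to the actual file and keep the backup until verified.
CONF=$(mktemp)
printf 'east=db1,db2,db3\nwest=db4,db5,db6\n' > "$CONF"
cp "$CONF" "$CONF.bak"        # back up before editing
sed -i '/^west=/d' "$CONF"    # drop the removed cluster's line
cat "$CONF"
```

This must be done on every node in the remaining clusters so that all copies of dataservices.properties agree.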
Start all managers one-by-one, starting with the current Primary:
shell > manager start
Once all managers are running, check the list again:
shell> cctrl -multi
[LOGICAL] / > ls
+----------------------------------------------------------------------------+
|DATA SERVICES:                                                              |
+----------------------------------------------------------------------------+
east
global
Switch to AUTOMATIC policy mode:
[LOGICAL] / > set policy automatic
policy mode is now AUTOMATIC
Now that the cluster has been removed from the composite dataservice, the services on the old nodes must be stopped and then removed from the configuration.
Stop the running services on all nodes in the removed cluster:
shell> stopall
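With several nodes to visit, the stop can be driven from a single management host. A hedged sketch, assuming passwordless SSH as the tungsten user and the hypothetical host names db4-db6 from the example above; it prints the commands as a dry run rather than executing them:

```shell
# Hypothetical node list for the removed 'west' cluster; adjust to your hosts.
NODES="db4 db5 db6"
for host in $NODES; do
  # Dry run: prints the command. Remove the leading 'echo' to execute,
  # once the host list and SSH access have been verified.
  echo ssh "tungsten@$host" stopall
done
```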
Now you must remove the node from the configuration; the exact method depends on which installation method was used with tpm:
If you are using the staging directory method with tpm:
Change to the staging directory. The current staging directory can be located using tpm query staging:
shell> tpm query staging
tungsten@host1:/home/tungsten/tungsten-clustering-7.1.4-10
shell> cd /home/tungsten/tungsten-clustering-7.1.4-10
Update the configuration, omitting the cluster datasource name from the list of members of the dataservice:
shell> tpm update global --composite-datasources=east
If you are using the INI file method with tpm:
Remove the INI configuration file:
shell> rm /etc/tungsten/tungsten.ini
Stop the replicator/manager from being started again.
If all of the services on this node (replicator, manager, and connector) are being removed, remove the Tungsten Cluster installation entirely:
Remove the startup scripts from your server:
shell> sudo /opt/continuent/tungsten/cluster-home/bin/undeployall
Remove the installation directory:
shell> rm -rf /opt/continuent
If the replicator/manager has been installed on a host but the connector is not being removed, remove the start scripts to prevent the services from being automatically started:
shell> rm /etc/init.d/tmanager
shell> rm /etc/init.d/treplicator