4.5.2.3. Removing a Composite Datasource/Cluster from an Existing Deployment Manually

While it is possible to manually remove a service from a Composite Cluster, many steps are involved, so Continuent recommends using either tpm update or tpm delete-service instead.

For tpm update, see Section 4.5.2.1, “Removing a Composite Datasource/Cluster from an Existing Deployment Using tpm update”

For tpm delete-service, see Section 4.5.2.2, “Removing a Composite Datasource/Cluster from an Existing Deployment Using tpm delete-service”

Manually removing an entire composite datasource (cluster) from an existing deployment involves two primary stages: removing the child service from the composite parent cluster inside cctrl, and then removing the Tungsten software from the nodes being decommissioned in the deleted cluster service.

For Composite Active/Active (CAA) clusters, additional manual steps at the replicator layer are also required.

For example, to remove cluster west from a composite dataservice:

  1. Check the current service state:

    shell> cctrl -multi
        [LOGICAL] / > ls
    
        +----------------------------------------------------------------------------+
        |DATA SERVICES:                                                              |
        +----------------------------------------------------------------------------+
        east
        global
        west
    
        [LOGICAL] / > use global
        [LOGICAL] /global > ls
    
        COORDINATOR[db1:AUTOMATIC:ONLINE]
    
        DATASOURCES:
        +----------------------------------------------------------------------------+
        |east(composite master:ONLINE)                                               |
        |STATUS [OK] [2017/05/16 01:25:31 PM UTC]                                    |
        +----------------------------------------------------------------------------+
    
        +----------------------------------------------------------------------------+
        |west(composite slave:ONLINE)                                                |
        |STATUS [OK] [2017/05/16 01:25:30 PM UTC]                                    |
        +----------------------------------------------------------------------------+
  2. Switch to MAINTENANCE policy mode:

    [LOGICAL] /global > set policy maintenance
        policy mode is now MAINTENANCE
  3. Remove the composite member cluster from the composite service using the drop command:

    [LOGICAL] /global > drop composite datasource west
        COMPOSITE DATA SOURCE 'west@global' WAS DROPPED
    
        [LOGICAL] /global > ls
    
        COORDINATOR[db1:AUTOMATIC:ONLINE]
    
        DATASOURCES:
        +----------------------------------------------------------------------------+
        |east(composite master:ONLINE)                                               |
        |STATUS [OK] [2017/05/16 01:25:31 PM UTC]                                    |
        +----------------------------------------------------------------------------+
    
        [LOGICAL] /global > cd /
        [LOGICAL] / > ls
    
        +----------------------------------------------------------------------------+
        |DATA SERVICES:                                                              |
        +----------------------------------------------------------------------------+
        east
        global
  4. If the removed composite datasource still appears in the top-level listing, then you will need to clean up by hand. For example:

    [LOGICAL] /global > cd /
        [LOGICAL] / > ls
    
        +----------------------------------------------------------------------------+
        |DATA SERVICES:                                                              |
        +----------------------------------------------------------------------------+
        east
        global
        west

    Stop all managers on all nodes at the same time:

    shell> manager stop
    shell> vim $CONTINUENT_HOME/cluster-home/conf/dataservices.properties
    
        Before:
        east=db1,db2,db3
        west=db4,db5,db6
    
        After:
        east=db1,db2,db3

    Start all managers one-by-one, starting with the current Primary:

    shell> manager start

    Once all managers are running, check the list again:

    shell> cctrl -multi
        [LOGICAL] / > ls
    
        +----------------------------------------------------------------------------+
        |DATA SERVICES:                                                              |
        +----------------------------------------------------------------------------+
        east
        global
  5. Switch to AUTOMATIC policy mode:

    [LOGICAL] / > set policy automatic
        policy mode is now AUTOMATIC
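
The manual dataservices.properties edit in step 4 can also be scripted. This is a minimal sketch, not an official tool: the /opt/continuent default for CONTINUENT_HOME and the service name west are assumptions taken from the examples above, so adjust both for your installation.

```shell
# Hedged sketch: drop a removed service's entry from dataservices.properties.
# The /opt/continuent default and the "west" service name are assumptions
# from the examples above; verify both before use.
remove_dataservice_entry() {
    service=$1
    file="${CONTINUENT_HOME:-/opt/continuent}/cluster-home/conf/dataservices.properties"
    # Keep a .bak backup, then delete the line defining the removed service.
    sed -i.bak "/^${service}=/d" "$file"
}

# With all managers stopped, run on every remaining node:
# remove_dataservice_entry west
```

As with the manual edit, run this only while all managers are stopped, then restart them one-by-one starting with the current Primary.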

Now that the cluster has been removed from the composite dataservice, the services on the old nodes must be stopped and then removed from the configuration.

  1. Stop the running services on all nodes in the removed cluster:

    shell> stopall
  2. Now you must remove the node from the configuration; the exact method depends on which installation method was used with tpm:

    • If you are using the staging directory method with tpm:

      1. Change to the staging directory. The current staging directory can be located using tpm query staging:

        shell> tpm query staging
            tungsten@host1:/home/tungsten/tungsten-clustering-7.1.4-10
        shell> cd /home/tungsten/tungsten-clustering-7.1.4-10
      2. Update the configuration, omitting the cluster datasource name from the list of members of the dataservice:

        shell> tpm update global --composite-datasources=east
    • If you are using the INI file method with tpm:

      • Remove the INI configuration file:

        shell> rm /etc/tungsten/tungsten.ini
  3. Prevent the replicator and manager from being started again:

    • If all of the services on this node (replicator, manager, and connector) are being removed, remove the Tungsten Cluster installation entirely:

      • Remove the startup scripts from your server:

        shell> sudo /opt/continuent/tungsten/cluster-home/bin/undeployall
      • Remove the installation directory:

        shell> rm -rf /opt/continuent
    • If the replicator and manager have been installed on a host but the connector is not being removed, remove the startup scripts to prevent the replicator and manager from being automatically started:

      shell> rm /etc/init.d/tmanager
      shell> rm /etc/init.d/treplicator

Replicator Cleanup Steps for CAA Clusters

  • Replicator Cleanup Steps performed on a node in a service not being deleted

    • Offline all Tungsten Replicator services

    • Remove the two tungsten/tungsten-replicator/conf/static-*_from_{service}.properties files

    • Restart the Tungsten Replicator process

    • Online all Tungsten Replicator services

    • Delete the service-specific thl and relay subdirectories

  • Replicator Cleanup Steps performed on a node in the service being removed

    • Offline all Tungsten Replicator services

    • Remove the two tungsten/tungsten-replicator/conf/static-*_from_{service}.properties files
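
The cleanup steps above can be sketched as a shell sequence. This is a non-authoritative sketch: the /opt/continuent paths, the service name west, and the use of trepctl -all-services are assumptions to verify against your installation. It defaults to a dry run that only prints each command; set DRY_RUN=0 on a real node once the commands have been checked.

```shell
# Hedged sketch of the replicator cleanup on a node in a surviving service,
# assuming an installation under /opt/continuent and a removed service named
# "west". DRY_RUN=1 (the default here) prints each command instead of running it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "$*"; else "$@"; fi; }

run trepctl -all-services offline
run rm /opt/continuent/tungsten/tungsten-replicator/conf/static-*_from_west.properties
run replicator restart
run trepctl -all-services online
# Service-specific thl and relay subdirectories for the dropped service:
run rm -rf /opt/continuent/thl/*_from_west /opt/continuent/relay/*_from_west
```

On a node in the service being removed, the first two actions are the same: offline all replicator services, then remove the matching static-*_from_{service}.properties files.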