3.2.4. Adding a remote Composite Cluster

Adding an entire new cluster provides a significant increase in availability and capacity. The nodes that form the new cluster will be fully aware of the original cluster(s) and will communicate with the existing managers and datasources within those clusters.

The following steps guide you through updating the configuration to include the new hosts and services you are adding.

  1. On the new host(s), ensure that the steps in Appendix B, Prerequisites have been followed.

  2. Let's assume that we have a composite cluster dataservice called global, with two clusters, east and west, each containing three nodes.

    In this worked example, we show how to add an additional cluster service called north with three new nodes.
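
    For reference, if using an INI installation, the existing configuration for this topology might look something like the sketch below. The host names (db1 through db6) and the exact option set are illustrative assumptions; your file will differ:

    shell> cat /etc/tungsten/tungsten.ini
    # [defaults] section and other required options omitted for brevity
    [east]
    topology=clustered
    master=db1
    slaves=db2,db3
    connectors=db1,db2,db3

    [west]
    topology=clustered
    relay-source=east
    relay=db4
    slaves=db5,db6
    connectors=db4,db5,db6

    [global]
    composite-datasources=east,west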

  3. Set the composite cluster to maintenance mode using cctrl:

    shell> cctrl
    [LOGICAL] / > use global
    [LOGICAL] /global > set policy maintenance
  4. Using the following as an example, update the configuration to include the new cluster and extend the composite dataservice block. If using an INI installation, copy the updated INI file to all of the nodes in the new cluster.

    The examples below show both the Staging and the INI methods; follow whichever matches your installation.

    For a Staging-based deployment:

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.1.25-6
    
    shell> echo The staging USER is `tpm query staging| cut -d: -f1 | cut -d@ -f1`
    The staging USER is tungsten
    
    shell> echo The staging HOST is `tpm query staging| cut -d: -f1 | cut -d@ -f2`
    The staging HOST is db1
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.1.25-6
    
    shell> ssh {STAGING_USER}@{STAGING_HOST}
    shell> cd {STAGING_DIRECTORY}
    shell> ./tools/tpm configure north \
        --connectors=db7,db8,db9 \
        --relay-source=east \
        --relay=db7 \
        --slaves=db8,db9 \
        --topology=clustered
    
    shell> ./tools/tpm configure global \
        --composite-datasources=east,west,north
    

    Run the tpm command to update the software with the Staging-based configuration:

    shell> ./tools/tpm update --no-connectors --replace-release

    For information about making updates when using a Staging-method deployment, please see Section 10.3.7, “Configuration Changes from a Staging Directory”.
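
    Optionally, you can confirm what was recorded by printing the active configuration back as a set of tpm configure commands; the exact output depends on your deployment:

    shell> ./tools/tpm reverse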

    For an INI-based deployment:

    shell> vi /etc/tungsten/tungsten.ini
    [north]
    ...
    connectors=db7,db8,db9
    relay-source=east
    relay=db7
    slaves=db8,db9
    topology=clustered
    
    [global]
    ...
    composite-datasources=east,west,north
    

    Run the tpm command to update the software with the INI-based configuration:

    shell> tpm query staging
    tungsten@db1:/opt/continuent/software/tungsten-clustering-6.1.25-6
    
    shell> echo The staging DIRECTORY is `tpm query staging| cut -d: -f2`
    The staging DIRECTORY is /opt/continuent/software/tungsten-clustering-6.1.25-6
    
    shell> cd {STAGING_DIRECTORY}
    
    shell> ./tools/tpm update --no-connectors --replace-release

    For information about making updates when using an INI file, please see Section 10.4.4, “Configuration Changes with an INI file”.
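
    Optionally, before applying the update, you can check the new INI configuration against the running environment; validate-update runs the update checks without making any changes:

    shell> ./tools/tpm validate-update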

  5. Using the --no-connectors option updates the current deployment without restarting the existing connectors; they continue to run the previous configuration until they are restarted in the final step below.

  6. If installing via the INI method, download and unpack the software on all nodes in the new cluster, then install it:

    shell> cd /opt/continuent/software
    shell> tar zxvf tungsten-clustering-6.1.25-6.tar.gz
    shell> cd /opt/continuent/software/tungsten-clustering-6.1.25-6
    shell> tools/tpm install

  7. On every node in the original clusters, make sure all replicators are online:

    shell> trepctl online; trepctl services
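
    To run this check across every node in the original clusters in one pass, a simple loop over ssh can help. This sketch assumes passwordless ssh and that trepctl is in the remote PATH; the host list (db1 through db6) is illustrative:

    shell> for host in db1 db2 db3 db4 db5 db6; do
             echo "=== ${host} ==="
             ssh ${host} 'trepctl online; trepctl services | grep -E "serviceName|state"'
           done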
  8. On all nodes in the new cluster, start the software:

    shell> startall
  9. The next steps provision the new cluster nodes. An alternative to this method is to take a backup of a Replica in an existing cluster and manually restore it to ALL nodes in the new cluster PRIOR to issuing the install step above, as sketched below. If you take that approach, you can skip the two re-provisioning steps that follow.
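
    A minimal sketch of that backup/restore alternative, assuming a small dataset, mysqldump, and passwordless ssh; for production data volumes a physical backup tool such as xtrabackup is more typical, and the host names and paths here are illustrative:

    # On a Replica in an existing cluster (e.g. db2):
    shell> mysqldump --all-databases --single-transaction > /tmp/provision.sql

    # Load the dump on EVERY node in the new cluster BEFORE the install step above:
    shell> for host in db7 db8 db9; do
             scp /tmp/provision.sql ${host}:/tmp/
             ssh ${host} 'mysql < /tmp/provision.sql'
           done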

  10. Go to the relay (Primary) node of the new cluster (db7 in this example) and provision it from any Replica in the original cluster (db2 in this example):

    shell> tungsten_provision_slave --source db2
  11. Go to a Replica node of the new cluster (db8 in this example) and provision it from the relay node of the new cluster (db7):

    shell> tungsten_provision_slave --source db7
  12. Repeat the process for the remaining Replica nodes in the new cluster.
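
    With three nodes per cluster in this example, only db9 remains; on db9, run the same command against the new relay:

    shell> tungsten_provision_slave --source db7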

  13. Set the composite cluster to automatic mode using cctrl:

    shell> cctrl
    [LOGICAL] / > use global
    [LOGICAL] /global > set policy automatic
  14. During a period when it is safe to restart the connectors, promote them so that they restart with the new configuration:

    shell> ./tools/tpm promote-connector
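
    Optionally, confirm that the new north service is visible within the composite service (output omitted here, as it varies by deployment):

    shell> cctrl
    [LOGICAL] / > use global
    [LOGICAL] /global > ls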