3.2. Deploying Multisite/Multimaster Clusters

Warning

The procedures in this section are designed for the pre-v6.x Multisite/Multimaster topology ONLY. Do NOT use these procedures with version 6.x Multisite Clusters.

For version 6.x Multisite Clustering, please refer to Deploying Composite Multimaster Clustering.

A Multisite/Multimaster topology provides all the benefits of a typical dataservice at a single location, while also replicating the information to another site. The underlying configuration within Tungsten Clustering uses the Tungsten Replicator System of Record (SOR) service, which enables multimaster operation between the two sites.

The configuration is in two separate parts:

  • Tungsten Clustering dataservice that operates the main dataservice within each site.

  • Tungsten Replicator dataservice that provides replication between the two sites; one service replicates from site1 to site2, and the other from site2 to site1.

A sample display of how this operates is provided in Figure 3.2, “Topologies: Multisite/Multimaster Clusters”.

Figure 3.2. Topologies: Multisite/Multimaster Clusters


The service can be described as follows:

  • Tungsten Clustering Service: east

    Replicates data between east1, east2 and east3 (not shown).

  • Tungsten Clustering Service: west

    Replicates data between west1, west2 and west3 (not shown).

  • Tungsten Replicator Service: east

    Defines the replication of data from the east cluster as a replicator service using Tungsten Replicator. This service reads from all the hosts within the Tungsten Clustering service east and writes to west1, west2, and west3. The replicator service uses the same service name as the cluster to ensure that writes already applied by the running clustered service are not duplicated.

    Data is read from the east Tungsten Clustering dataservice and replicated to the west Tungsten Clustering dataservice. The configuration allows for changes in the Tungsten Clustering dataservice (such as a switch or failover) without upsetting the site-to-site replication.

  • Tungsten Replicator Service: west

    Defines the replication of data from the west cluster as a replicator service using Tungsten Replicator. This service reads from all the hosts within the Tungsten Clustering service west and writes to east1, east2, and east3. The replicator service uses the same service name as the cluster to ensure that writes already applied by the running clustered service are not duplicated.

    Data is read from the west Tungsten Clustering dataservice and replicated to the east Tungsten Clustering dataservice. The configuration allows for changes in the Tungsten Clustering dataservice (such as a switch or failover) without upsetting the site-to-site replication.

  • Tungsten Replicator Service: east_west

    Replicates data from east to west using Tungsten Replicator. This is a service alias that defines the cluster-slave link: data is read from the east dataservice and applied (as a slave) to the servers within the destination west cluster.

  • Tungsten Replicator Service: west_east

    Replicates data from west to east using Tungsten Replicator. This is a service alias that defines the cluster-slave link: data is read from the west dataservice and applied (as a slave) to the servers within the destination east cluster.

Requirements.  Recommended releases for Multisite/Multimaster deployments are Tungsten Clustering 5.3.x and Tungsten Replicator 5.3.x.

3.2.1. Prepare: Multisite/Multimaster Clusters

Warning

The procedures in this section are designed for the pre-v6.x Multisite/Multimaster topology ONLY. Do NOT use these procedures with version 6.x Multisite Clusters.

For version 6.x Multisite Clustering, please refer to Deploying Composite Multimaster Clustering.

Some considerations must be taken into account for any multimaster scenario:

  • For tables that use auto-increment, collisions are possible if two hosts select the same auto-increment number. You can reduce the effects by configuring each MySQL host with different auto-increment settings, changing the offset and the increment values. For example, add the following lines to your my.cnf file:

    auto-increment-offset = 1
    auto-increment-increment = 4

    In this way, the increments can be staggered on each machine and collisions are unlikely to occur; a worked example is shown after this list.

  • Use row-based replication. Update your configuration file to explicitly use row-based replication by adding the following to your my.cnf file:

    binlog-format = row
  • Beware of triggers. Triggers can cause problems during replication because if they are applied on the slave as well as the master you can get data corruption and invalid data. Tungsten Clustering cannot prevent triggers from executing on a slave, and in a multimaster topology there is no sensible way to disable triggers. Instead, check within the trigger whether you are executing on a master or a slave (a sketch of this approach is shown after this list). For more information, see Section C.3.1, “Triggers”.
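
A worked example of the auto-increment staggering described above: give every host that can accept writes the same increment but a unique offset. The host assignments below are illustrative only:

# east1 my.cnf -- generates IDs 1, 5, 9, 13, ...
auto-increment-offset = 1
auto-increment-increment = 4

# west1 my.cnf -- generates IDs 2, 6, 10, 14, ...
auto-increment-offset = 2
auto-increment-increment = 4

Because the two sequences never overlap, east1 and west1 cannot allocate the same auto-increment value.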
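
For triggers, one way to check where you are executing is to test which user is applying the change: application writes arrive under the application account, while the replicator applies changes as the replication user (tungsten in the installation examples below). The following sketch is illustrative only; the orders table and created_at column are hypothetical, and the user check should be adapted to your own accounts:

DELIMITER $$
CREATE TRIGGER orders_before_insert
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
  -- USER() returns the connecting client; changes applied by the
  -- replicator connect as the tungsten user, so the trigger body is
  -- skipped and its effect is not duplicated on the slave.
  IF SUBSTRING_INDEX(USER(), '@', 1) != 'tungsten' THEN
    SET NEW.created_at = NOW();
  END IF;
END$$
DELIMITER ;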

3.2.2. Install: Multisite/Multimaster Clusters

Warning

The procedures in this section are designed for the pre-v6.x Multisite/Multimaster topology ONLY. Do NOT use these procedures with version 6.x Multisite Clusters.

For version 6.x Multisite Clustering, please refer to Deploying Composite Multimaster Clustering.

Creating the configuration requires two distinct steps: the first creates the two Tungsten Clustering deployments, and the second creates the Tungsten Replicator configurations, using different network ports and different install directories.

  1. Install the Tungsten Clustering and Tungsten Replicator packages or download the tarballs, and unpack them:

    shell> cd /opt/continuent/software
    shell> tar zxf tungsten-clustering-5.4.1-41.tar.gz
    shell> tar zxf tungsten-replicator-5.4.1-41.tar.gz
  2. Change to the Tungsten Clustering directory:

    shell> cd tungsten-clustering-5.4.1-41
  3. Run tpm to configure the installation. Both the Staging method (Section 9.3, “tpm Staging Configuration”) and the INI method are shown below.

    For an INI install, the single ini file contains all the configuration for both the cluster deployment and the replicator deployment.

    For a staging install, you first use the cluster configuration shown below, and then configure the replicator as a separate process; those additional steps are covered in the steps that follow.

    Staging method:

    shell> ./tools/tpm configure defaults \
        --reset \
        --user=tungsten \
        --install-directory=/opt/continuent \
        --replication-user=tungsten \
        --replication-password=secret \
        --replication-port=3306 \
        --profile-script=~/.bashrc \
        --application-user=app_user \
        --application-password=secret \
        --skip-validation-check=MySQLPermissionsCheck \
        --start-and-report
    
    
    shell> ./tools/tpm configure east \
        --topology=clustered \
        --connectors=east1,east2,east3 \
        --master=east1 \
        --members=east1,east2,east3
    
    shell> ./tools/tpm configure west \
        --topology=clustered \
        --connectors=west1,west2,west3 \
        --master=west1 \
        --members=west1,west2,west3
    
    INI method, placing all of the configuration in a single file:

    shell> vi /etc/tungsten/tungsten.ini
    [defaults]
    user=tungsten
    install-directory=/opt/continuent
    replication-user=tungsten
    replication-password=secret
    replication-port=3306
    profile-script=~/.bashrc
    application-user=app_user
    application-password=secret
    skip-validation-check=MySQLPermissionsCheck
    start-and-report
    
    [defaults.replicator]
    home-directory=/opt/replicator
    rmi-port=10002
    executable-prefix=mm
    
    [east]
    topology=clustered
    connectors=east1,east2,east3
    master=east1
    members=east1,east2,east3
    
    [west]
    topology=clustered
    connectors=west1,west2,west3
    master=west1
    members=west1,west2,west3
    
    [east_west]
    topology=cluster-slave
    master-dataservice=east
    slave-dataservice=west
    thl-port=2113
    
    [west_east]
    topology=cluster-slave
    master-dataservice=west
    slave-dataservice=east
    thl-port=2115
    

    Run tpm to install the software with the configuration.

    shell> ./tools/tpm install

    During the startup and installation, tpm will notify you of any problems that need to be fixed before the service can be correctly installed and started. If the service starts correctly, you should see the configuration and current status of the service.

  4. Change to the Tungsten Replicator directory:

    shell> cd tungsten-replicator-5.4.1-41
  5. Run tpm to configure the installation. This method assumes you are using the Section 9.3, “tpm Staging Configuration” method.

  6. If you are running a staging install, first configure the replicator using the following example. If you are configuring using an INI file, skip straight to the install step below.

    shell> ./tools/tpm configure defaults \
        --reset \
        --user=tungsten \
        --install-directory=/opt/replicator \
        --replication-user=tungsten \
        --replication-password=secret \
        --replication-port=3306 \
        --profile-script=~/.bashrc \
        --application-user=app_user \
        --application-password=secret \
        --skip-validation-check=MySQLPermissionsCheck \
        --rmi-port=10002 \
        --executable-prefix=mm \
        --thl-port=2113 \
        --start-and-report
    
    shell> ./tools/tpm configure east \
        --topology=clustered \
        --connectors=east1,east2,east3 \
        --master=east1 \
        --members=east1,east2,east3
    
    shell> ./tools/tpm configure west \
        --topology=clustered \
        --connectors=west1,west2,west3 \
        --master=west1 \
        --members=west1,west2,west3
    
    shell> ./tools/tpm configure east_west \
        --topology=cluster-slave \
        --master-dataservice=east \
        --slave-dataservice=west \
        --thl-port=2113
    
    shell> ./tools/tpm configure west_east \
        --topology=cluster-slave \
        --master-dataservice=west \
        --slave-dataservice=east \
        --thl-port=2115
    
  7. Run tpm to install the software with the configuration.

    shell> ./tools/tpm install

    During the startup and installation, tpm will notify you of any problems that need to be fixed before the service can be correctly installed and started. If the service starts correctly, you should see the configuration and current status of the service.

  8. Initialize your PATH and environment.

    shell> source /opt/continuent/share/env.sh
    shell> source /opt/replicator/share/env.sh

The Multisite/Multimaster (MSMM) clustering should now be installed and ready to use.
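
As a quick sanity check, assuming both env.sh scripts above have been sourced, you can ask each replicator which services it is running:

shell> trepctl services      # cluster replicator under /opt/continuent
shell> mm_trepctl services   # cross-site replicator under /opt/replicator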

3.2.3. Best Practices: Multisite/Multimaster Clusters

Warning

The procedures in this section are designed for the pre-v6.x Multisite/Multimaster topology ONLY. Do NOT use these procedures with version 6.x Multisite Clusters.

For version 6.x Multisite Clustering, please refer to Deploying Composite Multimaster Clustering.

Note

In addition to this information, follow the guidelines in Section 2.5, “Best Practices”.

  • Running a Multisite/Multimaster service uses many different components to keep data updated on all servers. Monitoring the dataservice is divided into monitoring the two different clusters. Be mindful when using commands that you are addressing the correct installation: either use the full path to the command under /opt/continuent or /opt/replicator, or use the aliases created by setting the --executable-prefix=mm option, whereby trepctl becomes mm_trepctl (see the example after this list).

  • Configure your database servers with distinct auto_increment_increment and auto_increment_offset settings. Each location that may accept writes should have a unique offset value.
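
For example, the following two invocations address the same cross-site replicator; the long form reflects the standard layout beneath the /opt/replicator install directory used in this deployment:

shell> /opt/replicator/tungsten/tungsten-replicator/bin/trepctl status
shell> mm_trepctl status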

Using cctrl gives you the dataservice status individually for the east and west dataservices. For example, the east dataservice is shown below:

Continuent Tungsten 5.4.1 build 41
east: session established
[LOGICAL] /east > ls

COORDINATOR[east1:AUTOMATIC:ONLINE]

ROUTERS:
+----------------------------------------------------------------------------+
|connector@east1[17951](ONLINE, created=0, active=0)                         |
|connector@east2[17939](ONLINE, created=0, active=0)                         |
|connector@east3[17961](ONLINE, created=0, active=0)                         |
+----------------------------------------------------------------------------+

DATASOURCES:
+----------------------------------------------------------------------------+
|east1(master:ONLINE, progress=29, THL latency=0.739)                        |
|STATUS [OK] [2013/11/25 11:24:35 AM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=master, state=ONLINE)                                     |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|east2(slave:ONLINE, progress=29, latency=0.721)                             |
|STATUS [OK] [2013/11/25 11:24:39 AM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=east1, state=ONLINE)                        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|east3(slave:ONLINE, progress=29, latency=1.143)                             |
|STATUS [OK] [2013/11/25 11:24:38 AM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=east1, state=ONLINE)                        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

When checking the current status, it is important to compare the sequence numbers from each service correctly. There are four services to monitor: the Tungsten Clustering service east, a Tungsten Replicator service east that reads data from the east Tungsten Clustering service and applies it within west, and the corresponding west Tungsten Clustering and west Tungsten Replicator services.

  • When data is inserted on the master within the east Tungsten Clustering, use cctrl to determine the cluster status. Sequence numbers within the Tungsten Clustering east should match, and latencies between hosts in the Tungsten Clustering service are relative to each other.

  • When data is inserted on east, the sequence number of the east Tungsten Clustering service and east Tungsten Replicator service (on west{1,2,3}) should be compared.

  • When data is inserted on the master within the west Tungsten Clustering, use cctrl to determine the cluster status. Sequence numbers within the Tungsten Clustering west should match, and latencies between hosts in the Tungsten Clustering service are relative to each other.

  • When data is inserted on west, the sequence number of the west Tungsten Clustering service and west Tungsten Replicator service (on east{1,2,3}) should be compared.

                             Tungsten Clustering Service Seqno    Tungsten Replicator Service Seqno
Operation                    east             west                 east             west
Insert/update data on east   Seqno Increment  -                    Seqno Increment  -
Insert/update data on west   -                Seqno Increment      -                Seqno Increment
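
As a concrete check, after inserting data on east, compare the applied sequence numbers at both ends of the east replicator service; when the values match, the change has been applied across sites. A sketch using the hosts from this deployment, on east1:

shell> trepctl -service east status | grep appliedLastSeqno

and on west1:

shell> mm_trepctl -service east status | grep appliedLastSeqno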

Within each cluster, cctrl can be used to monitor the current status. For more information on checking the status and controlling operations, see Section 5.3, “Checking Dataservice Status”.

Note

For convenience, the shell PATH can be updated with the tools and configuration. With two separate services, both environments must be updated. To update the shell with the Tungsten Clustering service and tools:

shell> source /opt/continuent/share/env.sh

To update the shell with the Tungsten Replicator service and tools:

shell> source /opt/replicator/share/env.sh

To monitor all services and the current status, you can also use the multi_trepctl command (part of the Tungsten Replicator installation). This generates a unified status report for all the hosts and services configured:

shell> multi_trepctl --by-service
| host  | servicename | role   | state  | appliedlastseqno | appliedlatency |
| east1 | east        | master | ONLINE |               53 |        120.161 |
| east3 | east        | master | ONLINE |               44 |          0.697 |
| east2 | east        | slave  | ONLINE |               53 |        119.961 |
| west1 | east        | slave  | ONLINE |               53 |        119.834 |
| west2 | east        | slave  | ONLINE |               53 |        181.128 |
| west3 | east        | slave  | ONLINE |               53 |        204.790 |
| west1 | west        | master | ONLINE |           294327 |          0.285 |
| west2 | west        | master | ONLINE |           231595 |          0.316 |
| east1 | west        | slave  | ONLINE |           294327 |          0.879 |
| east2 | west        | slave  | ONLINE |           294327 |          0.567 |
| east3 | west        | slave  | ONLINE |           294327 |          1.046 |
| west3 | west        | slave  | ONLINE |           231595 |         22.895 |

In the above example, the west services have a much higher applied last sequence number than the east services because all the writes have been applied within the west cluster.

To monitor individual servers and/or services, use trepctl with the correct port number and service name. For example, on east1, to check the status of the replicator within the Tungsten Clustering service:

shell> trepctl status

To check the cross-site Tungsten Replicator service, use the prefixed command and explicitly specify the service:

shell> mm_trepctl -service west status
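
If you prefer not to rely on the alias, a sketch of the equivalent long-form invocation, assuming the rmi-port=10002 value from the [defaults.replicator] configuration above:

shell> /opt/replicator/tungsten/tungsten-replicator/bin/trepctl -port 10002 -service west status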