G.1. General Questions

G.1.1. On a Tungsten Replicator Replica, how do I set both the local Replica THL listener port and the upstream Primary's THL listener port?
G.1.2. How do I update the IP address of one or more hosts in the cluster?
G.1.3. How do I fix the mysql-connectorj to drizzle MySQL driver bug which prevents my application from connecting through the Connector?
G.1.4. How do I update the password for the replication user in the cluster?
G.1.5. Why is one of my hosts regularly a number of seconds behind my other Replicas?
G.1.6. Does the replicate filter (i.e. replicate.do and replicate.ignore) address both DML and DDL?
G.1.7. How do you change the replicator heap size after installation?

G.1.1.

On a Tungsten Replicator Replica, how do I set both the local Replica THL listener port and the upstream Primary's THL listener port?

You need to specify two options: --thl-port sets the local Replica THL listener port, and --master-thl-port defines the upstream Primary's THL listener port. If --master-thl-port is not specified, the value of --thl-port is used for both.
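
For example, a minimal sketch using tpm from the staging directory, where the service name alpha and the port numbers are placeholders rather than values from any particular installation:

# 'alpha' and the port numbers below are placeholders only
shell> ./tools/tpm configure alpha \
    --thl-port=2113 \
    --master-thl-port=2112
shell> ./tools/tpm update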

G.1.2.

How do I update the IP address of one or more hosts in the cluster?

To update the IP address used by one or more hosts in your cluster, you must perform the following steps:

  1. If possible, switch the node into SHUNNED mode.

  2. Reconfigure the IP address on the machine.

  3. Update the hostname lookup, for example, by editing the IP configuration in /etc/hosts.

  4. Restart the networking to reconfigure the service.

  5. On the node that has changed IP address, run:

    shell> tpm update

    This updates the configuration, but does not restart the individual services, which may still have the old, incorrect IP address information cached for the host.

  6. Restart the node services:

    shell> tpm restart
  7. On each other node within the cluster:

    1. Update the hostname lookup for the new node, for example, by updating the IP configuration in /etc/hosts (see the sketch after this list).

    2. Update the configuration, using tpm:

      shell> tpm update

    3. Restart the services:

      shell> tpm restart
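
A minimal sketch of the per-node steps above, assuming the relocated host is named db3 and its new address is 192.0.2.30 (both values are placeholders):

# 'db3' and 192.0.2.30 are placeholders for the relocated host and its new address
shell> sudo vi /etc/hosts     # change the db3 entry to: 192.0.2.30 db3
shell> tpm update
shell> tpm restart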

G.1.3.

How do I fix the mysql-connectorj to drizzle MySQL driver bug which prevents my application from connecting through the Connector?

When upgrading from version 2 to v4+, or simply moving away from the mysql-connectorj driver to the Drizzle driver, the update process does not correctly remove all of the connectorj properties, causing a mismatch when Connectors that did get the update try to make a connection to the cluster.

This is a known issue, logged as CT-7.

A permanent fix is not yet available, but the following workaround will correct the issue by hand:

To properly identify this issue, check the extended output of cctrl for the active driver. There will be one line of output for each node in the local cluster. Repeat the check once per cluster; it does not matter which node you run it on.

shell> echo ls -l | cctrl -expert | grep driver: | awk '{print $3}'

For example, for a three-node cluster, you may see something like this:

com.mysql.jdbc.Driver
com.mysql.jdbc.Driver
com.mysql.jdbc.Driver

If any line on any node in any cluster shows com.mysql.jdbc.Driver, use the workaround below:

Warning

If you have multiple clusters, whether MSMM, CMM or Composite HA/DR, always ensure you check ALL of them. In Composite clusters in particular, the Primary cluster, and especially the Primary node, must be checked and corrected if necessary.

Ensure the tpm update was done with the --replace-release option.

Review the tpm reverse output and check it against the following:

  • --mysql-driver=drizzle should exist in the defaults section

  • You may (or may not) see the old --mysql-connectorj-path entry within each service definition or in the defaults

  • If none of the above appear in the output, then the Drizzle driver will be active by default as of v4.0.0.
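
As an illustrative check only (the exact layout of the tpm reverse output varies by version), the relevant lines can be narrowed down with grep:

# Illustration only; inspect the full tpm reverse output if in doubt
shell> tpm reverse | grep -E 'mysql-driver|mysql-connectorj-path'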

Repeat the following steps for all clusters, one by one:

  1. Place the cluster into Maintenance Mode using the cctrl command:

    cctrl> set policy maintenance
  2. Stop all managers on all nodes within the single cluster:

    shell> manager stop
    Stopping Tungsten Manager Service...
    Waiting for Tungsten Manager Service to exit...
    Stopped Tungsten Manager Service.
  3. On all nodes within the single cluster, remove all files from the /opt/continuent/tungsten/cluster-home/conf/cluster/{local_servicename}/datasource/ directory (a sketch follows this list).

    Only delete the files from the local cluster service name directory; do not touch the composite service directory if there is one.

  4. Start all managers on all nodes within the single cluster, starting with the Primary:

    shell> manager start
    Starting Tungsten Manager Service...
    Waiting for Tungsten Manager Service..........
    running: PID:24819
  5. Place the cluster back into Automatic Mode:

    shell> echo set policy automatic | cctrl -expert
    Tungsten Clustering 6.0.3 build 608
    alpha: session established, encryption=false, authentication=false
    [LOGICAL:EXPERT] /alpha > set policy automatic
    policy mode is now AUTOMATIC
    [LOGICAL:EXPERT] /alpha >
    Exiting...
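
For step 3, a minimal sketch, assuming the local cluster service is named alpha and the default /opt/continuent installation path (both are assumptions to adjust for your environment):

# 'alpha' is a placeholder for the local (non-composite) service name
shell> rm /opt/continuent/tungsten/cluster-home/conf/cluster/alpha/datasource/*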

Once the above has been completed, confirm that the procedure has worked as follows:

shell> echo ls -l | cctrl -expert | grep driver: | awk '{print $3}'
org.drizzle.jdbc.DrizzleDriver
org.drizzle.jdbc.DrizzleDriver
org.drizzle.jdbc.DrizzleDriver

G.1.4.

How do I update the password for the replication user in the cluster?

If you need to change the password used by Tungsten Cluster to connect to a dataserver and apply changes, the password can be updated first by changing the information within your dataserver, and then by updating the configuration using tpm update. The new password is not checked until the Tungsten Replicator process restarts, so changing the password first and then updating the configuration will keep replication from failing.

  1. Within cctrl set the maintenance policy mode:

    cctrl> set policy maintenance
  2. Within MySQL, update the password for the user, allowing the change to be replicated to the other datasources:

    mysql> SET PASSWORD FOR tungsten@'%' = PASSWORD('new_pass');
  3. Follow the directions for tpm update to apply the --datasource-password=new_pass setting (a sketch follows this list).

  4. Set the policy mode in cctrl back to AUTOMATIC:

    cctrl> set policy automatic
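
A minimal sketch of the tpm update in step 3, assuming a staging-directory installation and a service named alpha (both are assumptions):

# 'alpha' is a placeholder service name; new_pass is the password set in step 2
shell> ./tools/tpm update alpha --datasource-password=new_pass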

G.1.5.

Why is one of my hosts regularly a number of seconds behind my other Replicas?

The most likely culprit for this issue is that the system clock on the machine in question differs from the others. If you have ntp or a similar network time tool installed, use it to update the current time across all the hosts within your deployment:

shell> ntpdate pool.ntp.org

Once the command has been executed across all the hosts, try sending a heartbeat from the Primary to the Replicas and check the latency:

shell> trepctl heartbeat
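
If a Replica is still lagging, its applied latency can also be checked directly with trepctl status; a minimal sketch, where the value shown is illustrative only:

# The latency value below is illustrative only
shell> trepctl status | grep appliedLatency
appliedLatency         : 0.32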

G.1.6.

Does the replicate filter (i.e. replicate.do and replicate.ignore) address both DML and DDL?

Both the replicate.do and replicate.ignore filters apply to both DML and DDL.

DDL is currently ONLY replicated for MySQL to MySQL or Oracle to Oracle topologies, or within MySQL Clusters, although it would be advisable not to use ignore/do filters in a clustered environment where data/structural integrity is key.

With replicate.do, all DML and DDL will be replicated ONLY for any database or table listed as part of the do filter.

With replicate.ignore, all DML and DDL will be replicated except for any database or table listed as part of the ignore filter.
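
As a hedged sketch of how the filter might be enabled through tpm, where the service name alpha and the schema/table names sales and inventory.orders are placeholders:

# 'alpha', 'sales', and 'inventory.orders' are example names only
shell> ./tools/tpm update alpha \
    --svc-extractor-filters=replicate \
    --property=replicator.filter.replicate.do=sales,inventory.orders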

G.1.7.

How do you change the replicator heap size after installation?

You can change the configuration by running the following command from the staging directory:

shell> ./tools/tpm update --host=host1 --java-mem-size=2048
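
To confirm the new heap size has taken effect, one possibility is to check the replicator's wrapper configuration; this sketch assumes a default /opt/continuent installation path and that the value is recorded as wrapper.java.maxmemory (both are assumptions):

# Path and property name are assumptions based on a default installation
shell> grep wrapper.java.maxmemory /opt/continuent/tungsten/tungsten-replicator/conf/wrapper.conf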