1.28. Tungsten Clustering 6.1.3 GA (17 February 2020)

Version End of Life: 15 Aug 2024

Release 6.1.3 contains a small number of critical bug fixes that can affect customers operating geo-distributed clusters across high-latency network links, along with a small number of improvements and fixes to common command-line tools.

Behavior Changes

The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Command-line Tools

    • Improved the tungsten_find_position script by adding the ability to specify low and high sequence numbers, which limits the amount of THL the script needs to parse. This improves performance and lowers system overhead. The script also now accepts the --file option.

      [-f|--file] Pass specified file to the thl command as -file {file}
      [--low|--from] Pass specified seqno to the thl command as -low {seqno}
      [--high|--to] Pass specified seqno to the thl command as -high {seqno}

      Issues: CT-1143
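      The mapping between the wrapper's options and the flags passed to the underlying thl command can be sketched as follows. This is an illustration only: build_thl_args is a hypothetical helper, not part of the shipped script, and the sequence numbers and file name are made up.

      ```shell
      # Hypothetical sketch: translate the wrapper options described above
      # into the corresponding flags for the thl command.
      build_thl_args() {
        out=""
        while [ $# -gt 0 ]; do
          case "$1" in
            -f|--file)    out="$out -file $2";  shift 2 ;;
            --low|--from) out="$out -low $2";   shift 2 ;;
            --high|--to)  out="$out -high $2";  shift 2 ;;
            *) shift ;;
          esac
        done
        echo "thl list$out"
      }
      build_thl_args --low 4500 --high 4600 -f thl.data.0000000001
      ```

      Restricting the range with -low and -high is what lets thl avoid scanning the entire THL history.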

    • tpm diag: the output from remote-host diagnostic gathering is now visible, in addition to an alert being raised when a host is not reachable.

      Issues: CT-1158

Known Issues

The following issues are known within this release but are not considered critical, and do not impact the operation of Tungsten Cluster. They will be addressed in a subsequent patch release.

  • Backup and Restore

    • The backup process, when configured to use Xtrabackup, includes the --stream=tar option among the options passed to the backup process.

      This option is no longer available in Xtrabackup 8.0.

      If you use Xtrabackup 8.0 in combination with MySQL 8, generating backups using the procedures available in Tungsten Clustering will fail. Until a fix is available, and to allow backups to continue, you will need to make the following edit to the configuration.

      After installation, open the static-servicename.properties file located in INSTALL_PATH/tungsten/tungsten_replicator/conf.

      Locate the entry replicator.backup.agent.xtrabackup.options and, within the string value, change tar=true to tar=false.

      If the replicator is already running, you will need to issue replicator restart for the change to take effect.

      Warning

      Changing the properties file directly will cause future tpm update commands to fail. You should therefore run tpm update with the --force option, and then re-edit the file as per the above instructions to reset the tar option.

      Issues: CT-1157
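      The edit above can be sketched with sed against a stand-in copy of the properties file. The option string shown here is an assumption for illustration; in a real deployment you would operate on static-servicename.properties under INSTALL_PATH/tungsten/tungsten_replicator/conf with your own option values.

      ```shell
      # Hedged sketch: flip tar=true to tar=false inside the xtrabackup
      # options string, using a temporary stand-in properties file.
      set -eu
      conf=$(mktemp)
      # Stand-in content; the real file holds your deployment's options.
      echo 'replicator.backup.agent.xtrabackup.options=directory=/backup&tar=true' > "$conf"
      sed -i 's/^\(replicator\.backup\.agent\.xtrabackup\.options=.*\)tar=true/\1tar=false/' "$conf"
      cat "$conf"
      ```

      After editing the real file, issue replicator restart so the running replicator picks up the change.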

Bug Fixes

  • Command-line Tools

    • tpm diag would fail to collect diagnostics for relay nodes within a Composite Active/Passive Topology.

      Issues: CT-1140

    • Fixes an edge case whereby tpm mysql would fail on a node within a Composite Active/Active topology.

      Issues: CT-1151

    • tpm diag now gathers all hosts in a staging deployment when run from a non-staging node.

      Issues: CT-1155

    • tpm diag: a fix ensures that diagnostics are collected from standalone connector hosts.

      Issues: CT-1159

  • Tungsten Manager

    • During a local Primary switch within a Composite Active/Active Topology with a high-latency link between clusters, the switch could intermittently fail because an incorrect rule would trigger when the remote cluster saw an incorrect state in the opposing cluster.

      Issues: CT-1141

Tungsten Clustering 6.1.3 includes the following changes made in Tungsten Replicator 6.1.3

Release 6.1.3 contains a small number of improvements and fixes to common command line tools, and introduces compatibility with MongoDB Atlas.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Command-line Tools

    • tpm diag has been updated to provide additional feedback detailing the hosts that were gathered during execution, and to provide examples of how to handle failures.

      When running on a single host configured via the ini method:

      shell> tpm diag
      Collecting localhost diagnostics.
      Note: to gather for all hosts please use the "-a" switch and ensure that you have passwordless »
      ssh access set between the hosts.
      Collecting diag information on db1.....
      Diagnostic information written to /home/tungsten/tungsten-diag-2020-02-06-19-34-25.zip

      When running on a staging host, or with the -a flag:

      shell> tpm diag [-a|--allhosts]
      Collecting full cluster diagnostics
      Note: if ssh access to any of the cluster hosts is denied, use "--local" or "--hosts=<host1,host2,...>"
      Collecting diag information on db1.....
      Collecting diag information on db2.....
      Collecting diag information on db3.....
      Diagnostic information written to /home/tungsten/tungsten-diag-2020-02-06-19-34-25.zip

      Issues: CT-1137

Bug Fixes

  • Command-line Tools

    • tpm would fail to run on some operating systems due to the realpath command being missing.

      tpm has been changed to use readlink, which is commonly installed by default on most operating systems. If it is not available, you may need to install GNU coreutils to satisfy this dependency.

      Issues: CT-1124
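      The substitution can be sketched as follows. The resolve helper is illustrative only and not part of tpm itself; it prefers readlink -f and falls back to realpath where readlink does not support -f.

      ```shell
      # Sketch of portable path resolution via readlink, with a realpath
      # fallback. Demonstrated against a throwaway symlink.
      set -eu
      resolve() {
        readlink -f "$1" 2>/dev/null || realpath "$1"
      }
      d=$(mktemp -d)
      d=$(cd "$d" && pwd -P)   # normalise so comparisons are stable
      touch "$d/target"
      ln -s "$d/target" "$d/link"
      resolve "$d/link"
      ```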

    • Removed dependency on perl module Time::HiRes from tpm

      Issues: CT-1126

    • Added support for handling missing dependency (Data::Dumper) within various tpm subcommands

      Issues: CT-1130

    • tpm will now work on macOS systems, provided greadlink is installed.

      Issues: CT-1147

    • tpm install will no longer report that the Linux distribution cannot be determined on SUSE platforms.

      Issues: CT-1148

    • Fixes a condition where tpm diag would fail to set the working path correctly, especially on Debian 8 hosts.

      Issues: CT-1150

    • tpm diag now checks for OS commands in additional paths (/bin, /sbin, /usr/bin and /usr/sbin)

      Issues: CT-1160

    • Fixes an issue introduced in v6.1.2 where running the undeployall script would stop services as it removed them from systemctl control.

      Issues: CT-1166

  • Core Replicator

    • The MongoDB Applier has been updated to use the latest MongoDB JDBC Driver

      Issues: CT-1134

    • The MongoDB Applier now supports MongoDB Atlas as a target

      Issues: CT-1142

    • The replicator would fail with the error Unknown column '' in 'where clause' when replicating between MySQL 8 hosts where the client application wrote data into the source database host using a different collation from the default on the target database.

      The replicator would fail due to a mismatch in these collations when querying the information_schema.columns view to gather metadata ahead of applying to the target.

      Issues: CT-1145