Release Notes

Continuent Ltd

Abstract

This document provides release notes for all (active) released versions of Continuent software.

Build date: 2020-11-24 (4e71a1f8)

Up-to-date builds of this document: Release Notes (Online), Release Notes (PDF)


Table of Contents

1. Tungsten Clustering Release Notes
1.1. Tungsten Clustering 6.1.9 GA (23 Nov 2020)
1.2. Tungsten Clustering 6.1.8 GA (2 Nov 2020)
1.3. Tungsten Clustering 6.1.7 GA (5 Oct 2020)
1.4. Tungsten Clustering 6.1.6 GA (20 Aug 2020)
1.5. Tungsten Clustering 6.1.5 GA (5 Aug 2020)
1.6. Tungsten Clustering 6.1.4 GA (4 June 2020)
1.7. Tungsten Clustering 6.1.3 GA (17 February 2020)
1.8. Tungsten Clustering 6.1.2 GA (20 January 2020)
1.9. Tungsten Clustering 6.1.1 GA (28 October 2019)
1.10. Tungsten Clustering 6.1.0 GA (31 July 2019)
1.11. Tungsten Clustering 6.0.5 GA (20 March 2019)
1.12. Tungsten Clustering 6.0.4 GA (11 December 2018)
1.13. Tungsten Clustering 6.0.3 GA (5 September 2018)
1.14. Tungsten Clustering 6.0.2 GA (27 June 2018)
1.15. Tungsten Clustering 6.0.1 GA (30 May 2018)
1.16. Tungsten Clustering 6.0.0 GA (4 April 2018)
1.17. Tungsten Clustering 5.4.1 GA (28 October 2019)
1.18. Tungsten Clustering 5.4.0 GA (31 July 2019)
1.19. Tungsten Clustering 5.3.6 GA (04 February 2019)
1.20. Tungsten Clustering 5.3.5 GA (06 November 2018)
1.21. Tungsten Clustering 5.3.4 GA (11 October 2018)
1.22. Tungsten Clustering 5.3.3 GA (20 September 2018)
1.23. Tungsten Clustering 5.3.2 GA (4 June 2018)
1.24. Tungsten Clustering 5.3.1 GA (18 April 2018)
1.25. Tungsten Clustering 5.3.0 GA (12 December 2017)
2. Tungsten Replicator Release Notes
2.1. Tungsten Replicator 6.1.9 GA (23 Nov 2020)
2.2. Tungsten Replicator 6.1.8 GA (2 Nov 2020)
2.3. Tungsten Replicator 6.1.7 GA (5 Oct 2020)
2.4. Tungsten Replicator 6.1.6 GA (20 Aug 2020)
2.5. Tungsten Replicator 6.1.5 GA (5 Aug 2020)
2.6. Tungsten Replicator 6.1.4 GA (4 June 2020)
2.7. Tungsten Replicator 6.1.3 GA (17 February 2020)
2.8. Tungsten Replicator 6.1.2 GA (20 January 2020)
2.9. Tungsten Replicator 6.1.1 GA (28 October 2019)
2.10. Tungsten Replicator 6.1.0 GA (31 July 2019)
2.11. Tungsten Replicator 6.0.5 GA (20 March 2019)
2.12. Tungsten Replicator 6.0.4 GA (11 December 2018)
2.13. Tungsten Replicator 6.0.3 GA (5 September 2018)
2.14. Tungsten Replicator 6.0.2 GA (27 June 2018)
2.15. Tungsten Replicator 6.0.1 GA (30 May 2018)
2.16. Tungsten Replicator 6.0.0 GA (4 April 2018)
2.17. Tungsten Replicator 5.4.1 GA (28 October 2019)
2.18. Tungsten Replicator 5.4.0 GA (31 July 2019)
2.19. Tungsten Replicator 5.3.6 GA (04 February 2019)
2.20. Tungsten Replicator 5.3.5 GA (06 November 2018)
2.21. Tungsten Replicator 5.3.4 GA (11 October 2018)
2.22. Tungsten Replicator 5.3.3 GA (20 September 2018)
2.23. Tungsten Replicator 5.3.2 GA (4 June 2018)
2.24. Tungsten Replicator 5.3.1 GA (18 April 2018)
2.25. Tungsten Replicator 5.3.0 GA (12 December 2017)
3. Tungsten Dashboard Release Notes
3.1. Tungsten Dashboard 1.0.9 GA (12 August 2020)
3.2. Tungsten Dashboard 1.0.8 GA (4 June 2020)
3.3. Tungsten Dashboard 1.0.7 GA (26 November 2019)
3.4. Tungsten Dashboard 1.0.6 GA (3 September 2019)
3.5. Tungsten Dashboard 1.0.5 GA (28 June 2019)
3.6. Tungsten Dashboard 1.0.4 GA (11 April 2019)
3.7. Tungsten Dashboard 1.0.3 GA (22 March 2019)
3.8. Tungsten Dashboard 1.0.2 GA (20 September 2018)
3.9. Tungsten Dashboard 1.0.1 GA (17 September 2018)
3.10. Tungsten Dashboard 1.0.0 GA (10 May 2018)

1. Tungsten Clustering Release Notes

1.1. Tungsten Clustering 6.1.9 GA (23 Nov 2020)

Version End of Life: Not Yet Set

Release 6.1.9 is a minor bug-fix release containing a fix for a critical bug within the Replicator related to time zone handling.

Improvements, new features and functionality

  • Command-line Tools

    • A new script is available, tungsten_generate_haproxy_for_api. This script will read all available INI files and dump out corresponding haproxy.cfg entries with properly incrementing ports; the composite parent will come first, followed by the composite children in alphabetical order.

      This script will be embedded as a tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command in a future release.

      Issues: CT-1385
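
The incrementing-port behaviour described above can be sketched as follows. This is a minimal illustration only, not the actual script: the service names (global as the composite parent, east and west as children) and the base port are assumptions.

```shell
#!/bin/sh
# Illustrative sketch only: emit one haproxy.cfg "listen" block per
# cluster service, with the port incrementing for each entry. The real
# tungsten_generate_haproxy_for_api derives the service list from the
# INI files; here the composite parent and children are hard-coded.
base_port=8090
out=/tmp/haproxy-demo.cfg
: > "$out"
for svc in global east west; do   # composite parent first, then children A-Z
    printf 'listen %s\n    bind *:%d\n' "$svc" "$base_port" >> "$out"
    base_port=$((base_port + 1))
done
cat "$out"
```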

Bug Fixes

  • Command-line Tools

    • tpm update (in [Tungsten Clustering (for MySQL) 6.1 Manual]) no longer fails when using the staging method to upgrade to a new version.

      Issues: CT-1381

    • tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 6.1 Manual]) no longer hangs when the password keyword exists by itself under [client] in my.cnf, which caused mysqlbinlog to issue a password prompt.

      Issues: CT-1387
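
For context, the hang was triggered by a value-less password keyword in the [client] section, which makes mysqlbinlog prompt interactively. A rough way to spot such a line (an illustration only, not part of the fix; the sample file and its contents are assumptions) would be:

```shell
#!/bin/sh
# Build a sample my.cnf reproducing the condition: a bare "password"
# keyword (no value) under [client].
cat > /tmp/demo-my.cnf <<'EOF'
[client]
user=tungsten
password
EOF

# Print a warning if a value-less "password" line appears inside [client].
awk '/^\[client\]/ {s=1; next}
     /^\[/         {s=0}
     s && /^password[ \t]*$/ {print "bare password entry found"; exit}' /tmp/demo-my.cnf
```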

  • Core Replicator

    • In some edge-case scenarios, the replicator did not set the session time_zone correctly when preceding sessions had a different time_zone applied. This could lead to situations where TIMESTAMP values were applied to replica nodes with an incorrect time_zone offset.

      Issues: CT-1390

1.2. Tungsten Clustering 6.1.8 GA (2 Nov 2020)

Version End of Life: Not Yet Set

Release 6.1.8 is a minor bug fix release.

Bug Fixes

  • Command-line Tools

    • tungsten_post_process (in [Tungsten Clustering (for MySQL) 6.1 Manual]) now handles whitespace properly on property= lines in INI files.

      Issues: CT-1364
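
The normalisation involved can be sketched like this (a rough illustration, not the script's actual code; the property name is an assumption used purely for demonstration):

```shell
#!/bin/sh
# Trim leading/trailing whitespace and any spaces around "=" so that a
# padded property= line parses cleanly.
line='  property = replicator.thl.log_retention = 3d  '
trimmed=$(printf '%s\n' "$line" \
  | sed -e 's/^[[:space:]]*//' \
        -e 's/[[:space:]]*$//' \
        -e 's/[[:space:]]*=[[:space:]]*/=/g')
echo "$trimmed"
```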

  • Core Replicator

    • Fixes an issue that would prevent a service from going offline at a specified time (trepctl online -until-time (in [Tungsten Clustering (for MySQL) 6.1 Manual])) or at a specific seqno (trepctl online -until-seqno (in [Tungsten Clustering (for MySQL) 6.1 Manual])) when parallel apply is enabled.

      Issues: CT-1243

    • In MySQL releases using the old row events format (MySQL 5.6 or earlier), Delete_rows_v1 events were handled incorrectly, leading to an extraction error when such an event type was encountered.

      Issues: CT-1358

  • Tungsten Manager

    • Managers now accept both unencrypted and TLS connections from connectors. This allows migrations from earlier versions to 6.1.2+ when security is enabled, as well as a seamless switch to encrypted connector-manager communications.

      Important

      When upgrading to this release from an earlier release, and SSL is either already enabled, or being enabled as part of the upgrade, it is important that managers are upgraded before the connectors.

      Note that non-secured connector connections will trigger a warning in the manager logs:

      WARN [RouterGatewayServer] - Un-encrypted connection request from <connector address> while this manager »
      is configured to use SSL. This is expected ONLY when migrating manager<>connector communication to SSL. »
      Otherwise it might be a misconfiguration of this connector.

      Issues: CT-1320

    • A network partition between the primary and relay sites, with port 11999 still functioning, could lead to a full connectivity outage. Managers now send the data services configuration with every node update (one per node every 3 seconds) so that connectors remain up to date even in that corner case.

      Issues: CT-1356

    • Fixes a memory leak that occurred when unrecognized connections were made to manager port 11999.

      Issues: CT-1357

1.3. Tungsten Clustering 6.1.7 GA (5 Oct 2020)

Version End of Life: Not Yet Set

Release 6.1.7 is a minor bug fix release containing a fix for SSL Communications specific to clustering deployments.

Bug Fixes

  • Tungsten Connector

    • Disables the Nagle algorithm (TcpNoDelay=true) on SSL sockets; leaving it enabled caused performance degradation in SSL communications within the connector.

      Issues: CT-1331

    • Allows configuration of the protocols and cipher suites used by the Drizzle driver for SSL communications to the database server.

      New TPM parameters can be used to control this, as follows:

      connector-server-ssl-protocols=protocol1[,protocol2,...]
      connector-server-ssl-ciphers=cipher1[,cipher2,...]

      For example:

      connector-server-ssl-protocols=TLSv1.2
      connector-server-ssl-ciphers=TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA512

      The default value for connector-server-ssl-protocols will be TLSv1,TLSv1.1,TLSv1.2. By default, connector-server-ssl-ciphers will allow all cipher suites supported by the running JVM.

      Issues: CT-1335

    • When the Primary node was not available (for example, during a switch or failover), the connector would pause incoming RO_RELAXED connection requests even if a Replica was available for reads.

      Note

      This only applies to connectors configured in Bridge Mode.

      Issues: CT-1337

    • The c3p0 libraries have been upgraded to version 0.9.5.5, and the recommended pool configuration has been adjusted.

      Issues: CT-1343

1.4. Tungsten Clustering 6.1.6 GA (20 Aug 2020)

Version End of Life: Not Yet Set

Release 6.1.6 is a minor bug-fix release containing a fix for Composite Active/Active environments.

Bug Fixes

  • Command-line Tools

    • tpm install (in [Tungsten Clustering (for MySQL) 6.1 Manual]) and tpm update (in [Tungsten Clustering (for MySQL) 6.1 Manual]) will now call tungsten_post_process (in [Tungsten Clustering (for MySQL) 6.1 Manual]) automatically to ensure cross-site specific configurations are applied at the right time during the install and update process within Composite Active/Active installations.

      Issues: CT-761

  • Core Replicator

    • Allows multiple service names to be supplied to property=local.service.name when configuring bi-directional replication between a Composite Active/Passive cluster topology and a MySQL target to prevent loopback of transactions.

      Issues: CT-1308
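
For illustration, such a configuration line in the INI file might look like the following (the service names alpha and beta are assumptions, not part of the release note):

```
property=local.service.name=alpha,beta
```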

1.5. Tungsten Clustering 6.1.5 GA (5 Aug 2020)

Version End of Life: Not Yet Set

Release 6.1.5 is a small interim bug-fix release resolving a number of issues within the Core Replicator, along with a specific issue within the Tungsten Connector in multi-cluster environments.

Known Issue

The following issues may affect the operation of Tungsten Cluster and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • In MySQL release 8.0.21 the behavior of CREATE TABLE ... AS SELECT ... has changed, resulting in the transactions being logged differently in the binary log. This change in behavior will cause the replicators to fail.

      Until a fix is implemented within the replicator, the workaround for this will be to split the action into separate CREATE TABLE ... followed by INSERT INTO ... SELECT FROM... statements.

      If this is not possible, then you will need to manually create the table on all nodes, and then skip the resulting error in the replicator, allowing the subsequent loading of the data to continue.

      Issues: CT-1301
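
The suggested split might look like this (table and column names are purely illustrative):

```
sql> CREATE TABLE sales_copy (id INT PRIMARY KEY, amount DECIMAL(10,2));
sql> INSERT INTO sales_copy SELECT id, amount FROM sales;
```

in place of the single statement CREATE TABLE sales_copy AS SELECT id, amount FROM sales;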

Bug Fixes

  • Core Replicator

    • When replicating data that included timestamps, the replicator would update the timestamp value to the commit time from the incoming THL. When using statement-based replication, times were replicated correctly; however, with a mixture of statement- and row-based replication, the timestamp value was not reset to the default when switching between statement- and row-based events. This did not cause problems on the applying host, except when log_slave_updates was enabled, in which case all row-based events after a statement-based event would have the same timestamp value applied.

      This was most commonly seen when using the standalone replicator to replicate into a Cluster, either from a standalone source or a cluster source.

      Issues: CT-1255

    • If filtering is in use, and a space appears on either side of the delimiter in a "schema.table" reference in your SQL, the replicator would fail to parse the statement correctly.

      For example, all of the examples below are valid SQL but would cause a failure in the replicator:

      sql> CREATE TABLE myschema. mytable (....
      
      sql> CREATE TABLE myschema .mytable (....
      
      sql> CREATE TABLE myschema . mytable (....

      Issues: CT-1278

    • Fixes a bug in the Drizzle Driver whereby a failing prepared statement that exceeds 1000 characters would report a String index out of range: 999 error rather than the actual error.

      Issues: CT-1303

  • Tungsten Connector

    • The Connector would fail to find the appropriate data service during auto-configuration. It now uses the service provided in user.map when present.

      Note

      This fix is also available in build 51 of the 6.1.4 release.

      Issues: CT-1277

1.6. Tungsten Clustering 6.1.4 GA (4 June 2020)

Version End of Life: Not Yet Set

Release 6.1.4 contains a number of improvements and bug fixes, specifically for the tpm command-line tool, along with stability improvements for Composite Active/Active topologies. In addition, this release now fully supports the latest binlog compression feature of MySQL 8.0.20.

Improvements, new features and functionality

  • Tungsten Manager

    • When a Primary is OFFLINE or SHUNNED, we no longer mark the whole data service as OFFLINE.

      Note

      This will allow reading from Replicas to continue even if the Primary is offline.

      Issues: CT-1152

    • In a Composite Active/Active topology, the relay hosts will, by default, pull THL from the remote Primaries. This can be changed to pull from the remote Replica(s) by using the setting policy-relay-from-slave=true (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

      Note

      This is a change in behaviour. From version 6.0.4 up to and including 6.1.3, the default was to pull from the remote Replica(s).

      Issues: CT-1211

Bug Fixes

  • Command-line Tools

    • The tpm diag (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command now captures cctrl ls output from the Composite dataservice when appropriate.

      Issues: CT-1139

    • Fixes an issue where affinity specified in the configuration (connector-affinity (in [Tungsten Clustering (for MySQL) 6.1 Manual])) was ignored since it was not written to user.map (in [Tungsten Clustering (for MySQL) 6.1 Manual]) (this affects proxy configurations only).

      Issues: CT-1146

    • Fixed a bug where tpm diag (in [Tungsten Clustering (for MySQL) 6.1 Manual]) would fail to gather some MySQL information on a Composite Active/Active node.

      Issues: CT-1167

    • Fixed a bug in tungsten_post_process (in [Tungsten Clustering (for MySQL) 6.1 Manual]) where the /etc/tungsten/tungsten.ini (in [Tungsten Clustering (for MySQL) 6.1 Manual]) file would not be read properly. Also adds two warnings for skipped entries and corrects a typo in the property definitions.

      Issues: CT-1198

  • Tungsten Connector

    • Fixed an issue where connector client-list (in [Tungsten Clustering (for MySQL) 6.1 Manual]) and the Proxy-mode command, tungsten show processlist, would report NullPointerException errors when listing disconnected client applications.

      Issues: CT-1177

    • When in proxy mode and fetching data source configuration at startup, the Connector will now properly parse, and use, affinity (and other URL options if any) to establish the connection.

      Issues: CT-1212

    • Fixes an issue where connections with multiple affinity settings would not keep the current channel, reconnecting to the database server even though the connection could have been reused.

      Issues: CT-1250

  • Tungsten Manager

    • In certain edge case situations, the manager could choose a manually shunned node as a viable node during failover.

      Issues: CT-1115

    • Issuing cluster topology validate within cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) would fail if the cluster contained an Active Witness host.

      Issues: CT-1180

    • Piping multiple commands to cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) that would affect components in a remote cluster would fail, for example:

      shell> echo "use east; replicator db5 offline" | cctrl

      Issues: CT-1209

    • Resolves an edge case in a Composite Active/Passive topology, with 2 or more Composite Passive dataservices, where a switch of a relay node in a single Replica service would incorrectly reconfigure all relays in the other Replica services.

      Issues: CT-1214

Tungsten Clustering 6.1.4 Includes the following changes made in Tungsten Replicator 6.1.4

Release 6.1.4 contains a number of improvements and bug fixes, specifically for the tpm command-line tool, along with improvements to the Redshift Applier. In addition, this release now fully supports the latest binlog compression feature of MySQL 8.0.20.

Improvements, new features and functionality

  • Command-line Tools

    • Improves tpm (in [Tungsten Replicator 6.1 Manual]) performance by using more efficient routines to calculate paths.

      Issues: CT-1168

    • Added the ability for tpm diag (in [Tungsten Replicator 6.1 Manual]) to skip both individual gather subroutines along with entire groups of gather subroutines.

      Also added the ability to list all gather groups and subroutines using --list, for use with the --skip and --skipgroups command-line arguments.

      Issues: CT-1176

    • tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) has been rewritten fixing a number of issues in the previous release. This version was previously released as the Beta script prov-sl.sh.

      Issues: CT-1208

    • A number of performance improvements and fixes have been incorporated into the Drizzle Driver used for communication between components and MySQL. These include:

      • Better handling of large queries close to max network packet size.

      • Batch Support. Instead of sending statements one by one, the driver will be able to send multiple statements at once, avoiding round trips between the driver and MySQL server.

      • Fixes issues with interpreting useSSL on connect string URLs.

      Issues: CT-1215, CT-1216, CT-1217, CT-1228

    • The tungsten_send_diag (in [Tungsten Replicator 6.1 Manual]) command now accepts arguments for the tpm diag (in [Tungsten Replicator 6.1 Manual]) command using --args or -a for short.

      You must enclose the arguments in quotes, for example:

      shell> tungsten_send_diag -c 9999 -d --args '--all -v'

      Issues: CT-1220

  • Core Replicator

    • debug logging has been disabled by default in the schemachange filter, reducing noise in the replicator log file.

      Issues: CT-1013

    • A new replicator role, thl-client, has been added. This new role allows the replicator to download THL from a Primary without applying it to the target database.

      Issues: CT-1123

    • A new delayInMs (in [Tungsten Replicator 6.1 Manual]) filter has been added, which allows the application of THL to a Replica to be delayed with millisecond precision. This filter works in the same way as the TimeDelayFilter (in [Tungsten Replicator 6.1 Manual]); however, that filter only allows second precision.

      Issues: CT-1191

    • A new droprow JavaScript filter has been added.

      The filter works at ROW level and allows the filtering out of rows based on one or more column/value matches.

      Issues: CT-1213

    • When configuring the Redshift applier, you can now configure which tool the applier will use for posting CSV files to S3. Options are s3cmd (default), s4cmd or aws.

      Issues: CT-1218

    • A number of improvements have been made to the Redshift Applier to allow optional levels of table locking.

      This is particularly useful when you have multiple Redshift Appliers in a Fan-In topology, and/or very high volumes of data to process.

      The additional locking options reduce the risk of Redshift Serializable Isolation Violation errors occurring.

      Full details of how to utilise the new options can be read at Handling Concurrent Writes from Multiple Appliers (in [Tungsten Replicator 6.1 Manual]).

      Issues: CT-1221

    • Tungsten Replicator can now correctly extract and parse binary log entries when the MySQL option binlog-transaction-compression has been enabled.

      binlog-transaction-compression is a new parameter introduced in MySQL 8.0.20.

      Issues: CT-1223
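
For reference, the corresponding MySQL-side option is enabled in my.cnf as follows (this is standard MySQL 8.0.20+ configuration, shown here only for context):

```
[mysqld]
binlog_transaction_compression=ON
```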

Bug Fixes

  • Command-line Tools

    • In certain edge cases, tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) would fail to detect whether mysql had been shut down.

      Issues: CT-1096

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now collects directories specified with !includedir in the /etc/my.cnf file.

      Issues: CT-1153

    • Fixes the tpm update (in [Tungsten Replicator 6.1 Manual]) command, which would exit with the error:

      Argument " (error code 1)" isn't numeric

      Issues: CT-1165

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now collects any files specified by !include directives in the /etc/my.cnf and /etc/mysql/my.cnf files.

      tpm diag (in [Tungsten Replicator 6.1 Manual]) also looks in /etc/mysql/my.cnf for !includedir directives.

      Issues: CT-1169

    • Fixes a bug which prevented tungsten_send_diag (in [Tungsten Replicator 6.1 Manual]) from uploading a self-generated diagnostic zip file.

      Issues: CT-1170

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now properly derives the correct target path to the releases directory if the home directory in the configuration points to a sym-link.

      Issues: CT-1172

    • Removed the tpm diag (in [Tungsten Replicator 6.1 Manual]) call to sudo when gathering ifconfig and lsb_release output.

      Issues: CT-1175

    • tpm update (in [Tungsten Replicator 6.1 Manual]) would fail and report errors if deployall (in [Tungsten Replicator 6.1 Manual]) had been executed.

      Issues: CT-1179

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) no longer requires the mysql command-line client when running on non-MySQL Applier nodes, and no longer attempts to gather any MySQL diagnostic information.

      Issues: CT-1188

    • Removes the requirement to execute component start/stop commands with sudo when called through systemd.

      This is specific to start/stop actions following the use of the deployall (in [Tungsten Replicator 6.1 Manual]) scripts.

      Issues: CT-1193

    • Fixes cases where tpm (in [Tungsten Replicator 6.1 Manual]) fails when the OS hostname command returns a different string than what is used in the configuration (i.e. hostname returns a FQDN, yet the configuration contains shortnames like db1, db2, etc.).

      Issues: CT-1206

    • In certain cases, after a reprovision, tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) did not always run the steps to reset the local replicator service, leaving the replicator in an error state after provisioning had completed.

      Issues: CT-1210

    • The tpm diag (in [Tungsten Replicator 6.1 Manual]) command now handles the cluster-slave topology more gracefully, and properly handles cluster nodes without the Connector installed.

      Improved output text clarity by converting multiple verbose-level outputs to debug, and warnings to notice-level.

      Issues: CT-1222

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now gathers sym-linked files correctly.

      Issues: CT-1232

    • ddlscan (in [Tungsten Replicator 6.1 Manual]) now sets the datatype for sequence number columns to BIGINT when generating staging table DDL for Redshift deployments.

      Issues: CT-1235

    • Fixes a situation where tpm update (in [Tungsten Replicator 6.1 Manual]) exits with a Data::Dumper error.

      Issues: CT-1249

  • Core Replicator

    • In heterogeneous replicator deployments, the convertstringfrommysql filter would fail to convert strings for alphanumeric key values.

      Issues: CT-1128

    • Tungsten Replicator could wrongly report that it was already in DEGRADED mode when attempting to put it into the DEGRADED state.

      Issues: CT-1131

    • Tungsten Replicator now recognises Amazon AWS SSL Certificates to enable SSL communication with AWS Aurora.

      Issues: CT-1173

    • When the host server time (and thus MySQL time) is not configured as UTC, issuing cluster heartbeat or trepctl heartbeat (in [Tungsten Replicator 6.1 Manual]) in the first hours around a daylight-savings-time change would create an invalid time in MySQL.

      For more information on time zones when issuing heartbeats, see trepctl heartbeat Time Zone Handling (in [Tungsten Replicator 6.1 Manual]).

      Issues: CT-1174

    • The replicator would fail to apply into Vertica when configured as an offboard installation due to the applier incorrectly expecting the csv files to exist locally on the remote Vertica host.

      Issues: CT-1194

    • Due to a change in the Binary log structure introduced in MySQL 8.0.16, the replicator would fail to extract transactions for Partitioned tables.

      Issues: CT-1201

    • The replicator would fail to correctly parse statements with leading comment blocks in excess of 200 characters.

      Issues: CT-1236

1.7. Tungsten Clustering 6.1.3 GA (17 February 2020)

Version End of Life: Not Yet Set

Release 6.1.3 contains a small number of critical bug fixes that can affect customers operating geo-distributed clusters across high latency network links, along with a small number of improvements and fixes to common command line tools.

Behavior Changes

The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Command-line Tools

    • Improved the tungsten_find_position (in [Tungsten Clustering (for MySQL) 6.1 Manual]) script by adding the ability to specify the low and high sequence numbers, which limits the amount of THL the script needs to parse. This allows for better performance and lower system overhead, and also allows the use of the --file option.

      [-f|--file] Pass specified file to the thl command as -file {file}
      [--low|--from] Pass specified seqno to the thl command as -low {seqno}
      [--high|--to] Pass specified seqno to the thl command as -high {seqno}

      Issues: CT-1143

    • tpm diag (in [Tungsten Clustering (for MySQL) 6.1 Manual]) now makes the output from remote-host diagnostic gathering visible, in addition to alerting when a host is not reachable.

      Issues: CT-1158

Known Issue

The following issues may affect the operation of Tungsten Cluster and should be taken into account when deploying or updating to this release.

  • Backup and Restore

    • The backup process, when configured to use Xtrabackup, uses the --stream=tar option as one of the options passed to the backup process.

      This option is no longer available in Xtrabackup 8.0.

      If you use Xtrabackup 8.0 in combination with MySQL 8, generating backups using the procedures available in Tungsten Clustering will fail. Until a fix is available, and to allow backups to continue, you will need to make the following edit to the configuration.

      After installation, open the static-servicename.properties file located in INSTALL_PATH/tungsten/tungsten_replicator/conf.

      Locate the entry replicator.backup.agent.xtrabackup.options and, within the string value, change tar=true to tar=false.

      If the replicator is already running, you will need to issue replicator restart (in [Tungsten Clustering (for MySQL) 6.1 Manual]) for the change to take effect.

      Warning

      Changing the properties file directly will cause future tpm update (in [Tungsten Clustering (for MySQL) 6.1 Manual]) commands to fail; therefore you should run tpm update with the --force (in [Tungsten Clustering (for MySQL) 6.1 Manual]) option, and then re-edit the file as per the above instructions to reset the tar option.

      Issues: CT-1157
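
The edit above amounts to a one-line substitution in the properties file. The sketch below runs against a stand-in file in /tmp; in a real deployment the target is the static-servicename.properties file described above, and you should work on a copy first.

```shell
#!/bin/sh
# Stand-in for static-<servicename>.properties; the real value of this
# property contains further options alongside tar=true.
conf=/tmp/static-demo.properties
echo 'replicator.backup.agent.xtrabackup.options=tar=true' > "$conf"

# Flip tar=true to tar=false within the options value.
sed -i 's/tar=true/tar=false/' "$conf"
cat "$conf"
```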

Bug Fixes

  • Command-line Tools

    • tpm diag (in [Tungsten Clustering (for MySQL) 6.1 Manual]) would fail to collect diagnostics for relay nodes within a Composite Active/Passive topology.

      Issues: CT-1140

    • Fixes an edge case whereby tpm mysql (in [Tungsten Clustering (for MySQL) 6.1 Manual]) would fail on a node within a Composite Active/Active topology.

      Issues: CT-1151

    • tpm diag (in [Tungsten Clustering (for MySQL) 6.1 Manual]) now gathers all hosts in a staging deployment when run from a non-staging node.

      Issues: CT-1155

    • A fix to tpm diag (in [Tungsten Clustering (for MySQL) 6.1 Manual]) ensures the collection of diagnostics from standalone connector hosts.

      Issues: CT-1159

  • Tungsten Manager

    • During a local Primary switch within a Composite Active/Active topology where there is a high-latency link between clusters, the switch could intermittently fail due to an incorrect rule triggering, as the remote cluster sees an incorrect state in the opposing cluster.

      Issues: CT-1141

Tungsten Clustering 6.1.3 Includes the following changes made in Tungsten Replicator 6.1.3

Release 6.1.3 contains a small number of improvements and fixes to common command line tools, and introduces compatibility with MongoDB Atlas.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Command-line Tools

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) has been updated to provide additional feedback detailing the hosts that were gathered during the execution, and also provides examples of how to handle failures.

      When running on a single host configured via the ini method:

      shell> tpm diag
      Collecting localhost diagnostics.
      Note: to gather for all hosts please use the "-a" switch and ensure that you have paswordless »
      ssh access set between the hosts.
      Collecting diag information on db1.....
      Diagnostic information written to /home/tungsten/tungsten-diag-2020-02-06-19-34-25.zip

      When running on a staging host, or with the -a flag:

      shell> tpm diag [-a|--allhosts]
      Collecting full cluster diagnostics
      Note: if ssh access to any of the cluster hosts is denied, use "--local" or "--hosts=<host1,host2,...>"
      Collecting diag information on db1.....
      Collecting diag information on db2.....
      Collecting diag information on db3.....
      Diagnostic information written to /home/tungsten/tungsten-diag-2020-02-06-19-34-25.zip

      Issues: CT-1137

Bug Fixes

  • Command-line Tools

    • tpm would fail to run on some operating systems due to a missing realpath command.

      tpm (in [Tungsten Replicator 6.1 Manual]) has been changed to use readlink, which is commonly installed by default on most operating systems. If it is not available, you may need to install GNU coreutils to satisfy this dependency.

      Issues: CT-1124

    • Removed the dependency on the Perl module Time::HiRes from tpm.

      Issues: CT-1126

    • Added support for handling a missing dependency (Data::Dumper) within various tpm subcommands.

      Issues: CT-1130

    • tpm (in [Tungsten Replicator 6.1 Manual]) will now work on MacOS/X systems, provided greadlink is installed.

      Issues: CT-1147

    • tpm install (in [Tungsten Replicator 6.1 Manual]) will no longer report that the linux distribution cannot be determined on SUSE platforms.

      Issues: CT-1148

    • Fixes a condition where tpm diag (in [Tungsten Replicator 6.1 Manual]) would fail to set the working path correctly, especially on Debian 8 hosts.

      Issues: CT-1150

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now checks for OS commands in additional paths (/bin, /sbin, /usr/bin and /usr/sbin).

      Issues: CT-1160

    • Fixes an issue introduced in v6.1.2 where the use of the undeployall (in [Tungsten Replicator 6.1 Manual]) script would stop services as it removed them from systemctl control.

      Issues: CT-1166

  • Core Replicator

    • The MongoDB Applier has been updated to use the latest MongoDB JDBC Driver.

      Issues: CT-1134

    • The MongoDB Applier now supports MongoDB Atlas as a target

      Issues: CT-1142

    • The replicator would fail with Unknown column '' in 'where clause' when replicating between MySQL 8 hosts where the client application wrote data into the source database using a collation different from the default collation on the target database.

      The replicator failed due to a mismatch in these collations when querying the information_schema.columns view to gather metadata ahead of applying to the target.

      Issues: CT-1145

1.8. Tungsten Clustering 6.1.2 GA (20 January 2020)

Version End of Life. Not Yet Set

Release 6.1.2 contains significant improvements as well as some needed bugfixes.

Behavior Changes

The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Behavior Changes

    • The Passive Witness functionality is now officially DEPRECATED and will be REMOVED in version 6.2

      Issues: CT-653

  • Installation and Deployment

    • In cluster deployments with witness nodes, if the MySQL servers had been configured to listen on any port other than the standard 3306, the tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command would default to the wrong port number.

      When witness hosts are in use, the tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command will no longer "guess" the data source port automatically by running my_print_defaults.

      It is now mandatory to specify the MySQL data server port explicitly using --datasource-port={mysql_listen_port} (in [Tungsten Clustering (for MySQL) 6.1 Manual]) or one of its aliases

      Issues: CT-1071
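
      As an illustrative sketch (the port number shown is an example value only, not taken from this release note), the explicit setting in the INI file would look like:

      datasource-port=13306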

Improvements, new features and functionality

  • Command-line Tools

    • The tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command was originally written in Ruby. This improvement begins converting tpm to Perl, starting with the tpm shell wrapper and refactoring each sub-command one by one.

      For this release, the diag, mysql and connector sub-commands have been rewritten.

      This also wraps the update sub-command to provide the CT-1093 clustering fix.

      Issues: CT-1048

    • This version includes an update to the new BETA tool to provision Replicas and Primaries in a Composite Active/Active topology. This release fully supports provisioning of nodes in a Composite Active/Active topology.

      It can be invoked by running prov-sl.sh or tps.pl.

      This tool will replace tungsten_provision_slave (in [Tungsten Clustering (for MySQL) 6.1 Manual]) in a future release.

      Issues: CT-1070

    • Added the tpm policy (in [Tungsten Clustering (for MySQL) 6.1 Manual]) subcommand to allow easy get and set cluster policy operations instead of using cctrl.

      For more information, please see ...

      Issues: CT-1106

    • The new changes made to the tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command require that the zip package be installed on all DB hosts.

      Issues: CT-1111

  • Tungsten Manager

    • When using the cctrl ls command, datasources are now sorted alphabetically by default. Additionally, the sort order of the datasources list is now configurable.

      The behavior is controlled by the tpm configuration property cctrl.sort.datasources.alphabetically which has a default value of true (meaning alpha sort).

      If set to false, the sort is ordered by datasource role, so the Primary or relay will appear first, followed by the Replicas. For example, use the following in the INI file for role-based sorting:

      property=cctrl.sort.datasources.alphabetically=false

      Issues: CT-1018

Bug Fixes

  • Installation and Deployment

    • Ensure that all Connector-Manager communications are SSL-encrypted when --disable-security-controls=false (in [Tungsten Clustering (for MySQL) 6.1 Manual])

      Issues: CT-1060

  • Command-line Tools

    • When performing an update of a cluster with tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]), the cluster would be switched to MAINTENANCE (in [Tungsten Clustering (for MySQL) 6.1 Manual]) but would remain in this policy after the update. The original policy is now retained during the update.

      Issues: CT-595, CT-1093

    • The deployall (in [Tungsten Clustering (for MySQL) 6.1 Manual]) script was only able to install init.d system startup scripts.

      In this release, the script will now detect the initialization system in use (systemd or initd) and prefer systemd when both are available.

      For systemd configurations only:

      For continuity of service reasons, the deployall (in [Tungsten Clustering (for MySQL) 6.1 Manual]) script does NOT restart individual components when called, it will only install systemd scripts. This implies that, right after a call to deployall and before host restart, the system will stay in a mixed mode where systemd scripts are in place but components were started without systemd, so won’t be controllable by it.

      In order to align the configuration, you will need to run

      shell> component stop sysd
      shell> sudo systemctl start tcomponent

      For example:

      shell> connector stop sysd
      shell> sudo systemctl start tconnector

      Issues: CT-853

    • When issuing tpm connector --samples (in [Tungsten Clustering (for MySQL) 6.1 Manual]), the output displayed clear-text passwords. In this release the passwords are obfuscated.

      Issues: CT-1021

    • Continuent Tungsten Clustering now uses the xtrabackup command instead of the deprecated innobackupex to create and restore backups. A new check was added to TPM for validating different xtrabackup versions along with MySQL version compatibility. The oldest supported version of xtrabackup is v2.3

      Issues: CT-1074

  • Core Replicator

    • When configuring SSL for the Connector only, the Replicator would fail to start due to the Replicator also looking for the SSL configuration.

      Issues: CT-956

  • Tungsten Connector

    • Fixed an issue where some applications might fail to connect to the Connector with MariaDB 10+

      Previously, when using MariaDB 10+, the Connector would be confused by the 10 and would think it was a MySQL 8+ server. By default, the Connector would offer to connect with caching_sha2_password. If the application did not know how to switch authentication plugins, it would fail with a message similar to the following:

      The server requested authentication method unknown to the client [caching_sha2_password]

      The previous work-around was to specify the authentication plugin using the tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command:

      --property=defaultAuthPlugin=mysql_native_password

      Issues: CT-1033

    • Improved Tungsten Connector bridge mode performance when transferring small amounts of data.

      Issues: CT-1081

  • Tungsten Manager

    • When using the cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command interactively, the `cluster topology` TAB completion was showing invalid options. Invalid options have been removed.

      Issues: CT-979

    • Fixed an issue where long-duration operations like failover and switch would create false positives about network partitioning after completion.

      Issues: CT-1023

    • Fixed the cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command so that the '[SSL]' indicator in the `ls` output is displayed. This is a Version 5 feature that was lost in v6.0.0, now restored.

      Issues: CT-1061

    • Fixed the cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command `datasource {hostname} restore` which was failing in Composite Active/Active cluster deployments with: ERROR: MORE THAN ONE PRIMARY DATA SOURCE FOUND.

      Issues: CT-1062

    • Continuent Tungsten Clustering now only checks for a running MySQL server when the backup method is 'mysqldump' in cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

      Background: Running datasource {hostname} restore inside cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) would fail when the MySQL server was not running. Only the 'mysqldump' method requires a running MySQL server. The 'xtrabackup-full' and 'xtrabackup-incremental' methods will work even if MySQL is stopped.

      Issues: CT-1077

    • tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 6.1 Manual]) was displaying an incorrect error message if a service name wasn't supplied correctly.

      Issues: CT-1079

    • tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 6.1 Manual]) would error with 'Argument "" isn't numeric in addition'.

      Issues: CT-1080

    • Fixed an issue where Composite clusters with only a single site would come up as SHUNNED after install.

      Issues: CT-1101

Tungsten Clustering 6.1.2 Includes the following changes made in Tungsten Replicator 6.1.2

Release 6.1.2 contains both significant improvements and some needed bugfixes.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Behavior Changes

    • Certified the Tungsten product suite with Java 11.

      A small set of minor issues have been found and fixed (CT-1091, CT-1076) along with this certification.

      The code is now compiled with Java compiler v11 while keeping Java 8 compatibility.

      Java 9 and 10 have been tested and validated, but certification and support will only cover Long Term Support releases.

      Note

      Known Issue

      With Java 11, command line tools are slower. There is no impact on the overall clustering or replication performance but this can affect manual operations using CLI tools such as cctrl and trepctl (in [Tungsten Replicator 6.1 Manual])

      Issues: CT-1052

Improvements, new features and functionality

  • Core Replicator

    • A new Replicator role, thl-server, has been added.

      This new feature allows your Replica replicators to still pull generated THL from a Primary even when the Primary replicator has stopped extracting from the binlogs.

      If used in Tungsten Clustering, this feature must only be enabled when the cluster is in MAINTENANCE mode.

      Issues: CT-58

      For more information, see Understanding Replicator Roles (in [Tungsten Replicator 6.1 Manual]).
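
      As a hedged sketch of how this role might be enabled (assuming the standard trepctl setrole syntax described in the manual; serviceName is a placeholder), the replicator would be taken offline, re-roled, and brought back online:

      shell> trepctl -service serviceName offline
      shell> trepctl -service serviceName setrole -role thl-server
      shell> trepctl -service serviceName online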

    • A new JavaScript filter dropddl.js (in [Tungsten Replicator 6.1 Manual]) has been added to allow selective removal of specific object DDL from THL.

      Issues: CT-1092

Bug Fixes

  • Behavior Changes

    • If you need to reposition the extractor, there are a number of ways to do this, including the use of the options -from-event or -base-seqno.

      These two options are normally mutually exclusive; however, in some situations, such as when positioning against an Aurora source, you may need to issue both of them together. Previously this was not possible. In this release, both options can be supplied together, provided you also include the additional -force option, for example:

      shell> trepctl -service serviceName online -base-seqno 53 -from-event 000412:762897 -force

      Issues: CT-1065

    • When the Replicator inserts a heartbeat, there is an associated timezone. Previously, the heartbeat would be inserted using the GMT timezone, which fails during the DST switch window. The new default uses the Replicator host's timezone instead.

      This default change corrects an edge case where inserting a heartbeat would fail during the DST switch window when the MySQL server was running in a different timezone than the Replicator (which runs in GMT).

      For example, on 31st March 2019, the time switch occurred at 2AM in the Europe/Paris timezone. When inserting a heartbeat in the window between 4 and 5 AM (say at 4:15am), the corresponding GMT time would be 2:15am, which is invalid in the Europe/Paris timezone. The Replicator would then fail if the MySQL timezone was set to Europe/Paris, as it would try to insert an invalid timestamp.

      A new option, -tz has been added into the trepctl heartbeat (in [Tungsten Replicator 6.1 Manual]) command to force the use of a specific timezone.

      For example, use GMT as the timezone when inserting a heartbeat:

      shell> trepctl heartbeat -tz NONE

      Use the Replicator host's timezone to insert the heartbeat:

      shell> trepctl heartbeat -tz HOST

      Use the given timezone to insert the heartbeat:

      shell> trepctl heartbeat -tz {valid timezone id}

      If the MySQL server timezone is different from the host timezone (which is strongly discouraged), then -tz {valid timezone id} should be used instead, where {valid timezone id} matches the MySQL server timezone.

      Issues: CT-1066

    • Corrected resource leak when loading Java keystores

      Issues: CT-1091

  • Command-line Tools

    • Fixed error message to indicate the need to specify a service on Composite Active/Active clusters for the tungsten_find_position and tungsten_find_seqno commands.

      Issues: CT-1098

    • The tpm (in [Tungsten Replicator 6.1 Manual]) command no longer reports warnings about existing system triggers with MySQL 8+

      Issues: CT-1099

  • Core Replicator

    • When configuring a Kafka Applier, the Kafka Port was set incorrectly

      Issues: CT-693

    • If a JSON field contained a single quote, the replicator would break during the apply stage whilst running the generated SQL against MySQL.

      Single quotes are now properly escaped to solve this issue.

      Issues: CT-983

    • Under rare circumstances (network packet loss or a MySQL Server hang), the replicator would hang until restarted.

      This issue has been fixed by using specific network timeouts in both the replicator and the Drizzle JDBC driver connection logic.

      Issues: CT-1034

    • When configuring Active/Active standalone replicators with the BidiSlave filter enabled, the replicator was incorrectly parsing certain DDL statements and marking them as unsafe; as a result, they were dropped by the applier and ignored.

      The full list of DDL commands fixed in this release is as follows:

      • CREATE|DROP TRIGGER

      • CREATE|DROP FUNCTION

      • CREATE|DROP|ALTER|RENAME USER

      • GRANT|REVOKE

      Issues: CT-1084, CT-1117

    • The following warnings would appear in the replicator log due to GTID events not being handled.

      WARN extractor.mysql.LogEvent Skipping unrecognized binlog event (type 33, 34 or 35)

      The WARN message will no longer appear; however, GTID events are still not handled in this release, but will be fully extracted in a future release.

      Issues: CT-1114

1.9. Tungsten Clustering 6.1.1 GA (28 October 2019)

Version End of Life. Not Yet Set

Release 6.1.1 contains both significant improvements and some needed bugfixes.

Improvements, new features and functionality

  • Tungsten Manager

    • Improved how the Manager and Replicator behave when MySQL dies on the Primary node.

      This improvement will induce a change of behavior in the product during failover by default, possibly causing a delay in failover as a way to protect data integrity.

      The new default setting for 6.1.1 is:

      replicator.store.thl.stopOnDBError=false

      This means that the Manager will wait until the Replicator reads all remaining binlog events on the failing Primary node.

      Failover will only continue once:

      • all available events are completely read from the binary logs on the Primary node

      • all events have reached the Replicas

      WARNING:

      The new default means that the failover time could take longer than it used to.

       

      When replicator.store.thl.stopOnDBError=true, then the Replicator will stop extracting once it is unable to update the trep_commit_seqno table in MySQL, and the Manager will perform the failover without waiting, at the risk of possible data loss due to leaving binlog events behind. All such situations are logged.

      For use cases where failover speed is more important than data accuracy, those NOT willing to wait for long failover can set replicator.store.thl.stopOnDBError=true and still use tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 6.1 Manual]) to manually analyze and perform the data recovery. For more information, please see The tungsten_find_orphaned Command (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

      Issues: CT-583
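
      For example, sites that prefer fast failover over waiting could opt into the alternate behavior with the following tpm property (a sketch of the INI syntax, using the property name given above):

      property=replicator.store.thl.stopOnDBError=true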

    • A new feature called "Cluster State Savepoints" has been implemented.

      This new functionality was created to support clean, consistent rollbacks during aborted switch and failover operations. It works for both physical and composite clusters.

      To support this new feature, a new cluster (in [Tungsten Clustering (for MySQL) 6.1 Manual]) sub-command has been added to the cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command - cluster topology validate (in [Tungsten Clustering (for MySQL) 6.1 Manual]), which will check and validate a cluster topology and, in the process, will report any issues that it finds. The purpose of this command is to provide a fast way to see, immediately, if there are any issues with any components of a cluster.

      Savepoints are created automatically with every switch and failover command. The savepoint is only used if there is an exception during switch or failover that is actually able to be rolled-back.

       
         

      WARNING:

      Not all exceptions during switch and failover will cause a rollback.

      In particular, if an exception happens during switch or failover AFTER a new primary datasource has been put online (relay or Primary), then the switch or failover operation cannot be rolled back.

      The Manager is configured, by default, to hold a maximum of 50 savepoints. When that limit is hit, the Manager resets the current-savepoint-id to 0 and starts to overwrite existing savepoints, starting at 0.

      Issues: CT-951

      For more information, see The cctrl Command (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

    • Improved the ability of the manager to detect un-extracted, desirable binary log events when recovering the old Primary via cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) after a failover.

      The cctrl recover command will now fail if:

      • any unextracted binlog events exist on the old Primary that we are trying to recover

      • the old Primary THL contains more events than the Replicas

      In this case, the cctrl recover command will display text similar to the following:

      Recovery failed because the failed Primary has unextracted events in
      the binlog. Please run the tungsten_find_orphaned script to inspect
      this events. Provided you have a recent backup available, you can
      try to restore the data source by issuing the following command:
                     datasource {hostname} restore
      Please consult the user manual at:
      https://docs.continuent.com/tungsten-clustering-6.1/operations-restore.html

      The tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 6.1 Manual]) script is designed to locate orphaned MySQL binary logs that were not extracted into THL before a failover. For more information, please see The tungsten_find_orphaned Command (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

      Issues: CT-996

    • Improved the ability to configure the manager's behavior upon failover.

      During a failover, the manager will now wait until the selected Replica has applied all stored THL events before promoting that node to Primary.

      This wait time can be configured via the manager.failover.thl.apply.wait.timeout=0 property.

      The default value is 0, which means "wait indefinitely until all stored THL events are applied".

      Any value other than zero invites data loss due to the fact that once the Replica is promoted to Primary, any unapplied stored events in the THL will be ignored, and therefore lost.

      Whenever a failover occurs, the Replica with the most events stored in the local THL is selected, so that when the events are eventually applied, the data is as close to the original Primary as possible with the fewest events missed.

      That is usually, but not always, the most up-to-date Replica, which is the one with the most events applied.

      There should be a good balance between the value for manager.failover.thl.apply.wait.timeout and the value for policy.slave.promotion.latency.threshold=900, which is the number of seconds within which a Replica must be current with the Primary in order to qualify as a candidate for failover. The default is 15 minutes (900 seconds).

      Issues: CT-1022
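
      As an illustrative sketch (the timeout value shown is an example, not a recommendation), the two properties could be balanced together in the INI file as follows:

      property=manager.failover.thl.apply.wait.timeout=300
      property=policy.slave.promotion.latency.threshold=900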

Bug Fixes

  • Command-line Tools

    • Installing with disable-security-controls=false, or updating using tools/tpm update --replace-jgroups-certificate --replace-tls-certificate, would generate self-signed security certificates with a 1-year expiration, which would eventually cause installations to break.

      This expiration time value is controlled by the tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command option --java-tls-key-lifetime, which is now set to 10 years or 3,650 days by default.

      Issues: CT-937
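
      For example (a sketch assuming the option takes a value in days, as described above), the lifetime could be set explicitly in the INI file:

      java-tls-key-lifetime=3650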

    • Updated the check_tungsten.sh command to have the executable bit set.

      Issues: CT-1037

    • Updated the check_tungsten_services (in [Tungsten Clustering (for MySQL) 6.1 Manual]) and zabbix_tungsten_services (in [Tungsten Clustering (for MySQL) 6.1 Manual]) commands to auto-detect active witnesses.

      Issues: CT-1043

  • Tungsten Manager

    • Fixed an issue where the ls resources command run inside of cctrl (in [Tungsten Clustering (for MySQL) 6.1 Manual]) would fail to list the MANAGER entry on a Replica node.

      Issues: CT-599

    • If the pipeline source replicator goes OFFLINE, the relay will reconnect to a different Replica.

      Issues: CT-871

    • Fixed an issue where the Manager would show an exception when the MySQL check script did not get expected results.

      Issues: CT-912

    • Fixed use case where xtrabackup would timeout during backup via cctrl

      Issues: CT-1045

    • Improved the ability to find needed binaries, both locally and over SSH, for the commands tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 6.1 Manual]) and tungsten_is_recoverable.

      Issues: CT-1053

Tungsten Clustering 6.1.1 Includes the following changes made in Tungsten Replicator 6.1.1

Release 6.1.1 contains both significant improvements and some needed bugfixes.

Improvements, new features and functionality

  • Core Replicator

    • Added Clickhouse applier support.

      Issues: CT-383

    • If using the dropcolumn filter during extraction in conjunction with the Batch Applier (e.g. replicating to Redshift, Hadoop or Vertica), writes would fail with a CSV mismatch error due to gaps in the THL index.

      However, for JDBC appliers, the gaps are required to ensure the correct column mapping.

      To handle the two different requirements, a new property has been added to the filter to control whether to leave the THL index untouched (the default) or to re-order the index IDs.

      If applying to Batch targets, then the following property should be added to your configuration. The property is not required for JDBC targets.

      --property=replicator.filter.dropcolumn.fillGaps=true

      Issues: CT-1025

Bug Fixes

  • Command-line Tools

    • Fixed an issue that would prevent reading remote binary logs when using SSL.

      Issues: CT-958

    • Fixed an issue where the command trepctl -all-services status -name watches fails.

      Issues: CT-977

    • Restored previously-removed log file symbolic links under $CONTINUENT_ROOT/service_logs/

      Issues: CT-1026

    • Fixed a bug where tpm diag (in [Tungsten Replicator 6.1 Manual]) would generate an empty zip file if the hostnames contain hyphens (-) or periods (.)

      Issues: CT-1032

    • Improved the ability to find needed binaries for the commands tungsten_find_position, tungsten_find_seqno and tungsten_get_rtt.

      Issues: CT-1054

1.10. Tungsten Clustering 6.1.0 GA (31 July 2019)

Version End of Life. Not Yet Set

Release 6.1.0 contains both significant improvements and some needed bugfixes. One of the main features of this release is MySQL 8 support.

The Tungsten Stack now supports the new MySQL 8.0 authentication plugins. Both sha256_password and caching_sha2_password (the new default) are supported by the Replicator, Manager and Connector.

More info on these authentication plugins can be found here: https://dev.mysql.com/doc/refman/8.0/en/sha256-pluggable-authentication.html

The Drizzle driver has been updated to support these new authentication methods, and the MySQL Connector/J 8 is also supported.

Behavior Changes

The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Tungsten Connector

    • The Connector passThroughMode configuration option is now deprecated.

      The following passThroughMode entry will be removed from tungsten-connector/conf/connector.properties. There is currently no tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) option for this, and it is undocumented. The default will be kept to passThroughMode=true.

      # The Tungsten Connector offers an extra fast data transfer mode known as
      # pass-through. When the following switch enabled (default), the Connector
      # will directly transfer data packets between the client and the server.
      # When disabled, every native MySQL command will be translated into a JDBC call.
      passThroughMode=true

      Issues: CT-897

Known Issues

The following issues may affect the operation of Tungsten Cluster and should be taken into account when deploying or updating to this release.

  • Tungsten Connector

    • Some applications might fail to connect to the Connector with MariaDB 10+

      When using MariaDB 10+, the Connector will be confused by the 10 and will think it is a MySQL 8+ server. By default, the Connector will offer to connect with caching_sha2_password. If the application does not know how to switch authentication plugins, it will fail with a message similar to the following:

      The server requested authentication method unknown to the client [caching_sha2_password]

      As a work-around, you may specify the authentication plugin using the following tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) command option:

      --property=defaultAuthPlugin=mysql_native_password

      Issues: CT-1033

Improvements, new features and functionality

  • Command-line Tools

    • A new utility script has been added to the release, tungsten_post_process (in [Tungsten Clustering (for MySQL) 6.1 Manual]), which assists with the graceful maintenance of the static cross-site replicator configuration files on disk.

      Issues: CT-761

      For more information, see The tungsten_post_process Command (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

  • Tungsten Connector

    • The Tungsten Stack now supports the new MySQL 8.0 authentication plugins. Both sha256_password and caching_sha2_password (the new default) are supported by the Replicator, Manager and Connector.

      More info on these authentication plugins can be found here: https://dev.mysql.com/doc/refman/8.0/en/sha256-pluggable-authentication.html

      The Drizzle driver has been updated to support these new authentication methods, and the MySQL Connector/J 8 is also supported.

      In order to be fully transparent with the new defaults, when connected to a MySQL 8+ data source, the Connector will advertise caching_sha2_password as the default plugin.

      With earlier versions of MySQL (pre-8.0), the previous default mysql_native_password is used by default and advertised to the client applications.

      In order to override the default behavior, a new Connector property option for tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) has been added: property=defaultAuthPlugin={autodetect|caching_sha2_password|mysql_native_password}. It is set to autodetect by default.

      Note that if property=defaultAuthPlugin is set to caching_sha2_password, the sha256_password authentication is automatically also supported.

      Warning

      Please note that the Connector does not yet support public key retrieval.

      Also note that, for backwards compatibility, the Connector forces the “CLIENT_DEPRECATE_EOF” to false, disallowing the usage of client session tracking requests (https://dev.mysql.com/doc/refman/5.7/en/session-state-tracking.html)

      Issues: CT-771

    • When logging is set to debug or trace, the Connector will print individual queries. In the past, advanced logging limited the display size of requests to 256 characters to prevent overwhelming the logs in terms of both space and filesystem I/O.

      Some customers need to display more than that, so it is now possible to adjust the size of the statements displayed in debug or trace logging modes. This is handled by a new Connector property option for tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]), property=statement.display.size.in.kb=NNN, which is defined as the maximum query length to display in Kbytes, and now defaults to 1KB.

      Warning

      Warning: setting this option to a high value while DEBUG or TRACE is enabled will quickly fill the logs and disk, in addition to consuming disk I/O!

      For example, if the raw query size is 4KB, then a setting of 1KB would simply display the first 1024 bytes of the query and truncate/discard the rest from a logging perspective.

      For more information about configuring debug and trace logging, please visit Generating Advanced Diagnostic Information (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

      Issues: CT-990
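
      As a hedged example (the value shown is illustrative only), the display limit could be raised to 4KB in the INI file:

      property=statement.display.size.in.kb=4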

Bug Fixes

  • Tungsten Connector

    • OLD BEHAVIOR: If the Primary data source was not accessible when the Connector started (i.e. connection refused, etc.), the connector would still fully initialize, leading to a running Connector without an accessible data source. This had the side effect of leaving default configuration values for both wait_timeout and server_version instead of values properly auto-detected from the MySQL server settings.

      NEW BEHAVIOR: The Connector will now wait indefinitely for a Primary to become available before finishing startup.

      Issues: CT-930

    • Introduced a new tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) flag that allows tuning of the Connector thread stack size, which can be required in particular cases where large requests are sent as text to a connector configured for automated read/write splitting (smartscale and direct reads).

      The setting is commented out by default, letting the JVM use its own default, generally 1024.

      Setting tpm (in [Tungsten Clustering (for MySQL) 6.1 Manual]) option connector-thread-stack-size={value in kb} will override this value.

      Note

      Please note that since the new size will be allocated for each incoming connection, increasing the thread stack size will affect the total runtime memory used by the connector instance

      Issues: CT-973
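
      As an illustrative sketch (the value shown is an example, not a recommendation), the stack size could be doubled in the INI file:

      connector-thread-stack-size=2048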

Tungsten Clustering 6.1.0 Includes the following changes made in Tungsten Replicator 6.1.0

Release 6.1.0 contains both significant improvements and some needed bugfixes. One of the main features of this release is MySQL 8 support.

Improvements, new features and functionality

  • Command-line Tools

    • Two new utility scripts have been added to the release to help with setting the Replicator position:

      - tungsten_find_position, which assists with locating information in the THL based on the provided MySQL binary log event position, and outputs a corresponding dsctl set (in [Tungsten Replicator 6.1 Manual]) command.

      - tungsten_find_seqno, which assists with locating information in the THL based on the provided sequence number, and outputs a corresponding dsctl set (in [Tungsten Replicator 6.1 Manual]) command.

      Issues: CT-934

  • Core Replicator

    • A new, beta-quality command has been included called prov-sl.sh which is intended to eventually replace the current tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) script.

      Currently, prov-sl.sh supports provisioning Replicas using mysqldump and xtrabackup tools, and is MySQL 8-compatible. 

      The prov-sl.sh command is written in Bash, has fewer dependencies than the current version, and is meant to fix a number of issues with it.

      Backups are streamed from source to target so that an intermediate write to disk is not performed, resulting in faster provisioning times.

      Logs are written to $CONTINUENT_ROOT/service_logs/prov-sl.log (i.e. /opt/continuent/service_logs/prov-sl.log).

      For example, provision a Replica from [source db] using mysqldump (default):

      shell> prov-sl.sh -s {source db}

      As another example, use xtrabackup for the backup method, with 10 parallel threads (default is 4), and ssh is listening on port 2222:

      shell> prov-sl.sh -s {source db} -n xtrabackup -t 10 -p 2222

      Warning

      At the moment, prov-sl.sh does not support Composite Active/Active topologies when used with Tungsten Clustering; support will be added in a future release.

      Issues: CT-614, CT-723, CT-809, CT-855, CT-963

    • Upgraded the Drizzle driver to support MySQL 8 authentication protocols (SHA256, caching_sha2).

      Issues: CT-914, CT-931, CT-966

    • The Redshift Applier now allows AWS authentication using IAM Roles. Previously authentication was possible via Access and Secret Key pairs only.

      Issues: CT-980

      For more information, see Redshift Preparation for Amazon Redshift Deployments (in [Tungsten Replicator 6.1 Manual]).

Bug Fixes

  • Command-line Tools

    • When executing mysqldump, all Tungsten tools no longer use the --add-drop-database flag as it will prevent MySQL 8+ from restoring the dump.

      Issues: CT-935

    • Fixed a bug where tpm diag (in [Tungsten Replicator 6.1 Manual]) would generate an empty zip file if the hostnames contain hyphens (-) or periods (.)

      Issues: CT-1032

  • Core Replicator

    • Added support for missing charset GB18030 to correct WARN extractor.mysql.MysqlBinlog Unknown charset errors.

      Issues: CT-915, CT-932

    • Loading data into Redshift would fail with the following error if a row of data contained the NULL control character (0x00):

      Missing newline: Unexpected character 0x30 found at location nnn

      Issues: CT-984

    • Now properly extracting the Geometry datatype.

      Issues: CT-997

    • The ddl_map.json file used by the apply_schema_changes filter was missing a rule to handle ALTER TABLE statements when replicating between MySQL and Redshift.

      Issues: CT-1002

    • The extract_schema_change filter wasn't escaping double-quotes ("), and the generated JSON would then cause the applier to error with:

      pendingExceptionMessage: SyntaxError: missing } after property list »
      (../../tungsten-replicator//support/filters-javascript/apply_schema_changes.js#236(eval)#1)

      Issues: CT-1011

1.11. Tungsten Clustering 6.0.5 GA (20 March 2019)

Version End of Life. 31 July 2020

Release 6.0.5 contains both significant improvements as well as some needed bugfixes.

Improvements, new features and functionality

  • Command-line Tools

    • A new utility script has been added to the release, tungsten_reset_manager (in [Tungsten Clustering (for MySQL) 6.0 Manual]), which assists with the graceful reset of the manager's dynamic state files on disk.

      Issues: CT-850

      For more information, see The tungsten_reset_manager Command (in [Tungsten Clustering (for MySQL) 6.0 Manual]).

Bug Fixes

  • Installation and Deployment

    • Fixed the RPM-based post-install chown command so that symlinked directories receive the correct ownership.

      Issues: CT-767

    • The Tungsten Clustering RPM now preserves the original OS group memberships for the tungsten user.

      Issues: CT-867

  • Command-line Tools

    • Backup operations no longer attempt to back up a witness server.

      Issues: CT-669

    • Include additional views of cctrl output in tpm diag (cctrl_status_simple_SVCNAME).

      Issues: CT-681

    • The MySQL MyISAM check could fail intermittently with no way to bypass it, so the check has been disabled completely.

      Issues: CT-756

    • Fixed an issue where the tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command would allocate inconsistent THL listener ports for the Composite Active/Active topology.

      The new, correct behavior is for the main cluster replicator to always be allocated port 2112, and then relay sub-services are incremented per remote cluster.

      For example, in a 4-site CMM deployment, ports 2112 through 2115 would be allocated - 2112 for the main cluster and 2113, 2114 and 2115 for the remote site relays.

      Issues: CT-799

    • The tpm diag (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command now collects cctrl status without a "WARNING: Unrecognized option 'multi'" error.

      Issues: CT-821

    • Clear-text passwords are now removed from the information gathered by tpm diag.

      Issues: CT-822

    • Fixed NullPointerException in cctrl 'ls -l' output when the dataserver is down.

      Issues: CT-826

  • Tungsten Connector

    • MySQL ping commands are now reconnected/retried upon "server gone away" error (Proxy mode ONLY).

      Issues: CT-863, CT-885

  • Tungsten Manager

    • Fixed a case where get_replicator_roles and cctrl 'ls -l' did not work if a replicator was stopped.

      When a replicator is not running, its host is now inserted into Replicator.HOST in the ReplicationNotification; it was previously, and wrongly, inserted into Replicator.DATASERVERHOST. This fixes the get_replicator_roles script. Hard-coded strings were also replaced with their constant values.

      Issues: CT-760, CT-876

    • The mysql_checker_query script was returning unexpected errors and creating false positives. The script logic was changed to use the timestampdiff function for better accuracy.

      Issues: CT-824

    • Changed the Manager behavior to place the replicator online asynchronously, preventing cctrl from hanging if a Replica replicator is put online while the Primary is offline. Now, if the Primary is offline, the Replica will go into the SYNCHRONIZING state; as the Primary comes online, the Replicas will come online as well.

      Issues: CT-825

Tungsten Clustering 6.0.5 Includes the following changes made in Tungsten Replicator 6.0.5

Release 6.0.5 is a bugfix release.

Improvements, new features and functionality

Bug Fixes

  • Command-line Tools

    • The --hosts (in [Tungsten Replicator 6.0 Manual]) option was not working with the diag sub-command of the tpm (in [Tungsten Replicator 6.0 Manual]) command on nodes installed using the INI method.

      The corrected behavior is as follows:

      • With Staging-method deployments, the tpm diag (in [Tungsten Replicator 6.0 Manual]) command continues to behave as before:

        • The tpm diag (in [Tungsten Replicator 6.0 Manual]) command alone will obtain diagnostics from all hosts in the cluster.

        • The tpm diag --hosts host1,host2,hostN command will obtain diagnostics from the specified host(s) only.

      • With INI-method deployments, the new behavior is as follows:

        • The tpm diag (in [Tungsten Replicator 6.0 Manual]) command alone will obtain diagnostics from the local host only.

        • The tpm diag --hosts host1,host2,hostN command will obtain diagnostics from the specified host(s) only.

          Warning

          Limitation: the host list MUST include the local hostname or the command will fail.

      Issues: CT-345

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command now properly handles the -all-services option for the reset sub-command.

      Issues: CT-762

    • The command tpm reverse --ini-format now outputs correctly (without the double-dashes and the trailing backslash).

      Issues: CT-827, CT-877

    • The command tpm diag (in [Tungsten Replicator 6.0 Manual]) was not collecting config dirs other than the localhost ones.

      Now the mysql, manager, cluster and connector config directories are properly gathered in the diag zip file.

      Issues: CT-860

    • The tpm (in [Tungsten Replicator 6.0 Manual]) command now properly handles network interface names containing colons and/or dots.

      Issues: CT-864

    • Fixed an issue where the tpm (in [Tungsten Replicator 6.0 Manual]) command could print warnings about nil verify_host_key.

      Issues: CT-873

  • Core Replicator

    • The postgres applier now respects the database name set by pgsql-dbname.

      Specifically, the tungsten-replicator/samples/conf/datasources/postgresql.tpl was updated to use the correct variable for the value.

      Issues: CT-704

    • Instead of searching for a Primary with an appropriate role (i.e. matching the Replica's preferred role) until the timeout is reached, the Replicator will now loop twice before accepting a connection to any host, regardless of its role.

      Issues: CT-712

    • The backup process would fail when it encountered 0-byte store*.properties files or store*.properties files with invalid dates.

      The process was changed so that invalid backup properties files are skipped.

      Issues: CT-820

    • Fixed the ability to enable parallel apply within a Composite Active/Active topology.

      The relay is now handled as a Replica, so that it uses the same code as a Replica for its internal connections (disabling binary logging of its internal SQL queries).

      Issues: CT-851

1.12. Tungsten Clustering 6.0.4 GA (11 December 2018)

Version End of Life. 31 July 2020

Release 6.0.4 is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • When installing from an RPM, the connector would automatically be restarted during the installation. This behavior can now be controlled by setting the parameter no-connectors within the INI configuration, which will prevent tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) from restarting the connectors during the automated update processing.

      Issues: CT-792

  • Tungsten Manager

    • Cross-site replicators within a Composite Active/Active deployment can now be configured to point to Replicas by default, and to prefer Replicas over Primaries during operation. In a standard deployment, cross-site replicators work via Primaries at each cluster site to read the remote information. To configure the service to use Replicas in preference to Primaries, use the --policy-relay-from-slave=true (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option to tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]). Both Primaries and Replicas remain in the list of possible hosts; if no Replicas are available during a switch or failover event, then a Primary will be used.

      Issues: CT-776, CT-783
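
      A minimal INI sketch for this option (placement under [defaults] is an assumption):

```ini
[defaults]
# Prefer Replicas as the source for cross-site replication; a Primary is
# still used if no Replica is available during a switch or failover.
policy-relay-from-slave=true
```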

Bug Fixes

  • Installation and Deployment

    • When performing a tpm update (in [Tungsten Clustering (for MySQL) 6.0 Manual]) in a cluster with an active witness, the witness host would not be restarted correctly, resulting in the witness being down on that host.

      Issues: CT-596

    • When using tpm diag (in [Tungsten Clustering (for MySQL) 6.0 Manual]), the command would fail to parse net-ssh options.

      Issues: CT-775

    • The Net::SSH internal options have been updated to reflect changes in the latest Net::SSH release.

      Issues: CT-781

    • When a site goes offline, connections to this site will be forced closed. Those connections will reconnect and, as long as the site stays offline, they will be connected to the remote site.

      You can now enable an option so that when the site comes back online, the connector will disconnect all these connections that couldn't get to their preferred site so that they will then reconnect to the expected site with the appropriate affinity.

      When this option is not enabled, connections will continue to use the server originally configured until they disconnect through normal attrition; this is the default behavior.

      Note that this only applies to bridge mode. In proxy mode, the relevance of the connected data source is re-evaluated before every transaction.

      To enable this option, use the tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option --connector-reset-when-affinity-back=true (in [Tungsten Clustering (for MySQL) 6.0 Manual]).

      Issues: CT-789
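
      As a sketch, in the INI configuration this would look like the following (placement under [defaults] is an assumption):

```ini
[defaults]
# When the preferred site comes back online, force affected connections
# to disconnect so they reconnect with the expected affinity.
# Bridge mode only; the default is false.
connector-reset-when-affinity-back=true
```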

  • Command-line Tools

    • In a Composite Active/Active deployment, once a datasource has been welcomed to the cluster, individual clusters within the composite may not agree on the overall state of the composite and individual clusters.

      Issues: CT-721

    • Tab completion within cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) would not always work in all cases, especially when the -multi (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option was in effect.

      Issues: CT-752

    • The check_tungsten_progress (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command could fail within Composite Active/Active deployments because there is no single default service.

      Issues: CT-757

    • Long service names within cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) could cause output to fail when displaying information. The underlying issue has been fixed. Because long service names can cause formatting issues, a new option, --cctrl-column-width (in [Tungsten Clustering (for MySQL) 6.0 Manual]) has been added which can be used to configure the minimum column width used to display information.

      Issues: CT-773, CT-926
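
      A hedged INI sketch (the width value is a hypothetical example):

```ini
[defaults]
# Minimum column width used by cctrl when formatting output; widen it
# to accommodate long service names.
cctrl-column-width=25
```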

    • During the lifetime of the cluster, switches may happen, and the current Primary may well be a different node than the one reflected in the static INI file on the master= line. Normally, this difference is ignored during an update or an upgrade.

      However, if a customer has some kind of procedure (e.g. automation) which hand-edits the INI configuration file master= line at some point, and such hand-edits do not reflect the current reality at the time of the update/upgrade, the update/upgrade will fail and the cluster may be left in an indeterminate state.

      Warning

      The best practice is to NOT change the master= line in the INI configuration file after installation.

      The tpm CurrentTopologyCheck check has been changed from WARN to ERROR to prevent changed master= lines in INI files from breaking updates and upgrades.

      Warning

      Even with this fix, there is still a window of opportunity for failure. The update will continue, passing the CurrentTopologyCheck test and potentially leaving the cluster in an indeterminate state if the master= option is set to a hostname that is not the current Primary or the current host.

      Issues: CT-801

  • Tungsten Connector

    • The Connector has been modified to get the driver and JDBC URL of the datasource from the Connector-specific configuration, overriding the information normally distributed to it by the manager. This prevents the Connector from using incorrect settings, or empty values.

      Issues: CT-802

  • Tungsten Manager

    • Datasources could fail to be fenced correctly when a replicator fails.

      Issues: CT-424

    • Standby datasources would not be displayed within cctrl correctly.

      Issues: CT-749

    • The tungsten_prep_upgrade (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command could fail if there were certain special characters within the tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) options.

      Issues: CT-750

    • Changed the Manager logic so that the rules will not change the state of a Replicator in the OFFLINE:RESTORING state.

      Issues: CT-798

Tungsten Clustering 6.0.4 Includes the following changes made in Tungsten Replicator 6.0.4

Release 6.0.4 is a bugfix release.

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command previously required the -service (in [Tungsten Replicator 6.0 Manual]) option to be the first option on the command-line. The option can now be placed in any position on the command-line.

      Issues: CT-758

    • Previously, if no service was specified when using trepctl (in [Tungsten Replicator 6.0 Manual]) and multiple services were configured, an error would be reported without listing the potential services. trepctl (in [Tungsten Replicator 6.0 Manual]) now outputs the list of available services and potential commands.

      Issues: CT-759

Bug Fixes

  • Installation and Deployment

    • When using tpm diag (in [Tungsten Replicator 6.0 Manual]), the command would fail to parse net-ssh options.

      Issues: CT-775

    • The Net::SSH internal options have been updated to reflect changes in the latest Net::SSH release.

      Issues: CT-781

  • Heterogeneous Replication

    • Within the Oracle to MySQL ddlscan (in [Tungsten Replicator 6.0 Manual]) templates, the TIMESTAMP datatype in Oracle has been updated to replicate into a DATETIME within MySQL.

      Issues: CT-785

  • Core Replicator

    • Changed the state machine so that RESTORING is a substate of OFFLINE rather than of OFFLINE:NORMAL. While a transition from OFFLINE:NORMAL:RESTORING to ONLINE was possible (which was wrong), it is not possible to transition from OFFLINE:RESTORING to ONLINE.

      The proper sequence of events is: OFFLINE:NORMAL --restore--> OFFLINE:RESTORING --restore_complete--> OFFLINE:NORMAL

      Issues: CT-797

    • Heartbeats would be inserted into the replication flow using UTC, even if the replicator had been configured to use a different timezone.

      Issues: CT-803

1.13. Tungsten Clustering 6.0.3 GA (5 September 2018)

Version End of Life. 31 July 2020

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) now outputs a note and recommendation for performing backups of your cluster when installation has been completed.

      Issues: CT-730

  • Command-line Tools

    • The tungsten_prep_upgrade (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command has been updated to support an explicit host definition for the MySQL host in place of defaulting to the localhost (127.0.0.1). Use the --host (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option.

      Issues: CT-656

    • A new Nagios compatible check script has been added to the release, check_tungsten_policy (in [Tungsten Clustering (for MySQL) 6.0 Manual]), which returns the currently active policy mode.

      Issues: CT-675

      For more information, see The check_tungsten_policy Command (in [Tungsten Clustering (for MySQL) 6.0 Manual]).

  • Tungsten Connector

    • When receiving an error within MySQLPacket, the Connector now prints out the full content of the underlying error message.

      Issues: CT-636

    • The connector has been updated to allow dataservice selection to be deterministic and ordered rather than random by configuration. The updated configuration enables the connector to be set to use an ordered list of clusters within a composite solution.

      To set the order of the service selected during operation, the information must be set within the user.map (in [Tungsten Clustering (for MySQL) 6.0 Manual]). The configuration is based on an ordered, comma-separated list of services to use which are then selected in order. The specification operates on the following rules:

      • List of service names in order

      • If the service name has a dash prefix it is always explicitly excluded from the list of available datasources

      • If a datasource is not specified, it will always be picked last

      For example, in a setup made of three data services, usa, asia and europe, using the affinity usa,asia,-europe will select data sources in data service usa. If usa is not available, data sources in asia are used. If asia is not available, the connection will not succeed, since europe has been negated.

      Issues: CT-650
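
      As a hedged sketch of how such an affinity list appears in practice, a user.map entry takes the general form username password dataservice [affinity]; the user name and password below are placeholders:

```
appuser secret global usa,asia,-europe
```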

  • Tungsten Manager

    • The router gateway which provides communication between the manager and connector could shutdown even when quorum was available in a two-node cluster.

      Issues: CT-676

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) would fail during installation if the current directory was not writable by the current user.

      Issues: CT-564

    • Composite Active/Active cluster installations would fail if the hostname contains two or more hyphens or periods.

      Issues: CT-682, CT-695

    • tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) would fail to set properties within the defaults section of the configuration within Composite Active/Active clusters.

      Issues: CT-683

  • Command-line Tools

    • Using tpm diag (in [Tungsten Clustering (for MySQL) 6.0 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Clustering (for MySQL) 6.0 Manual]).

      Issues: CT-610

    • Using tpm connector (in [Tungsten Clustering (for MySQL) 6.0 Manual]) at the command-line would fail if the core MySQL configuration file (i.e. /etc/my.cnf) did not exist.

      Issues: CT-641

  • Tungsten Connector

    • The connector would fail to set reusable network addresses during configuration, which could delay or slow startup until the address/port became available again.

      Issues: CT-694

    • When operating in bridge mode, the connector would fail to check whether the driver was in enabled/disabled mode, which could cause upgrades to fail as part of a graceful shutdown/update operation.

      Issues: CT-696

    • Multiple connectors within a cluster could all connect to the same manager within a given service, increasing the load on the single manager.

      Issues: CT-717

    • The Tungsten Connector could mistakenly get the Primary data source of the wrong data service within Composite Active/Active deployments during configuration.

      Issues: CT-719

  • Tungsten Manager

    • Performing a switch operation within a Composite Active/Active cluster with three or more clusters when the cluster was in MAINTENANCE (in [Tungsten Clustering (for MySQL) 6.0 Manual]) mode and the cross-site replicators are offline would lead to an unrecoverable cluster failure.

      Issues: CT-589

    • During a switch operation on a Composite Active/Active cluster when the cluster had been put into maintenance mode, the manager would put the cross-site replicators back into the online state.

      Issues: CT-591

    • The connector --cluster-status --json command would output header and footer information in place of bare JSON, which would then cause JSON parsing to fail.

      Issues: CT-685

    • A memory leak within the manager, particularly in Composite Active/Active deployments, could cause the Java VM to consume more and more CPU cycles and then restart.

      Issues: CT-673, CT-691

    • During a relay failover within a Composite Active/Passive or Composite Active/Active deployment, if communications between sites had also failed when the failover occurred, the manager would be unable to determine the correct Primary of the remote site.

      Issues: CT-703

    • Within Composite Active/Active deployments, during a cascading MySQL failure and switch operation across sites, the secondary site could misconfigure the cross-site relay.

      Issues: CT-713

    • A memory leak was identified in the router manager component that manages the communication between the manager and the connector.

      Issues: CT-715

    • In a deployment, single cluster or Composite Active/Active, where there is either the potential for high latency across sites, or high latency within a site due to high loads on the connectors, the manager could mis-identify this high latency as a failure. This would trigger a quorum validation. These would be reported as network hangs, even though the result of the quorum check would be valid.

      To address this, the processing of router notifications handled by the connector has been separated from all other operations. This reduces the chance of a heartbeat gap between hosts, so the connectors remain available to the managers even under high load or latency.

      Issues: CT-725

Tungsten Clustering 6.0.3 Includes the following changes made in Tungsten Replicator 6.0.3

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 6.0 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

    • The replicator has been updated to support the new character sets supported by MySQL 5.7 and MySQL 8.0, including the UTF-8-mb4 series.

      Issues: CT-700, CT-970

Bug Fixes

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 6.0 Manual]) attempts to find the system commands, such as service and systemctl, used to start and stop databases. If these were not in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) would fail to find a start/stop mechanism for the configured database. In addition to looking for these tools in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) now also explicitly looks in the /sbin, /bin, /usr/bin and /usr/sbin directories.

      Issues: CT-722

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 6.0 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 6.0 Manual]), the operation would fail if the /etc/mysql directory does not exist.

      Issues: CT-724

    • Because the operation could take a long time or time out, the capture of lsof output has been removed from tpm diag (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-731

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without highlighting the individual services reporting the entries, making it difficult to identify individual log entries.

      Issues: CT-659

    • When replicating data that included timestamps, the replicator would update the timestamp value to the time within the commit from the incoming THL. When using statement-based replication, times would be correctly replicated, but with a mixture of statement- and row-based replication, the timestamp value would not be set back to the default when switching between statement- and row-based events. This would not cause problems on the applied host, except when log_slave_updates was enabled; in that case, all row-based events after a statement-based event would have the same timestamp value applied.

      Issues: CT-686

1.14. Tungsten Clustering 6.0.2 GA (27 June 2018)

Version End of Life. 31 July 2020

This is a bugfix release.

Bug Fixes

  • Tungsten Manager

    • Within a Composite Active/Active cluster, the manager could set the Primary to read-only when performing a switch operation.

      Issues: CT-672

1.15. Tungsten Clustering 6.0.1 GA (30 May 2018)

Version End of Life. 31 July 2020

This is a bugfix release.

Known Issue

The following issues may affect the operation of Tungsten Cluster and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • It was previously impossible to change from a non-SSL installation to an SSL installation using self-generated certificates if an INI style configuration was being used. This can now be achieved by using the following command-line:

      shell> tools/tpm update --replace-release --replace-jgroups-certificate --replace-tls-certificate

      Issues: CT-442

    • Previously the system had been configured to dump heap files by default when the system ran out of memory which was useful for debugging by the development team. This has now been disabled.

      Issues: CT-604

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm diag (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command has been improved to include more information about the environment, including:

      • The output from the lsof command.

      • The output from the ps command.

      • The output from the show full processlist command within mysql.

      • Copies of all the .properties configuration files.

      • Copies of all the cluster configuration and .properties files.

      • Copies of all the my.cnf files, including directory configurations.

      • The output from the connector cluster-status (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command.

      • The output from all services in active/active clustering deployments.

      • Improvements to the clarity of some commands.

      • The INI files used by tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) (if using INI installs) are included.

      Issues: CT-530, CT-611, CT-615, CT-623

  • Tungsten Manager

    • The REASON FOR MAINTENANCE MODE message has been updated when a failover has occurred to specifically indicate a failover rather than a switch.

      Issues: CT-624

Bug Fixes

  • Tungsten Manager

    • A script used internally by the manager to determine the status of replication, called mysql_checker_query.sql, had been identified as providing bad information under certain complex circumstances. The effects of the bad script could include out-of-memory failures. The script and query have been rewritten.

      Issues: CT-457

    • The first execution of ls (in [Tungsten Clustering (for MySQL) 6.0 Manual]) within cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) within active/active clusters could fail to provide the cluster status information at the top (world) level.

      Issues: CT-551

    • Performing a switch in a two-cluster active/active deployment could fail if the cross-site replicators were not accessible.

      Issues: CT-592

    • An error executing the query checker script would not be identified and trapped properly.

      Issues: CT-632

    • Within a running cluster, managers on different hosts with a composite cluster could show different cluster state information after a switch operation.

      Issues: CT-633, CT-634

    • The API has been updated to improve compatibility with the Tungsten Dashboard.

      Issues: CT-639

Tungsten Clustering 6.0.1 Includes the following changes made in Tungsten Replicator 6.0.1

Release 6.0.1 is a bugfix release.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Command-line Tools

    • The tungsten_set_position (in [Tungsten Replicator 6.0 Manual]) and tungsten_get_position commands have been deprecated and will be removed in the 6.1.0 release. These commands only worked with MySQL datasources. Use the dsctl (in [Tungsten Replicator 6.0 Manual]) command, which works with a much wider range of datasources.

      Issues: CT-517
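
      As an illustrative sketch of the migration (the values shown here are hypothetical; check dsctl help for the exact flags supported by your datasource type), the equivalent get/set operations with dsctl look like:

      shell> dsctl get
      shell> dsctl set -seqno 322 -epoch 0 -event-id "mysql-bin.000001:0000000000000123" -source-id db1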

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl services (in [Tungsten Replicator 6.0 Manual]) command has been updated to support auto-refresh using the -r command-line option.

      Issues: CT-627
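
      For example, assuming -r takes a refresh interval in seconds (as with other trepctl auto-refresh options), the following would refresh the service list every five seconds:

      shell> trepctl services -r 5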

    • trepctl (in [Tungsten Replicator 6.0 Manual]) has been updated with a new command, servicetable (in [Tungsten Replicator 6.0 Manual]). This outputs the status information for multiple services in a tabular format, making it easier to identify the state of multi-service replicators. For example:

      shell> trepctl servicetable
      Processing servicetable command...
      Service              | Status                         | Role       | MasterConnectUri               | SeqNo      | Latency
      -------------------- | ------------------------------ | ---------- | ------------------------------ | ---------- | ----------
      alpha                | ONLINE                         | slave      | thl://trfiltera:2112/          | 322        | 0.00
      beta                 | ONLINE                         | slave      | thl://ubuntuheterosrc:2112/    | 12         | 4658.59
      Finished servicetable command...

      The command also supports the auto-refresh option, -r.

      Issues: CT-637

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying this datatype with MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

    • When using Net::SSH within tpm (in [Tungsten Replicator 6.0 Manual]), more detailed information about any specific failures or errors is now provided.

      Issues: CT-523

    • tpm (in [Tungsten Replicator 6.0 Manual]) would mistakenly report issues with JSON columns during installation; this check no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 6.0 Manual]) command could hang in a number of scenarios, including when executed in the background, or as part of a background script or cron job. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • The trepctl status (in [Tungsten Replicator 6.0 Manual]) command would fail if the service name did not exist in the configuration, or if multiple services were configured.

      Issues: CT-545, CT-593

    • When using tpm (in [Tungsten Replicator 6.0 Manual]) with the INI method, the command would search multiple locations for suitable INI files. This could lead to multiple definitions of the same service, which could in turn lead to duplication of the installation process and occasional failures. If multiple INI files are found, a warning is now produced to highlight the potential for failures.

      Issues: CT-626

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBounds errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 6.0 Manual]) where the sequence number was larger than an INT would cause an error.

      Issues: CT-642

  • Oracle Replication

    • The prepare_offboard_fetcher script could fail due to the use of a command that may not exist on some platforms. Under some circumstances the script could also be installed as non-executable.

      Issues: CT-420, CT-421

  • Heterogeneous Replication

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for MySQL to Oracle did not escape field names correctly.

      Issues: CT-249

    • When replicating data into MongoDB, numeric values and date values would be represented in the target database as strings, not as their native values.

      Issues: CT-581, CT-582

    • The default partition method used when loading data through CSV files showed an incorrect example format. Previously it was advised to use:

      'commit_hour='yyyy-MM-dd-HH

      It should instead show only the date format:

      yyyy-MM-dd-HH

      Issues: CT-607

    • The Javascript batch loader for Redshift could generate an error when loading the object used to derive information during loading.

      Issues: CT-620

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for Oracle to Redshift failed to handle the NUMBER type correctly.

      Issues: CT-621

  • Core Replicator

    • Optimizing deletes in row-based replication could delete the wrong rows if the pkey (in [Tungsten Replicator 6.0 Manual]) filter had not been enabled.

      Issues: CT-557

    • The included Drizzle driver would incorrectly assign values to prepared statements if the fields in the prepared statement included a question mark.

      Issues: CT-608

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility with the metadata.

      Issues: CT-618

  • Filters

    • The truncatetext (in [Tungsten Replicator 6.0 Manual]) filter was not configurable within all topologies. The configuration has now been updated so that the filter can be used in MySQL and other database environments.

      Issues: CT-386

1.16. Tungsten Clustering 6.0.0 GA (4 April 2018)

Version End of Life. 31 July 2020

Tungsten Cluster 6.0.0 is a major update to the operation and deployment of composite clusters. Within the new framework, a Composite Active/Active cluster is configured as follows:

  • Clusters within a composite cluster are now managed in a unified fashion, including the overall replication progress across clusters.

  • Cross-site replicators are configured as additional services within the main replicator.

  • Cross-site replicators are managed by the manager as part of a complete composite cluster solution.

  • A new global progress counter displays the current progress for the local and cross-site replication.

  • Connectors are configured by default to provide affinity for the local, and then the remote cluster.

The cluster package name has been changed, and upgrades from older versions to the new configuration and layout are supported.

Behavior Changes

The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Installation and Deployment

    • A new unified cluster deployment is available, the Composite Active/Active. This is an updated version of the Multi-Site/Active-Active deployment in previous releases. It encompasses a number of significant changes and improvements:

      • Single, cluster-based, deployment using the new deployment type of composite-multi-master.

      • Unified Composite Active/Active cluster status within cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]).

      • Global progress counter indicating the current cluster and cross-cluster performance.

      Issues: CT-105, CT-313, CT-431, CT-467

    • The name of the cluster deployment package for Tungsten Cluster has changed. Packages are now named to match the product, for example, release-notes-1-99.tar.gz.

      Issues: CT-271, CT-438

    • Support for using Java 7 with Tungsten Cluster has been removed. Java 8 or higher must be used for all deployments.

      Issues: CT-450

    • The behavior of cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) has changed to operate better within the new composite deployments. Without the -multi (in [Tungsten Clustering (for MySQL) 6.0 Manual]) argument, cctrl will cd (in [Tungsten Clustering (for MySQL) 6.0 Manual]) into the local standalone service, matching previous releases; all services remain accessible without needing the -multi option. With the -multi argument, cctrl will not automatically cd into the local standalone service, but will show all available services.

      Issues: CT-524

  • Command-line Tools

    • Due to the change in the nature of the services and clustering within Composite Active/Passive and Composite Active/Active configurations, the tungsten_provision_slave (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command has been updated to support cross-cluster provisioning. Because there would now be a conflict of service names, a cross cluster provision should use the --force (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option. The --service (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option should still be set to the local service being reset. For example:

      shell> tungsten_provision_slave --source=db4 --service=east --direct --force

      Issues: CT-567

Known Issue

The following issues may affect the operation of Tungsten Cluster and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • During an upgrade installation from a v4 MSMM deployment, you may find additional, empty schemas created within your MySQL database. These schemas are harmless and can safely be removed. For example, if you have two services in your MSMM deployment, east and west, during the upgrade you will get two empty schemas, tungsten_east_from_west and tungsten_west_from_east.

      This will be addressed in a future release.

      Issues: CT-559

    • When performing a tpm update (in [Tungsten Clustering (for MySQL) 6.0 Manual]) operation to change the configuration and the cluster is in AUTOMATIC (in [Tungsten Clustering (for MySQL) 6.0 Manual]) mode, the update will complete correctly but the cluster may be left in MAINTENANCE (in [Tungsten Clustering (for MySQL) 6.0 Manual]) mode instead of being placed back into AUTOMATIC (in [Tungsten Clustering (for MySQL) 6.0 Manual]) mode.

      This will be addressed in a future release.

      Issues: CT-595

    • When performing a tpm update (in [Tungsten Clustering (for MySQL) 6.0 Manual]) in a cluster with an active witness, the host with the witness will not be restarted correctly resulting in the witness being down on that host.

      This will be addressed in a future release.

      Issues: CT-596

    • In a Composite Active/Active cluster deployment with three or more clusters, a failure of the MySQL server in one node in a cluster could fail to be identified, ultimately causing the failover within the environment to fail, either within the cluster or across clusters.

      This will be addressed in a future release.

      Issues: CT-619

  • Tungsten Manager

    • During a switch operation on a Composite Active/Active cluster when the cluster has been put into maintenance mode, the manager will put the cross-site replicators back into the online state.

      This will be addressed in a future release.

      Issues: CT-591

Improvements, new features and functionality

  • Installation and Deployment

    • A new utility script, tungsten_prep_upgrade (in [Tungsten Clustering (for MySQL) 6.0 Manual]) has been provided as part of the standard installation. The script is specifically designed to assist during the upgrade of a Multi-Site/Active-Active deployment from 5.3.0 and earlier to the new Composite Active/Active 6.0.0 deployment.

      Issues: CT-104

  • Command-line Tools

    • The cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command now includes a show topology (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command, which outputs the current topology for the cluster or component being viewed.

      Issues: CT-429

    • The tpm diag (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command has been extended to include Composite Active/Active cluster status information, with one entry for each configured service and cross-site service.

      Issues: CT-594

  • Tungsten Connector

    • By default, within Composite Active/Active clusters, the affinity for the connector is configured to connect first to the Primary for the site on which the connector lives and then, if that Primary is not available, to connect to the other site.

      Issues: CT-448

Bug Fixes

  • Command-line Tools

    • The mm_tpm diag command could complain that an extra replicator is configured and running, even though it would be valid as part of a Composite Active/Active deployment.

      Issues: CT-396

    • The mm_trepctl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command could fail to display any status information while obtaining the core statistic information from each host.

      Issues: CT-437

  • Tungsten Manager

    • When performing a recover or switch operation within maintenance mode, the cluster would automatically revert to automatic mode just before and immediately after a switch, which could lead to problems correctly recovering a cluster.

      Issues: CT-472

Tungsten Clustering 6.0.0 Includes the following changes made in Tungsten Replicator 6.0.0

Release 6.0.0 is a feature and bugfix release. This release contains the following key fixes:

  • Added PostgreSQL applier support.

  • Added support for custom primary key field selection for source tables that cannot be configured with a primary key within the database.

  • Added a new filter for including whole of transaction metadata information into each event.

  • Added support for extended transaction information within the Kafka applier so that all the messages for a given transaction can be identified.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

Improvements, new features and functionality

  • Heterogeneous Replication

    • The Kafka applier now supports the inclusion of transaction information into each Kafka message broadcast, including the list of schema/tables and row counts for the entire transaction, as well as information about whether the message is the first or last message/row within an overall transaction. The transaction information can also be sent as a separate message on an independent Kafka topic.

      Issues: CT-496, CT-586

      For more information, see Optional Configuration Parameters for Kafka (in [Tungsten Replicator 6.0 Manual]).

  • Core Replicator

    • Experimental support for writing row-based data through SQL into PostgreSQL has been added back to the replicator. This includes basic support for the replication of the data. Currently databases and tables must be created by hand. A future release will incorporate full support for DDL translation.

      Issues: CT-149

  • Filters

    • The pkey (in [Tungsten Replicator 6.0 Manual]) filter has been extended to support the specification of custom primary key fields. This enables fields in the source data to be marked as primary keys even if the source database does not have the keys specified. This is useful for heterogeneous loading of data where a unique key may exist, but cannot be defined due to the application or database that created the tables.

      Issues: CT-481

    • A new filter, rowaddtxninfo (in [Tungsten Replicator 6.0 Manual]) has been added which embeds row counts, both total and per schema/table, to the metadata for a THL event/transaction.

      Issues: CT-497

Bug Fixes

  • Installation and Deployment

    • When performing a tpm reverse (in [Tungsten Replicator 6.0 Manual]), the --replication-port (in [Tungsten Replicator 6.0 Manual]) setting would be replaced with its alias, --oracle-tns-port.

      Issues: CT-597

  • Core Replicator

    • An internal optimization within the replicator that would attempt to optimize row-based information and operations has been removed. The effects of the optimization were actually seen in very few situations, and it duplicated work and operations performed by the pkey (in [Tungsten Replicator 6.0 Manual]) filter. Unfortunately the same optimization could also cause issues within heterogeneous deployments with the removal of information.

      Issues: CT-318

    • The internal storage of the MySQL server ID has been updated to support larger server IDs. This works with any MySQL deployment, but has been specifically expanded to work better with some cloud deployments where the server ID cannot be controlled.

      Issues: CT-439

    • The format of some errors and log entries would contain invalid characters.

      Issues: CT-493

1.17. Tungsten Clustering 5.4.1 GA (28 October 2019)

Version End of Life. Not Yet Set

Release 5.4.1 contains both significant improvements as well as some needed bugfixes.

Improvements, new features and functionality

  • Tungsten Manager

    • Improved how the Manager and Replicator behave when MySQL dies on the Primary node.

      This improvement will induce a change of behavior in the product during failover by default, possibly causing a delay in failover as a way to protect data integrity.

      The new default setting for this release is:

      replicator.store.thl.stopOnDBError=false

      This means that the Manager will wait until the Replicator reads all remaining binlog events on the failing Primary node.

      Failover will only continue once:

      • all available events are completely read from the binary logs on the Primary node

      • all events have reached the Replicas

      WARNING:

      The new default means that the failover time could take longer than it used to.

       

      When replicator.store.thl.stopOnDBError=true, then the Replicator will stop extracting once it is unable to update the trep_commit_seqno table in MySQL, and the Manager will perform the failover without waiting, at the risk of possible data loss due to leaving binlog events behind. All such situations are logged.

      For use cases where failover speed is more important than data accuracy, those NOT willing to wait for long failover can set replicator.store.thl.stopOnDBError=true and still use tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 5.4 Manual]) to manually analyze and perform the data recovery. For more information, please see The tungsten_find_orphaned Command (in [Tungsten Clustering (for MySQL) 5.4 Manual]).

      Issues: CT-583
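
      An illustrative tpm INI fragment for the speed-over-accuracy case (the property= passthrough is standard tpm usage; section placement shown here is a sketch only):

      [defaults]
      property=replicator.store.thl.stopOnDBError=true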

    • Improved the ability of the manager to detect un-extracted, desirable binary log events when recovering the old Primary via cctrl (in [Tungsten Clustering (for MySQL) 5.4 Manual]) after a failover.

      The cctrl recover command will now fail if:

      • any unextracted binlog events exist on the old Primary that we are trying to recover

      • the old Primary THL contains more events than the Replicas

      In this case, the cctrl recover command will display text similar to the following:

      Recovery failed because the failed Primary has unextracted events in
      the binlog. Please run the tungsten_find_orphaned script to inspect
      this events. Provided you have a recent backup available, you can
      try to restore the data source by issuing the following command:
                     datasource {hostname} restore
      Please consult the user manual at:
      https://docs.continuent.com/tungsten-clustering-6.1/operations-restore.html

      The tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 5.4 Manual]) script is designed to locate orphaned MySQL binary logs that were not extracted into THL before a failover. For more information, please see The tungsten_find_orphaned Command (in [Tungsten Clustering (for MySQL) 5.4 Manual]).

      Issues: CT-996

    • Improved the ability to configure the manager's behavior upon failover.

      During a failover, the manager will now wait until the selected Replica has applied all stored THL events before promoting that node to Primary.

      This wait time can be configured via the manager.failover.thl.apply.wait.timeout property.

      The default value is 0, which means "wait indefinitely until all stored THL events are applied".

      Any value other than zero invites data loss due to the fact that once the Replica is promoted to Primary, any unapplied stored events in the THL will be ignored, and therefore lost.

      Whenever a failover occurs, the Replica with the most events stored in the local THL is selected, so that when the events are eventually applied, the data is as close to the original Primary as possible, with the fewest events missed.

      That is usually, but not always, the most up-to-date Replica, which is the one with the most events applied.

      There should be a good balance between the value for manager.failover.thl.apply.wait.timeout and the value for policy.slave.promotion.latency.threshold, which is the number of seconds within which a Replica must be current with the Primary in order to qualify as a candidate for failover. The default is 15 minutes (900 seconds).

      Issues: CT-1022
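
      As an illustrative tpm INI fragment (the timeout value is an example only; balance it against the promotion latency threshold as described above):

      [defaults]
      property=manager.failover.thl.apply.wait.timeout=300
      property=policy.slave.promotion.latency.threshold=900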

Bug Fixes

  • Command-line Tools

    • Installing with disable-security-controls=false, or updating using tools/tpm update --replace-jgroups-certificate --replace-tls-certificate, would generate self-signed security certificates with a 1-year expiration, which would eventually cause installations to break.

      This expiration time value is controlled by the tpm (in [Tungsten Clustering (for MySQL) 5.4 Manual]) command option --java-tls-key-lifetime, which is now set to 10 years or 3,650 days by default.

      Issues: CT-937
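
      For example, assuming the option is passed on the tpm command line in the usual --option=value form, certificates could be regenerated with an explicit lifetime in days (shown here matching the new default):

      shell> tools/tpm update --replace-tls-certificate --java-tls-key-lifetime=3650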

    • Updated the check_tungsten.sh command to have the executable bit set.

      Issues: CT-1037

    • Updated the check_tungsten_services (in [Tungsten Clustering (for MySQL) 5.4 Manual]) and zabbix_tungsten_services (in [Tungsten Clustering (for MySQL) 5.4 Manual]) commands to auto-detect active witnesses.

      Issues: CT-1043

  • Tungsten Manager

    • Fixed an issue where the Manager would show an exception when the MySQL check script did not get expected results.

      Issues: CT-912

    • Fixed a use case where xtrabackup would time out during backup via cctrl.

      Issues: CT-1045

    • Improved the ability to find needed binaries, both locally and over SSH, for the tungsten_find_orphaned (in [Tungsten Clustering (for MySQL) 5.4 Manual]) and tungsten_is_recoverable commands.

      Issues: CT-1053

1.18. Tungsten Clustering 5.4.0 GA (31 July 2019)

Version End of Life. Not Yet Set

Release 5.4.0 contains both significant improvements as well as some needed bugfixes. One of the main features of this release is MySQL 8 support.

The Tungsten Stack now supports the new MySQL 8.0 authentication plugins. Both sha256_password and caching_sha2_password (the new default) are supported by the Replicator, Manager and Connector.

More info on these authentication plugins can be found here: https://dev.mysql.com/doc/refman/8.0/en/sha256-pluggable-authentication.html

The Drizzle driver has been updated to support these new authentication methods, and the MySQL Connector/J 8 is also supported.

Behavior Changes

The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Tungsten Connector

    • The Connector passThroughMode configuration option is now deprecated.

      The following passThroughMode entry will be removed from tungsten-connector/conf/connector.properties. There is currently no tpm (in [Tungsten Clustering (for MySQL) 5.4 Manual]) option for this, and it is undocumented. The default will be kept to passThroughMode=true.

      # The Tungsten Connector offers an extra fast data transfer mode known as
      # pass-through. When the following switch enabled (default), the Connector
      # will directly transfer data packets between the client and the server.
      # When disabled, every native MySQL command will be translated into a JDBC call.
      passThroughMode=true

      Issues: CT-897

Known Issue

The following issues may affect the operation of Tungsten Cluster and should be taken into account when deploying or updating to this release.

  • Tungsten Connector

    • Some applications might fail to connect to the Connector with MariaDB 10+

      When using MariaDB 10+, the Connector will misinterpret the 10 as indicating a MySQL 8+ server. By default, the Connector will offer to connect with caching_sha2_password. If the application does not know how to switch authentication plugins, it will fail with a message similar to the following:

      The server requested authentication method unknown to the client [caching_sha2_password]

      As a work-around, you may specify the authentication plugin using the following tpm (in [Tungsten Clustering (for MySQL) 5.4 Manual]) command option:

      --property=defaultAuthPlugin=mysql_native_password

      Issues: CT-1033

Improvements, new features and functionality

  • Command-line Tools

    • A new utility script has been added to the release, tungsten_post_process (in [Tungsten Clustering (for MySQL) 5.4 Manual]), which assists with the graceful maintenance of the static cross-site replicator configuration files on disk.

      Issues: CT-761

      For more information, see The tungsten_post_process Command (in [Tungsten Clustering (for MySQL) 5.4 Manual]).

    • A new utility script has been added to the release, tungsten_reset_manager (in [Tungsten Clustering (for MySQL) 5.4 Manual]), which assists with the graceful reset of the manager's dynamic state files on disk.

      Issues: CT-850

      For more information, see The tungsten_reset_manager Command (in [Tungsten Clustering (for MySQL) 5.4 Manual]).

  • Tungsten Connector

    • The Tungsten Stack now supports the new MySQL 8.0 authentication plugins. Both sha256_password and caching_sha2_password (the new default) are supported by the Replicator, Manager and Connector.

      More info on these authentication plugins can be found here: https://dev.mysql.com/doc/refman/8.0/en/sha256-pluggable-authentication.html

      The Drizzle driver has been updated to support these new authentication methods, and the MySQL Connector/J 8 is also supported.

      In order to be fully transparent with the new defaults, when connected to a MySQL 8+ data source, the Connector will advertise caching_sha2_password as the default plugin.

      With earlier versions of MySQL (pre-8.0), the previous default mysql_native_password is used by default and advertised to the client applications.

      In order to override the default behavior, a new Connector property option for tpm (in [Tungsten Clustering (for MySQL) 5.4 Manual]), property=defaultAuthPlugin={autodetect|caching_sha2_password|mysql_native_password}, has been added; it is set to autodetect by default.

      Note that if property=defaultAuthPlugin is set to caching_sha2_password, the sha256_password authentication is automatically also supported.

      Warning

      Please note that the Connector does not support public key retrieval as of yet.

      Also note that, for backwards compatibility, the Connector forces the “CLIENT_DEPRECATE_EOF” to false, disallowing the usage of client session tracking requests (https://dev.mysql.com/doc/refman/5.7/en/session-state-tracking.html)

      Issues: CT-771

    • When logging is set to debug or trace, the Connector will print individual queries. In the past, advanced logging limited the display size of requests to 256 characters to prevent overwhelming the logs in terms of both space and filesystem I/O.

      Some customers need to display more than that, so it is now possible to adjust the size of the statements displayed in debug or trace logging modes. This is handled by a new Connector property option for tpm (in [Tungsten Clustering (for MySQL) 5.4 Manual]), property=statement.display.size.in.kb=NNN, which is defined as the maximum query length to display in Kbytes, and now defaults to 1KB.

      Warning

      Warning: setting this option to a high value while DEBUG or TRACE is enabled will quickly fill logs and disk, in addition to using up disk I/O's!

      For example, if the raw query size is 4KB, then a setting of 1KB would simply display the first 1024 bytes of the query and truncate/discard the rest from a logging perspective.

      For more information about configuring debug and trace logging, please visit Generating Advanced Diagnostic Information (in [Tungsten Clustering (for MySQL) 5.4 Manual]).

      Issues: CT-990
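
      An illustrative tpm configuration fragment (the 4 KB value is an example only; keep the disk and I/O warning above in mind when raising it):

      property=statement.display.size.in.kb=4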

Bug Fixes

  • Installation and Deployment

    • Fixed the RPM-based post-install chown command so that symlinked directories get correct ownership.

      Issues: CT-767

    • The Tungsten Clustering RPM now preserves the original OS group memberships for the tungsten user.

      Issues: CT-867

  • Command-line Tools

    • Long service names within cctrl (in [Tungsten Clustering (for MySQL) 5.4 Manual]) could cause output to fail when displaying information. The underlying issue has been fixed. Because long service names can cause formatting issues, a new option, --cctrl-column-width has been added which can be used to configure the minimum column width used to display information.

      Issues: CT-773, CT-926

  • Tungsten Connector

    • MySQL ping commands are now reconnected/retried upon "server gone away" error (Proxy mode ONLY).

      Issues: CT-863, CT-885

    • OLD BEHAVIOR: If the Primary data source was not accessible when the Connector started (i.e. connection refused, etc.), the connector would still fully initialize, leading to a running Connector without an accessible data source. This has the side effects of having default configuration values for both wait_timeout and server_version instead of properly auto-detected values based on the MySQL server settings.

      NEW BEHAVIOR: The Connector will now wait indefinitely for a Primary to become available before finishing startup.

      Issues: CT-930

    • Introduced a new tpm (in [Tungsten Clustering (for MySQL) 5.4 Manual]) flag allowing for tuning Connector thread stack size, which can be required in particular cases where large requests are sent as text to a connector configured for automated read/write splitting (smartscale and direct reads).

      The setting is commented out by default, letting the JVM use its own default, generally 1024 KB.

      Setting tpm (in [Tungsten Clustering (for MySQL) 5.4 Manual]) option connector-thread-stack-size={value in kb} will override this value.

      Note

      Please note that since the new size will be allocated for each incoming connection, increasing the thread stack size will affect the total runtime memory used by the connector instance.

      Issues: CT-973
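
      An illustrative tpm INI fragment (the 2048 KB value is an example only, chosen as twice the typical JVM default noted above):

      [defaults]
      connector-thread-stack-size=2048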

  • Tungsten Manager

    • Fixed an edge case where the Primary node and the coordinator node were the same and that node was rebooted; the failover would not complete and would throw an error.

      Issues: CT-479

    • Removed spurious warnings during composite switch or failover.

      Issues: CT-487

    • Fixed a case where get_replicator_roles and cctrl 'ls -l' did not work if a replicator was stopped.

      When a replicator is not running, the Replicator.HOST value is now inserted into the ReplicationNotification; previously it was incorrectly inserted into Replicator.DATASERVERHOST. This fixes the get_replicator_roles script. Hard-coded strings were also replaced with their constant values.

      Issues: CT-760, CT-876

1.19. Tungsten Clustering 5.3.6 GA (04 February 2019)

Version End of Life. 31 July 2020

This is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • When installing from an RPM, the installation process would automatically restart the connector. This behavior can now be controlled by setting the parameter no-connectors within the ini configuration, which prevents tpm (in [Tungsten Clustering (for MySQL) 5.3 Manual]) from restarting the connectors during the automated update processing.

      Issues: CT-792
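      A minimal INI sketch of this setting; the [defaults] stanza and boolean value syntax are assumptions:

      ```ini
      # Hypothetical example: prevent tpm from restarting the connectors
      # during automated RPM update processing.
      [defaults]
      no-connectors=true
      ```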

  • Command-line Tools

    • A new Nagios compatible check script has been added to the release, check_tungsten_policy (in [Tungsten Clustering (for MySQL) 5.3 Manual]), which returns the currently active policy mode.

      Issues: CT-675

      For more information, see The check_tungsten_policy Command (in [Tungsten Clustering (for MySQL) 5.3 Manual]).
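      As an illustrative sketch, the script can be invoked directly from the shell or wired into a Nagios command definition; any options the script accepts are omitted here, since they are not covered in this note:

      ```shell
      # Hypothetical invocation: reports the currently active policy mode
      # as a Nagios-compatible check.
      shell> check_tungsten_policy
      ```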

Bug Fixes

  • Command-line Tools

    • The backup process no longer attempts to back up a witness server.

      Issues: CT-669

    • Include additional views of cctrl output in tpm diag (cctrl_status_simple_SVCNAME).

      Issues: CT-681

    • The MySQL MyISAM check could fail intermittently with no way to bypass it, so the check has been disabled completely.

      Issues: CT-756

    • During the lifetime of the cluster, switches may happen and the current Primary may well be a different node than what is reflected in the static ini file in the master= line. Normally, this difference is ignored during an update or an upgrade.

      However, if a customer has some kind of procedure (i.e. automation) which hand-edits the ini configuration file master= line at some point, and such hand-edits do not reflect the current reality at the time of the update/upgrade, an update/upgrade will fail and the cluster may be left in an indeterminate state.

      Warning

      The best practice is to NOT change the master= line in the INI configuration file after installation.

      Changed tpm check CurrentTopologyCheck from WARN to ERROR to prevent changed master= lines in ini files from breaking updates and upgrades.

      Warning

      Even with this fix, there is still a window of opportunity for failure. The update will continue, passing the CurrentTopologyCheck test and potentially leaving the cluster in an indeterminate state if the master= option is set to a hostname that is not the current Primary or the current host.

      Issues: CT-801

    • Remove any clear-text passwords gathered via tpm diag.

      Issues: CT-822

  • Tungsten Connector

    • The Connector has been modified to get the driver and JDBC URL of the datasource from the Connector-specific configuration, overriding the information normally distributed to it by the manager. This prevents the Connector from using incorrect settings, or empty values.

      Issues: CT-802

  • Tungsten Manager

    • The mysql_checker_query script was returning unexpected errors and creating false positives. The script logic was changed to use the timestampdiff function for better accuracy.

      Issues: CT-824

    • Changed the Manager behavior to place the replicator online asynchronously, preventing cctrl from hanging if a Replica replicator is put online while the Primary is offline. Now, if the Primary is offline, the Replica will go into the SYNCHRONIZING state. As the Primary comes online, the Replicas will come online as well.

      Issues: CT-825

1.20. Tungsten Clustering 5.3.5 GA (06 November 2018)

Version End of Life. 31 July 2020

This is a bugfix release.

Bug Fixes

  • Installation and Deployment

    • When using tpm diag (in [Tungsten Clustering (for MySQL) 5.3 Manual]), the command would fail to parse net-ssh options.

      Issues: CT-775

    • The Net::SSH internal options have been updated to reflect changes in the latest Net::SSH release.

      Issues: CT-781

    • When a site goes offline, connections to that site will be forced closed. Those connections will then reconnect; as long as the site stays offline, they will be connected to the remote site.

      You can now enable an option so that when the site comes back online, the connector will disconnect all these connections that couldn't get to their preferred site so that they will then reconnect to the expected site with the appropriate affinity.

      When not enabled, connections will continue to use the server originally configured until they disconnect through normal attrition. This is the default option.

      Note that this only applies to bridge mode. In proxy mode, the relevance of the connected data source is re-evaluated before every transaction.

      To enable this option, use the tpm (in [Tungsten Clustering (for MySQL) 5.3 Manual]) option --connector-reset-when-affinity-back=true.

      Issues: CT-789
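      The command-line flag above presumably has an INI equivalent; a hedged sketch, assuming the usual flag-to-INI mapping:

      ```ini
      # Hypothetical example: force connections back to their preferred
      # site when it comes back online (bridge mode only).
      [defaults]
      connector-reset-when-affinity-back=true
      ```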

  • Tungsten Connector

    • When using smartscale, if you specified RW_STRICT (in [Tungsten Clustering (for MySQL) 5.3 Manual]), you would be connected to a Replica even though RW_STRICT (in [Tungsten Clustering (for MySQL) 5.3 Manual]) specifies that you should be connected to the Primary.

      Issues: CT-782

1.21. Tungsten Clustering 5.3.4 GA (11 October 2018)

Version End of Life. 31 July 2020

This is a bugfix release.

Bug Fixes

  • Command-line Tools

    • When using tpm diag (in [Tungsten Clustering (for MySQL) 5.3 Manual]), the command could fail with the error text ClusterDiagnosticPackage::Zip.

      Issues: CT-763

1.22. Tungsten Clustering 5.3.3 GA (20 September 2018)

Version End of Life. 31 July 2020

This is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Tungsten Clustering (for MySQL) 5.3 Manual]) now outputs a note and recommendation for performing backups of your cluster when installation has been completed.

      Issues: CT-730

  • Command-line Tools

    • The tungsten_prep_upgrade command has been updated to support an explicit host definition for the MySQL host in place of defaulting to the localhost (127.0.0.1). Use the --host option.

      Issues: CT-656
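      An illustrative invocation, with a hypothetical hostname and any other required options omitted:

      ```shell
      # Hypothetical example: run the pre-upgrade steps against an
      # explicit MySQL host instead of the default 127.0.0.1.
      shell> tungsten_prep_upgrade --host=db1.example.com
      ```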

  • Tungsten Connector

    • When receiving an error within MySQLPacket, the Connector now prints out the full content of the underlying error message.

      Issues: CT-636

  • Tungsten Manager

    • The router gateway which provides communication between the manager and connector could shutdown even when quorum was available in a two-node cluster.

      Issues: CT-676

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Clustering (for MySQL) 5.3 Manual]) would fail during installation if the current directory was not writable by the current user.

      Issues: CT-564

    • When performing a tpm update (in [Tungsten Clustering (for MySQL) 5.3 Manual]) in a cluster with an active witness, the host with the witness would not be restarted correctly, resulting in the witness being down on that host.

      Issues: CT-596

  • Command-line Tools

    • Using tpm diag (in [Tungsten Clustering (for MySQL) 5.3 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Clustering (for MySQL) 5.3 Manual]).

      Issues: CT-610

    • Using tpm connector (in [Tungsten Clustering (for MySQL) 5.3 Manual]) at the command-line would fail if the core MySQL configuration file (i.e. /etc/my.cnf) did not exist.

      Issues: CT-641

  • Tungsten Connector

    • The connector would fail to set reusable network addresses during configuration, which could delay or slow startup until the address/port became available again.

      Issues: CT-694

    • When operating in bridge mode, the connector would fail to check whether the driver was in enabled/disabled mode, which could cause upgrades to fail as part of a graceful shutdown/update operation.

      Issues: CT-696

    • Multiple connectors within a cluster could all connect to the same manager within a given service, increasing the load on the single manager.

      Issues: CT-717

  • Tungsten Manager

    • When using the connector, the connector --cluster-status --json command would output header and footer information in place of bare JSON which would then cause JSON parsing to fail.

      Issues: CT-685
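      With the fix, the output is bare JSON and can be piped straight into a JSON processor; the use of jq here is an illustrative assumption, not part of the product:

      ```shell
      # Hypothetical example: pretty-print the bare JSON cluster status.
      shell> connector --cluster-status --json | jq .
      ```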

    • A memory leak within the manager, particularly in Composite Active/Active deployments, could cause the Java VM to consume more and more CPU cycles and then restart.

      Issues: CT-673, CT-691

    • During a relay failover within a Composite Active/Active or Multi-Site/Active-Active deployment, if the communications had also failed between sites when the failover occurred, the manager would be unable to determine the correct Primary of the remote site.

      Issues: CT-703

    • A memory leak was identified in the router manager component that manages the communication between the manager and the connector.

      Issues: CT-715

    • In a deployment, whether a single cluster or Composite Active/Active, where there is either the potential for high latency across sites, or high latency within a site due to high loads on the connectors, the manager could misidentify this high latency as a failure. This would trigger a quorum validation. These events would be reported as network hangs, even though the result of the quorum check would be valid.

      To address this, the processing of router notifications handled by the connector has been separated from all other operations. This reduces the chance of a heartbeat gap between hosts, so the connectors remain available to the managers even under high loads or latency.

      Issues: CT-725

Tungsten Clustering 5.3.3 Includes the following changes made in Tungsten Replicator 5.3.3

Release 5.3.3 is a bug fix release.

Improvements, new features and functionality

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 5.3 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

Bug Fixes

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 5.3 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 5.3 Manual]), the operation would fail if the /etc/mysql directory does not exist.

      Issues: CT-724

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without highlighting the individual services reporting the entries, making it difficult to identify individual log entries.

      Issues: CT-659

1.23. Tungsten Clustering 5.3.2 GA (4 June 2018)

Version End of Life. 31 July 2020

This is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm diag (in [Tungsten Clustering (for MySQL) 5.3 Manual]) command has been improved to include more information about the environment, including:

      • The output from the lsof command.

      • The output from the ps command.

      • The output from the show full processlist command within mysql.

      • Copies of all the .properties configuration files.

      • Copies of all the cluster configuration and .properties files.

      • Copies of all the my.cnf files, including directory configurations.

      • The output from the connector cluster-status (in [Tungsten Clustering (for MySQL) 5.3 Manual]) command.

      • The output from all services in active/active clustering deployments.

      • Improvements to the clarity of some commands.

      • The INI files used by tpm (in [Tungsten Clustering (for MySQL) 5.3 Manual]) (if using INI installs) are included.

      Issues: CT-530, CT-611, CT-615, CT-623

Bug Fixes

  • Tungsten Manager

    • A script used internally by the manager to determine the status of replication, called mysql_checker_query.sql, had been identified as providing bad information under certain complex circumstances. The effects of the bad script could include out of memory failures. The script and query have been rewritten.

      Issues: CT-457

    • The first execution of ls (in [Tungsten Clustering (for MySQL) 5.3 Manual]) within cctrl (in [Tungsten Clustering (for MySQL) 5.3 Manual]) within active/active clusters could fail to provide the cluster status information at the top (world) level.

      Issues: CT-551

    • An error executing the query checker script would not get identified and trapped properly.

      Issues: CT-632

    • Within a running cluster, managers on different hosts within a composite cluster could show different cluster state information after a switch operation.

      Issues: CT-633, CT-634

Tungsten Clustering 5.3.2 Includes the following changes made in Tungsten Replicator 5.3.2

Release 5.3.2 is a bug fix release.

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.3 Manual]) would mistakenly report issues with JSON columns during installation; this check no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.3 Manual]) script could hang in different scenarios, including being executed in the background, or as part of a background script or cronjob. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBound errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 5.3 Manual]) where the sequence number could be larger than an INT would cause an error.

      Issues: CT-642

  • Core Replicator

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility with the metadata.

      Issues: CT-618

1.24. Tungsten Clustering 5.3.1 GA (18 April 2018)

Version End of Life. 31 July 2020

Release 5.3.1 is a bug fix release that adds support for the GEOMETRY data type in MySQL 5.7 and above, and a number of bug fixes.

Known Issue

The following issues may affect the operation of Tungsten Cluster and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • It was previously impossible to change from a non-SSL installation to an SSL installation using self-generated certificates if an INI style configuration was being used. This can now be achieved by using the following command-line:

      shell> tools/tpm update --replace-release --replace-jgroups-certificate --replace-tls-certificate

      Issues: CT-442

    • Previously the system had been configured to dump heap files by default when the system ran out of memory which was useful for debugging by the development team. This has now been disabled.

      Issues: CT-604

Tungsten Clustering 5.3.1 Includes the following changes made in Tungsten Replicator 5.3.1

Release 5.3.1 is a bug fix release that adds support for the GEOMETRY data type in MySQL 5.7 and above, and a number of bug fixes.

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying of the datatype to MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

1.25. Tungsten Clustering 5.3.0 GA (12 December 2017)

Version End of Life. 31 July 2020

Release 5.3.0 is a new feature release that contains improvements to the core replicator and manager, including adding new functionality in preparation for the next major release (6.0.0) and future functionality.

Key improvements include:

  • Improved and simplified user-focused logging, to make it easier to identify issues and problems.

  • Easier access to the overall cluster status from the command-line through the Connector cluster-status command.

  • Many fixes and stabilisation improvements to the Connector.

Improvements, new features and functionality

  • Tungsten Connector

    • The connector (in [Tungsten Clustering (for MySQL) 5.3 Manual]) has been extended to provide cluster status information, and also to provide this information encapsulated in a JSON format. To get the cluster status through the connector (in [Tungsten Clustering (for MySQL) 5.3 Manual]) command:

      shell> connector cluster-status

      To get the information in JSON format:

      shell> connector cluster-status -json

      Issues: CONT-630

      For more information, see Connector connector cluster status on the Command-line (in [Tungsten Clustering (for MySQL) 5.3 Manual]).

Bug Fixes

  • Behavior Changes

    • The way that information is logged has been improved so that it should be easier to identify and find errors and the causes of errors when looking at the logs. To achieve this, logging is now provided into an additional file, one for each component, and the new files contain only errors at the WARNING or ERROR levels. These files are:

      • manager-user.log

      • connector-user.log

      • replicator-user.log

      These files should be much smaller, and much simpler to read and digest in the event of a problem. Currently the information and warnings added to the logs are being adjusted so that the new log files do not contain unnecessary entries.

      The original log files (tmsvc.log, connector.log, trepsvc.log) remain unchanged in terms of the information logged to them.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

  • Command-line Tools

    • Backups using datasource backup (in [Tungsten Clustering (for MySQL) 5.3 Manual]) could fail to complete properly when using xtrabackup.

      Issues: CT-352

    • The tpm diag (in [Tungsten Clustering (for MySQL) 5.3 Manual]) command would fail to get manager logs from hosts that were configured without a replicator, for example standalone connector or witness hosts.

      Issues: CT-360

  • Tungsten Connector

    • If the MySQL server returns a 'too many open connections' error when connecting through the Drizzle driver, the connector could hang with a log message about BufferUnderFlow.

      Issues: CT-86

    • Support for complex passwords within user.map (in [Tungsten Clustering (for MySQL) 5.3 Manual]) that may include one or more single or double quotes has been updated. The following rules now apply for passwords in user.map (in [Tungsten Clustering (for MySQL) 5.3 Manual]):

      • Quotes ' and double quotes " are now supported in the user.map password.

      • If there's a space in the password, the password needs to be surrounded with " or ':

        "password with space"
      • If there are one or more " or ' characters in the password, without a space, the password does not need to be surrounded

        my"pas'w'or"d"
      • If the password itself starts and ends with the same quote (" or '), it needs to be surrounded by quotes.

        "'mypassword'" so that the actual password is 'mypassword'.

      As a general rule, if the password is enclosed in either single or double quotes, these are not included as part of the password during authentication.

      Issues: CONT-239
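      As an illustrative sketch of these rules, assuming the conventional user.map line format of user, password and data service name (the user and service names shown are hypothetical):

      ```
      # Password containing a space must be quoted:
      app_user "password with space" alpha
      # Embedded quotes without spaces need no surrounding quotes:
      app_user2 my"pas'w'or"d" alpha
      ```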

    • When starting up, the Connector would connect to the first Primary in the first data service within its own internal list; now the first entry of the user.map (in [Tungsten Clustering (for MySQL) 5.3 Manual]) configuration is used.

      Issues: CT-385

    • When a connection gets its channel updated by a read/write split (either automatically because Smartscale has been enabled, or manually with selective read/write splitting), the channel left in the background would be wrongly set as "in use", so the keepalive task would not be able to ping it anymore.

      Issues: CT-388

    • The bridgeServerToClientForcedCloseTimeout property default value has been reduced from 500ms to 50ms.

      Issues: CT-392

      For more information, see Adjusting the Bridge Mode Forced Client Disconnect Timeout (in [Tungsten Clustering (for MySQL) 5.3 Manual]).

    • Under certain circumstances it would be possible for the Connector, when configured to choose a Replica based on the Replica latency (i.e. using the --connector-max-slave-latency (in [Tungsten Clustering (for MySQL) 5.3 Manual]) configuration option), to select the wrong Replica. Rather than choosing the most advanced Replica in terms of the latency, the Replica with the highest latency could be selected instead.

      Issues: CONT-421

    • The connector would log a message each time a connection disappeared without being properly closed. For connections through load balancers this is standard behavior, and could lead to a large number of log entries that would make it difficult to find other errors. The default setting has been changed so the connection warnings are no longer produced by default. This can be changed by setting the printConnectionWarnings property to true.

      Issues: CT-456
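      If the warnings are wanted, the property can presumably be turned back on at configuration time; a hedged sketch, assuming tpm's generic property=... pass-through syntax:

      ```ini
      # Hypothetical example: re-enable per-connection disconnect warnings.
      [defaults]
      property=printConnectionWarnings=true
      ```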

  • Tungsten Manager

    • If the manager is on the same host as the coordinator, and there was an error writing information to the disk, and a failover situation occurred, the failover would not take place. Since a disk write failure is a possible scenario for the failure to occur, it could lead to the cluster being in an unstable state.

      Issues: CT-364

    • Within a composite deployment, switching a node in a local cluster would cause all relays within the entire composite cluster to point to that node as a Primary datasource.

      Issues: CT-378

Tungsten Clustering 5.3.0 Includes the following changes made in Tungsten Replicator 5.3.0

Release 5.3.0 is an important feature release that contains some key new functionality for replication. In particular:

  • JSON data type column extraction support for MySQL 5.7 and higher.

  • Generated column extraction support for MySQL 5.7 and higher.

  • DDL translation support for heterogeneous targets, initially supporting DDL translation from MySQL to MySQL, Vertica and Redshift targets.

  • Support for data concentration, allowing replication into a single target schema (with additional source schema information added to each table) for both HPE Vertica and Amazon Redshift targets.

  • Rebranded and updated support for Oracle extraction with the Oracle Redo Reader, including improvements to offboard deployment, more configuration options, and support for the deployment and installation of multiple offboard replication services within a single replicator.

This release also contains a number of important bug fixes and minor improvements to the product.

Improvements, new features and functionality

  • Behavior Changes

    • The way that information is logged has been improved so that it should be easier to identify and find errors and the causes of errors when looking at the logs. To achieve this, logging is now provided into an additional file, one for each component, and the new files contain only errors at the WARNING or ERROR levels. The new file is replicator-user.log. The original file, trepsvc.log remains unchanged.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

    • Support for Java 7 (JDK or JRE 1.7) has been deprecated, and will be removed in the 6.0.0 release. The software is compiled using Java 8 with Java 7 compatibility.

      Issues: CT-252

    • Some Javascript filters had DOS style line breaks.

      Issues: CT-376

    • Support for JSON datatypes and generated columns within MySQL 5.7 and greater has been added to the MySQL extraction component of the replicator.

      Important

      Due to a MySQL bug in the way that JSON and generated columns are represented within the MySQL binary log, it is possible for the size of the data and the reported size to differ, and this could cause data corruption. To account for this behavior and to prevent data inconsistencies, the replicator can be configured to either ignore, warn, or stop if the mismatch occurs.

      This can be set by modifying the property replicator.extractor.dbms.json_length_mismatch_policy.

      Until this problem is addressed within MySQL, tpm (in [Tungsten Replicator 5.3 Manual]) will still generate a warning about the issue which can be ignored during installation by using the --skip-validation-check=MySQLGeneratedColumnCheck (in [Tungsten Replicator 5.3 Manual]).

      For more information on the effects of the bug, see MySQL Bug #88791.

      Issues: CT-5, CT-468
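      A hedged INI sketch combining the two settings described above; the value warn and the property=... pass-through syntax are illustrative assumptions:

      ```ini
      # Hypothetical example: warn (rather than stop) on a JSON length
      # mismatch, and suppress the related tpm installation warning.
      [defaults]
      property=replicator.extractor.dbms.json_length_mismatch_policy=warn
      skip-validation-check=MySQLGeneratedColumnCheck
      ```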

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command has been updated to correctly operate with CentOS 7 and higher. Due to an underlying change in the way IP configuration information was sourced, the extraction of the IP address information has been updated to use the ip addr command.

      Issues: CT-35

    • The THL retention setting is now checked in more detail during installation. When the --thl-log-retention (in [Tungsten Replicator 5.3 Manual]) is configured when extracting from MySQL, the value is compared to the binary log expiry setting in MySQL (expire_logs_days). If the value is less, then a warning is produced to highlight the potential for loss of data.

      Issues: CT-91
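      The comparison the installer performs can also be made by hand; a sketch assuming a standard MySQL client, with an illustrative 7-day retention:

      ```shell
      # Check the MySQL binary log expiry, then compare it against the
      # intended THL retention (e.g. --thl-log-retention=7d). If the THL
      # retention is shorter than expire_logs_days, there is potential
      # for loss of data when re-provisioning from the THL.
      shell> mysql -u root -p -e "SHOW VARIABLES LIKE 'expire_logs_days'"
      ```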

    • A new option, --oracle-redo-temp-tablespace has been added to configure the temporary tablespace within Oracle redo reader extractor deployments.

      Issues: CT-321

  • Command-line Tools

    • The size-related outputs for the thl list (in [Tungsten Replicator 5.3 Manual]) command, such as -sizes (in [Tungsten Replicator 5.3 Manual]) or -sizesdetail (in [Tungsten Replicator 5.3 Manual]), now additionally output summary information for the selected THL events:

      Total ROW chunks: 8 with 7 updated rows (50%)
      Total STATEMENT chunks: 8 with 2552 bytes (50%)
      16 events processed

      A new option has also been added, -sizessummary (in [Tungsten Replicator 5.3 Manual]), that only outputs the summary information.

      Issues: CT-433

      For more information, see thl list -sizessummary Command (in [Tungsten Replicator 5.3 Manual]).
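      For example, to display only the summary block without the per-event detail:

      ```shell
      shell> thl list -sizessummary
      ```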

  • Filters

    • A new filter, rowadddbname (in [Tungsten Replicator 6.0 Manual]), has been added to the replicator. This filter adds the incoming schema name, and optional numeric hash value of the schema, to every row of THL row-based changes. The filter is designed to be used with heterogeneous and analytics applications where data is being concentrated into a single schema and where the source schema name will be lost during the concentration and replication process.

      In particular, it is designed to work in harmony with the new Redshift and Vertica based single-schema appliers where data from multiple, identical, schemas are written into a single target schema for analysis.

      Issues: CT-98

    • A new filter has been added, rowadddbname (in [Tungsten Replicator 6.0 Manual]), which adds the source database name and optional database hash to every incoming row of data. This can be used to help identify source information when concentrating information into a single schema.

      Issues: CT-407

Bug Fixes

  • Installation and Deployment

    • An issue has been identified with the way certain operating systems now configure their open files limits, which can upset the checks within tpm (in [Tungsten Replicator 5.3 Manual]) that determine the open files limits configured for MySQL. To ensure that the open files limit has been set correctly, check the configuration of the service:

      1. Copy the system configuration:

        shell> sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
        shell> sudo vim /etc/systemd/system/mysql.service
      2. Add the following line to the end of the copied file:

        LimitNOFILE=infinity
      3. Reload the systemctl daemon:

        shell> sudo systemctl daemon-reload
      4. Restart MySQL:

        shell> service mysql restart

      With this configuration in place, MySQL should take note of the open_files_limit configuration option.

      Issues: CT-148

    • The check to determine if triggers had been enabled within the MySQL data source would not get executed correctly, meaning that warnings about unsupported triggers would not trigger a notification.

      Issues: CT-185

    • When using tpm diag (in [Tungsten Replicator 5.3 Manual]) on a MySQL deployment, the MySQL error log would not be identified and included properly if the default datadir option was not /var/lib/mysql.

      Issues: CT-359

    • Installation with security enabled through SSL could fail intermittently because the certificates would fail to be copied to the required directory during the installation process.

      Issues: CT-402

    • The Net::SSH libraries used by tpm (in [Tungsten Replicator 5.3 Manual]) have been updated to reflect the deprecation of the paranoid parameter.

      Issues: CT-426

    • Using a complex password, particularly one with single or double quotes, when specifying a password for tpm (in [Tungsten Replicator 5.3 Manual]), could cause checks and the installation to raise errors or fail, although the actual configuration would work properly. The problem was limited to internal checks by tpm (in [Tungsten Replicator 5.3 Manual]) only.

      Issues: CT-440

  • Command-line Tools

    • The startall (in [Tungsten Replicator 5.3 Manual]) command would fail to correctly start the Oracle redo reader process.

      Issues: CT-283

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command would fail to remove the Oracle redo reader user when using tpm uninstall (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-299

    • The replicator stop (in [Tungsten Replicator 5.3 Manual]) command would not stop the Oracle redo reader process.

      Issues: CT-300

    • Within Vertica deployments, the internal identity of the applier was set incorrectly to PostgreSQL. This would make it difficult for certain internal processes to identify the true datasource type. The setting did not affect the actual operation.

      Issues: CT-452

  • Core Replicator

    • When parsing THL data it was possible for the internal THL processing to lead to a java.util.ConcurrentModificationException. This indicated that the underlying THL event metadata structure used internally had changed between uses.

      Issues: CT-355

2. Tungsten Replicator Release Notes

2.1. Tungsten Replicator 6.1.9 GA (23 Nov 2020)

Version End of Life. Not Yet Set

Release 6.1.9 is a minor bug fix release containing a fix for a critical bug within the Replicator related to time zone handling.

Bug Fixes

  • Command-line Tools

    • tpm update (in [Tungsten Replicator 6.1 Manual]) no longer fails when using the staging method to upgrade to a new version.

      Issues: CT-1381

  • Core Replicator

    • In some edge-case scenarios, the replicator was not setting the session time_zone correctly when preceding sessions had a different time_zone applied. This could lead to situations where TIMESTAMP values were applied to replica nodes with an incorrect time_zone offset.

      Issues: CT-1390

2.2. Tungsten Replicator 6.1.8 GA (2 Nov 2020)

Version End of Life. Not Yet Set

Release 6.1.8 is a minor bug fix release.

Bug Fixes

  • Core Replicator

    • Fixes an issue that would prevent a service from going offline at a specified time (trepctl online -until-time (in [Tungsten Replicator 6.1 Manual])) or at a specific seqno (trepctl online -until-seqno (in [Tungsten Replicator 6.1 Manual])) when parallel apply is enabled.

      Issues: CT-1243

    • In MySQL releases using the old row events format (MySQL 5.6 or earlier), Delete_rows_v1 events were handled incorrectly, leading to an extraction error when such an event type was encountered.

      Issues: CT-1358

2.3. Tungsten Replicator 6.1.7 GA (5 Oct 2020)

Version End of Life. Not Yet Set

Release 6.1.7 is a minor bug fix release containing a fix for SSL communications specific to clustering deployments.

This Tungsten Replicator release contains no changes of its own, but was released as part of Continuent's standard release process to maintain consistency of version numbers.

2.4. Tungsten Replicator 6.1.6 GA (20 Aug 2020)

Version End of Life. Not Yet Set

Release 6.1.6 is a minor bug fix release containing a fix for bi-directional standalone Replicator deployments.

Bug Fixes

  • Core Replicator

    • Allows multiple service names to be supplied to property=local.service.name when configuring bi-directional replication between a Composite Active/Active cluster topology and a MySQL target to prevent loopback of transactions.

      Issues: CT-1308
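
      As an illustrative sketch only (the service names east and west, the comma-separated form, and the service name alpha are assumptions, not confirmed syntax), such a configuration might look like:

      ```shell
      # Hypothetical example: list every local service whose transactions
      # should not be replicated back, to prevent loopback.
      shell> tpm configure alpha \
        --property=local.service.name=east,west
      ```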

2.5. Tungsten Replicator 6.1.5 GA (5 Aug 2020)

Version End of Life. Not Yet Set

Release 6.1.5 is a small interim bug fix release resolving a number of issues within the Core Replicator, specifically for heterogeneous environments.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Heterogeneous Replication

    • When using the batch applier in Heterogeneous pipelines (Redshift, Vertica, Hadoop) the batch applier removes DDL statements.

      There may be occasions when you intentionally want DDL to pass through, such as if you have a custom filter that injects custom DDL statements into the pipeline, however the batch applier would always remove them.

      A new property is now available to control this behaviour. Set property=replicator.applier.dbms.applyStatements=true to allow the batch applier to retain DDL statements. The default value of false retains the original behaviour of removing DDL.

      Issues: CT-1270
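
      The property above can be supplied at configuration time; a minimal sketch (the service name alpha is hypothetical):

      ```shell
      # Retain DDL statements in the batch applier
      # (the default, false, removes them).
      shell> tpm configure alpha \
        --property=replicator.applier.dbms.applyStatements=true
      ```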

Known Issues

  • Core Replicator

    • In MySQL release 8.0.21 the behavior of CREATE TABLE ... AS SELECT ... has changed, resulting in the transactions being logged differently in the binary log. This change in behavior will cause the replicators to fail.

      Until a fix is implemented within the replicator, the workaround for this will be to split the action into separate CREATE TABLE ... followed by INSERT INTO ... SELECT FROM... statements.

      If this is not possible, then you will need to manually create the table on all nodes, and then skip the resulting error in the replicator, allowing the subsequent loading of the data to continue.

      Issues: CT-1301
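
      As a sketch of the workaround described above (table and column names are illustrative):

      ```sql
      -- Instead of: CREATE TABLE new_table AS SELECT id, val FROM source_table;
      sql> CREATE TABLE new_table (id INT, val VARCHAR(64));
      sql> INSERT INTO new_table SELECT id, val FROM source_table;
      ```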

Bug Fixes

  • Core Replicator

    • When replicating data that included timestamps, the replicator would update the timestamp value to the time within the commit from the incoming THL. When using statement-based replication, times would be correctly replicated, but with a mixture of statement- and row-based replication, the timestamp value would not be set back to the default time when switching between statement- and row-based events. This would not cause problems on the applied host except when log_slave_updates was enabled; in that case, all row-based events after a statement-based event would have the same timestamp value applied.

      This was most commonly seen when using the standalone replicator to replicate into a cluster, either from a standalone source or a cluster source.

      Issues: CT-1255

    • For offboard extraction, the replicator would appear to be ONLINE but not actually processing new events.

      This was due to the relay client receiving an incomplete packet from the remote database and going into a WAITING state.

      To handle these situations, a new property has been included that sets a timeout; if the replicator does not process an event within the given timeout period, it is assumed that the link to the remote database has been lost, and the replicator is placed into an OFFLINE:ERROR state.

      Provided auto-recovery has been enabled using the auto-recovery-max-attempts (in [Tungsten Replicator 6.1 Manual]) parameter, the replicator will then restart and proceed successfully.

      The new property to include is property=replicator.extractor.dbms.relayLogIdleTimeout.

      The default value (0) will disable the timeout. Values provided are in seconds, so 300 would be 5 minutes.

      Setting the timeout too low on quieter systems may result in unnecessary replicator restarts, so the value should be set according to the activity levels of your database. If the source is very active with constant updates, then a low value is appropriate. Quieter systems that may have long periods of inactivity should use a timeout value no less than the longest normal period of inactivity within your system.

      Issues: CT-1262
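
      A minimal sketch of enabling the idle timeout together with auto-recovery (the service name alpha and the chosen values are illustrative, not recommendations):

      ```shell
      # 300 seconds = 5 minutes of inactivity before OFFLINE:ERROR;
      # auto-recovery then restarts the replicator.
      shell> tpm configure alpha \
        --property=replicator.extractor.dbms.relayLogIdleTimeout=300 \
        --auto-recovery-max-attempts=5
      ```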

    • If filtering is in use and a space appeared on either side of the delimiter in a "schema.table" definition in your SQL, the replicator would fail to parse the statement correctly.

      For example, all of the examples below are valid SQL but would cause a failure in the replicator:

      sql> CREATE TABLE myschema. mytable (....
      
      sql> CREATE TABLE myschema .mytable (....
      
      sql> CREATE TABLE myschema . mytable (....

      Issues: CT-1278

    • Fixes a bug in the Drizzle Driver whereby a failing prepared statement that exceeds 1000 characters would report a String index out of range: 999 error rather than the actual error.

      Issues: CT-1303

2.6. Tungsten Replicator 6.1.4 GA (4 June 2020)

Version End of Life. Not Yet Set

Release 6.1.4 contains a number of improvements and bug fixes, specifically for the tpm command line tool and improvements to the Redshift Applier. In addition this release now fully supports the latest binlog compression feature of MySQL 8.0.20.

Improvements, new features and functionality

  • Command-line Tools

    • Improves tpm (in [Tungsten Replicator 6.1 Manual]) performance by using more efficient routines to calculate paths.

      Issues: CT-1168

    • Added the ability for tpm diag (in [Tungsten Replicator 6.1 Manual]) to skip both individual gather subroutines along with entire groups of gather subroutines.

      Also added ability to list all gather groups and subroutines using --list for use with the --skip and --skipgroups cli arguments.

      Issues: CT-1176

    • tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) has been rewritten fixing a number of issues in the previous release. This version was previously released as the Beta script prov-sl.sh.

      Issues: CT-1208

    • A number of performance improvements and fixes have been incorporated into the Drizzle Driver used for communication between components and MySQL; these include:

      • Better handling of large queries close to max network packet size.

      • Batch support. Instead of sending statements one by one, the driver can now send multiple statements at once, avoiding round trips between the driver and the MySQL server.

      • Fixes issues with interpreting useSSL on connect string URLs.

      Issues: CT-1215, CT-1216, CT-1217, CT-1228

    • The tungsten_send_diag (in [Tungsten Replicator 6.1 Manual]) command now accepts arguments for the tpm diag (in [Tungsten Replicator 6.1 Manual]) command using --args or -a for short.

      You must enclose the arguments in quotes, for example:

      shell> tungsten_send_diag -c 9999 -d --args '--all -v'

      Issues: CT-1220

  • Core Replicator

    • debug logging has been disabled by default in the schemachange filter, resulting in reduced noise in the replicator log file.

      Issues: CT-1013

    • A new replicator role thl-client has been added. This new role allows the replicator to download THL from a Primary, but not apply to the target database.

      Issues: CT-1123

    • A new delayInMs (in [Tungsten Replicator 6.1 Manual]) filter has been added which allows the applying of THL to a Replica to be delayed. The filter allows millisecond precision. This filter works in the same way as the TimeDelayFilter (in [Tungsten Replicator 6.1 Manual]); however, that filter allows only second precision.

      Issues: CT-1191

    • A new droprow JavaScript filter has been added.

      The filter works at ROW level and allows the filtering out of rows based on one or more column/value matches.

      Issues: CT-1213

    • When configuring the Redshift applier, you can now configure which tool the applier will use for posting CSV files to S3. Options are s3cmd (default), s4cmd or aws.

      Issues: CT-1218

    • A number of improvements have been made to the Redshift Applier to allow optional levels of table locking.

      This is particularly useful when you have multiple Redshift Appliers in a Fan-In topology, and/or very high volumes of data to process.

      The additional locking options reduce the risk of Redshift Serializable Isolation Violation errors occurring.

      Full details of how to utilise the new options can be read at Handling Concurrent Writes from Multiple Appliers (in [Tungsten Replicator 6.1 Manual])

      Issues: CT-1221

    • Tungsten Replicator can now correctly extract and parse Binary Log entries when the MySQL option binlog-transaction-compression has been enabled.

      binlog-transaction-compression is a new parameter introduced from MySQL 8.0.20.

      Issues: CT-1223
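
      For reference, the corresponding MySQL system variable can be enabled on the source as follows (MySQL 8.0.20 or later):

      ```sql
      sql> SET GLOBAL binlog_transaction_compression = ON;
      ```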

Bug Fixes

  • Command-line Tools

    • In certain edge cases, tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) would fail to detect if mysql was shutdown.

      Issues: CT-1096

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now collects directories specified with !includedir in the /etc/my.cnf file.

      Issues: CT-1153

    • Fixes the tpm update (in [Tungsten Replicator 6.1 Manual]) command, which would exit with the error:

      Argument " (error code 1)" isn't numeric

      Issues: CT-1165

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now collects any files specified by !include directives in the /etc/my.cnf and /etc/mysql/my.cnf files.

      tpm diag (in [Tungsten Replicator 6.1 Manual]) also looks in /etc/mysql/my.cnf for !includedir directives

      Issues: CT-1169

    • Fixes a bug which prevented tungsten_send_diag (in [Tungsten Replicator 6.1 Manual]) from uploading a self-generated diagnostic zip file.

      Issues: CT-1170

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now properly derives the correct target path to the releases directory if the home directory in the configuration points to a sym-link.

      Issues: CT-1172

    • Removed tpm diag (in [Tungsten Replicator 6.1 Manual]) call to sudo for gathering ifconfig and lsb_release commands.

      Issues: CT-1175

    • tpm update (in [Tungsten Replicator 6.1 Manual]) would fail and report errors if deployall (in [Tungsten Replicator 6.1 Manual]) had been executed.

      Issues: CT-1179

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) no longer requires the mysql command-line client when running on non-MySQL Applier nodes, and no longer attempts to gather any MySQL diagnostic information.

      Issues: CT-1188

    • Removes the requirement to execute component start/stop commands with sudo when called through systemd.

      This is specific to start/stop actions following the use of the deployall (in [Tungsten Replicator 6.1 Manual]) scripts.

      Issues: CT-1193

    • Fixes cases where tpm (in [Tungsten Replicator 6.1 Manual]) fails when the OS hostname command returns a different string than what is used in the configuration (i.e. hostname returns a FQDN, yet the configuration contains shortnames like db1, db2, etc.).

      Issues: CT-1206

    • In certain cases, after a reprovision, tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) didn't always run the steps to reset the local replicator service. This made the replicator go into an error state after provisioning had completed.

      Issues: CT-1210

    • The tpm diag (in [Tungsten Replicator 6.1 Manual]) command now handles the cluster-slave topology more gracefully, and properly handles cluster nodes without the Connector installed.

      Improved output text clarity by converting multiple verbose-level outputs to debug, and warnings to notice-level.

      Issues: CT-1222

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now gathers sym-linked files correctly.

      Issues: CT-1232

    • ddlscan (in [Tungsten Replicator 6.1 Manual]) now sets the datatype for sequence number columns to BIGINT when generating staging table DDL for Redshift deployments.

      Issues: CT-1235

    • Fixes a situation where tpm update (in [Tungsten Replicator 6.1 Manual]) exits with a Data::Dumper error.

      Issues: CT-1249

  • Core Replicator

    • In heterogeneous replicator deployments, the convertstringfrommysql filter would fail to convert strings for alphanumeric key values.

      Issues: CT-1128

    • Tungsten Replicator could wrongly report that it was already in the DEGRADED state when an attempt was made to put it into DEGRADED mode.

      Issues: CT-1131

    • Tungsten Replicator now recognises Amazon AWS SSL Certificates to enable SSL communication with AWS Aurora.

      Issues: CT-1173

    • When the host server time (and thus MySQL time) is not configured as UTC, issuing cluster heartbeat or trepctl heartbeat (in [Tungsten Replicator 6.1 Manual]) in the first hours around the daylight saving time switch would create an invalid time in MySQL.

      For more information on timezones when issuing heartbeats, see trepctl heartbeat Time Zone Handling (in [Tungsten Replicator 6.1 Manual])

      Issues: CT-1174

    • The replicator would fail to apply into Vertica when configured as an offboard installation due to the applier incorrectly expecting the csv files to exist locally on the remote Vertica host.

      Issues: CT-1194

    • Due to a change in the Binary log structure introduced in MySQL 8.0.16, the replicator would fail to extract transactions for Partitioned tables.

      Issues: CT-1201

    • The replicator would fail to correctly parse statements with leading comment blocks in excess of 200 characters.

      Issues: CT-1236

2.7. Tungsten Replicator 6.1.3 GA (17 February 2020)

Version End of Life. Not Yet Set

Release 6.1.3 contains a small number of improvements and fixes to common command line tools, and introduces compatibility with MongoDB Atlas.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Command-line Tools

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) has been updated to provide additional feedback detailing the hosts that were gathered during the execution, and also provides examples of how to handle failures

      When running on a single host configured via the ini method:

      shell> tpm diag
      Collecting localhost diagnostics.
      Note: to gather for all hosts please use the "-a" switch and ensure that you have passwordless »
      ssh access set between the hosts.
      Collecting diag information on db1.....
      Diagnostic information written to /home/tungsten/tungsten-diag-2020-02-06-19-34-25.zip

      When running on a staging host, or with the -a flag:

      shell> tpm diag [-a|--allhosts]
      Collecting full cluster diagnostics
      Note: if ssh access to any of the cluster hosts is denied, use "--local" or "--hosts=<host1,host2,...>"
      Collecting diag information on db1.....
      Collecting diag information on db2.....
      Collecting diag information on db3.....
      Diagnostic information written to /home/tungsten/tungsten-diag-2020-02-06-19-34-25.zip

      Issues: CT-1137

Bug Fixes

  • Command-line Tools

    • tpm would fail to run on some operating systems due to a missing realpath binary.

      tpm (in [Tungsten Replicator 6.1 Manual]) has been changed to use readlink, which is commonly installed by default on most operating systems; if it is not available, you may need to install GNU coreutils to satisfy this dependency.

      Issues: CT-1124

    • Removed the dependency on the Perl module Time::HiRes from tpm.

      Issues: CT-1126

    • Added support for handling missing dependency (Data::Dumper) within various tpm subcommands

      Issues: CT-1130

    • tpm (in [Tungsten Replicator 6.1 Manual]) will now work on MacOS/X systems, provided greadlink is installed.

      Issues: CT-1147

    • tpm install (in [Tungsten Replicator 6.1 Manual]) will no longer report that the Linux distribution cannot be determined on SUSE platforms.

      Issues: CT-1148

    • Fixes a condition where tpm diag (in [Tungsten Replicator 6.1 Manual]) would fail to set the working path correctly, especially on Debian 8 hosts.

      Issues: CT-1150

    • tpm diag (in [Tungsten Replicator 6.1 Manual]) now checks for OS commands in additional paths (/bin, /sbin, /usr/bin and /usr/sbin)

      Issues: CT-1160

    • Fixes an issue introduced in v6.1.2 where the use of the undeployall (in [Tungsten Replicator 6.1 Manual]) script would stop services as it removed them from systemctl control.

      Issues: CT-1166

  • Core Replicator

    • The MongoDB Applier has been updated to use the latest MongoDB JDBC Driver

      Issues: CT-1134

    • The MongoDB Applier now supports MongoDB Atlas as a target

      Issues: CT-1142

    • The replicator would fail with Unknown column '' in 'where clause' when replicating between MySQL 8 hosts where the client application wrote data into the source database host using a different collation from the default on the target database.

      The replicator would fail due to a mismatch in these collations when querying the information_schema.columns view to gather metadata ahead of applying to the target.

      Issues: CT-1145

2.8. Tungsten Replicator 6.1.2 GA (20 January 2020)

Version End of Life. Not Yet Set

Release 6.1.2 contains both significant improvements and some needed bugfixes.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Behavior Changes

    • Certified the Tungsten product suite with Java 11.

      A small set of minor issues have been found and fixed (CT-1091, CT-1076) along with this certification.

      The code is now compiled with Java compiler v11 while keeping Java 8 compatibility.

      Java 9 and 10 have been tested and validated but certification and support will only cover Long Term releases.

      Note

      Known Issue

      With Java 11, command line tools are slower. There is no impact on the overall clustering or replication performance, but this can affect manual operations using CLI tools such as cctrl and trepctl (in [Tungsten Replicator 6.1 Manual]).

      Issues: CT-1052

Improvements, new features and functionality

  • Command-line Tools

    • The tpm (in [Tungsten Replicator 6.1 Manual]) command was originally written in Ruby. This improvement converts tpm to Perl over time, starting with the tpm shell wrapper and refactoring each sub-command one-by-one.

      For this release, the diag and mysql sub-commands have been reimplemented.

      Issues: CT-1048

  • Core Replicator

    • A new Replicator role, thl-server, has been added.

      This new feature allows your Replica replicators to still pull generated THL from a Primary even when the Primary replicator has stopped extracting from the binlogs.

      If used in Tungsten Clustering, this feature must only be enabled when the cluster is in MAINTENANCE mode.

      Issues: CT-58

      For more information, see Understanding Replicator Roles (in [Tungsten Replicator 6.1 Manual]).

    • A new JavaScript filter dropddl.js (in [Tungsten Replicator 6.1 Manual]) has been added to allow selective removal of specific object DDL from THL.

      Issues: CT-1092

Bug Fixes

  • Behavior Changes

    • If you need to reposition the extractor, there are a number of ways to do this, including the use of the -from-event or -base-seqno options.

      These two options are normally mutually exclusive; however, in some situations, such as when positioning against an Aurora source, you may need to issue both together. Previously this was not possible. In this release both options can now be supplied, provided that you also include the additional -force option, for example:

      shell> trepctl -service serviceName online -base-seqno 53 -from-event 000412:762897 -force

      Issues: CT-1065

    • When the Replicator inserts a heartbeat there is an associated timezone. Previously, the heartbeat would be inserted using the GMT timezone, which fails during the DST switch window. The new default uses the Replicator host's timezone instead.

      This default change corrects an edge case where inserting a heartbeat would fail during the DST switch window when the MySQL server was running in a different timezone than the Replicator (which runs in GMT).

      For example, on 31 March 2019 the time switch occurred at 2 AM in the Europe/Paris timezone. When inserting a heartbeat in the window between 4 and 5 AM (say at 4:15 AM), the corresponding GMT time would be 2:15 AM, which is invalid in the Europe/Paris timezone. The Replicator would then fail if the MySQL timezone was set to Europe/Paris, as it would try to insert an invalid timestamp.

      A new option, -tz has been added into the trepctl heartbeat (in [Tungsten Replicator 6.1 Manual]) command to force the use of a specific timezone.

      For example, use GMT as the timezone when inserting a heartbeat:

      shell> trepctl heartbeat -tz NONE

      Use the Replicator host's timezone to insert the heartbeat:

      shell> trepctl heartbeat -tz HOST

      Use the given timezone to insert the heartbeat:

      shell> trepctl heartbeat -tz {valid timezone id}

      If the MySQL server timezone is different from the host timezone (which is strongly discouraged), then -tz {valid timezone id} should be used instead, where {valid timezone id} should be the same as the MySQL server timezone.

      Issues: CT-1066

    • Corrected resource leak when loading Java keystores

      Issues: CT-1091

  • Command-line Tools

    • Fixed error message to indicate the need to specify a service on Composite Active/Active clusters for the tungsten_find_position and tungsten_find_seqno commands.

      Issues: CT-1098

    • The tpm (in [Tungsten Replicator 6.1 Manual]) command no longer reports warnings about existing system triggers with MySQL 8+

      Issues: CT-1099

  • Core Replicator

    • When configuring a Kafka Applier, the Kafka Port was set incorrectly

      Issues: CT-693

    • If a JSON field contained a single quote, the replicator would break during the apply stage whilst running the generated SQL into MySQL.

      Single quotes are now properly escaped to resolve this issue.

      Issues: CT-983

    • Under rare circumstances (network packet loss or MySQL Server hang), the replicator would also hang until restarted.

      This issue has been fixed by using specific network timeouts in both the replicator and in the Drizzle jdbc driver connection logic

      Issues: CT-1034

    • When configuring Active/Active standalone replicators with the BidiSlave filter enabled, the replicator was incorrectly parsing certain DDL statements and marking them as unsafe; as a result, they were dropped by the applier and ignored.

      The full list of DDL commands fixed in this release is as follows:

      • CREATE|DROP TRIGGER

      • CREATE|DROP FUNCTION

      • CREATE|DROP|ALTER|RENAME USER

      • GRANT|REVOKE

      Issues: CT-1084, CT-1117

    • The following warnings would appear in the replicator log due to GTID events not being handled.

      WARN extractor.mysql.LogEvent Skipping unrecognized binlog event type (33, 34 or 35)

      The WARN message will no longer appear; however, GTID events are still not handled in this release, but will be fully extracted in a future release.

      Issues: CT-1114

2.9. Tungsten Replicator 6.1.1 GA (28 October 2019)

Version End of Life. Not Yet Set

Release 6.1.1 contains both significant improvements and some needed bugfixes.

Improvements, new features and functionality

  • Core Replicator

    • Added Clickhouse applier support.

      Issues: CT-383

    • If using the dropcolumn filter during extraction in conjunction with the Batch Applier (e.g. replicating to Redshift, Hadoop, or Vertica), writes would fail with a CSV mismatch error due to gaps in the THL index.

      However, for JDBC appliers, the gaps are required to ensure the correct column mapping.

      To handle the two different requirements, a new property has been added to the filter to control whether to leave the THL index untouched (the default) or to re-order the index IDs.

      If applying to Batch targets, then the following property should be added to your configuration. The property is not required for JDBC targets.

      --property=replicator.filter.dropcolumn.fillGaps=true

      Issues: CT-1025
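
      A sketch of combining this with the filter configuration (the service name alpha and the filter placement are illustrative; consult the manual for your deployment):

      ```shell
      # Re-order the THL index IDs so batch (CSV) appliers see no gaps.
      shell> tpm configure alpha \
        --repl-svc-extractor-filters=dropcolumn \
        --property=replicator.filter.dropcolumn.fillGaps=true
      ```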

Bug Fixes

  • Command-line Tools

    • Fixed an issue that would prevent reading remote binary logs when using SSL.

      Issues: CT-958

    • Fixed an issue where the command trepctl -all-services status -name watches fails.

      Issues: CT-977

    • Restored previously-removed log file symbolic links under $CONTINUENT_ROOT/service_logs/

      Issues: CT-1026

    • Fixed a bug where tpm diag (in [Tungsten Replicator 6.1 Manual]) would generate an empty zip file if the hostnames contain hyphens (-) or periods (.)

      Issues: CT-1032

    • Improved the ability to find needed binaries for the tungsten_find_position, tungsten_find_seqno and tungsten_get_rtt commands.

      Issues: CT-1054

2.10. Tungsten Replicator 6.1.0 GA (31 July 2019)

Version End of Life. Not Yet Set

Release 6.1.0 contains both significant improvements and some needed bugfixes. One of the main features of this release is MySQL 8 support.

Improvements, new features and functionality

  • Command-line Tools

    • Two new utility scripts have been added to the release to help with setting the Replicator position:

      - tungsten_find_position, which assists with locating information in the THL based on the provided MySQL binary log event position and generates a dsctl set (in [Tungsten Replicator 6.1 Manual]) command as output.

      - tungsten_find_seqno, which assists with locating information in the THL based on the provided sequence number and generates a dsctl set (in [Tungsten Replicator 6.1 Manual]) command as output.

      Issues: CT-934
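
      Assuming each script takes the corresponding position as its argument (a sketch, not confirmed syntax), usage would look like:

      ```shell
      # Locate THL by sequence number, or by binlog file:offset.
      shell> tungsten_find_seqno 4512
      shell> tungsten_find_position 000412:762897
      ```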

  • Core Replicator

    • A new, beta-quality command has been included called prov-sl.sh which is intended to eventually replace the current tungsten_provision_slave (in [Tungsten Replicator 6.1 Manual]) script.

      Currently, prov-sl.sh supports provisioning Replicas using mysqldump and xtrabackup tools, and is MySQL 8-compatible. 

      The prov-sl.sh command is written in Bash, has fewer dependencies compared to the current version, and is meant to fix a number of issues with it.

      Backups are streamed from source to target so that an intermediate write to disk is not performed, resulting in faster provisioning times.

      Logs are written to $CONTINUENT_ROOT/service_logs/prov-sl.log (i.e. /opt/continuent/service_logs/prov-sl.log).

      For example, provision a Replica from [source db] using mysqldump (default):

      shell> prov-sl.sh -s {source db}

      As another example, use xtrabackup for the backup method, with 10 parallel threads (default is 4), and ssh is listening on port 2222:

      shell> prov-sl.sh -s {source db} -n xtrabackup -t 10 -p 2222

      Warning

      At the moment, prov-sl.sh does not support Composite Active/Active topologies when used with Tungsten Clustering; support will be added in a future release.

      Issues: CT-614, CT-723, CT-809, CT-855, CT-963

    • Upgraded the Drizzle driver to support MySQL 8 authentication protocols (SHA256, caching_sha2).

      Issues: CT-914, CT-931, CT-966

    • The Redshift Applier now allows AWS authentication using IAM Roles. Previously authentication was possible via Access and Secret Key pairs only.

      Issues: CT-980

      For more information, see Redshift Preparation for Amazon Redshift Deployments (in [Tungsten Replicator 6.1 Manual]).

Bug Fixes

  • Command-line Tools

    • When executing mysqldump, Tungsten tools no longer use the --add-drop-database flag, as it prevents MySQL 8+ from restoring the dump.

      Issues: CT-935

    • Fixed a bug where tpm diag (in [Tungsten Replicator 6.1 Manual]) would generate an empty zip file if the hostnames contain hyphens (-) or periods (.)

      Issues: CT-1032

  • Core Replicator

    • Added support for missing charset GB18030 to correct WARN extractor.mysql.MysqlBinlog Unknown charset errors.

      Issues: CT-915, CT-932

    • Loading data into Redshift would fail with the following error if a row of data contained a specific control character (0x00 (NULL)):

      Missing newline: Unexpected character 0x30 found at location nnn

      Issues: CT-984

    • Now properly extracting the Geometry datatype.

      Issues: CT-997

    • The ddl_map.json file used by the apply_schema_changes filter was missing a rule to handle ALTER TABLE statements when replicating between MySQL and Redshift

      Issues: CT-1002

    • The extract_schema_change filter wasn't escaping " (double-quotes), and the generated JSON would then cause the applier to error with:

      pendingExceptionMessage: SyntaxError: missing } after property list »
      (../../tungsten-replicator//support/filters-javascript/apply_schema_changes.js#236(eval)#1)

      Issues: CT-1011

2.11. Tungsten Replicator 6.0.5 GA (20 March 2019)

Version End of Life. 31 July 2020

Release 6.0.5 is a bugfix release.

Improvements, new features and functionality

  • Core Replicator

    • Provide a setting to control TRUNCATE statement filtering when the dropstatementdata (in [Tungsten Replicator 6.0 Manual]) filter is in use.

      The new setting is called filterTruncate, with a default of true.

      The default of true behaves the same as previously: TRUNCATE statements are filtered out and removed.

      If filterTruncate is set to false, TRUNCATE statements will not be filtered out and are kept.

      For example, to keep TRUNCATE statements (do not filter them out):

      shell> tools/tpm configure omega --repl-svc-applier-filters=dropstatementdata --property=replicator.filter.dropstatementdata.filterTruncate=false

      Issues: CT-769
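      For INI-based installations, the equivalent configuration would be placed in the service's section of the tungsten.ini file. This is a sketch only; the service name omega matches the staging-method example above:

      [omega]
      repl-svc-applier-filters=dropstatementdata
      property=replicator.filter.dropstatementdata.filterTruncate=false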

Bug Fixes

  • Command-line Tools

    • The --hosts (in [Tungsten Replicator 6.0 Manual]) option was not working with the diag sub-command of the tpm (in [Tungsten Replicator 6.0 Manual]) command on nodes installed using the INI method.

      The corrected behavior is as follows:

      • With Staging-method deployments, the tpm diag (in [Tungsten Replicator 6.0 Manual]) command continues to behave as before:

        • The tpm diag (in [Tungsten Replicator 6.0 Manual]) command alone will obtain diagnostics from all hosts in the cluster.

        • The tpm diag --hosts host1,host2,hostN command will obtain diagnostics from the specified host(s) only.

      • With INI-method deployments, the new behavior is as follows:

        • The tpm diag (in [Tungsten Replicator 6.0 Manual]) command alone will obtain diagnostics from the local host only.

        • The tpm diag --hosts host1,host2,hostN command will obtain diagnostics from the specified host(s) only.

          Warning

          Limitation: the host list MUST include the local hostname or the command will fail.

      Issues: CT-345

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command now properly handles the -all-services option for the reset sub-command.

      Issues: CT-762

    • The command tpm reverse --ini-format now outputs correctly (without the double-dashes and the trailing backslash).

      Issues: CT-827, CT-877

    • The command tpm diag (in [Tungsten Replicator 6.0 Manual]) was not collecting configuration directories other than those on the localhost.

      Now the mysql, manager, cluster and connector config directories are properly gathered in the diag zip file.

      Issues: CT-860

    • The tpm (in [Tungsten Replicator 6.0 Manual]) command now properly handles network interface names containing colons and/or dots.

      Issues: CT-864

    • Fixed an issue where the tpm (in [Tungsten Replicator 6.0 Manual]) command could print warnings about nil verify_host_key.

      Issues: CT-873

  • Core Replicator

    • The postgres applier now respects the database name set by pgsql-dbname.

      Specifically, the tungsten-replicator/samples/conf/datasources/postgresql.tpl was updated to use the correct variable for the value.

      Issues: CT-704

    • Instead of searching for a Primary with an appropriate role (i.e. matching the Replica's preferred role) until the timeout is reached, the Replicator will now loop twice before accepting a connection to any host, no matter what its role is.

      Issues: CT-712

    • The backup process would fail when 0-byte store*.properties files, or store*.properties files with invalid dates, were present.

      The process has been changed so that invalid backup properties files are skipped.

      Issues: CT-820

    • Fix the ability to enable parallel apply within a Composite Active/Active topology.

      Now handling relay as Replica to make the relay use the same code as a Replica concerning its internal connections (disable binary logging of its internal SQL queries).

      Issues: CT-851

2.12. Tungsten Replicator 6.0.4 GA (11 December 2018)

Version End of Life. 31 July 2020

Release 6.0.4 is a bugfix release.

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command previously required the -service (in [Tungsten Replicator 6.0 Manual]) option to be the first option on the command-line. The option can now be placed in any position on the command-line.

      Issues: CT-758

    • If no service was specified when using trepctl (in [Tungsten Replicator 6.0 Manual]) and multiple services were configured, an error would be reported, but no list of potential services was provided. This has been updated so that trepctl (in [Tungsten Replicator 6.0 Manual]) will output the list of available services and potential commands.

      Issues: CT-759

  • Heterogeneous Replication

    • The continuent-tools-hadoop tools, which were previously available only as a separate GitHub project, are now included with the distribution as standard.

      Issues: CT-748

Bug Fixes

  • Installation and Deployment

    • When using tpm diag (in [Tungsten Replicator 6.0 Manual]), the command would fail to parse net-ssh options.

      Issues: CT-775

    • The Net::SSH internal options have been updated to reflect changes in the latest Net::SSH release.

      Issues: CT-781

  • Heterogeneous Replication

    • Within the Oracle to MySQL ddlscan (in [Tungsten Replicator 6.0 Manual]) templates, the TIMESTAMP datatype in Oracle has been updated to replicate into a DATETIME within MySQL.

      Issues: CT-785

  • Core Replicator

    • Changed the state machine so that RESTORING is not a substate of OFFLINE:NORMAL, but instead of OFFLINE. While a transition from OFFLINE:NORMAL:RESTORING to ONLINE was previously possible (which was wrong), it is no longer possible to transition from OFFLINE:RESTORING to ONLINE.

      The proper sequence of events is: OFFLINE:NORMAL --restore--> OFFLINE:RESTORING --restore_complete--> OFFLINE:NORMAL

      Issues: CT-797

    • Heartbeats would be inserted into the replication flow using UTC even if the replicator had been configured to use a different timezone.

      Issues: CT-803

2.13. Tungsten Replicator 6.0.3 GA (5 September 2018)

Version End of Life. 31 July 2020

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Oracle Replication

    • Oracle connection strings can now be configured using the Oracle TNS name, rather than purely the Oracle service or SID names. To use this option, specify the TNS name using the --datasource-oracle-service (in [Tungsten Replicator 6.0 Manual]) option to tpm (in [Tungsten Replicator 6.0 Manual]). This will configure the connection using the service name or TNS name if this can be determined. If the TNS name cannot be resolved automatically, use the --oracle-redo-tnsadmin-home to specify the directory where the Oracle tnsnames.ora file is located.

      To use the JDBC listener rather than the TNS service, use the --datasource-oracle-sid option.

      Issues: CT-380

    • Oracle support has been improved, adding support for Oracle TNS naming and support for extracting Oracle RAC using the Oracle Redo Reader functionality.

      Support has been added for extracting data from Oracle RAC hosts. To enable extraction from Oracle RAC requires use of the new Oracle service name (TNS) specification, and a different option to tpm (in [Tungsten Replicator 6.0 Manual]) to enable different Redo Reader configuration.

      To enable extraction from an Oracle RAC instance, use the --oracle-redo-rac-enabled=true option to tpm (in [Tungsten Replicator 6.0 Manual]). In addition, you should specify the connection information to Oracle using the --datasource-oracle-service (in [Tungsten Replicator 6.0 Manual]) option to specify the TNS name, and optionally specify the location of the tnsnames.ora file using the --oracle-redo-tnsadmin-home option to tpm (in [Tungsten Replicator 6.0 Manual]).

      If your RAC environment uses a different edition ASM than used by the core Oracle deployment, the --oracle-redo-asm-home option can be used to specify the home directory for the ASM version in use.

      Currently, this includes an action script for use with Oracle RAC hosts to be used when switching RAC hosts during operation in the event of a failure. The action script can be found in support/oracle-rac-scripts/action_script.scr.

      Issues: CT-660, CT-666

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 6.0 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

    • The replicator has been updated to support the new character sets supported by MySQL 5.7 and MySQL 8.0, including the utf8mb4 series.

      Issues: CT-700, CT-970

Bug Fixes

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 6.0 Manual]) attempts to find the system commands, such as service and systemctl, used to start and stop databases. If these were not in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) would fail to find a start/stop mechanism for the configured database. In addition to looking for these tools in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) now also explicitly looks in the /sbin, /bin, /usr/bin and /usr/sbin directories.

      Issues: CT-722

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 6.0 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 6.0 Manual]), the operation would fail if the /etc/mysql directory does not exist.

      Issues: CT-724

    • Due to the operation taking a long time or timing out, the capture of the output from lsof has been removed from tpm diag (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-731

  • Oracle Replication

    • When performing an Oracle installation for applying data, tpm (in [Tungsten Replicator 6.0 Manual]) would report an issue with permissions that are not required for applying data into Oracle.

      Issues: CT-664

    • The prepare-offboard-fetcher.pl script has been updated to address an issue with one of the checks made during execution.

      Issues: CT-665

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without highlighting the individual services reporting the entries, making it difficult to identify individual log entries.

      Issues: CT-659

    • When replicating data that included timestamps, the replicator would update the timestamp value to the time within the commit from the incoming THL. When using statement-based replication, times would be correctly replicated; but when using a mixture of statement- and row-based replication, the timestamp value would not be set back to the default time when switching between statement- and row-based events. This would not cause problems on the applied host, except when log_slave_updates was enabled. In this case, all row-based events after a statement-based event would have the same timestamp value applied.

      Issues: CT-686

2.14. Tungsten Replicator 6.0.2 GA (27 June 2018)

Version End of Life. 31 July 2020

Release 6.0.2 is a bugfix release. No issues were fixed in the replicator release.

2.15. Tungsten Replicator 6.0.1 GA (30 May 2018)

Version End of Life. 31 July 2020

Release 6.0.1 is a bugfix release.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Command-line Tools

    • The tungsten_set_position (in [Tungsten Replicator 6.0 Manual]) and tungsten_get_position commands have been deprecated and will be removed in the 6.1.0 release. These commands only worked with MySQL datasources. Use the dsctl (in [Tungsten Replicator 6.0 Manual]) command, which works with a much wider range of datasources.

      Issues: CT-517
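      For example, where previously tungsten_get_position and tungsten_set_position would have been used, the equivalent dsctl workflow looks like this (illustrative values only; the event ID and source ID shown are taken from the sample THL output elsewhere in these notes):

      shell> dsctl get
      shell> dsctl set -seqno 322 -epoch 0 -event-id "mysql-bin.000050:0000000000000508;-1" -source-id demo-c11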

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm diag (in [Tungsten Replicator 6.0 Manual]) command has been improved to include more information about the environment, including:

      • The output from the lsof command.

      • The output from the ps command.

      • The output from the show full processlist command within mysql.

      • Copies of all the .properties configuration files.

      • Copies of all the my.cnf files, including directory configurations.

      • Improvements to the clarity of some commands.

      • The INI files used by tpm (in [Tungsten Replicator 6.0 Manual]) (if using INI installs) are included.

      Issues: CT-530, CT-611, CT-615, CT-623

  • Command-line Tools

    • The trepctl services (in [Tungsten Replicator 6.0 Manual]) command has been updated to support the auto-refresh option using the -r command-line option.

      Issues: CT-627

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command has been updated with a new command, servicetable (in [Tungsten Replicator 6.0 Manual]). This outputs the status information for multiple services in a tabular format to make it easier to identify the state for multi-service replicators. For example:

      shell> trepctl servicetable
      Processing servicetable command...
      Service | Status | Role | MasterConnectUri | SeqNo | Latency
      -------------------- | ------------------------------ | ---------- | ------------------------------ | ---------- | ----------
      alpha | ONLINE | slave | thl://trfiltera:2112/ | 322 | 0.00
      beta | ONLINE | slave | thl://ubuntuheterosrc:2112/ | 12 | 4658.59
      Finished servicetable command...

      The command also supports the auto-refresh option, -r.

      Issues: CT-637

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying of the datatype to MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

    • When using Net::SSH within tpm (in [Tungsten Replicator 6.0 Manual]), more detailed information about any specific failures or errors is now provided.

      Issues: CT-523

    • tpm (in [Tungsten Replicator 6.0 Manual]) would mistakenly report issues with JSON columns during installation; this check no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 6.0 Manual]) script could hang in various scenarios, including being executed in the background, or as part of a background script or cronjob. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • The trepctl status (in [Tungsten Replicator 6.0 Manual]) command would fail ungracefully if the service name did not exist in the configuration, or if multiple services were configured.

      Issues: CT-545, CT-593

    • When using tpm (in [Tungsten Replicator 6.0 Manual]) with the INI method, the command would search multiple locations for suitable INI files. This could lead to multiple definitions of the same service, which could in turn lead to duplication of the installation process and occasional failures. If multiple INI files are found, a warning is now produced to highlight the potential for failures.

      Issues: CT-626

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBound errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 6.0 Manual]) when the sequence number was larger than an INT would cause an error.

      Issues: CT-642

  • Oracle Replication

    • The prepare_offboard_fetcher script could fail due to the use of a command that may not exist on some platforms. Under some circumstances, the script could also be installed as non-executable.

      Issues: CT-420, CT-421

  • Heterogeneous Replication

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for MySQL to Oracle did not escape field names correctly.

      Issues: CT-249

    • When replicating data into MongoDB, numeric values and date values would be represented in the target database as strings, not as their native values.

      Issues: CT-581, CT-582

    • The default partition method used when loading data through CSV files showed an incorrect example format. Previously it was advised to use:

      'commit_hour='yyyy-MM-dd-HH

      It should just show the date format:

      yyyy-MM-dd-HH

      Issues: CT-607
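      The yyyy-MM-dd-HH pattern follows Java's standard SimpleDateFormat conventions (the replicator itself is written in Java). As an illustrative sketch only, formatting a commit timestamp into an hourly partition value:

      ```java
      import java.text.SimpleDateFormat;
      import java.util.Date;
      import java.util.TimeZone;

      public class PartitionFormatDemo {
          public static void main(String[] args) {
              // yyyy-MM-dd-HH yields one partition value per commit hour.
              SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd-HH");
              fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
              // 2018-08-29 12:40:57 UTC as milliseconds since the epoch
              System.out.println(fmt.format(new Date(1535546457000L)));
              // prints: 2018-08-29-12
          }
      }
      ```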

    • The Javascript batch loader for Redshift could generate an error when loading the object used to derive information during loading.

      Issues: CT-620

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for Oracle to Redshift failed to handle the NUMBER type correctly.

      Issues: CT-621

  • Core Replicator

    • Optimizing deletes in row-based replication could delete the wrong rows if the pkey (in [Tungsten Replicator 6.0 Manual]) had not been enabled.

      Issues: CT-557

    • The included Drizzle driver would incorrectly assign values to prepared statements if the fields in the prepared statement included a question mark.

      Issues: CT-608

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility with the metadata.

      Issues: CT-618

  • Filters

    • The truncatetext (in [Tungsten Replicator 6.0 Manual]) filter was not configurable within all topologies. The configuration has now been updated so that the filter can be used in MySQL and other database environments.

      Issues: CT-386

2.16. Tungsten Replicator 6.0.0 GA (4 April 2018)

Version End of Life. 31 July 2020

Release 6.0.0 is a feature and bugfix release. This release contains the following key features:

  • Added PostgreSQL applier support.

  • Added support for custom primary key field selection for source tables that cannot be configured with a primary key within the database.

  • Added a new filter for including whole-transaction metadata information in each event.

  • Added support for extended transaction information within the Kafka applier so that all the messages for a given transaction can be identified.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Installation and Deployment

    • Support for using Java 7 with Tungsten Cluster has been removed. Java 8 or higher must be used for all deployments.

      Issues: CT-450

Improvements, new features and functionality

  • Heterogeneous Replication

    • The Kafka applier now supports the inclusion of transaction information into each Kafka message broadcast, including the list of schema/tables and row counts for the entire transaction, as well as information about whether the message is the first or last message/row within an overall transaction. The transaction information can also be sent as a separate message on an independent Kafka topic.

      Issues: CT-496, CT-586

      For more information, see Optional Configuration Parameters for Kafka (in [Tungsten Replicator 6.0 Manual]).

  • Core Replicator

    • Experimental support for writing row-based data through SQL into PostgreSQL has been added back to the replicator. This includes basic support for the replication of the data. Currently, databases and tables must be created by hand. A future release will incorporate full support for DDL translation.

      Issues: CT-149

  • Filters

    • The pkey (in [Tungsten Replicator 6.0 Manual]) filter has been extended to support the specification of custom primary key fields. This enables fields in the source data to be marked as primary keys even if the source database does not have the keys specified. This is useful for heterogeneous loading of data where a unique key may exist, but cannot be defined due to the application or database that created the tables.

      Issues: CT-481

    • A new filter, rowaddtxninfo (in [Tungsten Replicator 6.0 Manual]), has been added, which embeds row counts, both total and per schema/table, into the metadata for a THL event/transaction.

      Issues: CT-497

Bug Fixes

  • Installation and Deployment

    • When performing a tpm reverse (in [Tungsten Replicator 6.0 Manual]), the --replication-port (in [Tungsten Replicator 6.0 Manual]) setting would be replaced with its alias, --oracle-tns-port.

      Issues: CT-597

  • Core Replicator

    • An internal optimization within the replicator that would attempt to optimize row-based information and operations has been removed. The effects of the optimization were actually seen in very few situations, and it duplicated work and operations performed by the pkey (in [Tungsten Replicator 6.0 Manual]) filter. Unfortunately, the same optimization could also cause issues within heterogeneous deployments with the removal of information.

      Issues: CT-318

    • The internal storage of the MySQL server ID has been updated to support larger server IDs. This works with any MySQL deployment, but has been specifically expanded to work better with some cloud deployments where the server ID cannot be controlled.

      Issues: CT-439

    • The format of some errors and log entries would contain invalid characters.

      Issues: CT-493

2.17. Tungsten Replicator 5.4.1 GA (28 October 2019)

Version End of Life. Not Yet Set

Release 5.4.1 contains significant improvements as well as some needed bugfixes.

Improvements, new features and functionality

  • Core Replicator

    • If using the dropcolumn filter during extraction in conjunction with the batch applier (e.g. replicating to Redshift, Hadoop or Vertica), writes would fail with a CSV mismatch error due to gaps in the THL index.

      However, for JDBC appliers, the gaps are required to ensure the correct column mapping.

      To handle the two different requirements, a new property has been added to the filter to control whether to leave the THL index untouched (the default) or to re-order the index IDs.

      If applying to batch targets, the following property should be added to your configuration. The property is not required for JDBC targets.

      --property=replicator.filter.dropcolumn.fillGaps=true

      Issues: CT-1025
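      For example, in an INI-based installation, the property can be placed in the service's section of the tungsten.ini file. This is a sketch only; the service name alpha is assumed:

      [alpha]
      property=replicator.filter.dropcolumn.fillGaps=true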

Bug Fixes

  • Command-line Tools

    • Fixed an issue that would prevent reading remote binary logs when using SSL.

      Issues: CT-958

    • Restored previously-removed log file symbolic links under $CONTINUENT_ROOT/service_logs/.

      Issues: CT-1026

    • Fixed a bug where tpm diag (in [Tungsten Replicator 5.4 Manual]) would generate an empty zip file if the hostnames contain hyphens (-) or periods (.)

      Issues: CT-1032

    • Improved the ability to find needed binaries for the tungsten_find_position, tungsten_find_seqno and tungsten_get_rtt commands.

      Issues: CT-1054

2.18. Tungsten Replicator 5.4.0 GA (31 July 2019)

Version End of Life. Not Yet Set

Release 5.4.0 contains significant improvements as well as some needed bugfixes. One of the main features of this release is MySQL 8 support.

Improvements, new features and functionality

  • Command-line Tools

    • Two new utility scripts have been added to the release to help with setting the Replicator position:

      - tungsten_find_position, which assists with locating information in the THL based on the provided MySQL binary log event position, and outputs a corresponding dsctl set (in [Tungsten Replicator 5.4 Manual]) command.

      - tungsten_find_seqno, which assists with locating information in the THL based on the provided sequence number, and outputs a corresponding dsctl set (in [Tungsten Replicator 5.4 Manual]) command.

      Issues: CT-934

  • Core Replicator

    • The replicator has been updated to support the new character sets supported by MySQL 5.7 and MySQL 8.0, including the utf8mb4 series.

      Issues: CT-700, CT-970

    • A new, beta-quality command called prov-sl.sh has been included, which is intended to eventually replace the current tungsten_provision_slave (in [Tungsten Replicator 5.4 Manual]) script.

      Currently, prov-sl.sh supports provisioning Replicas using the mysqldump and xtrabackup tools, and is MySQL 8-compatible.

      The prov-sl.sh command is written in Bash, has fewer dependencies than the current version, and is meant to fix a number of issues with the current version.

      Backups are streamed from source to target so that an intermediate write to disk is not performed, resulting in faster provisioning times.

      Logs are written to $CONTINUENT_ROOT/service_logs/prov-sl.log (i.e. /opt/continuent/service_logs/prov-sl.log).

      For example, provision a Replica from [source db] using mysqldump (default):

      shell> prov-sl.sh -s {source db}

      As another example, use xtrabackup for the backup method, with 10 parallel threads (default is 4), and ssh is listening on port 2222:

      shell> prov-sl.sh -s {source db} -n xtrabackup -t 10 -p 2222

      Warning

      At the moment, prov-sl.sh does not support Composite Active/Active topologies when used with Tungsten Clustering; however, support will be included in a future release.

      Issues: CT-614, CT-723, CT-809, CT-855, CT-963

    • Upgraded the Drizzle driver to support MySQL 8 authentication protocols (SHA256, caching_sha2).

      Issues: CT-914, CT-931, CT-966

    • The Redshift Applier now allows AWS authentication using IAM Roles. Previously authentication was possible via Access and Secret Key pairs only.

      Issues: CT-980

      For more information, see Redshift Preparation for Amazon Redshift Deployments (in [Tungsten Replicator 5.4 Manual]).

Bug Fixes

  • Command-line Tools

    • The --hosts (in [Tungsten Replicator 5.4 Manual]) option was not working with the diag sub-command of the tpm (in [Tungsten Replicator 5.4 Manual]) command on nodes installed using the INI method.

      The corrected behavior is as follows:

      • With Staging-method deployments, the tpm diag (in [Tungsten Replicator 5.4 Manual]) command continues to behave as before:

        • The tpm diag (in [Tungsten Replicator 5.4 Manual]) command alone will obtain diagnostics from all hosts in the cluster.

        • The tpm diag --hosts host1,host2,hostN command will obtain diagnostics from the specified host(s) only.

      • With INI-method deployments, the new behavior is as follows:

        • The tpm diag (in [Tungsten Replicator 5.4 Manual]) command alone will obtain diagnostics from the local host only.

        • The tpm diag --hosts host1,host2,hostN command will obtain diagnostics from the specified host(s) only.

          Warning

          Limitation: the host list MUST include the local hostname or the command will fail.

      Issues: CT-345

    • When using tpm (in [Tungsten Replicator 5.4 Manual]) with the INI method, the command would search multiple locations for suitable INI files. This could lead to multiple definitions of the same service, which could in turn lead to duplication of the installation process and occasional failures. If multiple INI files are found, a warning is now produced to highlight the potential for failures.

      Issues: CT-626

    • The trepctl (in [Tungsten Replicator 5.4 Manual]) command now properly handles the -all-services option for the reset sub-command.

      Issues: CT-762

    • The command tpm reverse --ini-format now outputs correctly (without the double-dashes and the trailing backslash).

      Issues: CT-827, CT-877

    • The tpm (in [Tungsten Replicator 5.4 Manual]) command now properly handles network interface names containing colons and/or dots.

      Issues: CT-864

    • When executing mysqldump, Tungsten tools no longer use the --add-drop-database flag, as it prevents MySQL 8+ from restoring the dump.

      Issues: CT-935

    • Fixed a bug where tpm diag (in [Tungsten Replicator 5.4 Manual]) would generate an empty zip file if the hostnames contain hyphens (-) or periods (.).

      Issues: CT-1032

  • Core Replicator

    • Added support for missing charset GB18030 to correct WARN extractor.mysql.MysqlBinlog Unknown charset errors.

      Issues: CT-915, CT-932

    • Loading data into Redshift would fail with the following error if a row of data contained a specific control character (0x00 (NULL)):

      Missing newline: Unexpected character 0x30 found at location nnn

      Issues: CT-984

    • Now properly extracting the Geometry datatype.

      Issues: CT-997

    • The ddl_map.json file used by the apply_schema_changes filter was missing a rule to handle ALTER TABLE statements when replicating between MySQL and Redshift.

      Issues: CT-1002

    • The extract_schema_change filter was not escaping double-quote (") characters, and the generated JSON would then cause the applier to error with:

      pendingExceptionMessage: SyntaxError: missing } after property list »
      (../../tungsten-replicator//support/filters-javascript/apply_schema_changes.js#236(eval)#1)

      Issues: CT-1011

2.19. Tungsten Replicator 5.3.6 GA (04 February 2019)

Version End of Life. 31 July 2020

This is a bugfix release.

Bug Fixes

  • Core Replicator

    • Instead of searching for a Primary with an appropriate role (i.e. matching the Replica's preferred role) until the timeout is reached, the Replicator will now loop twice before accepting a connection to any host, no matter what its role is.

      Issues: CT-712

    • Changed the state machine so that RESTORING is not a substate of OFFLINE:NORMAL, but instead of OFFLINE. While a transition from OFFLINE:NORMAL:RESTORING to ONLINE was previously possible (which was wrong), it is no longer possible to transition from OFFLINE:RESTORING to ONLINE.

      The proper sequence of events is: OFFLINE:NORMAL --restore--> OFFLINE:RESTORING --restore_complete--> OFFLINE:NORMAL

      Issues: CT-797

    • Heartbeats would be inserted into the replication flow using UTC even if the replicator had been configured to use a different timezone.

      Issues: CT-803

    • The backup process would fail when 0-byte store*.properties files, or store*.properties files with invalid dates, were present.

      The process has been changed so that invalid backup properties files are skipped.

      Issues: CT-820

2.20. Tungsten Replicator 5.3.5 GA (06 November 2018)

Version End of Life. 31 July 2020

Release 5.3.5 is a bug fix release.

Bug Fixes

  • Installation and Deployment

    • When using tpm diag (in [Tungsten Replicator 5.3 Manual]), the command would fail to parse net-ssh options.

      Issues: CT-775

    • The Net::SSH internal options have been updated to reflect changes in the latest Net::SSH release.

      Issues: CT-781

2.21. Tungsten Replicator 5.3.4 GA (11 October 2018)

Version End of Life. 31 July 2020

Release 5.3.4 is a bug fix release.

Bug Fixes

  • Command-line Tools

    • When using tpm diag (in [Tungsten Replicator 5.3 Manual]), the command could fail with the error text ClusterDiagnosticPackage::Zip.

      Issues: CT-763

2.22. Tungsten Replicator 5.3.3 GA (20 September 2018)

Version End of Life. 31 July 2020

Release 5.3.3 is a bug fix release.

Improvements, new features and functionality

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 5.3 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

Bug Fixes

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 5.3 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 5.3 Manual]), the operation would fail if the /etc/mysql directory did not exist.

      Issues: CT-724

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without highlighting which service reported each entry, making it difficult to identify individual log entries.

      Issues: CT-659

2.23. Tungsten Replicator 5.3.2 GA (4 June 2018)

Version End of Life. 31 July 2020

Release 5.3.2 is a bug fix release.

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.3 Manual]) would mistakenly report issues with JSON columns during installation; this check no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.3 Manual]) script could hang in several scenarios, including when executed in the background, or as part of a background script or cron job. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBounds errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 5.3 Manual]) where the sequence number could be larger than an INT would cause an error.

      Issues: CT-642

  • Core Replicator

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility with the metadata.

      Issues: CT-618

2.24. Tungsten Replicator 5.3.1 GA (18 April 2018)

Version End of Life. 31 July 2020

Release 5.3.1 is a bug fix release that adds support for the GEOMETRY data type in MySQL 5.7 and above, together with a number of bug fixes.

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying of the datatype to MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade Replicas first and then the Primary to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

2.25. Tungsten Replicator 5.3.0 GA (12 December 2017)

Version End of Life. 31 July 2020

Release 5.3.0 is an important feature release that contains some key new functionality for replication. In particular:

  • JSON data type column extraction support for MySQL 5.7 and higher.

  • Generated column extraction support for MySQL 5.7 and higher.

  • DDL translation support for heterogeneous targets, initially supporting DDL translation from MySQL to MySQL, Vertica and Redshift targets.

  • Support for data concentration for replication into a single target schema (with additional source schema information added to each table) for both HPE Vertica and Amazon Redshift targets.

  • Rebranded and updated support for Oracle extraction with the Oracle Redo Reader, including improvements to offboard deployment, more configuration options, and support for the deployment and installation of multiple offboard replication services within a single replicator.

This release also contains a number of important bug fixes and minor improvements to the product.

Improvements, new features and functionality

  • Behavior Changes

    • The way that information is logged has been improved so that it should be easier to identify and find errors and their causes when looking at the logs. To achieve this, logging is now provided in an additional file, one for each component; the new files contain only entries at the WARNING or ERROR levels. The new file is replicator-user.log. The original file, trepsvc.log, remains unchanged.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

    • Support for Java 7 (JDK or JRE 1.7) has been deprecated, and will be removed in the 6.0.0 release. The software is compiled using Java 8 with Java 7 compatibility.

      Issues: CT-252

    • Some JavaScript filters had DOS-style line breaks.

      Issues: CT-376

    • Support for JSON datatypes and generated columns within MySQL 5.7 and greater has been added to the MySQL extraction component of the replicator.

      Important

      Due to a MySQL bug in the way that JSON and generated columns are represented within the MySQL binary log, it is possible for the size of the data and the reported size to differ, which could cause data corruption. To account for this behavior and to prevent data inconsistencies, the replicator can be configured to either ignore, warn, or stop if the mismatch occurs.

      This can be set by modifying the property replicator.extractor.dbms.json_length_mismatch_policy.

      Until this problem is addressed within MySQL, tpm (in [Tungsten Replicator 5.3 Manual]) will still generate a warning about the issue, which can be ignored during installation by using the --skip-validation-check=MySQLGeneratedColumnCheck (in [Tungsten Replicator 5.3 Manual]) option.

      For more information on the effects of the bug, see MySQL Bug #88791.

      Issues: CT-5, CT-468
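
      As an illustrative sketch (the property name is given above; setting it via the tpm --property option, and the exact accepted values, are assumptions), the policy could be configured as:

      # Hypothetical: set the JSON length-mismatch policy (assumed values: ignore, warn, stop)
      --property=replicator.extractor.dbms.json_length_mismatch_policy=warn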

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command has been updated to correctly operate with CentOS 7 and higher. Due to an underlying change in the way IP configuration information was sourced, the extraction of the IP address information has been updated to use the ip addr command.

      Issues: CT-35

    • The THL retention setting is now checked in more detail during installation. When the --thl-log-retention (in [Tungsten Replicator 5.3 Manual]) is configured when extracting from MySQL, the value is compared to the binary log expiry setting in MySQL (expire_logs_days). If the value is less, then a warning is produced to highlight the potential for loss of data.

      Issues: CT-91
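
      The comparison can be illustrated with a minimal shell sketch (the values below are illustrative assumptions, not defaults):

```shell
# Minimal sketch of the retention comparison tpm performs (values are assumptions).
# thl_retention_days would come from --thl-log-retention (e.g. 5d),
# and expire_logs_days from the MySQL expire_logs_days variable.
thl_retention_days=5
expire_logs_days=7

# If THL is retained for less time than the binary logs, warn about
# the potential for data loss, mirroring the tpm installation check.
if [ "$thl_retention_days" -lt "$expire_logs_days" ]; then
  echo "WARNING: THL may be purged before the MySQL binary logs expire"
fi
```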

    • A new option, --oracle-redo-temp-tablespace, has been added to configure the temporary tablespace within Oracle redo reader extractor deployments.

      Issues: CT-321

  • Command-line Tools

    • The size-related outputs for the thl list (in [Tungsten Replicator 5.3 Manual]) command, such as -sizes (in [Tungsten Replicator 5.3 Manual]) or -sizesdetail (in [Tungsten Replicator 5.3 Manual]), now additionally output summary information for the selected THL events:

      Total ROW chunks: 8 with 7 updated rows (50%)
      Total STATEMENT chunks: 8 with 2552 bytes (50%)
      16 events processed

      A new option has also been added, -sizessummary (in [Tungsten Replicator 5.3 Manual]), that only outputs the summary information.

      Issues: CT-433

      For more information, see thl list -sizessummary Command (in [Tungsten Replicator 5.3 Manual]).

  • Oracle Replication

    • A new option for tpm (in [Tungsten Replicator 5.3 Manual]) has been added, --oracle-tns-port, which is an alias for --replication-port (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-274

    • The fetcher and miner ports can now be explicitly set. Previously they were fixed as ports 7901 and 7902 respectively. Use the --oracle-redo-fetcher-port and --oracle-redo-miner-port options to set them.

      Issues: CT-290
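
      For example (the values shown are the previous fixed defaults noted above):

      --oracle-redo-fetcher-port=7901
      --oracle-redo-miner-port=7902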

  • Heterogeneous Replication

    • The HPE Vertica applier has been updated and expanded so that data can be concentrated from multiple source schemas into a single schema, where all the source and target schemas share a common table structure. The new functionality relies on the new rowadddbname filter, and a new batch applier script that handles the concentration.

      This functionality also incorporates options to keep a long-term copy of all the CDC data generated by the replicator by copying the data to a secondary set of staging tables. Both this and the core target information are configurable during installation.

      Note

      Full documentation on using this feature is in preparation and will be available shortly.

      Issues: CT-95

    • Support has now been added for full DDL replication and translation, initially from MySQL sources through to Amazon Redshift and HPE Vertica targets. The functionality allows schemas and tables to be created, modified, and deleted without the need to use ddlscan (in [Tungsten Replicator 5.3 Manual]), and without having to worry about making changes that stop replication until the structures can be changed.

      The DDL translation supports the following features:

      • Full replication of schema and table operations.

      • Configurable translation of data types, including size differences.

      • Automatic creation of staging tables for batch-based appliers.

      • Support for centralized and long term schema replication.

      • Ability to add arbitrary columns to all replicated tables.

      • Ability to choose whether to apply different schema operations on specific schemas or tables. The following options can be controlled:

        • Creating schema

        • Creating table

        • Adding columns to existing table

        • Deleting columns from existing table

        • Modifying columns in existing table

        • Deleting table

        • Deleting schema

        For each operation, the operation can be applied, ignored, configured to stop replication with an error, or applied with archiving. In the case of the last option, a copy of the table is kept, and changes are applied only to the active table. This enables you to retain the existing data and structure so that analytics can continue on a known version of the table. The naming and format of the table can also be set.

        For operations that add or change columns, you can choose whether the value for the new column within the existing rows of the table is set to the default value or to an explicit value.

      • Data is automatically flushed and committed before table changes are made to ensure that replication does not stop. This process happens automatically, so replicating data, adding a column, and replicating further data does not stop replication, even if the data would normally fail because of table differences and batch applier timings.

      • Existing table schemas can be extracted and replicated automatically through to a target without requiring ddlscan (in [Tungsten Replicator 5.3 Manual]) to create the initial tables.

      Note

      Full documentation on using this feature is in preparation and will be available shortly.

      Issues: CT-131, CT-132

    • The Javascript files used for applying data into batch targets (Redshift, Hadoop, Vertica) have been updated and improved to ensure:

      • Field names are correctly escaped

      • Error messages now contain more information about the problem

      • Where relevant, the host database errors and CSV files are now kept in the event of an error to help identify the underlying problem.

      These changes should make it easier to identify issues, and to prevent certain issues from occurring during replication.

      Issues: CT-96, CT-235

    • The CSV writer module which is used in all batch-related appliers (Redshift, Hadoop, Vertica) has been updated so that it provides more information about the potential problem when a CSV write is identified as invalid.

      Issues: CT-236

    • Support for replicating into Hadoop environments where the underlying filesystem is protected by Kerberos security and authentication has been added to the Hadoop applier. A new file, hadoop_kerberos.js has been added to the distribution which should be edited and used in place of the normal hadoop.js batch file.

      Issues: CT-266

      For more information, see Replicating into Kerberos Secured HDFS (in [Tungsten Replicator 5.3 Manual]).

    • The Amazon Redshift applier has been updated and expanded so that data can be concentrated from multiple source schemas into a single schema, where all the source and target schemas share a common table structure. The new functionality relies on the new rowadddbname filter, and a new batch applier script that handles the concentration.

      Note

      Full documentation on using this feature is in preparation and will be available shortly.

      Issues: CT-408

  • Filters

    • A new filter, rowadddbname (in [Tungsten Replicator 6.0 Manual]), has been added to the replicator. This filter adds the incoming schema name, and optional numeric hash value of the schema, to every row of THL row-based changes. The filter is designed to be used with heterogeneous and analytics applications where data is being concentrated into a single schema and where the source schema name will be lost during the concentration and replication process.

      In particular, it is designed to work in harmony with the new Redshift and Vertica based single-schema appliers where data from multiple, identical, schemas are written into a single target schema for analysis.

      Issues: CT-98
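
      As an illustrative sketch (the --svc-extractor-filters option name is an assumption based on how filters are typically enabled with tpm), the filter might be enabled during configuration as:

      # Hypothetical: enable the rowadddbname filter on the extractor stage
      --svc-extractor-filters=rowadddbname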

    • A new filter has been added, rowadddbname (in [Tungsten Replicator 6.0 Manual]), which adds the source database name and optional database hash to every incoming row of data. This can be used to help identify source information when concentrating information into a single schema.

      Issues: CT-407

Bug Fixes

  • Installation and Deployment

    • An issue has been identified with the way certain operating systems now configure their open files limits, which can upset the checks within tpm (in [Tungsten Replicator 5.3 Manual]) that determine the open files limits configured for MySQL. To ensure that the open files limit has been set correctly, check the configuration of the service:

      1. Copy the system configuration:

        shell> sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
        shell> sudo vim /etc/systemd/system/mysql.service
      2. Add the following line to the end of the copied file:

        LimitNOFILE=infinity
      3. Reload the systemctl daemon:

        shell> sudo systemctl daemon-reload
      4. Restart MySQL:

        shell> service mysql restart

      With this configuration in place, MySQL will take note of the open_files_limit configuration option.

      Issues: CT-148

    • The check to determine whether triggers had been enabled within the MySQL data source would not be executed correctly, meaning that warnings about unsupported triggers would not be raised.

      Issues: CT-185

    • When using tpm diag (in [Tungsten Replicator 5.3 Manual]) on a MySQL deployment, the MySQL error log would not be identified and included properly if the default datadir option was not /var/lib/mysql.

      Issues: CT-359

    • Installation with security enabled through SSL could fail intermittently because the certificates would not be copied to the required directory during the installation process.

      Issues: CT-402

    • The Net::SSH libraries used by tpm (in [Tungsten Replicator 5.3 Manual]) have been updated to reflect the deprecation of paranoid parameter.

      Issues: CT-426

    • Using a complex password, particularly one with single or double quotes, when specifying a password for tpm (in [Tungsten Replicator 5.3 Manual]), could cause checks and the installation to raise errors or fail, although the actual configuration would work properly. The problem was limited to internal checks by tpm (in [Tungsten Replicator 5.3 Manual]) only.

      Issues: CT-440

  • Command-line Tools

    • The startall (in [Tungsten Replicator 5.3 Manual]) command would fail to correctly start the Oracle redo reader process.

      Issues: CT-283

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command would fail to remove the Oracle redo reader user when using tpm uninstall (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-299

    • The replicator stop (in [Tungsten Replicator 5.3 Manual]) command would not stop the Oracle redo reader process.

      Issues: CT-300

    • Within Vertica deployments, the internal identity of the applier was set incorrectly to PostgreSQL. This would make it difficult for certain internal processes to identify the true datasource type. The setting did not affect the actual operation.

      Issues: CT-452

  • Oracle Replication

    • Oracle deployments have been updated so that the replicator is always running in UTF-8 and the NLS_LANG setting is set correctly. This will affect primarily CDC and Oracle applier deployments.

      Issues: CT-251

    • The ddlscan (in [Tungsten Replicator 5.3 Manual]) templates for Oracle to MySQL would incorrectly map NUMBER types into DECIMAL with an invalid size definition. This has been updated so that anything larger than a 19-digit NUMBER is mapped to a MySQL BIGINT.

      Issues: CT-259

    • The Oracle redo reader component has been rebranded to Continuent, Ltd, and changed internally to be identified as simply 'oracle redo reader'. This has changed the following elements within the product:

      • All components and references to vmrr and vmrrd have been changed to orarr and orarrd respectively.

      • All tpm (in [Tungsten Replicator 5.3 Manual]) options that contain vmware have been replaced with oracle, including:

        install-vmware-redo-reader      => install-oracle-redo-reader
        repl-install-vmware-redo-reader => repl-install-oracle-redo-reader
      • All internal references, including the configuration parameters for the redo reader, have been updated to use orarr.

      • The default username and password used with the redo reader have changed from vmrruser to orarruser, and vmrruserpwd to orarruserpwd.

      • The template files used to configure the redo reader have been changed from vmrr_response_file to orarr_response_file, and offboard_vmrr_response_file to offboard_orarr_response_file.

      • The vmrrd_wrapper has been renamed to orarrd_wrapper.

      Issues: CT-19, CT-282, CT-367

    • When running the orarrd command to execute the console, the command would fail and report:

      tungsten@dbora1 alpha$ orarrd_alpha console
      orarr is already started

      Issues: CT-397

    • The orarrd script contained incorrect environment variables for testing the validity of the installation. This could cause access to the Redo Reader console to fail.

      Issues: CT-401

  • Heterogeneous Replication

    • The Redshift applier would use a relative directory for the AWS configuration reference, but would refer to the wrong location.

      Issues: CT-375

    • The sample configuration file for Redshift mistakenly contained $ characters to indicate variables. These dollar signs are not required.

      Issues: CT-406

  • Core Replicator

    • When parsing THL data it was possible for the internal THL processing to lead to a java.util.ConcurrentModificationException. This indicated that the underlying THL event metadata structure used internally had changed between uses.

      Issues: CT-355

3. Tungsten Dashboard Release Notes

3.1. Tungsten Dashboard 1.0.9 GA (12 August 2020)

Version End of Life. 11 August 2021

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.9 provides a number of new features, improvements and bugfixes.

Dashboard Configuration

  • Now able to configure Dashboard settings via the browser

    You can disable the editing of settings in the browser by changing the value of disableSettingsEdit to 1 in the config.php file, in the "settings": { } stanza:

     "disableSettingsEdit": 1 
  • All settings configured via the browser page are stored in the {webroot}/settings.d/ directory as individual JSON text files named for the setting. Please ensure it exists and is writable by the web server user.

  • You may edit or delete any of the files in the {webroot}/settings.d/ directory; a deleted setting reverts to its default. You may also choose to configure settings in this way instead of using the config.php file.

  • Refactored all options and created centralized defaults

Software Update

  • Now able to self-update the Dashboard software via the browser

    There are four related settings, enableUpdates, tmpDir, downloadAccessKey and downloadSecretKey.

    All four must be located in the config.php file, in the "settings": { } stanza. They are not accessible from the browser settings page.

    You can disable the Dashboard self-update feature by changing the value of enableUpdates to 0 in config.php (default: 1):

     "enableUpdates": 1 

    The tmpDir value is used to determine where downloaded software packages are saved to:

     "tmpDir":"/tmp" 

    The other two (downloadAccessKey and downloadSecretKey) need to be obtained from Continuent support and typically ship with the Dashboard installation package.
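
    Taken together, the relevant part of the "settings": { } stanza in config.php might look like this (the key values are placeholders, not real credentials):

     "enableUpdates": 1,
     "tmpDir": "/tmp",
     "downloadAccessKey": "...",
     "downloadSecretKey": "..."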

Cluster Definitions

  • Now able to manually create and save cluster definitions in the conf.d subdirectory. Originally, a cluster could only be defined in the "clusters": { } stanza.

  • Now able to create and save cluster definitions to the conf.d subdirectory via a browser workflow

  • Added Display, Edit and Remove Cluster Definition menu choices for each cluster

  • Now able to automatically define cluster definitions in conf.d just by providing a hostname and port number in a browser workflow

  • Now able to automatically define cluster definitions in conf.d at Dashboard startup

    There are three related settings, enableAutoConfiguration, managerPort and useHAProxy.

    You can enable the Dashboard auto-configuration feature by changing the "Enable Auto-Configuration?" setting via the Dashboard settings page in the browser, or by changing the value of enableAutoConfiguration to 1 in config.php (default: 0):

     "enableAutoConfiguration": 1 

    The managerPort value is used to determine which port to use when communicating with the Manager during auto-configuration and auto-define, as well as for populating form fields in other places. Only change this if you have changed the API listener port for the Manager as well.

     "managerPort": 8090 

    The useHAProxy value is used to determine how to calculate ports when performing auto-configuration and auto-define.

    Set the value to 1 to determine the manager port number automatically during various operations based on calculations using the base managerPort.

    Set the value to 0 (default) to use the base managerPort with no attempt to auto-define the port.

    You can enable the manager port auto-configuration feature by changing the "Using HA Proxy?" setting via the Dashboard settings page in the browser, or changing the value in the config.php file.

     "useHAProxy": 1 

UI/UX

  • Role name cleaning (Master is now Primary, and Slave is now Replica for nodes; Master is now Active, and Slave is now Passive for clusters)

  • Improve error handling for JSON responses to AJAX calls

  • Bug fixes in service alias support

  • Many footer improvements, including a link to check for an available Dashboard software update

  • Stop providing tabInfo during initial page load; instead, fetch it via an AJAX call after load to save initial page load time

Dashboard Diagnostics

  • Now able to upload a Dashboard Diagnostic containing the JSON configuration to Continuent Support's protected AWS bucket. No other customer has access to this location; it is upload-only.

    There are three related settings, customerName, uploadAccessKey and uploadSecretKey.

    The customerName value is used to pre-populate the diagnostic upload form.

     "customerName":"your customer name here" 

    The other two (uploadAccessKey and uploadSecretKey) need to be located in config.php:

     "uploadAccessKey":"AKIAIWDZPQUE5YL4SBDQ", ] 
     "uploadSecretKey":"FQ0iVkTtH9biIZT2+IpwXwhqXvVwqMUqsZ4++N4K" ] 

Misc Admin

  • New Expert mode disables both confirmation prompts when Deleting All Definitions

    The default is 0 (disabled). Set enableExpertMode to 1 (one) to enable.

     "enableExpertMode": 1 
  • Use the enableDebug setting to get additional logging information and use the debug software versions when checking for an available update.

     "enableDebug": 1 

3.2. Tungsten Dashboard 1.0.8 GA (4 June 2020)

Version End of Life. 3 June 2021

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.8 provides a number of new features, improvements and bugfixes.

  • Added basic Role-Based Access Control (RBAC). There are two roles, Administrator with full access and Operator with Read-Only access. This feature requires Basic Auth to be properly configured on the Web server.

    When enabled, the user's current role will be displayed in the footer. Refresh the page to activate any changes to config.php.

    The default is 0 (disabled). Set enableRBAC to 1 (one) to enable.

     "enableRBAC":1 

    Use the administrators setting to list the users with admin privs:

     "administrators": [ "adminUser1","adminUser2" ] 
  • Improved page load performance via caching of API calls. This is especially helpful with Composite clusters that have multiple sites over a wide area.

  • Added the ability to modify the browser window title using the new configuration option windowTitle

  • Added the ability to change the cluster service sort order from the alpha default to as-written configuration order using the new configuration option sortByConfigOrderNotAlpha

  • Site favicons along with the navigation bar logo and colors have been updated to promote a cleaner look. Additional icon replacements and color tweaks have been made throughout the tool.

  • Added hover-based tooltips for all fields and buttons where possible. Set disableTooltips to 1 to prevent the tooltips from appearing.

  • Significantly improved the Connector popover formatting, sorting and operation.

  • Message handling is improved so that multiple actions and responses are tracked and messaged properly.

  • Added the ability to view the json configuration in the browser via a menu link.

  • Added the ability to check for Dashboard software updates.

  • Added the ability to check for Clustering software updates on a per-node basis.

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.3. Tungsten Dashboard 1.0.7 GA (26 November 2019)

Version End of Life. 26 November 2020

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.7 provides a number of new features, improvements and bugfixes.

  • Added the feature to allow for cluster service name aliases. You may now add the sub-key actualName pointing to the "real" name of the service, and change the top-level cluster service name to some alias that you understand.

    Previously, it was impossible to configure two or more clusters with the same service name. This could be required if clusters were installed into different environments like production, staging or development. While the best practice is to name the cluster services to match the environment (i.e. east_prod and east_staging), in some situations this may not be possible.
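
    As an illustrative sketch (only the actualName sub-key is documented here; any other per-cluster keys are omitted), two environments could alias the same service name like this:

     "clusters": {
         "east_prod":    { "actualName": "east" },
         "east_staging": { "actualName": "east" }
     }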

  • Added a new feature to automatically fade out messages after a delay. The default is 60 seconds. Set msgFadeOutTimer to 0 (zero) to disable or to a positive integer to specify the delay in seconds.

     "msgFadeOutTimer":60 
  • Improved the look & feel of the overall layout, including display widths, the location of the timestamp marker and spacing.

  • Fixed a bug where the controls to open and close a cluster were STILL not working.

  • Fixed a bug where the datasource status details hover was not displaying properly

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.4. Tungsten Dashboard 1.0.6 GA (3 September 2019)

Version End of Life. 3 September 2020

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.6 is a bugfix and minor feature release.

  • Fixed a bug where the controls to open and close a cluster were not working.

  • When Auto-refresh is turned on, any issuance of a command will stop the auto-refresh. Simply re-select your desired refresh rate to turn it back on.

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.5. Tungsten Dashboard 1.0.5 GA (28 June 2019)

Version End of Life. 28 June 2020

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.5 is a bugfix release.

  • Fixed CMM cluster bug where clusters other than the first do not show subservices.

  • Tweaked cell alignment

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.6. Tungsten Dashboard 1.0.4 GA (11 April 2019)

Version End of Life. 11 April 2020

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.4 is a bugfix release.

  • Fixed cluster-level open/close regression.

  • Tweaked error text and reduced noise in the logs.

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.7. Tungsten Dashboard 1.0.3 GA (22 March 2019)

Version End of Life. 22 March 2020

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.3 is a feature release for better global controls and customization.

The default for navButtonFormat is icon if not specified.

  • Added modal "Stop Auto-Refresh" button which will turn off the Auto-refresh feature. This button is only visible if auto-refresh is enabled.

  • Added ability to set global buttons to icon, text or some combination. Use the setting navButtonFormat and specify one or more of icon or text as a comma-separated string, no spaces. Order counts.

    $jsonConfig = <<<EOJ
    {
      "settings": {
        "navButtonFormat": "icon",
    ...
    EOJ;

    Currently there are four possible entries:

    "navButtonFormat":"icon",
    "navButtonFormat":"text",
    "navButtonFormat":"icon,text",
    "navButtonFormat":"text,icon",

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.8. Tungsten Dashboard 1.0.2 GA (20 September 2018)

Version End of Life. 20 September 2019

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.2 is a bugfix release with improved API error handling.

  • Refactored API calls for better error handling.

  • Better error reporting on the front-end.

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.9. Tungsten Dashboard 1.0.1 GA (17 September 2018)

Version End of Life. 17 September 2019

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

Tungsten Dashboard v1.0.1 is a bugfix release that also contains a few improvements.

  • Support for Composite Active/Active topology offered in Continuent Clustering v6.x (requires Continuent Clustering version 6.0.3)

  • Improvements to the menu system layout and clarity

  • Composite-level cluster commands have been relocated to a new menu to the right of the State field

  • Composite clusters now display the actual composite state instead of the Ready/Warning/Error status indicators, and status indicator lights have been moved to the left of the State label

  • Improvements to the locking system:

    • Auto-Lock and Auto-Unlock are now both configurable via config.php

    • Auto-Lock and Auto-Unlock settings are now both visible at the bottom of the cluster-level locking menu

    • Auto-Lock may be configured to attempt a lock for all actions, heartbeats only, or not at all

    • Auto-Unlock may be configured to attempt an unlock for all actions, heartbeats only, or not at all

  • Additional formatting tweaks, including the reduction in height of the rows
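
As a rough sketch of how the Auto-Lock/Auto-Unlock configuration described above might look in config.php, using the same $jsonConfig heredoc style shown in the navButtonFormat example elsewhere in these notes. The key names and values below are hypothetical illustrations only (the actual setting names are not listed in these release notes); the three values mirror the three documented modes: all actions, heartbeats only, or not at all.

```php
<?php
// Hypothetical sketch: key names are illustrative, not confirmed settings.
// Consult the Tungsten Dashboard documentation for the real config.php keys.
$jsonConfig = <<<EOJ
{
  "settings": {
    "autoLock": "heartbeat",
    "autoUnlock": "all"
  }
}
EOJ;
```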

Tungsten Dashboard is compatible with both the Tungsten Clustering 5.3.x series and 6.x series.

3.10. Tungsten Dashboard 1.0.0 GA (10 May 2018)

Version End of Life. 10 May 2019

Tungsten Dashboard provides a web-based UI for monitoring and managing Tungsten Clustering deployments.

It supports the following features:

  • Full monitoring information on the status and progress of replication and the status of the cluster

  • Monitor multiple clusters through a single page

  • Perform switches and failovers

  • Shun hosts

  • Recover failed hosts

Tungsten Dashboard is compatible with the Tungsten Clustering 5.3.x series.