1.16. Tungsten Clustering 6.0.4 GA (11 December 2018)

Version End of Life. 31 July 2020

Release 6.0.4 is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • When installing from an RPM, the installation would automatically restart the connector during the installation. This behavior can now be controlled by setting the parameter no-connectors within the ini configuration. This will prevent tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) from restarting the connectors during the automated update processing.

      Issues: CT-792
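
      As a sketch, assuming the option maps to the usual tpm INI form (leading dashes dropped, placed in the [defaults] section of an otherwise complete configuration):

      ```ini
      [defaults]
      # Prevent tpm from restarting the connectors during
      # automated (RPM-driven) update processing.
      no-connectors=true
      ```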

  • Tungsten Manager

    • Cross-site replicators within a Composite Active/Active deployment can now be configured to point to Replicas by default, and to prefer Replicas over Primaries during operation. In a standard deployment, cross-site replicators read the remote information from the Primary at each cluster site. To configure the service to use Replicas in preference to Primaries, use the --policy-relay-from-slave=true (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option to tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]). Both Primaries and Replicas remain in the list of possible hosts; if no Replicas are available during a switch or failover event, a Primary will be used.

      Issues: CT-776, CT-783
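
      For example, assuming the option follows the usual tpm INI convention (leading dashes dropped), the INI equivalent would be:

      ```ini
      [defaults]
      # Prefer Replicas over Primaries as the source for cross-site
      # replication; Primaries remain available as a fallback if no
      # Replica can be reached during a switch or failover.
      policy-relay-from-slave=true
      ```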

Bug Fixes

  • Installation and Deployment

    • When performing a tpm update (in [Tungsten Clustering (for MySQL) 6.0 Manual]) in a cluster with an active witness, the host running the witness would not be restarted correctly, resulting in the witness being down on that host.

      Issues: CT-596

    • When using tpm diag (in [Tungsten Clustering (for MySQL) 6.0 Manual]), the command would fail to parse net-ssh options.

      Issues: CT-775

    • The Net::SSH internal options have been updated to reflect changes in the latest Net::SSH release.

      Issues: CT-781

    • When a site goes offline, connections to that site are forcibly closed. Those connections will reconnect but, as long as the site remains offline, they will be connected to the remote site.

      You can now enable an option so that when the site comes back online, the connector will disconnect all the connections that could not reach their preferred site, so that they then reconnect to the expected site with the appropriate affinity.

      When not enabled, connections will continue to use the server originally selected until they disconnect through normal attrition. This is the default behavior.

      Note that this only applies to bridge mode. In proxy mode, the relevance of the connected data source is re-evaluated before every transaction.

      To enable this option, use the tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option --connector-reset-when-affinity-back=true (in [Tungsten Clustering (for MySQL) 6.0 Manual]).

      Issues: CT-789
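
      A minimal sketch of the INI form, assuming the option maps to the usual tpm INI convention (leading dashes dropped):

      ```ini
      [defaults]
      # When a site comes back online, force connections that fell
      # back to a remote site to disconnect, so they reconnect with
      # their configured affinity (bridge mode only; off by default).
      connector-reset-when-affinity-back=true
      ```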

  • Command-line Tools

    • In a Composite Active/Active deployment, once a datasource had been welcomed to the cluster, the individual clusters within the composite could disagree on the overall state of the composite and of the individual clusters.

      Issues: CT-721

    • Tab completion within cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) would not work in all cases, especially when the -multi (in [Tungsten Clustering (for MySQL) 6.0 Manual]) option was in effect.

      Issues: CT-752

    • The check_tungsten_progress (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command could fail within Composite Active/Active deployments because there is no single default service.

      Issues: CT-757

    • Long service names within cctrl (in [Tungsten Clustering (for MySQL) 6.0 Manual]) could cause output to fail when displaying information. The underlying issue has been fixed. Because long service names can cause formatting issues, a new option, --cctrl-column-width (in [Tungsten Clustering (for MySQL) 6.0 Manual]) has been added which can be used to configure the minimum column width used to display information.

      Issues: CT-773, CT-926
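
      For example, assuming the option follows the usual tpm INI convention (leading dashes dropped) and using a hypothetical width value:

      ```ini
      [defaults]
      # Minimum column width used by cctrl when formatting output;
      # increase this when long service names break the layout.
      cctrl-column-width=25
      ```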

    • During the lifetime of the cluster, switches may happen, and the current Primary may well be a different node than the one reflected in the master= line of the static INI file. Normally, this difference is ignored during an update or an upgrade.

      However, if a customer has some kind of procedure (for example, automation) which hand-edits the master= line of the INI configuration file, and such hand-edits do not reflect the current reality at the time of the update/upgrade, the update/upgrade will fail and the cluster may be left in an indeterminate state.

      The best practice is to NOT change the master= line in the INI configuration file after installation.

      The tpm check CurrentTopologyCheck has been changed from WARN to ERROR to prevent changed master= lines in INI files from breaking updates and upgrades.

      Even with this fix, there is still a window of opportunity for failure: the update will continue, passing the CurrentTopologyCheck test, and potentially leave the cluster in an indeterminate state if the master= option is set to a hostname that is neither the current Primary nor the current host.

      Issues: CT-801
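
      To illustrate, a minimal service section in the INI file might look like the following (hostnames and service name are placeholders); the master= line should be left as written at installation time, even after a switch has promoted a different node:

      ```ini
      [alpha]
      # Recorded at installation time; do NOT hand-edit after a switch.
      master=host1
      members=host1,host2,host3
      ```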

  • Tungsten Connector

    • The Connector has been modified to get the driver and JDBC URL of the datasource from the Connector-specific configuration, overriding the information normally distributed to it by the manager. This prevents the Connector from using incorrect settings or empty values.

      Issues: CT-802

  • Tungsten Manager

    • Datasources could fail to be fenced correctly when a replicator fails.

      Issues: CT-424

    • Standby datasources would not be displayed within cctrl correctly.

      Issues: CT-749

    • The tungsten_prep_upgrade (in [Tungsten Clustering (for MySQL) 6.0 Manual]) command could fail if there were certain special characters within the tpm (in [Tungsten Clustering (for MySQL) 6.0 Manual]) options.

      Issues: CT-750

    • Changed the Manager logic so that the rules will not change the state of a Replicator in the OFFLINE:RESTORING state.

      Issues: CT-798

Tungsten Clustering 6.0.4 includes the following changes made in Tungsten Replicator 6.0.4

Release 6.0.4 is a bugfix release.

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command previously required the -service (in [Tungsten Replicator 6.0 Manual]) option to be the first option on the command-line. The option can now be placed in any position on the command-line.

      Issues: CT-758
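
      For example, both of the following invocations are now equivalent (the service name alpha is a placeholder):

      ```shell
      # Previously the -service option had to come first:
      trepctl -service alpha status

      # The option can now appear in any position:
      trepctl status -service alpha
      ```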

    • If no service was specified when using trepctl (in [Tungsten Replicator 6.0 Manual]) and multiple services were configured, an error would be reported, but no list of potential services would be provided. This has been updated so that trepctl (in [Tungsten Replicator 6.0 Manual]) will output the list of available services and potential commands.

      Issues: CT-759

Bug Fixes

  • Installation and Deployment

    • When using tpm diag (in [Tungsten Replicator 6.0 Manual]), the command would fail to parse net-ssh options.

      Issues: CT-775

    • The Net::SSH internal options have been updated to reflect changes in the latest Net::SSH release.

      Issues: CT-781

  • Heterogeneous Replication

    • Within the Oracle to MySQL ddlscan (in [Tungsten Replicator 6.0 Manual]) templates, the TIMESTAMP datatype in Oracle has been updated to replicate into a DATETIME within MySQL.

      Issues: CT-785

  • Core Replicator

    • The state machine has been changed so that RESTORING is a substate of OFFLINE rather than of OFFLINE:NORMAL. While a transition from OFFLINE:NORMAL:RESTORING to ONLINE was possible (which was wrong), it is not possible to transition from OFFLINE:RESTORING to ONLINE.

      The proper sequence of events is: OFFLINE:NORMAL --restore--> OFFLINE:RESTORING --restore_complete--> OFFLINE:NORMAL

      Issues: CT-797

    • Heartbeats would be inserted into the replication flow using UTC, even if the replicator had been configured to use a different timezone.

      Issues: CT-803