B.2. Continuent Tungsten 2.0.4 GA (9 Sep 2014)

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically fixes for the stalls and excessive memory usage that could cause manager failures.

We recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent is aware of memory leaks within Java 6 that can lead to excessive memory usage within the manager, causing it to run out of memory and restart (without affecting the operation of the dataservice). These problems do not exist within Java 7.
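
To confirm which Java version is in use on each host, check the JVM from the command line:

  shell> java -version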

Improvements, new features and functionality

  • Tungsten Manager

    • Tungsten Manager: Improved monitoring fault-tolerance

      Under normal operating conditions, the Tungsten Manager on each DB server host monitors the local Tungsten Replicator and the database server running on that host, and relays the collected monitoring information to the other Tungsten Managers in the cluster. In previous releases, Continuent Tungsten was able to continue monitoring database servers even if the manager on a given DB server node was not running.

      With this release, this functionality has been generalized to cover both database servers and Tungsten Replicators: whenever a Tungsten Manager is not running on a given DB server host, the remaining Tungsten Managers in the cluster take over the monitoring of both the database server and the Tungsten Replicator on that host until the manager resumes operation. This takes place automatically and requires no special configuration or intervention from an administrator.

      The new functionality means that if you have configured Tungsten to fence replication failures and stops, and you stop all Tungsten services on a given node, the rest of the cluster will respond by fencing the associated data source to an OFFLINE or FAILED state.

      Full recovery of a failed node requires that a Tungsten Manager be running on the node.

    • Tungsten Connector/Tungsten Manager: Full support for 'relative latency'

      Support for the use and display of relative latency (relativeLatency) has been expanded and improved. By default, the cluster uses absolute latency; relative latency must be enabled explicitly during configuration.

      When relative latency is used, the latency shown is the difference between the last commit time and the current time. This value will increase even during long-running transactions or when a replicator has stalled. To enable relative latency, use the --use-relative-latency=true option to tpm during configuration, as shown in the example below.
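
      For example, on an existing installation the option can be applied through tpm and deployed with an update; the dataservice name alpha below is illustrative:

        shell> ./tools/tpm configure alpha --use-relative-latency=true
        shell> ./tools/tpm update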

      When relative latency is enabled, the following changes apply to the operation of Continuent Tungsten:

      • The output of SHOW SLAVE STATUS has been updated so that the Seconds_Behind_Master value reports the relative latency.

      • cctrl will output a new field, relative, showing the relative latency value.

      • When the maxAppliedLatency option is used in the connection string, the Tungsten Connector will use the relative latency value to determine whether to route a connection to a master or a slave (see the sketch after this list).
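
      The exact connection-string syntax is deployment-specific; as an illustrative sketch only, with an assumed host, user, and schema name, a client could request routing only to a data source no more than five seconds behind:

        shell> mysql -h connector1 -P 3306 -u app_user -p test@maxAppliedLatency=5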

      For more information, see Section 5.3.1, “Latency or Relative Latency Display”.

    • Tungsten Manager: Automated Data Source Fencing Due to Replication Faults

      Continuent Tungsten can now be configured to isolate (fence) data sources for which replication has stopped or raised an error condition.

      Issues: TUC-2240

      For more information, see Section 5.13, “Replicator Fencing”.
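
      As a minimal sketch, fencing on replication faults is enabled through manager policy properties set via tpm; the property names and dataservice name below are assumptions that should be verified against Section 5.13:

        shell> ./tools/tpm configure alpha \
            --property=policy.fence.slaveReplicator=true \
            --property=policy.fence.masterReplicator=true
        shell> ./tools/tpm update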

Bug Fixes

  • Installation and Deployment

    • The tpm command has been updated to support the new fencing mechanisms.

      Issues: TUC-2245

    • During an upgrade procedure, the process would mistake active witnesses for passive ones.

      Issues: TUC-2280

    • During an update using tpm, the replicator could end up in the OFFLINE state.

      Issues: TUC-2282

    • When performing an update, particularly in environments such as Multi-Site, Multi-Master deployments, the tpm command could fail to update the cluster correctly. This could leave the cluster in a diminished state, or fail to upgrade all of the components. The tpm command has been updated as follows:

      • tpm will no longer attempt to upgrade a Tungsten Replicator™ with a Continuent Tungsten™ distribution, and vice versa.

      • When installing Tungsten Replicator™ with the $CONTINUENT_PROFILES variable set, tpm will now fail, warning that the $REPLICATOR_PROFILES variable should be set instead (see the sketch after this list).

      Issues: TUC-2288, TUC-2292
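
      For example, before installing Tungsten Replicator™ from a staging directory, set the replicator-specific variable; the path below is illustrative:

        shell> unset CONTINUENT_PROFILES
        shell> export REPLICATOR_PROFILES=/opt/replicator/software/conf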

  • Tungsten Connector

    • When changing connector properties and reloading the configuration, the updated values would not be applied.

    • When using mysqldump with the --flush-logs option, the connector would fail with an Unsupported command error.

      Issues: TUC-2209
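
      A command of the following form, which would previously trigger the error when run through the connector, illustrates the case; host and credentials are placeholders:

        shell> mysqldump --flush-logs --all-databases -h connector1 -P 3306 -u app_user -p > backup.sql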

    • When the showRelativeSlaveStatus=true option was specified, the connector would not use relative latency when checking latency for read/write splitting; the appliedLatency figure would be used instead.

      Issues: TUC-2243

    • The connection.close.idle.timeout setting would not be taken into account when the connector was running in bridge mode.

      Issues: TUC-2255
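
      As a sketch, the setting can be applied through tpm as a property; the value and dataservice name below are illustrative:

        shell> ./tools/tpm configure alpha --property=connection.close.idle.timeout=300
        shell> ./tools/tpm update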

    • When the connector was running in bridge mode and a connection was killed, the associated connections would not be correctly closed.

      Issues: TUC-2261

    • The Connector SmartScale would fail to round-robin through slaves when there was no discernible load on the cluster to provide load performance metrics.

      Issues: TUC-2272

    • SmartScale would wrongly load balance connections to a slave even during a switch operation.

      Issues: TUC-2273

    • The connector would update the high water setting before and after a write connection was used, generating additional query overhead for every connection.

      Issues: TUC-2277

    • When using SmartScale, automatic sessions could be unnecessarily closed upon disconnection, causing slaves to miss valid queries.

      Issues: TUC-2286

  • Tungsten Manager

    • The checker.tungstenreplicator.properties and checker.mysqlserver.properties files would fail to be created correctly on active witnesses.

      Issues: TUC-2250, TUC-2251

    • The manager would fail to show the correct status for the replicator when getting status information by proxy.

      Issues: TUC-2254

    • Under some conditions, the manager would shut down the router gateway due to an invalid membership alarm but would not restart the connector. This would cause all new connections to hang indefinitely.

      Issues: TUC-2278

    • When performing a reset of the replicator service, recovery of the failed service would fail.

      Issues: TUC-2290

  • Other Issues

    • The check_tungsten.sh script could fail to locate the tungsten.cfg file or to read the correct values from it.

      Issues: TUC-2263