B.2. Continuent Tungsten 2.0.4 GA (9 Sep 2014)
This is a recommended release for all customers as it contains
important updates and improvements to the stability of the
manager component, specifically with respect to stalls and
memory usage that would cause manager failures.
We recommend Java 7 for all Continuent Tungsten 2.0
installations. Continuent is aware of issues within Java 6 that
cause memory leaks, which may lead to excessive memory usage
within the manager. This can cause the manager to run out of
memory and restart, without affecting the operation of the
dataservice. These problems do not exist within Java 7.
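To confirm which Java version is in use on each host, run the
standard version check (the output format varies by JVM vendor
and update level):

shell> java -version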
Improvements, new features and functionality
Tungsten Connector/Tungsten Manager: Full
support for 'relative latency'
Support for the use and display of
relativeLatency has been expanded and
improved. By default, the cluster is configured to use absolute
latency.
When relative latency is used, the difference between the last
commit time and the current time is displayed. This will show an
increasing latency even on long-running transactions, or in the
event of a stalled replicator. To enable relative latency, use
the --use-relative-latency=true option to tpm during configuration.
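For example, the setting can be applied with tpm during an update
(a minimal sketch; the service name alpha is illustrative):

shell> tpm update alpha --use-relative-latency=true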
The following changes to the operation of Continuent Tungsten have
been added to this release when the use of relative latency is
enabled:
The output of SHOW
SLAVE STATUS has been updated to show the
relative latency value.
cctrl will output a new field showing
the relative latency value.
The Tungsten Connector will use the value when the
maxAppliedLatency option is used in
the connection string to determine whether to route a
connection to a master or a slave, as shown in the example below.
For more information, see Section 5.3.1, “Latency or Relative Latency Display”.
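For example, a read connection can be limited to slaves within a
given latency (a sketch; the host, credentials, and the
qos/maxAppliedLatency suffix appended to the database name are
illustrative and should be verified against the connector
documentation):

shell> mysql -h connector1 -u app_user -p -D 'test@qos=RO_RELAXED&maxAppliedLatency=5'

With relative latency enabled, the connector compares the
5-second threshold against the relative latency value rather than
the absolute applied latency.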
Tungsten Manager: Improved monitoring
Under normal operating conditions, the Tungsten Manager on each DB
server host will monitor the local Tungsten Replicator and the
database server running on that host and relay the monitoring
information thus collected to the other Tungsten Managers in the
cluster. In previous releases, Continuent Tungsten was able to
continue monitoring database servers even if the manager on a
given DB server node was not running.
With this release, this functionality has been generalized to
cover the monitoring of both database servers and Tungsten
Replicators: any time a Tungsten Manager is not running on a
given DB server host, the remaining Tungsten Managers in the
cluster will take over the monitoring activities for both the
database server and the Tungsten Replicator on that host until
the manager resumes operation. This takes place automatically and
does not require any special configuration or intervention from
an administrator.
The new functionality means that if you have configured Tungsten
to fence replication failures and stops, and you stop all
Tungsten services on a given node, the rest of the cluster will
respond by fencing the associated data source, isolating it from
the cluster.
running on the node.
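To bring such a node back into the cluster, restart the Tungsten
services on the host and then recover the data source from cctrl
(a minimal sketch; the host name host1 and service name alpha are
illustrative):

shell> startall
shell> cctrl
[LOGICAL] /alpha > datasource host1 recover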
Tungsten Manager: Automated Data Source
Fencing Due to Replication Faults
Continuent Tungsten can now be configured to
effectively isolate data sources for which replication has
stopped or exhibits an error condition. For more information, see
Section 5.13, “Replicator Fencing”.
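As a sketch of such a configuration, fencing policies can be set
through tpm properties (the property names
policy.fence.slaveReplicator and policy.fence.masterReplicator
and the service name alpha are assumptions; verify them against
Section 5.13 before use):

shell> tpm update alpha \
    --property=policy.fence.slaveReplicator=true \
    --property=policy.fence.masterReplicator=true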