This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.
We recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent are aware of issues within Java 6 that cause memory leaks which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.
The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:
Within composite clusters, TCP/IP port 7 connectivity is now required between managers on each site to confirm availability.
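The cross-site connectivity requirement above can be verified ahead of installation with a simple TCP reachability probe. This is an illustrative helper, not part of the product tooling; the function name and defaults are assumptions:

```python
import socket

def manager_port_check(host, port=7, timeout=5.0):
    """Return True if a TCP connection to host:port can be opened
    within the timeout; False if the connection is refused or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run against each remote manager host from each site; a False result indicates a firewall or routing rule is blocking port 7.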
The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.
The default behavior of the manager is not to fence a datasource whose replicator has stopped or gone into an error state. This was implemented to prevent reducing the overall availability of the deployed service. There are, however, cases and deployments where clusters should not operate with replicators in stopped or error states. This behavior can be configured by changing the following properties to true, according to the master or slave role requirements:

policy.fence.slaveReplicator=false
policy.fence.masterReplicator=false
If they are set to true, the manager should fence the datasource by setting it to a 'failed' state. When this happens, and the datasource is a master, failover will occur. If the datasource is a slave, the datasource will just stay in the failed state indefinitely or until the replicator is back in the online state, in which case the datasource will be recovered to online.
At present, the settings of these properties are not honored.
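Based on the description above, fencing for both roles would be enabled by setting both properties to true (the file in which these properties are set varies by installation; the fragment below is illustrative only):

```
policy.fence.slaveReplicator=true
policy.fence.masterReplicator=true
```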
The default buffer sizes for the Section 6.4, “Using Bridge Mode” have been updated to 262144 (256KB).
To ensure that the correct number of the managers and witnesses are configured within the system, tpm has been updated to check and identify potential issues with the configuration. The installation and checks operate as follows:
The number of members is calculated as follows:
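The exact calculation tpm performs is not reproduced here. As an illustration only, a common quorum convention (an assumption, not a statement of tpm's behavior) is that the voting membership is the sum of managers and active witnesses, and that an odd total avoids tied votes:

```python
def quorum_members(managers, active_witnesses):
    """Total voting members in the cluster, and whether the total is odd.

    NOTE: the odd-count rule is an assumed convention for illustration;
    it is not the documented tpm calculation."""
    members = managers + active_witnesses
    return members, members % 2 == 1
```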
When initially starting up, the connector would open a connection to the configured master to retrieve configuration information, but the connection would never be closed, leading to open, unused connections.
The cluster status output by the tungsten cluster status command within a multi-site cluster would fail to display the correct states of the different data sources when an entire data service was offline.
When the connector had been configured in read-only mode, the connector would mistakenly route statements starting with set autocommit=0 to the master, instead of routing them to a slave.
When operating in bridge mode, the connector would retain the client connection when the server had closed the connection. The connector has been updated to close all client connections when the corresponding server connection is closed.
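The corrected close-propagation behavior can be sketched as a one-directional pump that mirrors a close from one side of the bridge to the other. This is a minimal illustration of the principle, not the connector's actual implementation:

```python
import socket

def bridge_pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst; when src's peer closes its connection,
    close dst as well so the other party is not left holding a stale,
    half-open connection."""
    while True:
        data = src.recv(4096)
        if not data:      # src peer closed its side of the connection
            dst.close()   # propagate the close to the other side
            src.close()
            return
        dst.sendall(data)
```

In a full bridge, one such pump runs in each direction, so a close initiated by either the client or the server is propagated to the opposite side.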
The manager could enter a situation where, after switching a relay on one physical service, the relay on a remote site would be incorrectly reconfigured to point at the new relay. This has been corrected so that reconfiguration no longer occurs in this situation.
For more information, see Section 5.3.3, “Understanding Datasource Roles”.
For more information, see Section 3.2.6, “Resetting a single dataservice”.