Version End of Life. 31 July 2020
Release 6.0.3 is a bugfix release.
tpm now outputs a note recommending that you perform backups of your cluster once installation has completed.
A new Nagios-compatible check script, check_tungsten_policy, has been added to the release, which returns the currently active policy mode.
For more information, see The check_tungsten_policy Command.
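As a sketch of how such a check might feed a monitoring system, the snippet below follows the usual Nagios exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL). The policy names and message strings here are illustrative assumptions, not the documented output of check_tungsten_policy:

```shell
# Illustrative Nagios-style policy reporting; a stand-in, not the
# actual check_tungsten_policy source.
report_policy() {
  case "$1" in
    AUTOMATIC)   echo "OK: policy is AUTOMATIC"; return 0 ;;
    MANUAL)      echo "WARNING: policy is MANUAL"; return 1 ;;
    MAINTENANCE) echo "WARNING: policy is MAINTENANCE"; return 1 ;;
    *)           echo "CRITICAL: unknown policy '$1'"; return 2 ;;
  esac
}

report_policy AUTOMATIC
```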
The connector has been updated so that dataservice selection can be configured to be deterministic and ordered rather than random. The updated configuration enables the connector to use an ordered list of clusters within a composite solution.
To set the order in which services are selected during operation, the
information must be set within
user.map. The configuration is based on an
ordered, comma-separated list of services, which are then
selected in order. The specification operates on the following rules:
List of service names in order
If a service name has a dash prefix, it is always explicitly excluded from the list of available datasources
If a datasource is not specified, it is always picked last
For example, in a setup made of three data services,
usa, asia and europe, using an affinity that lists
usa first, then asia, and negates europe:
select data sources in data service
usa; if usa is not available, select data sources in
asia; if asia is not available, then the
connection will not succeed, since
europe has been negated.
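An entry in user.map using such an ordered list might look like the following. The user name, password, and composite service name are hypothetical, and the exact field layout is an assumption; the ordered, comma-separated affinity list uses a dash prefix to exclude a service:

```
# username  password  service  ordered affinity list
app_user    secret    global   usa,asia,-europe
```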
tpm would fail during installation if the current directory was not writable by the current user.
Issues: CT-682, CT-695
tpm would fail to set properties within the defaults section of the configuration within Composite Active/Active clusters.
Performing a switch operation within a Composite Active/Active cluster with three or more clusters, when the cluster was in maintenance mode and the cross-site replicators were offline, would lead to an unrecoverable cluster failure.
Issues: CT-673, CT-691
During a relay failover within a Composite Active/Passive or Composite Active/Active deployment, if communications had also failed between sites when the failover occurred, the manager would be unable to determine the correct Primary of the remote site.
In a deployment, whether a single cluster or Composite Active/Active, where there is either the potential for high latency across sites, or high latency within a site due to high loads on the connectors, the manager could misidentify this high latency as a failure. This would trigger a quorum validation. These checks would be reported as network hangs, even though the result of the quorum check would be valid.
To address this, the processing of router notifications by the connector has been separated from all other operations. This reduces the chance of a heartbeat gap between hosts, and therefore the connectors remain available to the managers even under high load or latency.
The output from thl list now includes the name of the file for the corresponding THL event. For example:
SEQ# = 0 / FRAG# = 0 (last frag)
- FILE = thl.data.0000000001
- TIME = 2018-08-29 12:40:57.0
- EPOCH# = 0
- EVENTID = mysql-bin.000050:0000000000000508;-1
- SOURCEID = demo-c11
- METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]
Issues: CT-700, CT-970
During installation, tpm attempts to find the system commands, such as service and systemctl, used to start and stop databases. If these were not in the
PATH, tpm would fail to find a start/stop command for the configured database. In addition to looking for these tools in the
PATH, tpm now also explicitly looks in the standard system directories.
When running tpm diag, the operation would fail if the
/etc/mysql directory did not exist.
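The shape of this fix can be sketched as a simple existence guard before collecting a directory. The helper function and its messages below are illustrative, not the actual tpm source:

```shell
# Hypothetical helper: skip a diagnostic source instead of failing
# when the directory is absent.
collect_dir() {
  if [ -d "$1" ]; then
    echo "collecting $1"
  else
    echo "skipping $1 (not present)"
  fi
}

collect_dir /etc/mysql
```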
Because the operation could take a long time or time out, the capture of the output from lsof has been removed from tpm diag.
Issues: CT-10, CT-652
When replicating data that included timestamps, the replicator would update the timestamp value to the time within the commit from the incoming THL. When using statement-based replication, times would be correctly replicated, but when using a mixture of statement- and row-based replication, the timestamp value would not be reset to the default time when switching between statement- and row-based events. This would not cause problems on the applied host, except when
log_slave_updates was enabled. In that case, all row-based events after a statement-based event would have the same timestamp value applied.