Version End of Life: 31 July 2020
This is a bugfix release.
Improvements, new features and functionality
tpm now outputs a note recommending that backups of your cluster be performed once installation has completed.
Issues: CT-730
Bug Fixes
tpm would fail during installation if the current directory was not writable by the current user.
Issues: CT-564
When performing a tpm update in a cluster with an active witness, the host running the witness would not be restarted correctly, resulting in the witness being down on that host.
Issues: CT-596
Using tpm diag, the command would ignore options on the command line, including --net-ssh-option.
Issues: CT-610
Using tpm connector at the command line would fail if the core MySQL configuration file (i.e. /etc/my.cnf) did not exist.
Issues: CT-641
The connector would fail to set reusable network addresses during configuration, which could delay or slow startup until the address/port became available again (see the sketch below).
Issues: CT-694
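This behaviour likely corresponds to the SO_REUSEADDR socket option. As a minimal, hypothetical Java sketch (not the connector's actual code; the port is illustrative), enabling it on a listening socket looks like this:

import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ReusableListener {
    public static void main(String[] args) throws Exception {
        ServerSocket listener = new ServerSocket();
        // SO_REUSEADDR must be enabled before bind(); without it, a
        // restart can fail with "Address already in use" until the old
        // socket leaves the TIME_WAIT state.
        listener.setReuseAddress(true);
        listener.bind(new InetSocketAddress(3306)); // port is illustrative
        System.out.println("Listening on " + listener.getLocalSocketAddress());
        listener.close();
    }
}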
When operating in bridge mode, the connector would fail to check whether the driver was enabled or disabled, which could cause upgrades to fail as part of a graceful shutdown/update operation.
Issues: CT-696
Multiple connectors within a cluster could all connect to the same manager within a given service, increasing the load on that manager (see the sketch below).
Issues: CT-717
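The release note does not describe the fix itself, but a common way to avoid this kind of pile-up is to choose a manager at random rather than always taking the first in the list. A minimal, hypothetical Java sketch (the class and host names are invented for illustration):

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class ManagerPicker {
    // Pick a manager at random so connections spread across all the
    // managers in the service rather than piling onto the first entry.
    static String pickManager(List<String> managers) {
        return managers.get(ThreadLocalRandom.current().nextInt(managers.size()));
    }

    public static void main(String[] args) {
        List<String> managers = List.of("host1", "host2", "host3");
        System.out.println("Connecting to " + pickManager(managers));
    }
}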
When using the connector, the connector --cluster-status --json command would include header and footer information around the JSON output rather than outputting bare JSON, which would cause JSON parsing to fail (illustrated below).
Issues: CT-685
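To illustrate why the surrounding text breaks consumers of this output, here is a hypothetical Java sketch; the wrapped string below is invented for illustration, not captured from the real command:

public class JsonWrapDemo {
    public static void main(String[] args) {
        // Hypothetical output shape: informational header/footer text
        // surrounding the JSON body.
        String wrapped = "Executing cluster-status...\n"
                + "{\"connectors\": 3, \"managers\": 3}\n"
                + "Command complete.";

        // A strict JSON parser rejects the whole string because of the
        // extra text; one workaround is to cut out the JSON body.
        int start = wrapped.indexOf('{');
        int end = wrapped.lastIndexOf('}');
        String json = wrapped.substring(start, end + 1);
        System.out.println(json); // {"connectors": 3, "managers": 3}
    }
}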
A memory leak within the manager could cause the Java VM to consume steadily more CPU cycles and eventually restart.
Issues: CT-673, CT-691
During a relay failover within a Composite Active/Passive deployment, if communication between sites had also failed when the failover occurred, the manager would be unable to determine the correct Primary of the remote site.
Issues: CT-703
A memory leak was identified in the router manager component that manages the communication between the manager and the connector.
Issues: CT-715
In a deployment, whether a single cluster or Multi-Site/Active-Active, where there is either the potential for high latency across sites or high latency within a site due to heavy load on the connectors, the manager could misidentify this latency as a failure and trigger a quorum validation. These events would be reported as network hangs, even though the result of the quorum check was valid.
To address this, the processing of router notifications from the connector has been separated from all other operations. This reduces the chance of a heartbeat gap between hosts, so the connectors remain available to the managers even under high load or latency, as sketched below.
Issues: CT-725
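A hedged sketch of this kind of separation, with invented class names rather than the manager's actual internals: heartbeat-sensitive notification handling gets its own dedicated thread so that slow work elsewhere cannot delay it.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NotificationPipeline {
    // Router notifications run on their own single-threaded executor so
    // heartbeat handling is never queued behind slow general work.
    private final ExecutorService notificationExec =
            Executors.newSingleThreadExecutor();
    private final ExecutorService generalExec =
            Executors.newFixedThreadPool(4);

    void onRouterNotification(Runnable handler) {
        notificationExec.submit(handler);
    }

    void onOtherWork(Runnable task) {
        // General work may block or run slowly without creating a
        // heartbeat gap between hosts.
        generalExec.submit(task);
    }

    public static void main(String[] args) {
        NotificationPipeline p = new NotificationPipeline();
        p.onRouterNotification(() -> System.out.println("heartbeat-sensitive"));
        p.onOtherWork(() -> System.out.println("general work"));
        p.notificationExec.shutdown();
        p.generalExec.shutdown();
    }
}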
Tungsten Clustering 5.3.3 includes the following changes made in Tungsten Replicator 5.3.3
Release 5.3.3 is a bug fix release.
Improvements, new features and functionality
The output from thl list now includes the name of the file for the corresponding THL event. For example:
SEQ# = 0 / FRAG# = 0 (last frag)
- FILE = thl.data.0000000001
- TIME = 2018-08-29 12:40:57.0
- EPOCH# = 0
- EVENTID = mysql-bin.000050:0000000000000508;-1
- SOURCEID = demo-c11
- METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]
Issues: CT-550
Bug Fixes
Using tpm diag, the command would ignore options on the command line, including --net-ssh-option.
Issues: CT-610
When running tpm diag, the operation would fail if the /etc/mysql directory did not exist.
Issues: CT-724