Version End of Life: 15 Aug 2024
Release 6.1.2 contains both significant improvements and a number of needed bugfixes.
The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:
Certified the Tungsten product suite with Java 11.
A small set of minor issues (CT-1091, CT-1076) was found and fixed as part of this certification.
The code is now compiled with the Java 11 compiler while retaining Java 8 compatibility.
Java 9 and 10 have been tested and validated, but certification and support only cover Long Term Support (LTS) releases.
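For example, to confirm which JDK release is installed on a given host before or after upgrading:
shell> java -version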
Note
Known Issue
With Java 11, command line tools are slower. There is no impact on the overall clustering or replication performance, but this can affect manual operations using CLI tools such as cctrl and trepctl.
Issues: CT-1052
Improvements, new features and functionality
A new Replicator role, thl-server, has been added.
This new feature allows Replica replicators to continue to pull previously generated THL from a Primary even when the Primary replicator has stopped extracting from the binary logs.
If used in Tungsten Clustering, this feature must only be enabled when the cluster is in MAINTENANCE mode.
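As an illustrative sketch only (the exact procedure and supported role names are described in Section 7.3, “Understanding Replicator Roles”), the role change would typically be applied with trepctl setrole while the replicator is offline:
shell> trepctl -service serviceName offline
shell> trepctl -service serviceName setrole -role thl-server
shell> trepctl -service serviceName online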
Issues: CT-58
For more information, see Section 7.3, “Understanding Replicator Roles”.
A new JavaScript filter, dropddl.js, has been added to allow selective removal of specific object DDL from THL.
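As an illustrative example (assuming the filter is enabled in the same way as other bundled JavaScript filters; the objects to drop are configured through the filter's own properties), the filter could be added to the extractor stage via tpm:
shell> tpm update serviceName --svc-extractor-filters=dropddl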
Issues: CT-1092
If you need to reposition the extractor, there are a number of ways to do this, including the use of the -from-event or -base-seqno options.
These two options are normally mutually exclusive; however, in some situations, such as when positioning against an Aurora source, you may need to issue both options together. Previously this was not possible. In this release, both options can now be supplied provided that you also include the additional -force option, for example:
shell> trepctl -service serviceName online -base-seqno 53 -from-event 000412:762897 -force
Issues: CT-1065
When the Replicator inserts a heartbeat, there is an associated timezone. Previously, the heartbeat would be inserted using the GMT timezone, which could fail during the DST switch window. The new default uses the Replicator host's timezone instead.
This default change corrects an edge case where inserting a heartbeat would fail during the DST switch window when the MySQL server is running in a different timezone than the Replicator (which previously used GMT).
For example, on 31st March 2019 the DST switch occurred at 2 AM in the Europe/Paris timezone, so local times between 2 AM and 3 AM do not exist on that day. When inserting a heartbeat in the window between 4 AM and 5 AM local time (say at 4:15 AM), the corresponding GMT time would be 2:15 AM, which is an invalid local time in the Europe/Paris timezone. The Replicator would then fail if the MySQL timezone was set to Europe/Paris, as it would try to insert an invalid timestamp.
A new option, -tz, has been added to the trepctl heartbeat command to force the use of a specific timezone.
For example, to use GMT as the timezone when inserting a heartbeat:
shell> trepctl heartbeat -tz NONE
To use the Replicator host's timezone to insert the heartbeat:
shell> trepctl heartbeat -tz HOST
To use a given timezone to insert the heartbeat:
shell> trepctl heartbeat -tz {valid timezone id}
If the MySQL server timezone is different from the host timezone (which is strongly discouraged), then -tz {valid timezone id} should be used instead, where {valid timezone id} matches the MySQL server timezone.
Issues: CT-1066
Corrected a resource leak when loading Java keystores.
Issues: CT-1091
When configuring a Kafka Applier, the Kafka port was set incorrectly.
Issues: CT-693
If a JSON field contained a single quote, the replicator would break during the apply stage whilst running the generated SQL against MySQL.
Single quotes are now properly escaped to resolve this issue.
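As a hypothetical illustration only (the table and values below are examples, not taken from the original issue), a JSON value containing a single quote, such as the following, is the kind of data that previously triggered the failure on the applier:
shell> mysql -e "CREATE TABLE test.docs (id INT PRIMARY KEY, data JSON)"
shell> mysql -e "INSERT INTO test.docs VALUES (1, '{\"note\": \"it''s quoted\"}')"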
Issues: CT-983
Under rare circumstances (network packet loss or a MySQL Server hang), the replicator would also hang until it was restarted.
This issue has been fixed by using specific network timeouts in both the replicator and the Drizzle JDBC driver connection logic.
Issues: CT-1034
When configuring Active/Active standalone replicators with the BidiSlave filter enabled, the replicator was incorrectly parsing certain DDL statements and marking them as unsafe; as a result, they were being dropped by the applier and ignored.
The full list of DDL commands fixed in this release is as follows:
CREATE|DROP TRIGGER
CREATE|DROP FUNCTION
CREATE|DROP|ALTER|RENAME USER
GRANT|REVOKE
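For reference, a sketch of this kind of configuration (assuming the filter name used elsewhere in the Tungsten documentation) enables the filter on the applier stage:
shell> tpm update serviceName --svc-applier-filters=bidiSlave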
Issues: CT-1084, CT-1117
The following warnings would appear in the replicator log because GTID events were not being handled.
WARN extractor.mysql.LogEvent Skipping unrecognized binlog event (type 33, 34 or 35)
The WARN message no longer appears; however, GTID events are still not handled in this release, but they will be fully extracted in a future release.
Issues: CT-1114