Version End of Life: Not Yet Set
Release 6.1.19 contains a number of critical bug fixes and improvements, specifically for customers using Parallel Replication.
The following changes have been made to Tungsten Cluster and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:
repl_svc_extractor_multi_frag_service_detection is now turned ON by default. Event shards are determined at extraction time; with fragmented events, the shard cannot be determined by reading the first fragment alone, so the last fragment must be checked as well. With this setting turned OFF, there is no issue for pipelines that do not need it, i.e. those with no parallel-apply downstream replicas. However, because the detection is done at extract time and recorded in the THL, adding or changing a replica that uses parallel apply could introduce issues.
The setting can be disabled if you see a performance overhead, but this should be done with caution. For Aurora<>Aurora Active/Active deployments it is essential that this property be left ON.
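If you do need to change this setting, it can be passed as a property through tpm. A sketch of what that might look like (the service name alpha is a placeholder for your own service; verify the exact invocation against your deployment before running it):

```
tpm update alpha \
  --property=repl_svc_extractor_multi_frag_service_detection=false
```

Remember that the detection result is written into the THL at extraction time, so changing the property only affects newly extracted events.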
Improvements, new features and functionality
The tpm command calls to glob have been improved to be more strict and compliant.
The tpm ask stages and tpm ask allstages commands have been added to display the Replicator stages for the current node (stages) and the stages for each role (allstages).
The tpm ask command has new variables available:
dsstate for the state of the current datasource,
trstate for the state of the current replicator, and
nodeinfo, which displays all of the new variables.
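Taken together with the new stages commands, the additions above might be used as follows (illustrative invocations only; output varies by deployment):

```
tpm ask dsstate     # state of the current datasource
tpm ask trstate     # state of the current replicator
tpm ask nodeinfo    # all of the new variables at once
tpm ask stages      # Replicator stages for the current node
tpm ask allstages   # Replicator stages for each role
```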
A new standalone status script has been added called tungsten_get_status that shows the datasources and replicators for all nodes in all services along with seqno and latency.
Added a connector mode command to print which mode the connector is running in, either "bridge" or "proxy".
The connector graceful-stop command now properly supports the systemd service manager. The connector stop command now takes an optional timeout argument that turns it into a graceful stop: run without the argument, it stops the connector immediately; given a positive number of seconds, it refuses new connections and waits at most that long for existing connections to disconnect, after which it force-closes all remaining connections and shuts down the connector. The behavior of connector graceful-stop is unchanged: without an argument, the connector waits indefinitely for connections to disconnect, and a positive timeout in seconds can be passed to sever connections after the given delay.
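The drain-with-timeout semantics described above can be sketched in Python. This is a minimal illustration of the logic, not the connector's actual implementation; the graceful_stop name and the connection dicts are invented for the example:

```python
import time

def graceful_stop(connections, timeout=None, poll=0.05):
    """Drain a list of connection dicts: new connections are refused
    (not modeled here), existing ones are given up to `timeout` seconds
    to disconnect, and any that remain are severed at the deadline.
    `timeout=None` models graceful-stop's default of waiting indefinitely."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while connections:
        # Drop connections whose clients have already disconnected.
        connections = [c for c in connections if not c["closed"]]
        if not connections:
            break
        if deadline is not None and time.monotonic() >= deadline:
            # Timeout reached: return the connections to be force-closed.
            return connections
        time.sleep(poll)
    return []
```

The key design point is that a graceful stop is not a separate code path: an immediate stop is simply a graceful stop whose deadline has already passed.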
The tpm install and tpm update commands now properly support the --thl-ports option for cross-site subservices.
Fixes an issue where the pause state would be displayed incorrectly under no load once the pause reached its defined time. Note that this is only a display issue: once an event is received, the pause state and the displayed remaining time are reset correctly.
The tpm mysql command will now gracefully handle being run on a non-database node.
Fixes an issue that would leave a transaction uncommitted longer than necessary.
Fixed an issue where the stored shard_id would show an incorrect shard after processing a skipped event (an event from another channel).
Fixed an issue where the replicator would hang after applying a DROP TABLE event that originally failed on the primary but was still logged into the binlog.
BidiRemoteSlaveFilter could fail to correctly flag fragmented events in unprivileged environments (Aurora, for example). For such environments (multi-active deployments with unprivileged database access), a new setting was introduced that forces the extraction process to read ahead to the last fragment to detect the service name (false by default). It is enabled with repl_svc_extractor_multi_frag_service_detection.