Version End of Life: 15 Aug 2024
Release 6.1.12 is a minor bug fix release.
A --no-routers CLI option has been added to the cctrl command.
Using cctrl --no-routers will suppress the collection and display of connectors when
running the ls command. This is especially useful when a large number of connectors is installed
and gathering the information from all of them could take a long time, causing unexpected timeouts.
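For example (a minimal sketch; piping commands into cctrl is assumed to behave as in earlier releases, and the output of ls will vary by cluster):

shell> echo "ls" | cctrl --no-routers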
This feature can be particularly useful for directing all writes for a specific user or application to a dedicated write endpoint, where you need to remove any chance of writes conflicting across multiple clusters.
This feature can be enabled in a number of ways: via edits to the user.map file for Proxy configurations, via calls during connection, or via the --connector-affinity tpm option if using Bridge mode.
For more details on configuring this, and examples, consult the connector-specific documentation.
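As an illustrative sketch only (the user name, password, service name, affinity value, and file paths are placeholders, and the user.map field order is assumed from standard Proxy-mode configurations):

shell> cat <<'EOF' >> /opt/continuent/tungsten/tungsten-connector/conf/user.map
# username password servicename affinity
app_user secret global east
EOF

For Bridge mode, the equivalent setting could be placed in the INI read by tpm (again, an assumed fragment):

shell> cat <<'EOF' >> /etc/tungsten/tungsten.ini
[defaults]
connector-affinity=east
EOF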
Issues: CT-1369, CT-1457
Support has now been added to allow monitoring via the New Relic Insights API, using a new script, tungsten_newrelic_event.
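The script's own options are not listed here, but the Insights insert API it targets is public; a hand-rolled equivalent might look like the following sketch (ACCOUNT_ID, INSERT_KEY, and the event payload are placeholders):

shell> curl -s -X POST \
    -H "Content-Type: application/json" \
    -H "X-Insert-Key: ${INSERT_KEY}" \
    "https://insights-collector.newrelic.com/v1/accounts/${ACCOUNT_ID}/events" \
    -d '[{"eventType":"TungstenClusterStatus","host":"db1","state":"ONLINE"}]'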
In a Composite Active/Active environment, the tungsten_post_process command now intelligently handles
cross-site filter properties in the INI, so that any defined filters are appended to the default values for
any option that resolves to a property.
Issues: CT-1372, CT-1449
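As a sketch of the kind of INI entry involved (the option and filter name below are assumptions chosen for illustration, not taken from this release):

shell> cat <<'EOF' >> /etc/tungsten/tungsten.ini
[defaults]
# tungsten_post_process now appends a filter defined like this to the
# option's default filter list instead of replacing the defaults.
svc-applier-filters=dropstatementdata
EOF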
When provisioning from a Primary node in a Composite Active/Active environment using tungsten_provision_slave, the replicator would fail to reset because an incorrect service name was used.
Fixed an issue where, in Composite Active/Active environments, tpm would ignore the affinity passed via the tpm flag.
If using tungsten_provision_slave to restore a previously failed Primary node, the provision could fail to recover the node.
This new filter should be enabled within any MySQL applier where the source is generated from a MySQL 8 database and the target is MySQL 5.7 or lower.
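The filter's name is not stated above; as a hedged sketch, enabling an applier-side filter for a specific service is typically done through the INI along these lines (the service name alpha and the <new_filter> placeholder are illustrative only):

shell> cat <<'EOF' >> /etc/tungsten/tungsten.ini
[alpha]
# Replace <new_filter> with the filter name given in the documentation.
svc-applier-filters=<new_filter>
EOF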
If SSL is enabled between replicators for THL transfer, and a source node fails, becomes unresponsive, or is slow or overloaded, the replicator on the replica can fail to go offline due to a timeout, remaining in a GOING-ONLINE:SYNCHRONIZING state.
In a clustering environment, this can prevent failovers from occurring, because the manager is waiting for the replicator, which is stuck in a timeout loop.