2.26. Tungsten Replicator 6.1.5 GA (5 Aug 2020)

Version End of Life: 15 Aug 2024

Release 6.1.5 is a small interim bug-fix release with a number of issues resolved within the Core Replicator, specifically for heterogeneous environments.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • Heterogeneous Replication

    • When using the batch applier in heterogeneous pipelines (Redshift, Vertica, Hadoop), DDL statements are removed from the incoming stream.

      There may be occasions when you intentionally want DDL to pass through, such as when a custom filter injects DDL statements into the pipeline; previously, however, the batch applier would always remove them.

      A new property is now available to control this behavior. Set property=replicator.applier.dbms.applyStatements=true to allow the batch applier to retain DDL statements. The default value of false retains the original behavior of removing DDL; see the configuration sketch below.

      Issues: CT-1270
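
      As an illustration, assuming an INI-based installation with a replication service named alpha (the file location and service name here are examples, not fixed values), the property could be added to the service stanza and the change applied with tpm update:

      # /etc/tungsten/tungsten.ini (path and service name are examples)
      [alpha]
      property=replicator.applier.dbms.applyStatements=true

      shell> tpm update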

Known Issues

The following issues are known within this release but are not considered critical and do not impact the operation of Tungsten Replicator. They will be addressed in a subsequent patch release.

  • Core Replicator

    • In MySQL release 8.0.21, the behavior of CREATE TABLE ... AS SELECT ... has changed, resulting in the transactions being logged differently in the binary log. This change in behavior will cause the replicators to fail.

      Until a fix is implemented within the replicator, the workaround is to split the action into a separate CREATE TABLE ... statement followed by an INSERT INTO ... SELECT ... FROM ... statement, as illustrated below.

      If this is not possible, then you will need to manually create the table on all nodes, and then skip the resulting error in the replicator, allowing the subsequent loading of the data to continue.

      Issues: CT-1301
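
      For example, a statement such as the following (the table and column names here are purely illustrative):

      sql> CREATE TABLE sales_summary AS SELECT region, SUM(total) AS total FROM sales GROUP BY region;

      could be split into:

      sql> CREATE TABLE sales_summary (region VARCHAR(64), total DECIMAL(12,2));
      sql> INSERT INTO sales_summary SELECT region, SUM(total) FROM sales GROUP BY region;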

Bug Fixes

  • Core Replicator

    • When replicating data that included timestamps, the replicator would update the timestamp value to the time within the commit from the incoming THL. When using statement-based replication, timestamps would be replicated correctly, but when using a mixture of statement-based and row-based replication, the timestamp value would not be set back to the default time when switching between statement-based and row-based events. This would not cause problems on the applied host, except when log_slave_updates was enabled; in this case, all row-based events after a statement-based event would have the same timestamp value applied.

      This was most commonly seen when using the standalone replicator to replicate into a cluster, either from a standalone source or a cluster source.

      Issues: CT-1255

    • For offboard extraction, the replicator would appear to be ONLINE but would not actually be processing new events.

      This was due to the relay client receiving an incomplete packet from the remote database and entering a WAITING state.

      To handle these situations, a new property has been added that sets a timeout: if the replicator does not process an event within the given timeout period, it is assumed that the link to the remote database has been lost, and the replicator is placed into an OFFLINE:ERROR state.

      Provided auto-recovery has been enabled using the auto-recovery-max-attempts parameter, the replicator will then restart and proceed successfully.

      The new property to include is property=replicator.extractor.dbms.relayLogIdleTimeout (see the configuration sketch below).

      The default value (0) disables the timeout. The value is specified in seconds, so 300 corresponds to 5 minutes.

      Setting the timeout too low on quieter systems may result in unnecessary replicator restarts. The value should be set according to the activity levels of your database: if the source is very active with constant updates, a low value would be appropriate; quieter systems with long periods of inactivity should have a timeout set to no less than the longest normal period of inactivity within your system.

      Issues: CT-1262
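
      As an illustration, a 300-second (5-minute) timeout combined with auto-recovery could be configured as follows (the service name alpha and the attempt count are examples only, not recommendations):

      # /etc/tungsten/tungsten.ini (path and service name are examples)
      [alpha]
      property=replicator.extractor.dbms.relayLogIdleTimeout=300
      auto-recovery-max-attempts=5

      shell> tpm update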

    • If filtering is in use and a space appeared on either side of the delimiter in a "schema.table" definition in your SQL, the replicator would fail to parse the statement correctly.

      For example, all of the statements below are valid SQL but would cause a failure in the replicator:

      sql> CREATE TABLE myschema. mytable (....
      
      sql> CREATE TABLE myschema .mytable (....
      
      sql> CREATE TABLE myschema . mytable (....

      Issues: CT-1278

    • Fixes a bug in the Drizzle Driver whereby a failing prepared statement exceeding 1000 characters would report a String index out of range: 999 error rather than the actual error.

      Issues: CT-1303