Tungsten Replicator 4.0.4 is a bugfix release that contains critical fixes and improvements to the Tungsten Replicator 4.0.3 release.
The following issues may affect the operation of Tungsten Replicator and should be taken into account when deploying or updating to this release.
Due to a bug in the Drizzle JDBC driver when communicating with MySQL, using the optimizeRowEvents option could lead to significant memory usage and subsequent failure. For more information on the problem and how to alleviate it, see Drizzle JDBC Issue 38.
When validating the existence of MyISAM tables within a MySQL database, tpm used an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or to legitimate system-related MyISAM tables triggering a false alert.
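The corrected behaviour can be sketched as follows. This is a minimal illustration of the identification logic, not tpm's actual code; the schema and table names are invented for the example:

```python
# A table counts as a user MyISAM table only when its engine is MyISAM
# and it lives outside the MySQL system schemas. This avoids both missed
# tables and false alerts on legitimate system MyISAM tables.
SYSTEM_SCHEMAS = {"mysql", "information_schema", "performance_schema"}

def is_user_myisam(schema, engine):
    """Return True for MyISAM tables that are not MySQL system tables."""
    if engine is None:
        return False
    return engine.lower() == "myisam" and schema.lower() not in SYSTEM_SCHEMAS

# Hypothetical table list as it might be read from information_schema.tables.
tables = [
    ("app", "orders", "MyISAM"),   # should trigger the alert
    ("app", "users", "InnoDB"),    # ignored: not MyISAM
    ("mysql", "proc", "MyISAM"),   # ignored: legitimate system table
]
flagged = [(s, t) for s, t, e in tables if is_user_myisam(s, e)]
print(flagged)  # [('app', 'orders')]
```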
When events are filtered on a master and a slave replicator reconnects to the master, it was possible to receive the error server does not have seqno expected by client. The replicator has been updated to supply the correct sequence number during reconnection.
Issues: CONT-1384, CONT-1525
CSV files generated during batch loading into data warehouses were created within a directory structure under /tmp. On long-running replicators, automated processes that clean up the /tmp directory could delete these files, causing replication to fail temporarily due to the missing directory.
The location where staging CSV files are created has now been updated; files are stored in a directory beneath the installation directory, following the same naming structure. For example, if Tungsten Replicator has been installed in /opt/continuent, CSV files for the alpha service and its first active applier channel will be stored in a corresponding subdirectory of the installation directory.
The timeout used to read information from the MySQL binary logs has been changed from a fixed period of 120 seconds to a configurable parameter, which can be set through the corresponding property during configuration with tpm.
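As an illustration, assuming the setting is exposed through tpm's generic --property mechanism, the timeout might be raised during an update as shown below. The property name is an assumption for the example, since the text above does not name it:

```shell
# Hypothetical example: raise the binlog read timeout from the old
# 120-second default to 300 seconds for the alpha service.
# The property name is assumed, not confirmed by the release note.
tpm update alpha --property=replicator.extractor.dbms.binlogReadTimeout=300
```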
The pkey filter could force table metadata to be updated when the update was not required.
When using certain filters in combination with the colnames filter, an issue could arise where differences between the incoming schema and the target schema could result in incorrect SQL statements. The solution is to configure the colnames filter on the slave not to extract the schema information from the database, but instead to use the incoming data from the source database and the translated THL.