A.5. Continuent Tungsten 4.0.3 Not Released (NA)

Continuent Tungsten 4.0.3 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.2 release.

Due to an internal bug identified shortly before release, Continuent Tungsten 4.0.3 was never released to customers.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific; the deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Tungsten Connector

    • The connector has been updated to provide an acknowledgement to the MySQL protocol COM_CHANGE_USER command. This allows client connections that use connection pooling (such as PHP) and issue the change user command to verify an open connection to correctly receive an acknowledgement that the connection is available.

      The option is disabled by default. To enable it, set the treat.com.change.user.as.ping property to true during configuration with tpm, as shown in the example below.

      Issues: CONT-1380

      For more information, see Section 6.6.6, “Connector Change User as Ping”.
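
      A minimal sketch of setting the property with tpm is shown below. It assumes a staging-style deployment, a service named alpha, and that the key is passed through tpm's generic --property option; adjust these for your own installation.

      # Hypothetical example: enable COM_CHANGE_USER acknowledgement, then apply the change
      shell> tpm configure alpha --property=treat.com.change.user.as.ping=true
      shell> tpm update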

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938
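
      As a manual sanity check for the condition described above (illustrative only, and not necessarily the query that tpm itself runs), non-system MyISAM tables can be listed directly from information_schema, excluding the schemas that legitimately contain MyISAM tables. Replace the user with your own administrative MySQL account:

      # List user MyISAM tables, ignoring the MySQL system schemas
      shell> mysql -u root -p -e "SELECT TABLE_SCHEMA, TABLE_NAME \
                 FROM information_schema.TABLES \
                 WHERE ENGINE = 'MyISAM' \
                 AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')"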

  • Core Replicator

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

  • Tungsten Connector

    • A connector running in bridge mode with auto reconnect enabled could try to reconnect to MySQL and attempt additional writes.

      Issues: CONT-1461

    • Automatic retry of a query could fail due to interference from a keep-alive request while the query was being re-executed.

      Issues: CONT-1512

    • The Tungsten Connector would sometimes retry connectivity on connections that had been killed. The logic has been updated. The default behavior remains the same:

      • Reconnect closed connections

      • Retry autocommitted reads

      The behavior can be modified by using the --connector-autoreconnect-killed-connections option; see the example below. Setting it to false disables reconnection or retry of a connection outside of a planned switch or automatic failover. The default is true, which reconnects and retries all connections.

      Issues: CONT-1514
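
      For example, to disable reconnection and retry of killed connections outside of a planned switch or automatic failover, the option can be set to false at configuration time. This is a sketch only; it assumes a service named alpha and a tpm-managed configuration, so adapt it to your own deployment method.

      # Hypothetical example: turn off reconnect/retry for killed connections
      shell> tpm configure alpha --connector-autoreconnect-killed-connections=false
      shell> tpm update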

  • Tungsten Manager

    • A cluster could go into a panic after a failover if the mysqld process stopped and then immediately became available again, causing multiple masters to exist.

      Issues: CONT-1482

    • When recovering a node that had been marked as a standby, the node would be recovered as a standard slave, not as a standby.

      Issues: CONT-1486

    • The cluster would fail to fail over if the network interface on the master was down.

      Issues: CONT-1537

Continuent Tungsten 4.0.3 Includes the following changes from Tungsten Replicator 4.0.3

Continuent Tungsten 4.0.3 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.2 release.

Due to an internal bug identified shortly before release, Continuent Tungsten 4.0.3 was never released to customers.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

      The error is transient and non-specific; the deployment should be retried.

      Issues: CONT-1343

  • Core Replicator

    • Due to a bug within the Drizzle JDBC driver when communicating with MySQL, using the optimizeRowEvents option could lead to significant memory usage and subsequent failure. For more information on the issue, and for details of how to alleviate the problem, see Drizzle JDBC Issue 38.

      Issues: CONT-1115

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

  • Command-line Tools

  • Core Replicator

    • A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing.

      Issues: CONT-1163

    • A slave replicator could fail to come ONLINE if the last THL file was empty.

      Issues: CONT-1164

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

    • The replicator would incorrectly assign LOAD DATA statements to the #UNKNOWN shard. This could happen when the entire statement length exceeded 200 characters.

      Issues: CONT-1431

    • In some situations, statements that would be unsafe for parallel execution were not properly serialized into single-threaded execution during the applier phase of the target connection.

      Issues: CONT-1489

    • CSV files generated during batch loading into data warehouses would be created within a directory structure under /tmp. On long-running replicators, automated processes that clean up the /tmp directory could delete these files, causing replication to fail temporarily due to the missing directory.

      The location where staging CSV files are created has now been updated. Files are now stored within the $CONTINUENT_HOME/tmp/staging/$SERVICE directory, following the same naming structure. For example, if Continuent Tungsten has been installed in /opt/continuent, then the CSV files for the first active applier channel of the service alpha will be stored in /opt/continuent/tmp/staging/alpha/staging0.

      Issues: CONT-1500
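
      For example, in an installation under /opt/continuent, the staging CSV files for the first applier channel of a service named alpha can be inspected with:

      # Staging CSV files now live under $CONTINUENT_HOME/tmp/staging/$SERVICE
      shell> ls /opt/continuent/tmp/staging/alpha/staging0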

  • Filters

    • The pkey filter could force table metadata to be updated when the update was not required.

      Issues: CONT-1162

    • When using the dropcolumn filter in combination with the colnames filter, an issue could arise where differences between the incoming schema and the target schema could result in incorrect SQL statements. The solution is to reconfigure the colnames filter on the slave not to extract the schema information from the database, but instead to use the incoming data from the source database and the translated THL.

      Issues: CONT-1495