3.14. Continuent Tungsten 2.0.2 GA (19 May 2014)

Version End of Life: 31 October 2018

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

In addition, we recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent is aware of issues within Java 6 that cause memory leaks, which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Improvements, new features and functionality

  • Installation and Deployment

    • The default Java garbage collection (GC) used within the Connector, Replicator and Manager has been reconfigured to use parallel garbage collection. The previous default GC could produce CPU starvation issues during execution.

      Issues: TUC-2101
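
      As a rough illustration of the change (the property name and file layout here are hypothetical; the actual wrapper configuration files are generated and managed by tpm), the parallel collector is selected by passing the corresponding JVM flag to each service:

      ```ini
      # Illustrative only: enable the parallel collector for a JVM-based service.
      # The real Tungsten service wrapper configuration is managed by tpm.
      wrapper.java.additional.1=-XX:+UseParallelGC
      ```
      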

  • Tungsten Connector

    • Keep-alive functionality has been added to the Connector. When enabled, connections to the database server are kept alive, even when there is no client activity.

      Issues: TUC-2103

      For more information, see Connector Keepalive (in [Continuent Tungsten 2.0 Manual]).
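
      A sketch of how such a setting might be applied through tpm is shown below; the property names here are hypothetical placeholders, so consult the Connector Keepalive section of the manual for the actual keys and values:

      ```shell
      # Hypothetical property names, for illustration only; "alpha" is a
      # placeholder dataservice name. See the manual for the real settings.
      shell> tpm update alpha \
        --property=connector.keepalive.enabled=true \
        --property=connector.keepalive.interval=30
      ```
      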

Bug Fixes

  • Tungsten Manager

    • The embedded JGroups service, which handles communication between the manager services, has been updated to the latest version. This improves the stability of the service, and removes some of the memory leaks causing manager stalls.

    • A number of issues with the memory management of the Manager service, particularly with respect to the included JGroups support, have been rectified. These issues caused the manager to use increasing amounts of memory, which could cause the manager to stall.

Continuent Tungsten 2.0.2 includes the following changes made in Tungsten Replicator 2.2.1

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • The tpm (in [Tungsten Replicator 2.2 Manual]) tool and configuration have been updated to support both older Oracle SIDs and the new JDBC URL format for Oracle service IDs. When configuring an Oracle service, use --datasource-oracle-sid for older service specifications, and --datasource-oracle-service (in [Tungsten Replicator 2.2 Manual]) for newer JDBC URL installations.

    Issues: 817
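
    A minimal sketch of the two forms (the service name ORCL and the dataservice name alpha are placeholders, not values from this release):

    ```shell
    # Older Oracle SID-style specification:
    shell> tpm configure alpha --datasource-oracle-sid=ORCL

    # Newer JDBC URL format using an Oracle service ID:
    shell> tpm configure alpha --datasource-oracle-service=ORCL
    ```
    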

Improvements, new features and functionality

  • Installation and Deployment

    • When using the --enable-heterogeneous-master (in [Tungsten Replicator 2.2 Manual]) option to tpm (in [Tungsten Replicator 2.2 Manual]), the MySQL service is now checked to ensure that ROW-based replication has been enabled.

      Issues: 834
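
      The check verifies the standard MySQL binary log format setting; on the master this is enabled in the MySQL configuration file, for example:

      ```ini
      # my.cnf on the master: ROW-based binary logging must be enabled
      # before a heterogeneous master deployment will validate.
      [mysqld]
      binlog-format = ROW
      ```
      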

  • Command-line Tools

    • The thl (in [Tungsten Replicator 2.2 Manual]) command has been expanded to support an additional output format, -specs (in [Tungsten Replicator 2.2 Manual]), which adds the field specifications for row-based THL output.

      Issues: 801

      For more information, see thl list -specs Command (in [Tungsten Replicator 2.2 Manual]).
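
      A brief usage sketch (the sequence number shown is a placeholder):

      ```shell
      # Show THL events including per-field specifications for row-based events
      shell> thl list -specs

      # The option can be combined with the usual selectors, for example:
      shell> thl list -seqno 10 -specs
      ```
      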

  • Oracle Replication

    • Templates have been added to the suite of DDL translation templates supported by ddlscan (in [Tungsten Replicator 2.2 Manual]) to support Oracle to MySQL replication. Two templates are included:

      • ddl-oracle-mysql provides standard translation of DDL when replicating from Oracle to MySQL

      • ddl-oracle-mysql-pk-only provides standard translation of DDL, including automatic selection of a primary key from the available unique indexes when no explicit primary key is defined within the Oracle DDL, when replicating to MySQL

      Issues: 787
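
      An illustrative invocation is shown below; the connection values, schema name, and template file name are assumptions for the sketch rather than values from this release:

      ```shell
      # Translate Oracle DDL for the SALES schema into MySQL-compatible DDL.
      # Host, credentials, and schema are placeholders.
      shell> ddlscan -user tungsten -pass secret \
        -url jdbc:oracle:thin:@//oracle-host:1521/ORCL \
        -template ddl-oracle-mysql.vm -db SALES
      ```
      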

    • ddlscan (in [Tungsten Replicator 2.2 Manual]) has been updated to accept a file containing a list of tables to be scanned for DDL information. The file should be formatted as a CSV file, but only the first field, the table name, will be extracted. Lines starting with a # (hash) character are ignored.

      The file is in the same format as used by setupCDC.sh (in [Tungsten Replicator 2.2 Manual]).

      To use the file, supply the -tableFile (in [Tungsten Replicator 2.2 Manual]) parameter to the command.

      Issues: 832
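
      For example, using a hypothetical table file (connection values are placeholders):

      ```shell
      # tables.csv: one table per line; only the first CSV field is used,
      # and lines beginning with # are ignored.
      shell> cat tables.csv
      #tablename,comment
      ORDERS
      CUSTOMERS

      shell> ddlscan -user tungsten -pass secret \
        -url jdbc:oracle:thin:@//oracle-host:1521/ORCL \
        -template ddl-oracle-mysql.vm -tableFile tables.csv
      ```
      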

  • Core Replicator

    • The replicator has been updated to support autorecovery from transient failures that would normally cause the replicator to go OFFLINE (in [Tungsten Clustering (for MySQL) 6.1 Manual]) while in either the ONLINE (in [Tungsten Clustering (for MySQL) 6.1 Manual]) or GOING-ONLINE:SYNCHRONIZING (in [Tungsten Replicator 6.1 Manual]) state. This enables the replicator to recover from errors such as MySQL restarts, or transient connection errors.

      The period, number of attempted recovery operations, and the delay before a recovery is considered successful are configurable through individual properties.

      Issues: 784

      For more information, see Deploying Automatic Replicator Recovery (in [Tungsten Replicator 2.2 Manual]).
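
      A configuration sketch follows; the option names and values are assumptions based on the description above, so verify them against the Deploying Automatic Replicator Recovery section before use:

      ```shell
      # Hypothetical autorecovery settings: up to 5 recovery attempts,
      # a 30s delay before each attempt, and recovery considered
      # successful after 300s online. "alpha" is a placeholder service.
      shell> tpm update alpha \
        --auto-recovery-max-attempts=5 \
        --auto-recovery-delay-interval=30s \
        --auto-recovery-reset-interval=300s
      ```
      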

    • The way VARCHAR values are stored and represented within the replicator has been updated, which improves performance significantly.

      Issues: 804

    • If the binary logs for MySQL were flushed and purged (using FLUSH LOGS and PURGE BINARY LOGS), and the replicator was then restarted, the replicator would fail to identify and locate the newly created logs, raising a MySQLExtractException.

      Issues: 851

  • Documentation

    • The deployment and recovery procedures for Multi-site/Multi-master deployments have been documented.

      Issues: 797

      For more information, see Deploying Multisite/Multimaster Clustering (in [Tungsten Clustering (for MySQL) 6.1 Manual]).

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 2.2 Manual]) would incorrectly identify options that accepted true/false values, which could cause options to be interpreted incorrectly, or cause subsequent options on the command line to be treated as true/false values.

      Issues: 310

    • Removing an existing parallel replication configuration (in [Tungsten Replicator 2.2 Manual]) using tpm (in [Tungsten Replicator 2.2 Manual]) would cause the replicator to fail due to a mismatch in the status table and current configuration.

      Issues: 867

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 2.2 Manual]) tool would fail to correctly re-provision a master within a fan-in or multi-master configuration. When re-provisioning, the service should be reset with trepctl reset (in [Tungsten Replicator 2.2 Manual]).

      Issues: 709
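
      The reset step can be sketched as follows; the service name is a placeholder, and the exact trepctl invocation should be confirmed against the manual:

      ```shell
      # Take the service offline and reset it before re-provisioning the
      # master. "alpha" is a placeholder service name.
      shell> trepctl -service alpha offline
      shell> trepctl -service alpha reset
      ```
      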

    • Errors generated by the underlying mysqldump or xtrabackup commands when executing tungsten_provision_slave (in [Tungsten Replicator 2.2 Manual]) are now redirected to STDOUT.

      Issues: 802

    • The tungsten_provision_slave (in [Tungsten Replicator 2.2 Manual]) tool would re-provision using a slave in an OFFLINE:ERROR (in [Tungsten Replicator 6.1 Manual]) state, even though this could create a second, invalid, slave deployment. Reprovisioning from a slave in the ERROR state is now blocked, unless the -f (in [Tungsten Replicator 2.2 Manual]) or --force (in [Tungsten Replicator 2.2 Manual]) option is used.

      Issues: 860

      For more information, see The tungsten_provision_slave Script (in [Tungsten Replicator 2.2 Manual]).
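
      For example (the source hostname is a placeholder):

      ```shell
      # Re-provisioning from a slave in the ERROR state is blocked by
      # default; --force overrides the check, and should only be used when
      # the source slave's data is known to be usable.
      shell> tungsten_provision_slave --source=host2 --force
      ```
      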

  • Oracle Replication

    • Tuning for the CDC extraction from Oracle has been updated to support both a minimum sleep time parameter, minSleepTime, and the increment value used when increasing the sleep time between updates, sleepAddition.

      Issues: 239

      For more information, see Tuning CDC Extraction (in [Tungsten Replicator 2.2 Manual]).
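
      An illustrative properties fragment follows; the property key prefix is a hypothetical placeholder, so take the exact keys from the Tuning CDC Extraction section:

      ```ini
      # Hypothetical keys, shown only to illustrate the two new parameters:
      # the minimum sleep time between CDC polls, and the increment added
      # when increasing the sleep time between updates.
      replicator.extractor.oracle.minSleepTime=1
      replicator.extractor.oracle.sleepAddition=1
      ```
      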

    • The URLs used for connecting to Oracle RAC SCAN addresses were not correct and were incompatible with non-RAC installations. The URL format has been updated to be compatible with both Oracle RAC and non-RAC installations.

      Issues: 479

  • Core Replicator

    • When a timeout occurred on the connection to MySQL for the channel assignment service (part of parallel applier), the replicator would go offline, rather than retrying the connection. The service has now been updated to retry the connection if a timeout occurs. The default reconnect timeout is 120 seconds.

      Issues: 783

    • A slave replicator would incorrectly set the restart sequence number when reading from a master if the slave THL directory was cleared. This would cause slave replicators to fail to restart correctly.

      Issues: 794

    • Unsigned integers were extracted from the source database in a platform-dependent way. This would cause the Oracle applier to incorrectly attempt to apply negative values in place of their unsigned equivalents. The Oracle applier has been updated to correctly translate values for types identified as unsigned. When these values are viewed within the THL, they will still be shown as negative values.

      Issues: 798

      For more information, see thl list Command (in [Tungsten Replicator 2.2 Manual]).

    • Replication would fail when processing binlog entries containing the statement INSERT INTO ... WHERE... when operating in mixed mode.

      Issues: 807

  • Filters

    • The mysqlsessionsupport (in [Tungsten Clustering (for MySQL) 6.1 Manual]) filter would cause replication to fail when the default thread_id was set to -1, for example when STRICT_ALL_TABLES SQL mode had been enabled. The replicator has been updated to interpret -1 as 0 to prevent this error.

      Issues: 821

    • The rename (in [Tungsten Clustering (for MySQL) 6.1 Manual]) filter has been updated so that renaming of only the schema name is supported for STATEMENT events. Previously, only ROW events would be renamed by the filter.

      Issues: 842