B.4. Continuent Tungsten 2.0.2 GA (19 May 2014)

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

In addition, we recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent are aware of issues within Java 6 that cause memory leaks which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Improvements, new features and functionality

  • Installation and Deployment

    • The default Java garbage collection (GC) used within the Connector, Replicator and Manager has been reconfigured to use parallel garbage collection. The previous default GC could cause CPU starvation issues during execution.

      Issues: TUC-2101
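      Outside of tpm, the parallel collector can be enabled on any HotSpot JVM with the standard GC flags; the snippet below is a generic illustration and is not taken from the Tungsten configuration files.

      ```shell
      # Generic HotSpot flags enabling the parallel (throughput) collector.
      # Tungsten components set their JVM options through their own service
      # wrapper configuration, so this is illustrative only.
      JVM_GC_OPTS="-XX:+UseParallelGC -XX:+UseParallelOldGC"
      java $JVM_GC_OPTS -version
      ```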

  • Tungsten Connector

    • Keep-alive functionality has been added to the Connector. When enabled, connections to the database server are kept alive, even when there is no client activity.

      Issues: TUC-2103

      For more information, see Section 6.6.5, “Connector Keepalive”.

Bug Fixes

  • Tungsten Manager

    • A number of issues in the memory management of the Manager service, particularly with respect to the included JGroups support, have been rectified. These issues caused the manager to use increasing amounts of memory, which could cause the manager to stall.

    • The embedded JGroups service, which handles communication between the manager services, has been updated to the latest version. This improves the stability of the service and removes some of the memory leaks that caused manager stalls.

Continuent Tungsten 2.0.2 includes the following changes made in Tungsten Replicator 2.2.1

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • The tpm tool and configuration have been updated to support both older Oracle SIDs and the new JDBC URL format for Oracle service IDs. When configuring an Oracle service, use --datasource-oracle-sid for older service specifications, and --datasource-oracle-service for newer JDBC URL installations.

    Issues: 817
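    As a sketch, the two forms might be specified as follows; the staging name, service name and any other values shown are placeholders, not values from these notes.

    ```shell
    # Older SID-style Oracle specification (placeholder values):
    ./tools/tpm configure alpha --datasource-oracle-sid=ORCL

    # Newer JDBC URL/service-name specification (placeholder values):
    ./tools/tpm configure alpha --datasource-oracle-service=ORCL
    ```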

Improvements, new features and functionality

  • Installation and Deployment

    • When using the --enable-heterogeneous-master (in [Tungsten Replicator 2.2 Manual]) option to tpm, the MySQL service is now checked to ensure that ROW-based replication has been enabled.

      Issues: 834

  • Command-line Tools

    • The thl command has been expanded to support an additional output option, -specs, which adds the field specifications for row-based THL output.

      Issues: 801

      For more information, see thl list -specs Command.
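      A minimal invocation might look like the following; the sequence number is an arbitrary placeholder.

      ```shell
      # Show the field specifications for a row-based event in the THL;
      # 45 is an example sequence number, not a value from these notes.
      thl list -seqno 45 -specs
      ```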

  • Oracle Replication

    • Templates have been added to the suite of DDL translation templates supported by ddlscan to support Oracle to MySQL replication. Two templates are included:

      • ddl-oracle-mysql provides standard translation of DDL when replicating from Oracle to MySQL

      • ddl-oracle-mysql-pk-only provides standard translation of DDL including automatic selection of a primary key from the available unique indexes if no explicit primary key is defined within Oracle DDL when replicating to MySQL

      Issues: 787
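      A sketch of how the templates might be invoked; the credentials, JDBC URL and schema name are placeholders, and the .vm suffix is assumed from the usual ddlscan template naming.

      ```shell
      # Standard Oracle-to-MySQL DDL translation (placeholder connection details):
      ddlscan -user tungsten -pass secret \
        -url jdbc:oracle:thin:@//oraclehost:1521/ORCL \
        -db SALES -template ddl-oracle-mysql.vm

      # As above, but synthesizing a primary key from an available unique
      # index when the Oracle DDL defines no explicit primary key:
      ddlscan -user tungsten -pass secret \
        -url jdbc:oracle:thin:@//oraclehost:1521/ORCL \
        -db SALES -template ddl-oracle-mysql-pk-only.vm
      ```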

    • ddlscan has been updated to support parsing a file containing a list of tables for which DDL information should be extracted. The file should be formatted as CSV, but only the first field, the table name, is extracted. Lines starting with a # (hash) character are ignored.

      The file is in the same format as used by setupCDC.sh.

      To use the file, supply the -tableFile (in [Tungsten Replicator 2.2 Manual]) parameter to the command.

      Issues: 832
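      For illustration, a table file and invocation might look like the following; the file name, table names and connection details are all placeholders.

      ```shell
      # Build a table list in the setupCDC.sh format: only the first field
      # (the table name) of each line is read; lines beginning with # are
      # ignored.
      printf '%s\n' '# tables to scan' 'CUSTOMERS' 'ORDERS' > tables.csv

      # Placeholder connection details:
      ddlscan -user tungsten -pass secret \
        -url jdbc:oracle:thin:@//oraclehost:1521/ORCL \
        -db SALES -template ddl-oracle-mysql.vm -tableFile tables.csv
      ```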

  • Core Replicator

    • The replicator has been updated to support autorecovery from transient failures that would normally cause the replicator to go OFFLINE while in either the ONLINE or GOING-ONLINE:SYNCHRONIZING (in [Tungsten Replicator 2.2 Manual]) state. This enables the replicator to recover from errors such as MySQL restarts, or transient connection errors.

      The period, number of attempted recovery operations, and the delay before a recovery is considered successful are configurable through individual properties.

      Issues: 784

      For more information, see Deploying Automatic Replicator Recovery.
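      The related tpm options might be set as sketched below; the option names and values are assumptions based on the description above, so verify them against the Tungsten Replicator 2.2 manual before use.

      ```shell
      # Assumed option names (illustrative only): the number of recovery
      # attempts, the delay before each attempt, and how long the replicator
      # must stay online before recovery is considered successful.
      ./tools/tpm update alpha \
        --auto-recovery-max-attempts=5 \
        --auto-recovery-delay-interval=10s \
        --auto-recovery-reset-interval=300s
      ```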

    • The way VARCHAR values are stored and represented within the replicator has been updated, significantly improving performance.

      Issues: 804

    • If the binary logs for MySQL were flushed and purged (using FLUSH LOGS and PURGE BINARY LOGS) and the replicator was then restarted, the replicator would fail to identify and locate the newly created logs, failing with a MySQLExtractException.

      Issues: 851

Bug Fixes

  • Installation and Deployment

    • tpm would incorrectly identify options that accepted true/false values, which could cause incorrect interpretations or cause subsequent options on the command line to be treated as true/false values.

      Issues: 310

    • Removing an existing parallel replication configuration using tpm would cause the replicator to fail due to a mismatch in the status table and current configuration.

      Issues: 867

  • Oracle Replication

    • Tuning for the CDC extraction from Oracle has been updated to support both a minimum sleep time parameter, minSleepTime, and the increment value used when increasing the sleep time between updates, sleepAddition.

      Issues: 239

      For more information, see Tuning CDC Extraction.

    • The URLs used for connecting to Oracle RAC SCAN addresses were incorrect and were incompatible with non-RAC installations. The URL format has been updated to one that is compatible with both Oracle RAC and non-RAC installations.

      Issues: 479

  • Core Replicator

    • When a timeout occurred on the connection to MySQL for the channel assignment service (part of the parallel applier), the replicator would go offline rather than retrying the connection. The service has now been updated to retry the connection if a timeout occurs. The default reconnect timeout is 120 seconds.

      Issues: 783

    • A slave replicator would incorrectly set the restart sequence number when reading from a master if the slave THL directory was cleared. This would cause slave replicators to fail to restart correctly.

      Issues: 794

    • Unsigned integers were extracted from the source database in a platform-dependent way. This caused the Oracle applier to incorrectly attempt to apply negative values in place of their unsigned equivalents. The Oracle applier has been updated to translate values for types identified as unsigned to their correct unsigned value. When these values are viewed within the THL, they will still appear as negative values.

      Issues: 798

      For more information, see Section 8.17.2, “thl list Command”.

    • Replication would fail when processing binlog entries containing the statement INSERT INTO ... WHERE... when operating in mixed mode.

      Issues: 807

  • Filters

    • The mysqlsessionsupport filter would cause replication to fail when the default thread_id was set to -1, for example when STRICT_ALL_TABLES SQL mode had been enabled. The replicator has been updated to interpret -1 as 0 to prevent this error.

      Issues: 821

    • The rename filter has been updated so that the schema name alone can be renamed for STATEMENT events. Previously, only ROW events were renamed by the filter.

      Issues: 842