Release Notes

Continuent Ltd

Abstract

This document provides release notes for all released versions of Continuent software.

Build date: 2017-12-12 (00e0638a)

Up-to-date builds of this document: Release Notes (Online), Release Notes (PDF)


Table of Contents

1. Tungsten Replicator Release Notes
1.1. Tungsten Replicator 5.3.0 GA (12 December 2017)
1.2. Tungsten Replicator 5.2.2 GA (22 October 2017)
1.3. Tungsten Replicator 5.2.1 GA (21 Sep 2017)
1.4. Tungsten Replicator 5.2.0 GA (19 July 2017)
1.5. Tungsten Replicator 5.1.1 GA (23 May 2017)
1.6. Tungsten Replicator 5.1.0 GA (26 Apr 2017)
1.7. Tungsten Replicator 5.0.1 GA (23 Feb 2017)
1.8. Tungsten Replicator 5.0.0 GA (7 December 2015)
2. Tungsten Clustering Release Notes
2.1. Tungsten Clustering 5.3.0 GA (12 December 2017)
2.2. Tungsten Clustering 5.2.2 GA (22 October 2017)
2.3. Tungsten Clustering 5.2.1 GA (21 Sep 2017)
2.4. Tungsten Clustering 5.2.0 GA (19 July 2017)
2.5. Tungsten Clustering 5.1.1 GA (23 May 2017)
2.6. Tungsten Clustering 5.1.0 GA (26 Apr 2017)
2.7. Tungsten Clustering 5.0.1 GA (23 Feb 2017)
2.8. Tungsten Clustering 5.0.0 GA (7 December 2015)
2.9. Tungsten Clustering 4.0.0 Not yet released (Not yet released)
3. Continuent Tungsten Release Notes
3.1. Continuent Tungsten 4.0.8 GA (22 May 2017)
3.2. Continuent Tungsten 4.0.7 GA (23 Feb 2017)
3.3. Continuent Tungsten 4.0.6 GA (8 Dec 2016)
3.4. Continuent Tungsten 4.0.5 GA (4 Mar 2016)
3.5. Continuent Tungsten 4.0.4 GA (24 Feb 2016)
3.6. Continuent Tungsten 4.0.3 Not Released (NA)
3.7. Continuent Tungsten 4.0.2 GA (1 Oct 2015)
3.8. Continuent Tungsten 4.0.1 GA (20 Jul 2015)
3.9. Continuent Tungsten 4.0.0 GA (17 Apr 2015)
3.10. Continuent Tungsten 2.2.0 NYR (Not Yet Released)
3.11. Continuent Tungsten 2.0.5 GA (24 Dec 2014)
3.12. Continuent Tungsten 2.0.4 GA (9 Sep 2014)
3.13. Continuent Tungsten 2.0.3 GA (1 Aug 2014)
3.14. Continuent Tungsten 2.0.2 GA (19 May 2014)
3.15. Continuent Tungsten 2.0.1 GA (3 January 2014)
3.16. Continuent Tungsten 1.5.4 GA (Not yet released)

1. Tungsten Replicator Release Notes

1.1. Tungsten Replicator 5.3.0 GA (12 December 2017)

Release 5.3.0 is an important feature release that contains some key new functionality for replication. In particular:

  • JSON data type column extraction support for MySQL 5.7 and higher.

  • Generated column extraction support for MySQL 5.7 and higher.

  • DDL translation support for heterogeneous targets, initially supporting DDL translation from MySQL to MySQL, Vertica, and Redshift targets.

  • Support for concentrating replicated data into a single target schema (with additional source schema information added to each table) for both HPE Vertica and Amazon Redshift targets.

  • Rebranded and updated support for Oracle extraction with the Oracle Redo Reader, including improvements to offboard deployment, more configuration options, and support for the deployment and installation of multiple offboard replication services within a single replicator.

This release also contains a number of important bug fixes and minor improvements to the product.

Improvements, new features and functionality

  • Behavior Changes

    • The way that information is logged has been improved so that it should be easier to identify and find errors and the causes of errors when looking at the logs. To achieve this, logging is now provided into an additional file, one for each component, and the new files contain only errors at the WARNING or ERROR levels. The new file is replicator-user.log. The original file, trepsvc.log remains unchanged.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

    • Support for Java 7 (JDK or JRE 1.7) has been deprecated, and will be removed in the 6.0.0 release. The software is compiled using Java 8 with Java 7 compatibility.

      Issues: CT-252

    • Some JavaScript filters contained DOS-style line breaks; these have been corrected.

      Issues: CT-376

    • Support for JSON datatypes and generated columns within MySQL 5.7 and greater has been added to the MySQL extraction component of the replicator.

      Important

      Due to a MySQL bug in the way that JSON and generated columns are represented within the MySQL binary log, it is possible for the actual size of the data and the reported size to differ, which could cause data corruption. To account for this behavior and to prevent data inconsistencies, the replicator can be configured to either ignore, warn, or stop if the mismatch occurs.

      This can be set by modifying the property replicator.extractor.dbms.json_length_mismatch_policy.

      Until this problem is addressed within MySQL, tpm (in [Tungsten Replicator 5.2 Manual]) will still generate a warning about the issue, which can be ignored during installation by using the --skip-validation-check=MySQLGeneratedColumnCheck (in [Tungsten Replicator 5.2 Manual]) option.

      For more information on the effects of the bug, see MySQL Bug #88791.

      Issues: CT-5, CT-468
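      As an illustration of the configuration described above, the policy could be set via the tpm --property mechanism; the property name comes from the text, while the service name (alpha) and the warn value are illustrative:

```shell
# Illustrative only: configure the replicator to warn, rather than stop or
# ignore, when a JSON/generated-column length mismatch is detected.
shell> tpm configure alpha \
    --property=replicator.extractor.dbms.json_length_mismatch_policy=warn
shell> tpm update
```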

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.2 Manual]) command has been updated to correctly operate with CentOS 7 and higher. Due to an underlying change in the way IP configuration information was sourced, the extraction of the IP address information has been updated to use the ip addr command.

      Issues: CT-35

    • The THL retention setting is now checked in more detail during installation. When the --thl-log-retention (in [Tungsten Replicator 5.2 Manual]) is configured when extracting from MySQL, the value is compared to the binary log expiry setting in MySQL (expire_logs_days). If the value is less, then a warning is produced to highlight the potential for loss of data.

      Issues: CT-91
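      For example, the binary log expiry can be checked directly on the MySQL source before choosing a retention value; the mysql client invocation below is illustrative:

```shell
# Check how long MySQL retains its binary logs; the configured THL retention
# (e.g. --thl-log-retention=7d) should be at least as long.
shell> mysql -u tungsten -p -e "SHOW VARIABLES LIKE 'expire_logs_days'"
```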

  • Command-line Tools

    • The size-related outputs for the thl list (in [Tungsten Replicator 5.2 Manual]) command, such as the -sizes (in [Tungsten Replicator 5.2 Manual]) or -sizesdetail (in [Tungsten Replicator 5.2 Manual]) options, now additionally output summary information for the selected THL events:

      Total ROW chunks: 8 with 7 updated rows (50%)
      Total STATEMENT chunks: 8 with 2552 bytes (50%)
      16 events processed

      A new option has also been added, -sizessummary, that only outputs the summary information.

      Issues: CT-433

      For more information, see ???.

  • Filters

    • A new filter has been added, rowadddbname, which adds the source database name and optional database hash to every incoming row of data. This can be used to help identify source information when concentrating information into a single schema.

      Issues: CT-407
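      As a sketch, the filter could be enabled during configuration through tpm; the --svc-extractor-filters option and the service name (alpha) are assumptions for illustration:

```shell
# Illustrative: add the rowadddbname filter to the extractor stage so that
# each replicated row carries its source database name.
shell> tpm configure alpha --svc-extractor-filters=rowadddbname
shell> tpm update
```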

Bug Fixes

  • Installation and Deployment

    • An issue has been identified with the way certain operating systems now configure their open files limits, which can upset the checks within tpm (in [Tungsten Replicator 5.2 Manual]) that determine the open files limits configured for MySQL. To ensure that the open files limit has been set correctly, check the configuration of the service:

      1. Copy the system configuration:

        shell> sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
        shell> sudo vim /etc/systemd/system/mysql.service
      2. Add the following line to the end of the copied file:

        LimitNOFILE=infinity
      3. Reload the systemctl daemon:

        shell> sudo systemctl daemon-reload
      4. Restart MySQL:

        shell> service mysql restart

      This configures the service correctly, and MySQL will now honor the open_files_limit configuration option.

      Issues: CT-148

    • The check to determine whether triggers had been enabled within the MySQL data source was not executed correctly, meaning that warnings about unsupported triggers would not be raised.

      Issues: CT-185

    • When using tpm diag (in [Tungsten Replicator 5.2 Manual]) on a MySQL deployment, the MySQL error log would not be identified and included properly if the default datadir option was not /var/lib/mysql.

      Issues: CT-359

    • Installation when enabling security through SSL could fail intermittently during installation because the certificates would fail to get copied to the required directory during the installation process.

      Issues: CT-402

    • The Net::SSH libraries used by tpm (in [Tungsten Replicator 5.2 Manual]) have been updated to reflect the deprecation of paranoid parameter.

      Issues: CT-426

    • Using a complex password, particularly one with single or double quotes, when specifying a password for tpm (in [Tungsten Replicator 5.2 Manual]), could cause checks and the installation to raise errors or fail, although the actual configuration would work properly. The problem was limited to internal checks by tpm (in [Tungsten Replicator 5.2 Manual]) only.

      Issues: CT-440

  • Command-line Tools

    • Within Vertica deployments, the internal identity of the applier was set incorrectly to PostgreSQL. This would make it difficult for certain internal processes to identify the true datasource type. The setting did not affect the actual operation.

      Issues: CT-452

  • Core Replicator

    • When parsing THL data it was possible for the internal THL processing to lead to a java.util.ConcurrentModificationException. This indicated that the underlying THL event metadata structure used internally had changed between uses.

      Issues: CT-355

1.2. Tungsten Replicator 5.2.2 GA (22 October 2017)

Tungsten Replicator 5.2.2 is a minor bugfix release that addresses some bugs found in the previous 5.2.1 in [Tungsten Replicator 5.2 Manual] release. It is a recommended upgrade for all users making use of cluster-to-big-data replication.

1.3. Tungsten Replicator 5.2.1 GA (21 Sep 2017)

Tungsten Replicator 5.2.1 is a minor bugfix release that addresses some bugs found in the previous 5.2.0 in [Tungsten Replicator 5.2 Manual] release. It is a recommended upgrade for all users.

1.4. Tungsten Replicator 5.2.0 GA (19 July 2017)

Tungsten Replicator 5.2.0 is a new feature release that contains a combination of new features, specifically new replicator applier targets:

This release also provides improvements to the trepctl (in [Tungsten Replicator 5.2 Manual]) and thl (in [Tungsten Replicator 5.2 Manual]) commands, and bug fixes to improve stability.

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide clearer and more detailed information on certain aspects of its operation. Two new commands have been added, trepctl qs (in [Tungsten Replicator 5.2 Manual]) and trepctl perf (in [Tungsten Replicator 5.2 Manual]):

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide a simplified status output that provides an easier to understand status, using the qs (in [Tungsten Replicator 5.2 Manual]) command. For example:

        shell> trepctl qs
        State: alpha Online for 1172.724s, running for 124280.671s
        Latency: 0.71s from source DB commit time on thl://ubuntuheterosrc:2112/ into target database
         7564.198s since last source commit
        Sequence: 4860 last applied, 0 transactions behind (0-4860 stored) estimate 0.00s before synchronization
      • The trepctl perf (in [Tungsten Replicator 5.2 Manual]) command provides detailed performance information on the operation and status of the replicator and individual stages. This can be useful to identify where any additional latency or performance issues lie:

        shell> trepctl perf
        Statistics since last put online 1360.141s ago
        Stage | Seqno | Latency | Events | Extraction | Filtering | Applying | Other | Total
        remote-to-thl | 4860 | 0.475s | 70 | 116713.145s | 0.000s | 2.920s | 0.000s | 116716.065s
         Avg time per Event | 1667.331s | 0.000s | 0.000s | 0.042s | 1667.372s
        thl-to-q | 4860 | 0.527s | 3180 | 113842.933s | 0.011s | 2873.039s | 0.102s | 116716.085s
         Avg time per Event | 35.800s | 0.000s | 0.000s | 0.903s | 36.703s
        q-to-dbms | 4860 | 0.536s | 3180 | 112989.667s | 0.010s | 3701.035s | 25.554s | 116716.266s
         Avg time per Event | 35.531s | 0.000s | 0.008s | 1.164s | 36.703s

      Issues: CT-29

    • A number of improvements have been made to the identification of long running transactions within the replicator:

      • A new field has been added to the output of trepctl status -name tasks (in [Tungsten Replicator 5.2 Manual]):

        timeInCurrentEvent : 6571.462

        This shows the time that the replicator has been processing the current event. For a long-running event, it helps to indicate that the replicator is still processing the current event. Note that this is just a counter for how long the current event has been running. For a replicator that is idle, this will show the time the replicator has spent both processing the original event and waiting to process the new event.

      • The thl list (in [Tungsten Replicator 5.2 Manual]) has been expanded to provide simple and detailed THL size information so that large transactions can be identified. Using the -sizes (in [Tungsten Replicator 5.2 Manual]) and -sizesdetail (in [Tungsten Replicator 5.2 Manual]) displays detailed information about the size of the SQL, number of rows, or both for each stored event. For example:

        shell> thl list -sizes
        SEQ# Frag# Tstamp
        ...
        12 0 2017-06-28 13:21:11.0 Event total: 1 chunks 73 bytes in SQL statements 0 rows
        13 0 2017-06-28 13:21:10.0 Event total: 1645 chunks 0 bytes in SQL statements 1645 rows
        14 0 2017-06-28 13:21:11.0 Event total: 1 chunks 36 bytes in SQL statements 0 rows

        For more information, see thl list -sizes Command and thl list -sizesdetail Command.

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide more detailed information on the performance of the replicator, see trepctl perf (in [Tungsten Replicator 5.2 Manual]).

      • For easier navigation and selection of THL events, the thl (in [Tungsten Replicator 5.2 Manual]) has had two further command-line options added, -first (in [Tungsten Replicator 5.2 Manual]) and -last (in [Tungsten Replicator 5.2 Manual]) to select the first and last events in the THL. Both also take an optional number that shows the first N or last N events.

      Issues: CT-34

    • A new command, tungsten_send_diag (in [Tungsten Replicator 5.2 Manual]), has been added that provides a simplified method for sending a tpm diag (in [Tungsten Replicator 5.2 Manual]) output automatically through to the support team. The new command uploads the diagnostic information directly in Amazon S3 without requiring a separate upload to Zendesk.

      Issues: CT-158

    • A new command, clean_release_directory (in [Tungsten Replicator 5.2 Manual]) has been added to the distribution. This command removes old releases from the installation directory that have been created during either upgrades or configuration updates. The command removes all old entries except the current active one, and the last five entries.

      Issues: CT-204
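      The retention rule described above (keep the active release plus the five most recent entries) can be sketched as a small shell function; this illustrates the policy only, it is not the actual clean_release_directory implementation, and the directory layout is assumed:

```shell
# Sketch of the retention policy: keep the active release and the five most
# recent entries in the releases directory, and remove everything older.
prune_releases() {
    dir=$1
    active=$2
    # List entries newest-first, skip the five most recent, then remove the
    # remainder unless it is the currently active release.
    ls -1t "$dir" | awk 'NR > 5' | while read -r rel; do
        [ "$rel" = "$active" ] && continue
        rm -rf "$dir/$rel"
    done
}
```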

  • Documentation

    • The documentation has been updated to clarify the use of the --property (in [Tungsten Replicator 5.2 Manual]) option to tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-180

Bug Fixes

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could hang during the execution of an external command which could cause the entire process to fail to complete properly.

      Issues: CT-82

    • When a replicator had been configured as a cluster slave, the masterListenUri (in [Tungsten Replicator 5.2 Manual]) would be blank. This was because a pure cluster-slave configuration did not correctly configure the necessary pipelines.

      Issues: CT-197

    • The query (in [Tungsten Replicator 5.2 Manual]) tool has been updated to provide better error handling and messages during an error. This particularly affects tools which embed the use of this command, such as tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-203

    • An auto-refresh option has been added to certain commands within trepctl (in [Tungsten Replicator 5.2 Manual]). Adding the -r (in [Tungsten Replicator 5.2 Manual]) option with a number of seconds to the trepctl status (in [Tungsten Replicator 5.2 Manual]), trepctl qs (in [Tungsten Replicator 5.2 Manual]), or trepctl perf (in [Tungsten Replicator 5.2 Manual]) commands refreshes the output at that interval. For example, trepctl qs -r 5 (in [Tungsten Replicator 5.2 Manual]) refreshes the quick status output every 5 seconds.

      Issues: CT-209

1.5. Tungsten Replicator 5.1.1 GA (23 May 2017)

Tungsten Replicator 5.1.1 is a minor bugfix release that addresses some bugs found in the previous 5.1.0 in [Tungsten Clustering for MySQL 5.1 Manual] release. It is a recommended upgrade for all users.

Bug Fixes

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.2 Manual]) command has been updated:

      • The -ascmd (in [Tungsten Replicator 5.2 Manual]) option has been added to output the current position as a command that you can use verbatim to reset the status. For example:

        shell> dsctl get -ascmd
        dsctl set -seqno 17 -epoch 11 -event-id "mysql-bin.000082:0000000014031577;-1" -source-id "ubuntu"
      • The -reset (in [Tungsten Replicator 5.2 Manual]) option has been added so that the current position can be reset and then set using dsctl set -reset without having to run two separate commands.

      Issues: CT-24

    • The availability and default configuration of some filters has been changed so that certain filters are now available in all configurations. This does not affect existing filter deployments.

      Issues: CT-84

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could fail to complete properly due to a problem with the threads created during the provision process.

      Issues: CT-202

  • Backup and Restore

    • The trepctl backup (in [Tungsten Replicator 5.2 Manual]) operation could fail if the system ran out of disk space, or if the storage.index file could not be written or became corrupted. The backup system will now recreate the file if the information cannot be read properly.

      Issues: CT-122

  • Heterogeneous Replication

    • When creating DDL from an Oracle source for Hadoop using ddlscan (in [Tungsten Replicator 5.2 Manual]), the template that is used to create the metadata file was missing.

      Issues: CT-206

1.6. Tungsten Replicator 5.1.0 GA (26 Apr 2017)

Tungsten Replicator 5.1.0 is a minor feature release that contains some significant improvements to compatibility and stability for Hadoop loading, JavaScript filters, and heterogeneous filters, together with important bug fixes.

Improvements, new features and functionality

  • Installation and Deployment

    • The list of supported Ruby versions has been updated to include versions up to and including Ruby 2.4.0.

      Issues: CT-138

  • Heterogeneous Replication

    • The support for loading into Hadoop has been improved with better compatibility for recent Hadoop releases from the major Hadoop distributions.

      • MapR 5.2

      • Cloudera 5.8

      In addition to ensuring the basic compatibility of these tools, continuent-tools-hadoop has been updated to support the use of the beeline command as well as the hive command.

      Issues: CT-153, CT-155

      For more information, see The load-reduce-check Tool.

    • The replicator and the load-reduce-check (in [Tungsten Replicator 5.2 Manual]) command that is part of the continuent-tools-hadoop repository have been updated so that they can support loading and replication into Hadoop from Oracle. This includes creating suitable DDL templates and support for accessing Oracle via JDBC to load DDL information.

      Issues: CT-168

  • Filters

    • The JavaScript environment has been updated to include a standardized set of filter functionality. This is provided and loaded as standard into all JavaScript filters. The core utilities are provided in the coreutils.js file.

      The current file provides three functions:

      • load — which loads an external JavaScript file.

      • readJSONFile — which loads an external JSON file into a variable.

      • JSON — provides JSON class including the ability to dump a JavaScript variable into a JSON string.

      Issues: CT-99

    • The thl (in [Tungsten Replicator 5.2 Manual]) has been improved to support -from (in [Tungsten Replicator 5.2 Manual]) and -to (in [Tungsten Replicator 5.2 Manual]) options for selecting the range. These act as synonyms for the existing -low (in [Tungsten Replicator 5.2 Manual]) and -high (in [Tungsten Replicator 5.2 Manual]) options and can be used with all commands.

      Issues: CT-111

    • A number of filters have been updated so that the THL metadata for the transaction includes whether a specific filter has been applied to the transaction in question. This is designed to make it easier to determine whether the filter has been applied, particularly in heterogeneous replication, and also to determine whether the incoming transactions are suitable to be applied to a target that requires them. Currently the metadata is only added to the transactions and no enforcement is made.

      The following filters add this information:

      The format of the metadata is tungsten_filter_NAME=true.

      Issues: CT-157

Bug Fixes

  • Installation and Deployment

    • The rubygems extension to Ruby was not loaded correctly, causing some tools to fail to load, or to fail to use the Net::SSH tools correctly.

      Issues: CT-143

    • The tpm update (in [Tungsten Replicator 5.2 Manual]) command could fail when using Ruby 1.8.7.

      Issues: CT-165

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could fail if the innodb_log_home_dir and innodb_data_home_dir options were set to a value different from the datadir option, and the --direct (in [Tungsten Replicator 5.2 Manual]) option was used.

      Issues: CT-83, CT-141

  • Heterogeneous Replication

    • The Hadoop loader would previously load CSV files directly into /users/tungsten within HDFS, completely ignoring the setting of the replication user within the replicator. This has been corrected so that data is loaded under the configured replication user.

      Issues: CT-134

    • By default, the Hadoop loader used a directory structure that matched SERVICENAME/SCHEMANAME/TABLENAME. This caused problems with the default DDL templates and the continuent-tools-hadoop tools, which used only the schema and table name.

      Issues: CT-135

1.7. Tungsten Replicator 5.0.1 GA (23 Feb 2017)

Tungsten Replicator 5.0.1 is a bugfix release that contains critical fixes and improvements from the Tungsten Replicator 5.0.0 release. Specifically, it changes the default security and other settings to make upgrades from previous releases easier, and other fixes and improvements to the Oracle support and command-line tools.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The Ruby Net::SSH module, which has been bundled with Tungsten Replicator in past releases, is no longer included. This is due to the wide range of Ruby versions and deployment environments that we support, and differences in the Net::SSH module supported and used with different Ruby versions. In order to simplify the process and ensure that the platforms we support operate correctly, the Net::SSH module has been removed and will now need to be installed before deployment.

    To ensure you have the correct environment before deployment, ensure both the Net::SSH and Net::SCP Ruby modules are installed using gem:

    shell> gem install net-ssh
    shell> gem install net-scp

    Depending on your environment, you may also need to install the io-console module:

    shell> gem install io-console

    If during installation you get an error similar to this:

    mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

    It indicates that you do not have the Ruby development headers installed. Use your native package management tool (for example, yum or apt) to install the ruby-dev package. For example:

    shell> sudo apt install ruby-dev

    Issues: CT-88

  • The replicator (in [Tungsten Replicator 5.2 Manual]) is no longer restarted when updating the configuration with tpm (in [Tungsten Replicator 5.2 Manual]) when using the --replace-tls-certificate (in [Tungsten Replicator 5.2 Manual]) option.

    Issues: CT-120

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039
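    The setting that tpm checks can also be inspected manually on the MySQL 5.7 server; the mysql client invocation is illustrative:

```shell
# Returns 1 if super_read_only is enabled, which tpm will warn about.
shell> mysql -e "SELECT @@global.super_read_only"
```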

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

Improvements, new features and functionality

  • Installation and Deployment

    • Tungsten Replicator is now certified for deployment on systems running Java 8.

      Issues: CT-27

  • Core Replicator

    • The replicator will now generate a detailed heap dump in the event of a failure. This will help during debugging and identifying any issues.

      Issues: CT-11

  • Filters

    • The Rhino JavaScript engine, which is incorporated for use by the filtering and batch loading mechanisms, has been updated to Rhino 1.7R4. This addresses a number of different issues with the embedded library, including a performance issue that could lead to increased latency during filter operations.

      Issues: CT-21

Bug Fixes

  • Installation and Deployment

    • The Ruby Net::SSH libraries used by tpm (in [Tungsten Replicator 5.2 Manual]) have been updated to the latest version. This addresses issues with SSH and staging based deployments, including KEX algorithm errors.

      Issues: CT-16

    • On some platforms the keytool command could fail to be found, causing an error within the installation when generating certificates.

      Issues: CT-73

  • Command-line Tools

    • The tpasswd (in [Tungsten Replicator 5.2 Manual]) command could create a log file with the wrong permissions.

      Issues: CT-117

  • Core Replicator

    • Checksums in MySQL could cause problems when parsing the MySQL binary log due to a change in the way the checksum information is recorded within the binary log. This could leave the replicator unable to come online.

      Issues: CT-72

1.8. Tungsten Replicator 5.0.0 GA (7 December 2015)

VMware Continuent for Replication 5.0.0 is a major release that incorporates the following changes:

  • The software release has been renamed. For most users of VMware Continuent for Replication, the filename will start with vmware-continuent-replication. If you are using an Oracle DBMS as the source and have purchased support for the latest version, the filename will start with vmware-continuent-replication-oracle-source.

    The documentation has not been updated to reflect this change. While reading these examples you will see references to tungsten-replicator, which apply equally to your software release.

  • A new Oracle extraction module that reads the Oracle redo logs provides a faster, more compatible, and more efficient method for extracting data from Oracle databases. For more information, see Oracle Replication using Redo Reader.

  • Security, including file permissions and TLS/SSL is now enabled by default. For more information, see Deployment Security.

  • License keys are now required during installation. For more information, see Deploy License Keys.

  • Support for RHEL 7 and CentOS 7.

  • Basic support for MySQL 5.7.

  • Cleaner and simpler directory layout.

Upgrading from previous versions should be fully tested before being attempted in a production environment. The changes listed below affect tpm (in [Tungsten Replicator 5.2 Manual]) output and the requirements for operation.

Improvements, new features and functionality

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 5.2 Manual]) writes the configuration log to /tmp/tungsten-configure.log. If the file exists, but is owned by a separate user the operation will fail with a Permission Denied error. The operation has now been updated to create a directory within /tmp (in [Tungsten Replicator 5.2 Manual]) with the name of the current user where the configuration log will be stored. For example, if the user is tungsten, the log will be written to /tmp/tungsten/tungsten-configure.log.

      Issues: CONT-1402

Bug Fixes

  • Installation and Deployment

    • If an installation by tpm (in [Tungsten Replicator 5.2 Manual]) failed, running tpm uninstall could also fail. The command now correctly uninstalls even a partial installation.

      Issues: CONT-1359

Tungsten Replicator 5.0.0 includes the following changes:

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The Bristlecone load generator toolkit is no longer included with Tungsten Replicator by default.

    Issues: CONT-903

  • The scripts previously located within the scripts directory have now been relocated to the standard bin directory. This does not affect their availability if the env.sh (in [Tungsten Replicator 5.2 Manual]) script has been used to update your path. This includes, but is not limited to, the following commands:

    • ebs_snapshot.sh

    • file_copy_snapshot.sh

    • multi_trepctl

    • tungsten_get_position

    • tungsten_provision_slave

    • tungsten_provision_thl

    • tungsten_read_master_events

    • tungsten_set_position

    • xtrabackup.sh

    • xtrabackup_to_slave

    Issues: CONT-904
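    Assuming a default installation layout (the path below is illustrative), sourcing env.sh keeps the relocated commands available on the PATH:

```shell
# After sourcing env.sh, the relocated commands resolve from the bin directory.
shell> source /opt/continuent/share/env.sh
shell> which multi_trepctl
```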

  • The backup (in [Tungsten Replicator 5.2 Manual]) and restore (in [Tungsten Replicator 5.2 Manual]) functionality in trepctl (in [Tungsten Replicator 5.2 Manual]) has been deprecated and will be removed in a future release.

    Issues: CONT-906

  • The JavaScript filters have been moved to a new location in keeping with the rest of the configuration:

    • samples/extensions/javascript has moved to support/filters-javascript

    • samples/scripts/javascript-advanced has moved to support/filters-javascript

    The use of these filters has not changed but the default location for some filter configuration files has moved to support/filters-config. Check your current configuration before upgrading.

    Issues: CONT-908

  • The ddlscan (in [Tungsten Replicator 5.2 Manual]) templates have been moved to the support/ddlscan directory.

    Issues: CONT-909

  • The Vertica applier now writes exceptions to a temporary file during replication.

    The applier statements will include the EXCEPTIONS attribute in each statement to assist in debugging. Review the replicator log or trepctl status (in [Tungsten Replicator 5.2 Manual]) output for more details.

    Issues: CONT-1169

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • The replicator can hit a MySQL lock wait timeout when processing large transactions.

      Issues: CONT-1106

    • The replicator can run into an OutOfMemory condition when handling very large row-based replication events. This can be avoided by setting --optimize-row-events=false (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1115

    • The replicator can fail during LOAD DATA commands or Vertica loading if the system permissions are not set correctly. If this is encountered, make sure the MySQL or Vertica system users are a member of the Tungsten system group. The issue may also be avoided by removing system file protections with --file-protection-level=none (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1460
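Group membership can typically be granted with usermod. The command below is a hedged example: the group name tungsten and the system user name mysql are assumptions that should be checked against the actual installation.

```shell
# Hypothetical remediation: add the MySQL system user to the Tungsten
# system group. Verify the real group and user names first.
sudo usermod -a -G tungsten mysql
```

The MySQL or Vertica service may need to be restarted for the new group membership to take effect.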

Improvements, new features and functionality

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide help output when specifically requested with the -h or -help options.

      Issues: CONT-1003

      For more information, see dsctl help Command.

Bug Fixes

  • Core Replicator

    • A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing.

      Issues: CONT-1163

    • A slave replicator could fail to come ONLINE (in [Tungsten Replicator 5.2 Manual]) if the last THL file is empty.

      Issues: CONT-1164

    • The replicator applier and filters may fail with ORA-955 because the replicator did not check for metadata tables using uppercase table names.

      Issues: CONT-1375

2. Tungsten Clustering Release Notes

2.1. Tungsten Clustering 5.3.0 GA (12 December 2017)

Release 5.3.0 is a new feature release that contains improvements to the core replicator and manager, including new functionality in preparation for the next major release (6.0.0).

Key improvements include:

  • Improved and simplified user-focused logging, to make it easier to identify issues and problems.

  • Easier access to the overall cluster status from the command-line through the Connector cluster-status command.

  • Many fixes and stabilisation improvements to the Connector.

Improvements, new features and functionality

  • Tungsten Connector

    • The connector (in [Continuent Tungsten 4.0 Manual]) has been extended to provide cluster status information, and also to provide this information encapsulated in a JSON format. To get the cluster status through the connector (in [Continuent Tungsten 4.0 Manual]) command:

      shell> connector cluster-status

      To get the information in JSON format:

      shell> connector cluster-status -json

      Issues: CONT-630


Bug Fixes

  • Behavior Changes

    • The way that information is logged has been improved so that it is easier to identify errors and their causes when looking at the logs. To achieve this, logging is now also written to an additional file, one for each component, and the new files contain only messages at the WARNING or ERROR levels. These files are:

      • manager-user.log

      • connector-user.log

      • replicator-user.log

      These files should be much smaller, and much simpler to read and digest in the event of a problem. Currently the information and warnings added to the logs are being adjusted so that the new log files do not contain unnecessary entries.

      The original log files (tmsvc.log, connector.log, trepsvc.log) remain unchanged in terms of the information logged to them.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

  • Command-line Tools

    • Backups using datasource backup (in [Continuent Tungsten 4.0 Manual]) could fail to complete properly when using xtrabackup.

      Issues: CT-352

    • The tpm diag (in [Tungsten Replicator 5.2 Manual]) would fail to get manager logs from hosts that were configured without a replicator, for example standalone connector or witness hosts.

      Issues: CT-360

  • Tungsten Connector

    • If the MySQL server returns a 'too many open connections' error when connecting through the Drizzle driver, the connector could hang with a log message about BufferUnderFlow.

      Issues: CT-86

    • Support for complex passwords within user.map (in [Continuent Tungsten 4.0 Manual]), which may include one or more single or double quotes, has been updated. The following rules now apply for passwords in user.map (in [Continuent Tungsten 4.0 Manual]):

      • Quotes ' and double quotes " are now supported in the user.map password.

      • If there's a space in the password, the password needs to be surrounded with " or ':

        "password with space"
      • If there's one or several " or ' in the password without a space, the password doesn't need to be surrounded:

        my"pas'w'or"d"
      • If the password itself starts and ends with the same quote (" or '), it needs to be surrounded by quotes.

        "'mypassword'" so that the actual password is 'mypassword'.

      As a general rule, if the password is enclosed in either single or double quotes, these are not included as part of the password during authentication.

      Issues: CONT-239
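The quoting rule above can be sketched as a small shell function. This is an illustrative re-implementation of the described behaviour for clarity, not Continuent code, and the function name is hypothetical.

```shell
# Sketch of the user.map password quoting rule: a password wrapped in a
# matching pair of single or double quotes has the outer quotes stripped
# before authentication; any other password is used verbatim.
strip_password_quotes() {
  case "$1" in
    \"*\") p=${1#\"}; p=${p%\"} ;;   # surrounded by double quotes
    \'*\') p=${1#\'}; p=${p%\'} ;;   # surrounded by single quotes
    *)     p=$1 ;;                   # used as-is
  esac
  printf '%s\n' "$p"
}

strip_password_quotes '"password with space"'   # prints: password with space
```

Note that a mixed case such as my"pass is left untouched, since it does not both start and end with the same quote character.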

    • When starting up, the Connector would connect to the first master in the first data service within its own internal list; it now connects to the first entry in the user.map (in [Continuent Tungsten 4.0 Manual]) configuration.

      Issues: CT-385

    • When a connection had its channel updated by a read/write split (either automatically because Smartscale was enabled, or manually with selective read/write splitting), the channel left in the background would be wrongly marked as "in use", so the keepalive task could no longer ping it.

      Issues: CT-388

    • The bridgeServerToClientForcedCloseTimeout property default value has been reduced from 500ms to 50ms.

      Issues: CT-392

      For more information, see Adjusting the Bridge Mode Forced Client Disconnect Timeout.

    • Under certain circumstances it was possible for the Connector, when configured to choose a slave based on the slave latency (i.e. using the --connector-max-slave-latency (in [Tungsten Replicator 5.2 Manual]) configuration option), to select the wrong slave. Rather than choosing the most advanced slave (the one with the lowest latency), the slave with the highest latency could be selected instead.

      Issues: CONT-421

    • The connector would log a message each time a connection disappeared without being properly closed. For connections through load balancers this is standard behavior, and could lead to a large number of log entries that would make it difficult to find other errors. The default setting has been changed so the connection warnings are no longer produced by default. This can be changed by setting the printConnectionWarnings property to true.

      Issues: CT-456

  • Tungsten Manager

    • If the manager was on the same host as the coordinator, and an error occurred while writing information to disk, a subsequent failover would not take place. Since a disk write failure is a plausible cause of such a failure, this could leave the cluster in an unstable state.

      Issues: CT-364

    • Within a composite deployment, switching a node in a local cluster would cause all relays within the entire composite cluster to point to that node as a master datasource.

      Issues: CT-378

Tungsten Clustering 5.3.0 Includes the following changes made in Tungsten Replicator 5.3.0

Release 5.3.0 is an important feature release that contains some key new functionality for replication. In particular:

  • JSON data type column extraction support for MySQL 5.7 and higher.

  • Generated column extraction support for MySQL 5.7 and higher.

  • DDL translation support for heterogeneous targets, initially supporting DDL translation from MySQL to MySQL, Vertica, and Redshift targets.

  • Data concentration support for replication into a single target schema (with additional source schema information added to each table) for both HPE Vertica and Amazon Redshift targets.

  • Rebranded and updated support for Oracle extraction with the Oracle Redo Reader, including improvements to offboard deployment, more configuration options, and support for the deployment and installation of multiple offboard replication services within a single replicator.

This release also contains a number of important bug fixes and minor improvements to the product.

Improvements, new features and functionality

  • Behavior Changes

    • The way that information is logged has been improved so that it is easier to identify errors and their causes when looking at the logs. To achieve this, logging is now also written to an additional file containing only messages at the WARNING or ERROR levels. The new file is replicator-user.log. The original file, trepsvc.log, remains unchanged.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

    • Support for Java 7 (JDK or JRE 1.7) has been deprecated, and will be removed in the 6.0.0 release. The software is compiled using Java 8 with Java 7 compatibility.

      Issues: CT-252

    • Some JavaScript filters had DOS-style line breaks.

      Issues: CT-376

    • Support for JSON datatypes and generated columns within MySQL 5.7 and greater has been added to the MySQL extraction component of the replicator.

      Important

      Due to a MySQL bug in the way that JSON and generated columns are represented within the MySQL binary log, it is possible for the size of the data and the reported size to differ, which could cause data corruption. To account for this behavior and to prevent data inconsistencies, the replicator can be configured to either ignore, warn, or stop if the mismatch occurs.

      This can be set by modifying the property replicator.extractor.dbms.json_length_mismatch_policy.

      Until this problem is addressed within MySQL, tpm (in [Tungsten Replicator 5.2 Manual]) will still generate a warning about the issue, which can be ignored during installation by using the --skip-validation-check=MySQLGeneratedColumnCheck (in [Tungsten Replicator 5.2 Manual]) option.

      For more information on the effects of the bug, see MySQL Bug #88791.

      Issues: CT-5, CT-468
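As an illustration, the policy could be set at configuration time through the --property mechanism. The invocation below is hypothetical: alpha is a placeholder service name, and the tpm path depends on the installation; only the property name and the ignore/warn/stop values come from the note above.

```shell
# Hypothetical example: configure the JSON length mismatch policy to "warn".
# "alpha" is a placeholder service name; adjust the tpm path as needed.
./tools/tpm configure alpha \
  --property=replicator.extractor.dbms.json_length_mismatch_policy=warn
```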

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.2 Manual]) command has been updated to correctly operate with CentOS 7 and higher. Due to an underlying change in the way IP configuration information was sourced, the extraction of the IP address information has been updated to use the ip addr command.

      Issues: CT-35

    • The THL retention setting is now checked in more detail during installation. When the --thl-log-retention (in [Tungsten Replicator 5.2 Manual]) option is configured when extracting from MySQL, the value is compared to the binary log expiry setting in MySQL (expire_logs_days). If the retention value is lower, a warning is produced to highlight the potential for loss of data.

      Issues: CT-91
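The comparison can be sketched as follows. This is an assumed restatement of the check's logic, not the actual tpm implementation, and the function name is hypothetical.

```shell
# Warn when the THL retention (in days) is shorter than MySQL's
# expire_logs_days, highlighting the potential data loss described above.
check_thl_retention() {
  thl_days=$1
  binlog_days=$2
  if [ "$thl_days" -lt "$binlog_days" ]; then
    echo "WARNING: THL retention (${thl_days}d) is less than expire_logs_days (${binlog_days}d)"
  fi
}

check_thl_retention 3 7    # prints a warning
check_thl_retention 10 7   # prints nothing
```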

  • Command-line Tools

    • The size-related options to the thl list (in [Tungsten Replicator 5.2 Manual]) command, such as -sizes (in [Tungsten Replicator 5.2 Manual]) or -sizesdetail (in [Tungsten Replicator 5.2 Manual]), now additionally output summary information for the selected THL events:

      Total ROW chunks: 8 with 7 updated rows (50%)
      Total STATEMENT chunks: 8 with 2552 bytes (50%)
      16 events processed

      A new option has also been added, -sizessummary, that only outputs the summary information.

      Issues: CT-433


  • Filters

    • A new filter has been added, rowadddbname, which adds the source database name and optional database hash to every incoming row of data. This can be used to help identify source information when concentrating information into a single schema.

      Issues: CT-407

Bug Fixes

  • Installation and Deployment

    • An issue has been identified with the way certain operating systems now configure their open files limits, which can upset the checks within tpm (in [Tungsten Replicator 5.2 Manual]) that determine the open files limits configured for MySQL. To ensure that the open files limit has been set correctly, check the configuration of the service:

      1. Copy the system configuration:

        shell> sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
        shell> sudo vim /etc/systemd/system/mysql.service
      2. Add the following line to the end of the copied file:

        LimitNOFILE=infinity
      3. Reload the systemctl daemon:

        shell> sudo systemctl daemon-reload
      4. Restart MySQL:

        shell> service mysql restart

      With this configuration in place, MySQL will respect the open_files_limit configuration option.

      Issues: CT-148

    • The check to determine whether triggers had been enabled within the MySQL data source was not executed correctly, meaning that warnings about unsupported triggers would not be raised.

      Issues: CT-185

    • When using tpm diag (in [Tungsten Replicator 5.2 Manual]) on a MySQL deployment, the MySQL error log would not be identified and included properly if the default datadir option was not /var/lib/mysql.

      Issues: CT-359

    • Installation with security enabled through SSL could fail intermittently because the certificates would fail to be copied to the required directory during the installation process.

      Issues: CT-402

    • The Net::SSH libraries used by tpm (in [Tungsten Replicator 5.2 Manual]) have been updated to reflect the deprecation of the paranoid parameter.

      Issues: CT-426

    • Using a complex password, particularly one with single or double quotes, when specifying a password for tpm (in [Tungsten Replicator 5.2 Manual]), could cause checks and the installation to raise errors or fail, although the actual configuration would work properly. The problem was limited to internal checks by tpm (in [Tungsten Replicator 5.2 Manual]) only.

      Issues: CT-440

  • Command-line Tools

    • Within Vertica deployments, the internal identity of the applier was set incorrectly to PostgreSQL. This would make it difficult for certain internal processes to identify the true datasource type. The setting did not affect the actual operation.

      Issues: CT-452

  • Core Replicator

    • When parsing THL data it was possible for the internal THL processing to lead to a java.util.ConcurrentModificationException. This indicated that the underlying THL event metadata structure used internally had changed between uses.

      Issues: CT-355

2.2. Tungsten Clustering 5.2.2 GA (22 October 2017)

Release 5.2.2 is a bug fix release that addresses a specific issue with high-volume concurrent connections through the Tungsten Connector.

Bug Fixes

  • Command-line Tools

    • The tungsten_send_diag (in [Tungsten Replicator 5.2 Manual]) command could fail with the error Can't locate Digest/HMAC_SHA1.pm.

      Issues: CT-389

  • Tungsten Connector

    • A bug was found in the performance optimization made as part of CT-340, which could cause the Connector to start dropping connections during periods of heavy load.

      Issues: CT-398

Tungsten Clustering 5.2.2 Includes the following changes made in Tungsten Replicator 5.2.2

Tungsten Replicator 5.2.2 is a minor bugfix release that addresses some bugs found in the previous 5.2.1 release (in [Tungsten Replicator 5.2 Manual]). It is a recommended upgrade for all users making use of cluster to big data replication.

2.3. Tungsten Clustering 5.2.1 GA (21 Sep 2017)

Release 5.2.1 is a bug fix release.

Improvements, new features and functionality

  • Tungsten Connector

    • Host-based read-write splitting is now also available in bridge mode. The solution can work either by using a modified /etc/hosts (in [Tungsten Replicator 5.2 Manual]), or by using multiple localhost entries in user.map (in [Continuent Tungsten 4.0 Manual]):

      @hostoption 127.0.0.2 qos=RO_RELAXED

      Any other IP will get the default configuration (generally RW_STRICT (in [Continuent Tungsten 4.0 Manual])).

      Issues: CT-341

Bug Fixes

  • Installation and Deployment

    • The tpm connector (in [Continuent Tungsten 4.0 Manual]) command would fail to import any local configuration options.

      Issues: CT-137

  • Command-line Tools

    • The tpm connector (in [Continuent Tungsten 4.0 Manual]) would fail in MySQL 5.7 deployments because MySQL expects to use SSL by default.

      Issues: CT-363

  • Tungsten Connector

    • A small optimisation has been made to the way the connector reads packets from MySQL.

      Issues: CT-340

    • The logging configuration for the Connector was incorrectly set, with a check interval on the logging file of 30ms in place of the desired 30s. This introduced a significant performance deficit due to over-frequent checking of the file. The interval has now been updated to 30s.

      Issues: CT-342

    • When running in bridge mode, the Connector would not disconnect ongoing connections after losing contact with the managers.

      Issues: CT-371

Tungsten Clustering 5.2.1 Includes the following changes made in Tungsten Replicator 5.2.1

Tungsten Replicator 5.2.1 is a minor bugfix release that addresses some bugs found in the previous 5.2.0 release (in [Tungsten Replicator 5.2 Manual]). It is a recommended upgrade for all users.

2.4. Tungsten Clustering 5.2.0 GA (19 July 2017)

Release 5.2.0 is a new feature release that contains improvements to the trepctl (in [Tungsten Replicator 5.2 Manual]) and thl (in [Tungsten Replicator 5.2 Manual]) commands for better understanding of replication state, particularly with larger transactions, and provides support for new appliers in the Tungsten Replicator.

Bug Fixes

  • Tungsten Manager

    • Due to an issue with the manager, timeouts, and the time taken to perform a switch when restarting the replicator, upgrades and switches between different versions of Continuent Tungsten could fail. The timings have been adjusted to address the issue.

      Issues: CT-192

    • A memory leak in the manager could cause the manager to restart after exhausting memory. The issue was most often seen on monitored systems, where status information was frequently updated.

      Issues: CT-211

Tungsten Clustering 5.2.0 Includes the following changes made in Tungsten Replicator 5.2.0

Tungsten Replicator 5.2.0 is a new feature release that contains a combination of new features, specifically new replicator applier targets:

This release also provides improvements to the trepctl (in [Tungsten Replicator 5.2 Manual]) and thl (in [Tungsten Replicator 5.2 Manual]) commands, and bug fixes to improve stability.

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide clearer and more detailed information on certain aspects of its operation. Two new commands have been added, trepctl qs (in [Tungsten Replicator 5.2 Manual]) and trepctl perf (in [Tungsten Replicator 5.2 Manual]):

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide a simplified status output that provides an easier to understand status, using the qs (in [Tungsten Replicator 5.2 Manual]) command. For example:

        shell> trepctl qs
        State: alpha Online for 1172.724s, running for 124280.671s
        Latency: 0.71s from source DB commit time on thl://ubuntuheterosrc:2112/ into target database
         7564.198s since last source commit
        Sequence: 4860 last applied, 0 transactions behind (0-4860 stored) estimate 0.00s before synchronization
      • The trepctl perf (in [Tungsten Replicator 5.2 Manual]) command provides detailed performance information on the operation and status of the replicator and individual stages. This can be useful to identify where any additional latency or performance issues lie:

        shell> trepctl perf
        Statistics since last put online 1360.141s ago
        Stage | Seqno | Latency | Events | Extraction | Filtering | Applying | Other | Total
        remote-to-thl | 4860 | 0.475s | 70 | 116713.145s | 0.000s | 2.920s | 0.000s | 116716.065s
         Avg time per Event | 1667.331s | 0.000s | 0.000s | 0.042s | 1667.372s
        thl-to-q | 4860 | 0.527s | 3180 | 113842.933s | 0.011s | 2873.039s | 0.102s | 116716.085s
         Avg time per Event | 35.800s | 0.000s | 0.000s | 0.903s | 36.703s
        q-to-dbms | 4860 | 0.536s | 3180 | 112989.667s | 0.010s | 3701.035s | 25.554s | 116716.266s
         Avg time per Event | 35.531s | 0.000s | 0.008s | 1.164s | 36.703s

      Issues: CT-29

    • A number of improvements have been made to the identification of long running transactions within the replicator:

      • A new field has been added to the output of trepctl status -name tasks (in [Tungsten Replicator 5.2 Manual]):

        timeInCurrentEvent : 6571.462

        This shows the time that the replicator has been processing the current event. For a long-running event, it helps to indicate that the replicator is still processing the current event. Note that this is just a counter showing how long the current event has been running. For a replicator that is idle, this will show the time the replicator has spent both processing the original event and waiting to process the new event.

      • The thl list (in [Tungsten Replicator 5.2 Manual]) has been expanded to provide simple and detailed THL size information so that large transactions can be identified. Using the -sizes (in [Tungsten Replicator 5.2 Manual]) and -sizesdetail (in [Tungsten Replicator 5.2 Manual]) displays detailed information about the size of the SQL, number of rows, or both for each stored event. For example:

        shell> thl list -sizes
        SEQ# Frag# Tstamp
        ...
        12 0 2017-06-28 13:21:11.0 Event total: 1 chunks 73 bytes in SQL statements 0 rows
        13 0 2017-06-28 13:21:10.0 Event total: 1645 chunks 0 bytes in SQL statements 1645 rows
        14 0 2017-06-28 13:21:11.0 Event total: 1 chunks 36 bytes in SQL statements 0 rows

        For more information, see thl list -sizes Command and thl list -sizesdetail Command.

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide more detailed information on the performance of the replicator, see trepctl perf (in [Tungsten Replicator 5.2 Manual]).

      • For easier navigation and selection of THL events, the thl (in [Tungsten Replicator 5.2 Manual]) command has had two further command-line options added, -first (in [Tungsten Replicator 5.2 Manual]) and -last (in [Tungsten Replicator 5.2 Manual]), to select the first and last events in the THL. Both also take an optional number to show the first N or last N events.

      Issues: CT-34

    • A new command, tungsten_send_diag (in [Tungsten Replicator 5.2 Manual]), has been added that provides a simplified method for sending tpm diag (in [Tungsten Replicator 5.2 Manual]) output automatically through to the support team. The new command uploads the diagnostic information directly to Amazon S3 without requiring a separate upload to Zendesk.

      Issues: CT-158

    • A new command, clean_release_directory (in [Tungsten Replicator 5.2 Manual]), has been added to the distribution. This command removes old releases from the installation directory that have been created during upgrades or configuration updates. All old entries are removed except the currently active one and the last five entries.

      Issues: CT-204

  • Documentation

    • The documentation has been updated to clarify the use of the --property (in [Tungsten Replicator 5.2 Manual]) option to tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-180

Bug Fixes

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could hang during the execution of an external command which could cause the entire process to fail to complete properly.

      Issues: CT-82

    • When a replicator had been configured as a cluster slave, the masterListenUri (in [Tungsten Replicator 5.2 Manual]) would be blank. This was because a pure cluster-slave configuration did not correctly configure the necessary pipelines.

      Issues: CT-197

    • The query (in [Tungsten Replicator 5.2 Manual]) tool has been updated to provide better error handling and messages during an error. This particularly affects tools which embed the use of this command, such as tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-203

    • An auto-refresh option has been added to certain commands within trepctl (in [Tungsten Replicator 5.2 Manual]). The refresh is enabled by adding the -r (in [Tungsten Replicator 5.2 Manual]) option and a number of seconds to the trepctl status (in [Tungsten Replicator 5.2 Manual]), trepctl qs (in [Tungsten Replicator 5.2 Manual]), or trepctl perf (in [Tungsten Replicator 5.2 Manual]) commands. For example, trepctl qs -r 5 (in [Tungsten Replicator 5.2 Manual]) refreshes the quick status output every 5 seconds.

      Issues: CT-209

2.5. Tungsten Clustering 5.1.1 GA (23 May 2017)

Bug Fixes

  • Tungsten Manager

    • A memory leak in the manager could cause the manager to restart after exhausting memory. The issue was most often seen on monitored systems, where status information was frequently updated.

      Issues: CT-211

Tungsten Clustering 5.1.1 Includes the following changes made in Tungsten Replicator 5.1.1

Tungsten Replicator 5.1.1 is a minor bugfix release that addresses some bugs found in the previous 5.1.0 release (in [Tungsten Clustering for MySQL 5.1 Manual]). It is a recommended upgrade for all users.

Bug Fixes

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.2 Manual]) command has been updated:

      • The -ascmd (in [Tungsten Replicator 5.2 Manual]) option has been added to output the current position as a command that you can use verbatim to reset the status. For example:

        shell> dsctl get -ascmd
        dsctl set -seqno 17 -epoch 11 -event-id "mysql-bin.000082:0000000014031577;-1" -source-id "ubuntu"
      • The -reset (in [Tungsten Replicator 5.2 Manual]) option has been added so that the current position can be reset and then set using dsctl set -reset without having to run two separate commands.

      Issues: CT-24

    • The availability and default configuration of some filters has been changed so that certain filters are now available in all configurations. This does not affect existing filter deployments.

      Issues: CT-84

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could fail to complete properly due to a problem with the threads created during the provision process.

      Issues: CT-202

  • Backup and Restore

    • The trepctl backup (in [Tungsten Replicator 5.2 Manual]) operation could fail if the system ran out of disk space, or if the storage.index file could not be written or became corrupted. The backup system will now recreate the file if the information cannot be read properly.

      Issues: CT-122

  • Heterogeneous Replication

    • When creating DDL from an Oracle source for Hadoop using ddlscan (in [Tungsten Replicator 5.2 Manual]), the template that is used to create the metadata file was missing.

      Issues: CT-206

2.6. Tungsten Clustering 5.1.0 GA (26 Apr 2017)

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • When SSL is enabled, the Connector automatically advertises itself and its ports as SSL capable. This can trigger some clients to use SSL even when SSL has not been configured on the client side, causing connections to fail and not operate correctly.

    The configuration can be controlled by using the --connector-ssl-capable (in [Tungsten Clustering for MySQL 5.1 Manual]) option to tpm (in [Tungsten Replicator 5.2 Manual]). By default, the connector will advertise as SSL capable.

    Issues: CT-140

Improvements, new features and functionality

  • Installation and Deployment

    • The list of supported Ruby versions has been updated to support Ruby up to and including Ruby 2.4.0.

      Issues: CT-138

Bug Fixes

  • Installation and Deployment

    • The rubygems extension to Ruby was not loaded correctly, causing some tools to fail to load correctly, or to fail to use the Net::SSH tools correctly.

      Issues: CT-143

    • The tpm update (in [Tungsten Replicator 5.2 Manual]) command could fail when using Ruby 1.8.7.

      Issues: CT-165

Tungsten Clustering 5.1.0 Includes the following changes made in Tungsten Replicator 5.1.0

Tungsten Replicator 5.1.0 is a minor feature release that contains some significant improvements in the compatibility and stability of Hadoop loading, JavaScript filters, and heterogeneous filter support, together with important bug fixes.

Improvements, new features and functionality

  • Installation and Deployment

    • The list of supported Ruby versions has been updated to support Ruby up to and including Ruby 2.4.0.

      Issues: CT-138

  • Heterogeneous Replication

    • The support for loading into Hadoop has been improved with better compatibility for recent Hadoop releases from the major Hadoop distributions.

      • MapR 5.2

      • Cloudera 5.8

      In addition to ensuring the basic compatibility of these tools, the continuent-tools-hadoop scripts have been updated to support the use of the beeline command as well as the hive command.

      Issues: CT-153, CT-155

      For more information, see The load-reduce-check Tool.

    • The replicator and load-reduce-check (in [Tungsten Replicator 5.2 Manual]) command that is part of the continuent-tools-hadoop repository has been updated so that it can support loading and replication into Hadoop from Oracle. This includes creating suitable DDL templates and support for accessing Oracle via JDBC to load DDL information.

      Issues: CT-168

  • Filters

    • The JavaScript environment has been updated to include a standardized set of filter functionality. This is provided and loaded as standard into all JavaScript filters. The core utilities are provided in the coreutils.js file.

      The current file provides three functions:

      • load — which loads an external JavaScript file.

      • readJSONFile — which loads an external JSON file into a variable.

      • JSON — provides JSON class including the ability to dump a JavaScript variable into a JSON string.

      Issues: CT-99

    • The thl (in [Tungsten Replicator 5.2 Manual]) command has been improved to support the -from (in [Tungsten Replicator 5.2 Manual]) and -to (in [Tungsten Replicator 5.2 Manual]) options for selecting a range. These act as synonyms for the existing -low (in [Tungsten Replicator 5.2 Manual]) and -high (in [Tungsten Replicator 5.2 Manual]) options and can be used with all commands.

      Issues: CT-111

    • A number of filters have been updated so that the THL metadata for a transaction records whether a specific filter has been applied to the transaction in question. This is designed to make it easier to determine whether the filter has been applied, particularly in heterogeneous replication, and also to determine whether the incoming transactions are suitable to be applied to a target that requires them. Currently the metadata is only added to the transactions and no enforcement is made.

      The following filters add this information:

      The format of the metadata is tungsten_filter_NAME=true.

      Issues: CT-157
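
      Downstream tooling can look for this marker when inspecting THL metadata. The following sketch models the metadata as a plain key/value object; the helper name and object shape are assumptions for illustration, not part of the replicator's actual THL API.

```javascript
// Check whether a named filter has stamped a transaction's metadata.
// Marker format from the release note: tungsten_filter_NAME=true
function filterApplied(metadata, filterName) {
  return metadata['tungsten_filter_' + filterName] === 'true';
}

// Hypothetical metadata for a transaction processed by a "pkey" filter.
const meta = { tungsten_filter_pkey: 'true' };
```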

Bug Fixes

  • Installation and Deployment

    • The rubygems extension to Ruby was not loaded correctly, causing some tools to fail to load correctly, or fail to use the Net::SSH tools correctly.

      Issues: CT-143

    • The tpm update (in [Tungsten Replicator 5.2 Manual]) command could fail when using Ruby 1.8.7.

      Issues: CT-165

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could fail if innodb_log_home_dir and innodb_data_home_dir were set to a value different from the datadir option, and the --direct (in [Tungsten Replicator 5.2 Manual]) option was used.

      Issues: CT-83, CT-141

  • Heterogeneous Replication

    • The Hadoop loader would previously load CSV files directly into the /users/tungsten directory within HDFS, completely ignoring the setting of the replication user within the replicator. This has been corrected so that data is loaded into the directory of the configured replication user.

      Issues: CT-134

    • The Hadoop loader would default to a directory structure that matched SERVICENAME/SCHEMANAME/TABLENAME. This caused problems with the default DDL templates and the continuent-tools-hadoop tools, which used only the schema and table name.

      Issues: CT-135

2.7. Tungsten Clustering 5.0.1 GA (23 Feb 2017)

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • In previous releases, a client PING command would open a new connection to the MySQL server, execute a SELECT 1 and then return the OK (or failure) to the client. This could introduce additional load and also affect the metrics if statement execution counts and connections were being monitored.

    This has been updated so that the PING request is sent verbatim through to the server by the connector.

    Issues: CT-1

  • For new installations, security, including SSL/TLS and authentication, is now disabled by default. In 5.0.0 the default was to enable full security on all components, which could lead to problems and difficulty when upgrading.

    Issues: CT-18

  • The manager (in [Continuent Tungsten 4.0 Manual]) is no longer restarted when updating the configuration with tpm (in [Tungsten Replicator 5.2 Manual]) when using the --replace-tls-certificate (in [Tungsten Replicator 5.2 Manual]) option.

    Issues: CT-120

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • When performing an upgrade of MySQL 5.6 to MySQL 5.7, and after running mysql_upgrade, the MySQL server must be restarted. Failure to do this could cause switch or failover operations to fail.

    Issues: CT-70

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Installation and Deployment

    • Support has been improved for CentOS 7, addressing some issues regarding the startup and deployment scripts used to manage MySQL and Continuent Tungsten.

      Issues: CONT-211

    • tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to cope with changes in the configuration and operation of MySQL 5.7.

      Issues: CONT-1060

    • When performing a permissions check within tpm (in [Tungsten Replicator 5.2 Manual]), the way passwords and other information are confirmed has been updated to work correctly with MySQL 5.7. In particular, due to the way passwords are now stored and used, tpm (in [Tungsten Replicator 5.2 Manual]) will confirm the configured user and password by checking that login functions correctly.

      Issues: CONT-1578

    • During installation, tpm (in [Tungsten Replicator 5.2 Manual]) will no longer check the connector credentials if the connector has been configured to operate in bridge mode (in [Continuent Tungsten 4.0 Manual]) and application-specific credentials are not supplied. If the --application-user (in [Tungsten Replicator 5.2 Manual]) and --application-password (in [Tungsten Replicator 5.2 Manual]) options are provided, tpm (in [Tungsten Replicator 5.2 Manual]) will run the same checks even if bridge mode has been selected.

      Issues: CONT-1580, CONT-1581

  • Tungsten Connector

    • The connector has been updated to provide an acknowledgement to the MySQL protocol COM_CHANGE_USER command. This allows client connections that use connection pooling (such as PHP) and the change user command as a verification of an open connection to correctly receive an acknowledgement that the connection is available.

      The option is disabled by default. To enable, set the treat.com.change.user.as.ping property to true during configuration with tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1380

      For more information, see Connector Change User as Ping.
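
      As a sketch, assuming an INI-based installation, the property could be set as follows; the section placement and file path depend on your deployment, and the tpm command-line equivalent would use the --property option.

```ini
# /etc/tungsten/tungsten.ini (sketch; adjust section and paths to your setup)
[defaults]
property=treat.com.change.user.as.ping=true
```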

  • Tungsten Manager

    • All the core tools now generate a detailed heap dump in the event of a failure. This will help with debugging and identifying any issues.

      Issues: CT-11

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Tungsten Replicator 5.2 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

    • The Nagios tungsten_nagios_online (in [Continuent Tungsten 4.0 Manual]) command would report nodes in the standby (in [Continuent Tungsten 4.0 Manual]) role that were in the OFFLINE (in [Tungsten Replicator 5.2 Manual]) state as being in a warning state.

      Issues: CONT-1487

    • The Zabbix related monitoring tools, zabbix_tungsten_services (in [Continuent Tungsten 4.0 Manual]), zabbix_tungsten_progress (in [Continuent Tungsten 4.0 Manual]), zabbix_tungsten_online (in [Continuent Tungsten 4.0 Manual]), and zabbix_tungsten_latency (in [Continuent Tungsten 4.0 Manual]) were not marked as executable.

      Issues: CONT-1493

    • The tpm update (in [Tungsten Replicator 5.2 Manual]) would fail if the installation directory had been specified with a trailing slash.

      Issues: CONT-1499

    • If the cluster is put into maintenance mode, but the coordinator node, or the terminal session that put the cluster into maintenance mode, fails, the cluster would stay in maintenance mode. The node is now tracked, and if the node goes away for any reason, the cluster will be returned to the mode it was in before being placed into maintenance mode.

      Issues: CONT-1535

    • Running tpm connector (in [Continuent Tungsten 4.0 Manual]) while multi_trepctl (in [Tungsten Replicator 5.2 Manual]) is running on the same host would fail with the error:

      ERROR >> db2 >> There is already another Tungsten installation script running

      Issues: CONT-1572

  • Core Replicator

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

  • Tungsten Connector

    • The connector (in [Continuent Tungsten 4.0 Manual]) would not retry and/or reconnect transactions that were automatically redirected to a slave. This has been corrected so that all slave-targeted requests are retried or reconnected and retried in the event of an error.

      Issues: CT-22

    • Automatic retry of a query could fail due to interference from a keep-alive request while re-executing the query.

      Issues: CONT-1512

    • The Tungsten Connector would sometimes retry connectivity on connections that had been killed. The logic has been updated. The default behavior remains the same:

      • Reconnect closed connections

      • Retry autocommitted reads

      The behavior can be modified by using the --connector-autoreconnect-killed-connections (in [Tungsten Replicator 5.2 Manual]) option. Setting it to false disables the reconnection or retry of a connection outside of a planned switch or automatic failover. The default is true, reconnecting and retrying all connections.

      Issues: CONT-1514
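
      For INI-based installations, the option above might be set as follows (a sketch; section placement depends on your deployment):

```ini
[defaults]
# Disable reconnect/retry of killed connections outside planned
# switches and automatic failovers (default is true)
connector-autoreconnect-killed-connections=false
```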

  • Tungsten Manager

    • When deployed within a composite service, a race condition within the manager could cause the master replicator to start up in a shunned state.

      Issues: CT-2

    • The show slave status command when used through a Tungsten Connector connection could fail with the error Data truncation: BIGINT UNSIGNED value is out of range.

      Issues: CT-85

    • An entity called POLICY_MANAGER would appear in the output of ls resources (in [Continuent Tungsten 4.0 Manual]). This could cause problems with monitoring tools which parsed the output. The check script has now been updated to ignore the resource in the output.

      Issues: CT-90

    • In the event of a mysqld restart, the cluster could recover into a state with multiple masters.

      Issues: CONT-1482

    • Recovering a standby (in [Continuent Tungsten 4.0 Manual]) node would switch the role of the node once recovered to be a slave (in [Tungsten Replicator 5.2 Manual]), instead of remaining as a standby (in [Continuent Tungsten 4.0 Manual]).

      Issues: CONT-1486

    • The embedded Drools libraries have been updated to Drools 6.3. This addresses an issue in Drools which could lead to a memory leak.

      Issues: CONT-1547

    • The generated mysql_read_only script would use the password on the command line, and could execute a query that returned multiple rows. Both could cause problems during execution, particularly for MySQL 5.6 and later.

      Issues: CONT-1570

Tungsten Clustering 5.0.1 Includes the following changes made in Tungsten Replicator 5.0.1

Tungsten Replicator 5.0.1 is a bugfix release that contains critical fixes and improvements from the Tungsten Replicator 5.0.0 release. Specifically, it changes the default security and other settings to make upgrades from previous releases easier, and other fixes and improvements to the Oracle support and command-line tools.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The Ruby Net::SSH module, which has been bundled with Tungsten Replicator in past releases, is no longer included. This is due to the wide range of Ruby versions and deployment environments that we support, and differences in the Net::SSH module supported and used with different Ruby versions. In order to simplify the process and ensure that the platforms we support operate correctly, the Net::SSH module has been removed and will now need to be installed before deployment.

    To ensure you have the correct environment before deployment, ensure both the Net::SSH and Net::SCP Ruby modules are installed using gem:

    shell> gem install net-ssh
    shell> gem install net-scp

    Depending on your environment, you may also need to install the io-console module:

    shell> gem install io-console

    If during installation you get an error similar to this:

    mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

    It indicates that you do not have the Ruby development headers installed. Use your native package management interface (for example yum or apt) and install the ruby-dev package. For example:

    shell> sudo apt install ruby-dev

    Issues: CT-88

  • The replicator (in [Tungsten Replicator 5.2 Manual]) is no longer restarted when updating the configuration with tpm (in [Tungsten Replicator 5.2 Manual]) when using the --replace-tls-certificate (in [Tungsten Replicator 5.2 Manual]) option.

    Issues: CT-120

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

Improvements, new features and functionality

  • Installation and Deployment

    • Tungsten Replicator is now certified for deployment on systems running Java 8.

      Issues: CT-27

  • Core Replicator

    • The replicator will now generate a detailed heap dump in the event of a failure. This will help with debugging and identifying any issues.

      Issues: CT-11

  • Filters

    • The Rhino JavaScript engine, which is incorporated for use by the filtering and batch loading mechanisms, has been updated to Rhino 1.7R4. This addresses a number of different issues with the embedded library, including a performance issue that could lead to increased latency during filter operations.

      Issues: CT-21

Bug Fixes

  • Installation and Deployment

    • The Ruby Net::SSH libraries used by tpm (in [Tungsten Replicator 5.2 Manual]) have been updated to the latest version. This addresses issues with SSH and staging based deployments, including KEX algorithm errors.

      Issues: CT-16

    • On some platforms the keytool command could fail to be found, causing an error within the installation when generating certificates.

      Issues: CT-73

  • Command-line Tools

    • The tpasswd (in [Tungsten Replicator 5.2 Manual]) command could create a log file with the wrong permissions.

      Issues: CT-117

  • Core Replicator

    • Checksums in MySQL could cause problems when parsing the MySQL binary log due to a change in the way the checksum information is recorded within the binary log. This could leave the replicator unable to come online.

      Issues: CT-72

2.8. Tungsten Clustering 5.0.0 GA (7 December 2015)

VMware Continuent for Clustering 5.0.0 is a major release that incorporates the following changes:

  • The software release has been renamed. The filename now starts with vmware-continuent-clustering.

    The documentation has not been updated to reflect this change. While reading these examples you will see references to tungsten-replicator which will apply to your software release.

  • The connector now uses bridge-mode in [Continuent Tungsten 4.0 Manual] by default for all new installations and upgrades that do not have read-write splitting configured.

  • Security, including file permissions and TLS/SSL, is now enabled by default. For more information, see Deployment Security.

  • TLS/SSL is supported as the default encrypted communication channel. TLS uses either v1.1 or v1.2, depending on the available Java environment used for execution. For TLS v1.2, use Java 8 or higher.

  • License keys are now required during installation. For more information, see Deploy License Keys.

  • Support for RHEL 7 and CentOS 7.

  • Basic support for MySQL 5.7.

  • Cleaner and simpler directory layout for the replicator.

Upgrading from previous versions should be fully tested before being attempted in a production environment. The changes listed below affect tpm (in [Tungsten Replicator 5.2 Manual]) output and the requirements for operation.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Continuent Tungsten now enables security by default. Security includes:

    • Authentication between command-line tools (cctrl (in [Continuent Tungsten 4.0 Manual])) and background services.

    • SSL/TLS between command-line tools and background services.

    • SSL/TLS between Tungsten Replicator and datasources.

    • SSL/TLS between Tungsten Connector and datasources.

    • File permissions and access by all components.

    The security changes require certificate files to be generated prior to operation. The tpm (in [Tungsten Replicator 5.2 Manual]) command can do that during upgrade if you are using a staging directory. Alternatively, you can create the certificates in [Tungsten Replicator 5.2 Manual] and update your configuration with the corresponding argument. This is required if you are installing from an INI file. See Installing from a Staging Host with Manually Generated Certificates or Installing via INI File with Manually Generated Certificates for more information. This functionality may be disabled by adding --disable-security-controls (in [Tungsten Replicator 5.2 Manual]) to your configuration.

    If you would like tpm (in [Tungsten Replicator 5.2 Manual]) to generate the necessary certificates from the staging directory, run tpm update (in [Tungsten Replicator 5.2 Manual]) with the --replace-tls-certificate (in [Tungsten Replicator 5.2 Manual]) and --replace-jgroups-certificate options.

    staging-shell> ./tools/tpm update --replace-tls-certificate --replace-jgroups-certificate

    For more information, see Deployment Security.

  • Continuent Tungsten now requires license keys in order to operate.

    License keys are provided to all customers with an active support contract. Log in to my.vmware.com to identify your support contract and the associated license keys. After collecting the license keys, they should be placed into /etc/tungsten/continuent.licenses or /opt/continuent/share/continuent.licenses. The /opt/continuent (in [Tungsten Replicator 5.2 Manual]) path should be replaced with your value for --install-directory (in [Tungsten Replicator 5.2 Manual]). Place each license on a new line in the file and make sure it is readable by the tungsten system user.

    If you are testing VMware Continuent or don't have your license key, talk with your sales contact for assistance. You may enable a trial-mode by using the license key TRIAL. This will not affect the runtime operation of VMware Continuent but may impact your ability to get rapid support.

    The tpm (in [Tungsten Replicator 5.2 Manual]) script will display a warning if license keys are not provided or if the provided license keys are not valid.

  • The connector will now use bridge-mode (in [Continuent Tungsten 4.0 Manual]) by default. This change will improve transparency and performance of the connector. The bridge-mode does not use the user.map (in [Continuent Tungsten 4.0 Manual]) file, which reflects other changes toward a more secure default deployment. A warning will be displayed during the validation process to tell you if bridge-mode is being enabled. It will not be enabled in the following cases:

    • The --connector-smartscale (in [Tungsten Replicator 5.2 Manual]) option is set to true.

    • The user.map (in [Continuent Tungsten 4.0 Manual]) file contains @direct (in [Continuent Tungsten 4.0 Manual]) entries.

    • The user.map (in [Continuent Tungsten 4.0 Manual]) file contains @hostoption (in [Continuent Tungsten 4.0 Manual]) entries.

    • The --property=selective.rwsplitting (in [Tungsten Replicator 5.2 Manual]) connector option is set to true.

    This change may be disabled by adding --connector-bridge-mode=false (in [Tungsten Replicator 5.2 Manual]) to your configuration.

    Issues: CONT-1033

    For more information, see Using Bridge Mode.

  • Continuent Tungsten now includes RELEASE_NOTES in the package and displays a warning if they have not been reviewed.

    During some tpm (in [Tungsten Replicator 5.2 Manual]) commands, the script will check to see if the release notes have been reviewed and accepted. This may be done by running tools/accept_release_notes from the staging directory. The script will display the information and prompt the user for acceptance. A hidden file will be created on the staging server to mark the release notes have been accepted and the warning will not be displayed.

    This process may be automated by calling tools/accept_release_notes -y prior to installation. The script will mark the release notes as accepted and the warning will not be displayed.

    Issues: CONT-1122

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Tungsten Connector

    • The SSL support within the Connector has been improved to support multiple aliases, enabling different certificates to be used for different components of the communication, for example, allowing a different certificate for MySQL-to-Connector communication than for Connector-to-client communication.

      Issues: CONT-1126

      For more information, see Deployment Security.

Bug Fixes

  • Core Replicator

    • During installation, a replicator source ID could be misconfigured causing problems during switch and failover operations.

      Issues: CONT-1002

  • Tungsten Connector

    • Following an automatic reconnection, the connector could retry a pending statement whether it was a read or a write.

      The connector will now distinguish between reads and writes, and only retry the statement if it is a read. Any writes will raise an error to be handled by the application.

      Issues: CONT-1461

  • Tungsten Manager

    • The manager could fail to read security.properties during startup; when this occurred, the manager printed a warning in tmsvc.log (in [Tungsten Replicator 5.2 Manual]).

      A race condition was resolved to ensure the manager reads configuration files in the correct order.

      Issues: CONT-1070

Tungsten Clustering 5.0.0 Includes the following changes made in Tungsten Replicator 5.0.0

VMware Continuent for Replication 5.0.0 is a major release that incorporates the following changes:

  • The software release has been renamed. For most users of VMware Continuent for Replication, the filename will start with vmware-continuent-replication. If you are using an Oracle DBMS as the source and have purchased support for the latest version, the filename will start with vmware-continuent-replication-oracle-source.

    The documentation has not been updated to reflect this change. While reading these examples you will see references to tungsten-replicator which will apply to your software release.

  • A new Oracle Extraction module that reads the Oracle Redo logs provides a faster, more compatible, and more efficient method for extracting data from Oracle databases. For more information, see Oracle Replication using Redo Reader.

  • Security, including file permissions and TLS/SSL, is now enabled by default. For more information, see Deployment Security.

  • License keys are now required during installation. For more information, see Deploy License Keys.

  • Support for RHEL 7 and CentOS 7.

  • Basic support for MySQL 5.7.

  • Cleaner and simpler directory layout.

Upgrading from previous versions should be fully tested before being attempted in a production environment. The changes listed below affect tpm (in [Tungsten Replicator 5.2 Manual]) output and the requirements for operation.

Improvements, new features and functionality

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 5.2 Manual]) writes the configuration log to /tmp/tungsten-configure.log. If the file exists, but is owned by a separate user the operation will fail with a Permission Denied error. The operation has now been updated to create a directory within /tmp (in [Tungsten Replicator 5.2 Manual]) with the name of the current user where the configuration log will be stored. For example, if the user is tungsten, the log will be written to /tmp/tungsten/tungsten-configure.log.

      Issues: CONT-1402

Bug Fixes

  • Installation and Deployment

    • Following a failed installation by tpm (in [Tungsten Replicator 5.2 Manual]), running tpm uninstall could also fail. The command now correctly uninstalls even a partial installation.

      Issues: CONT-1359

Tungsten Replicator 5.0.0 Includes the following changes made in Tungsten Replicator 5.0.0

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The Bristlecone load generator toolkit is no longer included with Tungsten Replicator by default.

    Issues: CONT-903

  • The scripts previously located within the scripts directory have now been relocated to the standard bin directory. This does not affect their availability if the env.sh (in [Tungsten Replicator 5.2 Manual]) script has been used to update your path. This includes, but is not limited to, the following commands:

    • ebs_snapshot.sh

    • file_copy_snapshot.sh

    • multi_trepctl

    • tungsten_get_position

    • tungsten_provision_slave

    • tungsten_provision_thl

    • tungsten_read_master_events

    • tungsten_set_position

    • xtrabackup.sh

    • xtrabackup_to_slave

    Issues: CONT-904

  • The backup (in [Tungsten Replicator 5.2 Manual]) and restore (in [Tungsten Replicator 5.2 Manual]) functionality in trepctl (in [Tungsten Replicator 5.2 Manual]) has been deprecated and will be removed in a future release.

    Issues: CONT-906

  • The JavaScript filters have been moved to a new location in keeping with the rest of the configuration:

    • samples/extensions/javascript has moved to support/filters-javascript

    • samples/scripts/javascript-advanced has moved to support/filters-javascript

    The use of these filters has not changed but the default location for some filter configuration files has moved to support/filters-config. Check your current configuration before upgrading.

    Issues: CONT-908

  • The ddlscan (in [Tungsten Replicator 5.2 Manual]) templates have been moved to the support/ddlscan directory.

    Issues: CONT-909

  • The Vertica applier now writes exceptions to a temporary file during replication.

    The applier statements will include the EXCEPTIONS attribute in each statement to assist in debugging. Review the replicator log or trepctl status (in [Tungsten Replicator 5.2 Manual]) output for more details.

    Issues: CONT-1169

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • The replicator can hit a MySQL lock wait timeout when processing large transactions.

      Issues: CONT-1106

    • The replicator can run into OutOfMemory when handling very large Row-Based replication events. This can be avoided by setting --optimize-row-events=false (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1115

    • The replicator can fail during LOAD DATA commands or Vertica loading if the system permissions are not set correctly. If this is encountered, make sure the MySQL or Vertica system users are a member of the Tungsten system group. The issue may also be avoided by removing system file protections with --file-protection-level=none (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1460

Improvements, new features and functionality

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide help output when specifically requested with the -h or -help options.

      Issues: CONT-1003

      For more information, see dsctl help Command.

Bug Fixes

  • Core Replicator

    • A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing.

      Issues: CONT-1163

    • A slave replicator could fail to come ONLINE (in [Tungsten Replicator 5.2 Manual]) if the last THL file is empty.

      Issues: CONT-1164

    • The replicator applier and filters may fail with ORA-955 because the replicator did not check for metadata tables using uppercase table names.

      Issues: CONT-1375

2.9. Tungsten Clustering 4.0.0 Not yet released (Not yet released)

Improvements, new features and functionality

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.2 Manual]) command has been added. This enables easy getting, setting, and resetting of the current replication status information stored in the datasource.

      Issues: CONT-34

    • The tpm (in [Tungsten Replicator 5.2 Manual]) command has been updated to correctly configure clusters and replicators to support replication from a cluster directly to a datawarehouse.

      Issues: CONT-51

Bug Fixes

  • Installation and Deployment

    • During an update or upgrade configuration when components are being added or removed, older configuration could remain, leading to services and components being configured even though the service or component had been removed.

      Issues: CONT-155

    • The validation of values supplied to tpm (in [Tungsten Replicator 5.2 Manual]) for the --thl-log-retention (in [Tungsten Replicator 5.2 Manual]) option has been updated. The option now requires a single-letter suffix (the first letter of day, hour, minute, or second) to specify the quantifier for the value. The default value is 5d.

      Issues: CONT-177

    • The validation of values supplied to tpm (in [Tungsten Replicator 5.2 Manual]) for the --svc-applier-block-commit-interval (in [Tungsten Replicator 5.2 Manual]) option has been updated. The option now accepts a single-letter suffix (the first letter of day, hour, minute, or second) to specify the quantifier for the value. Values of 1000 or greater are assumed to be in seconds. The default value is 15s if batch-enabled (in [Tungsten Replicator 5.2 Manual]) is true, or 0 otherwise.

      Issues: CONT-181
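
      As a sketch of the suffix form, an INI-based configuration might read as follows (values shown are illustrative, not recommendations):

```ini
[defaults]
# Retain THL files for three days (d=day, h=hour, m=minute, s=second)
thl-log-retention=3d
# Commit applied blocks every 15 seconds; bare values of 1000 or
# greater are interpreted as seconds
svc-applier-block-commit-interval=15s
```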

    • tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to confirm that row-based replication has been enabled when a heterogeneous cluster has been configured.

      Issues: CONT-193

  • Command-line Tools

    • The tungsten_set_position (in [Tungsten Replicator 5.2 Manual]) command would fail when executed between dataservices if the service names were different.

      Issues: CONT-24

    • Managers are now started in serial per dataservice, rather than started serially globally.

      Issues: CONT-27

  • Core Replicator

    • A RENAME TABLE operation within MySQL would not cause the metadata caches to be updated during replication. This could lead to invalid metadata being used during processing and filtering.

      Issues: CONT-158

  • Tungsten Connector

    • The requirement for Oracle MySQL Connector/J to be used as the MySQL JDBC connector has been removed. The JDBC interface now uses the Drizzle driver by default.

      Issues: CONT-48

  • Tungsten Manager

    • The built-in Drools library has been updated to resolve an issue with memory consumption.

      Issues: CONT-28

    • The network connectivity checks using either the echo or ping protocols have been updated, and additional checks are now performed by the tpm (in [Tungsten Replicator 5.2 Manual]) command during installation to ensure that one or the other of the methods is available, configuring the appropriate method during installation. If neither method is confirmed to work, installation will now fail with a warning.

      Issues: CONT-53, CONT-90

    • Concurrent operations within cctrl (in [Continuent Tungsten 4.0 Manual]) could generate an exception.

      Issues: CONT-165

    • Within a composite dataservice, failover would not be triggered if the master site was isolated from the relay.

      Issues: CONT-188

3. Continuent Tungsten Release Notes

3.1. Continuent Tungsten 4.0.8 GA (22 May 2017)

Continuent Tungsten 4.0.8 is a bugfix release that addresses a specific memory leak issue in the manager.

Bug Fixes

  • Tungsten Manager

    • A memory leak in the manager could cause the manager to restart after exhausting memory. The issue was most often seen when monitoring the system, where status information is frequently updated.

      Issues: CT-211

3.2. Continuent Tungsten 4.0.7 GA (23 Feb 2017)

Continuent Tungsten 4.0.7 is a bugfix release that contains a specific correction for deployments with respect to the use of the Ruby Net::SSH module.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • In previous releases, a client PING command would open a new connection to the MySQL server, execute a SELECT 1, and then return the OK (or failure) to the client. This could introduce additional load and also affect the metrics if statement execution counts and connections were being monitored.

    This has been updated so that the PING request is sent verbatim through to the server by the connector.

    Issues: CT-1

  • The Ruby Net::SSH module, which has been bundled with Continuent Tungsten in past releases, is no longer included. This is due to the wide range of Ruby versions and deployment environments that we support, and differences in the Net::SSH module supported and used with different Ruby versions. In order to simplify the process and ensure that the platforms we support operate correctly, the Net::SSH module has been removed and will now need to be installed before deployment.

    To ensure you have the correct environment before deployment, ensure both the Net::SSH and Net::SCP Ruby modules are installed using gem:

    shell> gem install net-ssh
    shell> gem install net-scp

    Depending on your environment, you may also need to install the io-console module:

    shell> gem install io-console

    If during installation you get an error similar to this:

    mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

    It indicates that you do not have the Ruby development headers installed. Use your native package management interface (for example, yum or apt) to install the ruby-dev package. For example:

    shell> sudo apt install ruby-dev

    Issues: CT-88

3.3. Continuent Tungsten 4.0.6 GA (8 Dec 2016)

Continuent Tungsten 4.0.6 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration.

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration.

    • Use a firewall, such as iptables to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports.

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorization for the tools used in your deployment. See Deploying SSL Secured Replication and Administration for more information.

Improvements, new features and functionality

  • Installation and Deployment

    • The release has been updated to correctly operate with CentOS v7.0 and higher. This was related to the changes made to the operation of the systemd tool used to manage startup and shutdown scripts.

      Issues: CONT-211, CONT-1552

    • When performing a permissions check within tpm (in [Tungsten Replicator 5.2 Manual]), the way passwords and other information are confirmed has been updated to work correctly with MySQL 5.7. In particular, due to the way passwords are now stored and used, tpm (in [Tungsten Replicator 5.2 Manual]) will confirm the configured user and password by checking that login functions correctly.

      Issues: CONT-1578

    • During installation, tpm (in [Tungsten Replicator 5.2 Manual]) will no longer check the connector credentials if the connector has been configured to operate in bridge mode (in [Continuent Tungsten 4.0 Manual]) and application-specific credentials are not supplied. If the --application-user (in [Tungsten Replicator 5.2 Manual]) and --application-password (in [Tungsten Replicator 5.2 Manual]) options are provided, tpm (in [Tungsten Replicator 5.2 Manual]) will run the same checks even if bridge mode has been selected.

      Issues: CONT-1580

Bug Fixes

  • Installation and Deployment

    • If the cluster is put into maintenance mode, but the coordinator node or the terminal session that put the cluster into maintenance mode fails, the cluster would stay in maintenance mode. The node is now tracked, and if the node goes away for any reason, the cluster will be returned to the mode it was in before being placed into maintenance mode.

      Issues: CONT-1535

    • Running tpm connector (in [Continuent Tungsten 4.0 Manual]) while multi_trepctl (in [Tungsten Replicator 5.2 Manual]) is running on the same host would fail with the error:

      ERROR >> db2 >> There is already another Tungsten installation script running

      Issues: CONT-1572

  • Tungsten Connector

    • In the event of a statement being explicitly requested to execute on a slave and an error occurring, it was possible that the Connector would not retry the statement. The behavior has been updated to retry and/or reconnect to execute the statement on the slave.

      Issues: CT-22

  • Tungsten Manager

    • It was possible for a race condition within the manager to create a cluster that starts up with a shunned master service.

      Issues: CT-2

    • The generated mysql_read_only script would use the password on the command line, and could execute a query that returned multiple rows. Both issues could cause problems during execution, particularly for MySQL 5.6 and later.

      Issues: CONT-1570

Continuent Tungsten 4.0.6 Includes the following changes made in Tungsten Replicator 4.0.6

Continuent Tungsten 4.0.6 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.5 release.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.2 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • When running tpm update (in [Tungsten Replicator 5.2 Manual]), properties set during the initial install could be reset or changed to their default value.

      Issues: CONT-1579

  • Command-line Tools

    • Running multi_trepctl (in [Tungsten Replicator 5.2 Manual]) in a multi-site, multi-master (MSMM) deployment could fail to report all of the running replication processes.

      Issues: CONT-1585

  • Core Replicator

    • There is a limit in the communication protocol for the replicator which restricts the number of fragments within a single transaction in the THL to 32768. Although this is not a limit in the THL format, it is a limit in the protocol used to exchange the THL information between replicators.

      The size of this value, and therefore, the maximum number of fragments cannot be increased without creating an incompatible change within the replicator. This creates a limit to the maximum size of a single transaction that can be replicated. Although this figure cannot be altered, the size of each individual fragment can be increased. The default setting is 1,000,000, creating a limit of approximately 32GB.

      To increase the fragment size, set the value of the property replicator.extractor.dbms.transaction_frag_size (in [Tungsten Replicator 5.2 Manual]). For example, increasing the value to 2,000,000 would increase the maximum THL transaction size to approximately 64GB.

      Care should be taken when increasing this value, as it also increases the amount of memory required to handle the transaction.

      Issues: CONT-1574
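
      The arithmetic behind these limits can be checked directly; the figures below use the protocol's 32768-fragment limit and the fragment sizes quoted above.

```shell
# Maximum transaction size = fragment size (bytes) x 32768 fragments
echo $(( 1000000 * 32768 ))   # default fragment size: 32768000000 bytes, roughly 32GB
echo $(( 2000000 * 32768 ))   # doubled fragment size: 65536000000 bytes, roughly 64GB
```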

  • Filters

    • There is a known issue with the fixmysqlstrings.js filter. When translating BINARY or VARBINARY datatypes into a hex value, if the encoding set for the MySQL and replicator instance is not UTF-8, an implied character set conversion can take place. This leads to a corruption of the information when it is turned into a hex string. This is due to limitations of the internal datatypes available within the JavaScript environment used for the translation.

      Issues: CONT-1508

Improvements, new features and functionality

  • Installation and Deployment

    • Due to changes in the datatypes available in MySQL 5.7 and the supported datatypes within Continuent Tungsten, and coinciding with changes to the way this information is available, the tpm (in [Tungsten Replicator 5.2 Manual]) checks for compatibility may no longer highlight important option changes. For example, virtual columns and JSON columns in MySQL 5.7 are not replicated. During installation, if tpm (in [Tungsten Replicator 5.2 Manual]) identifies that MySQL 5.7 is in use, the following message will be reported:

      IMPORTANT: The replicator is unable to replicate tables that have
      columns defined as type JSON or that utilise VIRTUAL GENERATED values!
      The use of these features will cause replication to fail. If you want
      tpm to check for these add --mysql-allow-intensive-checks to the
      configuration. Be aware that the checks will query the
      information_schema and if you have thousands of tables this may affect
      other queries while the check runs. Otherwise, if you have confirmed
      manually that JSON or VIRTUAL GENERATED columns are not being used,
      you can skip this check by
      adding --skip-validation-check=MySQLUnsopportedDataTypesCheck to your
      configuration.

      To address this issue, when using tpm (in [Tungsten Replicator 5.2 Manual]) during an installation, more intensive checks for tables with unsupported types can be performed. For example, when checking the special column types used in all tables within an existing installation, tpm (in [Tungsten Replicator 5.2 Manual]) must check each table individually. As this can increase the load on the server during installation, tpm (in [Tungsten Replicator 5.2 Manual]) by default does not perform these checks. Instead, these checks can be enabled by using the --mysql-allow-intensive-checks (in [Tungsten Replicator 5.2 Manual]) option during configuration. Enabling this option provides a much more detailed check, but may cause the installation process to take longer.

      Issues: CONT-1551, CONT-1576
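
      If you want tpm to run the table-by-table check, the option named in the message can be added during configuration. A sketch only, assuming a hypothetical service named alpha:

```shell
shell> tpm configure alpha --mysql-allow-intensive-checks
```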

  • Core Replicator

    • If the slave THL file ends with an event that was ultimately filtered, and the replicator master and slave roles are then switched, the new master could generate an incorrect sequence number.

      Issues: CONT-1545

Bug Fixes

  • Installation and Deployment

    • The Ruby Net::SSH libraries used by tpm (in [Tungsten Replicator 5.2 Manual]) have been updated to the latest version. This addresses issues with SSH and staging based deployments, including KEX algorithm errors.

      Issues: CT-16

    • The built-in check for InnoDB did not work for MySQL 5.6 and could fail to identify InnoDB support on the MySQL server.

      Issues: CONT-1577

  • Core Replicator

    • Extraction from the MySQL binary log would fail if the binary log event ID is bigger than a Java Int. This could be triggered if a large (greater than 2B) transaction is inserted into the binary log.

      Issues: CONT-1541

3.4. Continuent Tungsten 4.0.5 GA (4 Mar 2016)

Continuent Tungsten 4.0.5 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration.

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration.

    • Use a firewall, such as iptables to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports.

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorization for the tools used in your deployment. See Deploying SSL Secured Replication and Administration for more information.

Continuent Tungsten 4.0.5 Includes the following changes made in Tungsten Replicator 4.0.5

Continuent Tungsten 4.0.5 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.4 release.

Bug Fixes

  • Core Replicator

    • When incorporating user variables with an empty string as values into an SQL query using statement based replication, the replicator would fail to apply the statement and go offline.

      Issues: CONT-1555

3.5. Continuent Tungsten 4.0.4 GA (24 Feb 2016)

Continuent Tungsten 4.0.4 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration.

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration.

    • Use a firewall, such as iptables to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports.

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorization for the tools used in your deployment. See Deploying SSL Secured Replication and Administration for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Tungsten Connector

    • The connector has been updated to provide an acknowledgement to the MySQL protocol COM_CHANGE_USER command. This allows client connections that use connection pooling (such as PHP) and the change user command as a verification of an open connection to correctly receive an acknowledgement that the connection is available.

      The option is disabled by default. To enable, set the treat.com.change.user.as.ping property to true during configuration with tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1380

      For more information, see Connector Change User as Ping.
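
      A sketch of enabling this behavior during configuration, using the --property mechanism shown elsewhere in these notes; the service name alpha is hypothetical:

```shell
shell> tpm configure alpha --property=treat.com.change.user.as.ping=true
```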

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Tungsten Replicator 5.2 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

  • Core Replicator

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

  • Tungsten Connector

    • A connector running in bridge mode with auto reconnect enabled could try to reconnect to MySQL and attempt additional writes.

      Issues: CONT-1461

    • Automatic retry of a query could fail due to interference from a keep-alive request while re-executing the query.

      Issues: CONT-1512

    • The Tungsten Connector would sometimes retry connectivity on connections that had been killed. The logic has been updated. The default behavior remains the same:

      • Reconnect closed connections

      • Retry autocommitted reads

      The behavior can be modified by using the --connector-autoreconnect-killed-connections (in [Tungsten Replicator 5.2 Manual]) option. Setting it to false disables the reconnection or retry of a connection outside of a planned switch or automatic failover. The default is true, reconnecting and retrying all connections.

      Issues: CONT-1514
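
      To disable reconnection and retry outside of a planned switch or automatic failover, the option can be set to false during configuration. A sketch only, with a hypothetical service name alpha:

```shell
shell> tpm configure alpha --connector-autoreconnect-killed-connections=false
```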

  • Tungsten Manager

    • A cluster could go into a panic after a failover if the mysqld process failed and then immediately became available, causing multiple masters to exist.

      Issues: CONT-1482

    • When recovering a node that had been marked as a standby (in [Continuent Tungsten 4.0 Manual]), the node would be recovered as a standard slave, not a standby.

      Issues: CONT-1486

    • The cluster would fail to failover if the interface was down on the master.

      Issues: CONT-1537

    • The embedded Drools libraries have been updated to Drools 6.3. This addresses an issue in Drools which could lead to a memory leak.

      Issues: CONT-1547

Continuent Tungsten 4.0.4 Includes the following changes made in Tungsten Replicator 4.0.4

Continuent Tungsten 4.0.4 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.3 release.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • Due to a bug within the Drizzle JDBC driver when communicating with MySQL, using the optimizeRowEvents option could lead to significant memory usage and subsequent failure. To alleviate the problem, disable the option. For more information, see Drizzle JDBC Issue 38.

      Issues: CONT-1115

Bug Fixes

  • Core Replicator

    • When events are filtered on a master, and a slave replicator reconnects to the master, it is possible to get the error server does not have seqno expected by client. The replicator has been updated to correctly supply the sequence number during reconnection.

      Issues: CONT-1384, CONT-1525

    • The timeout used to read information from the MySQL binary logs has been changed from a fixed period of 120 seconds to a configurable parameter. This can be set by using the --property=replicator.extractor.dbms.binlogReadTimeout=180 (in [Tungsten Replicator 5.2 Manual]) property during configuration tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1528

    • When reconnecting within a multi-site multi-master deployment, the session level logging of updates would not be configured correctly in the re-opened session.

      Issues: CONT-1544

    • Within an SOR cluster, an isolated relay site would not resume replication correctly.

      Issues: CONT-1549

3.6. Continuent Tungsten 4.0.3 Not Released (NA)

Continuent Tungsten 4.0.3 is a bugfix release that contains critical fixes and improvements.

Due to an internal bug identified shortly before release, Continuent Tungsten 4.0.3 was never released to customers.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration.

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration.

    • Use a firewall, such as iptables to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports.

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorization for the tools used in your deployment. See Deploying SSL Secured Replication and Administration for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Tungsten Connector

    • The connector has been updated to provide an acknowledgement to the MySQL protocol COM_CHANGE_USER command. This allows client connections that use connection pooling (such as PHP) and the change user command as a verification of an open connection to correctly receive an acknowledgement that the connection is available.

      The option is disabled by default. To enable, set the treat.com.change.user.as.ping property to true during configuration with tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1380

      For more information, see Connector Change User as Ping.

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Tungsten Replicator 5.2 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

  • Core Replicator

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

  • Tungsten Connector

    • A connector running in bridge mode with auto reconnect enabled could try to reconnect to MySQL and attempt additional writes.

      Issues: CONT-1461

    • Automatic retry of a query could fail due to interference from a keep-alive request while re-executing the query.

      Issues: CONT-1512

    • The Tungsten Connector would sometimes retry connectivity on connections that had been killed. The logic has been updated. The default behavior remains the same:

      • Reconnect closed connections

      • Retry autocommitted reads

      The behavior can be modified by using the --connector-autoreconnect-killed-connections (in [Tungsten Replicator 5.2 Manual]) option. Setting it to false disables the reconnection or retry of a connection outside of a planned switch or automatic failover. The default is true, reconnecting and retrying all connections.

      Issues: CONT-1514

  • Tungsten Manager

    • A cluster could go into a panic after a failover if the mysqld process failed and then immediately became available, causing multiple masters to exist.

      Issues: CONT-1482

    • When recovering a node that had been marked as a standby (in [Continuent Tungsten 4.0 Manual]), the node would be recovered as a standard slave, not a standby.

      Issues: CONT-1486

    • The cluster would fail to failover if the interface was down on the master.

      Issues: CONT-1537

Continuent Tungsten 4.0.3 Includes the following changes made in Tungsten Replicator 4.0.3

Continuent Tungsten 4.0.3 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.2 release.

Due to an internal bug identified shortly before release, Continuent Tungsten 4.0.3 was never released to customers.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

      The error is transient and non-specific and deployment should be retried.

      Issues: CONT-1343

  • Core Replicator

    • Due to a bug within the Drizzle JDBC driver when communicating with MySQL, using the optimizeRowEvents option could lead to significant memory usage and subsequent failure. To alleviate the problem, disable the option. For more information, see Drizzle JDBC Issue 38.

      Issues: CONT-1115

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Tungsten Replicator 5.2 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

  • Command-line Tools

    • The tungsten_provision_thl (in [Tungsten Replicator 5.2 Manual]) command would not use the user specified --java-file-encoding (in [Tungsten Replicator 5.2 Manual]) setting, which could lead to data corruption during provisioning.

      Issues: CONT-1479

  • Core Replicator

    • A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing.

      Issues: CONT-1163

    • A slave replicator could fail to come ONLINE (in [Tungsten Replicator 5.2 Manual]) if the last THL file is empty.

      Issues: CONT-1164

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

    • The replicator incorrectly assigned LOAD DATA statements to the #UNKNOWN shard. This could happen when the entire statement length was above 200 characters.

      Issues: CONT-1431

    • In some situations, statements that would be unsafe for parallel execution were not serializing into a single threaded execution properly during the applier phase of the target connection.

      Issues: CONT-1489

    • CSV files generated during batch loading into data warehouses would be created within a directory structure within the /tmp (in [Tungsten Replicator 5.2 Manual]) directory. On long-running replicators, automated processes that clean up the /tmp (in [Tungsten Replicator 5.2 Manual]) directory could delete the files, causing replication to fail temporarily due to the missing directory.

      The location where staging CSV files are created has now been updated. Files are now stored within the $CONTINUENT_HOME/tmp/staging/$SERVICE directory, following the same naming structure. For example, if Continuent Tungsten has been installed in /opt/continuent (in [Tungsten Replicator 5.2 Manual]), then CSV files for the first active applier channel of the service alpha will be stored in /opt/continuent/tmp/staging/alpha/staging0.

      Issues: CONT-1500

  • Filters

    • The pkey (in [Tungsten Replicator 5.2 Manual]) filter could force table metadata to be updated when the update was not required.

      Issues: CONT-1162

    • When using the dropcolumn (in [Tungsten Replicator 5.2 Manual]) filter in combination with the colnames (in [Tungsten Replicator 5.2 Manual]) filter, an issue could arise where differences between the incoming schema and the target schema could result in incorrect SQL statements. The solution is to reconfigure the colnames (in [Tungsten Replicator 5.2 Manual]) filter on the slave not to extract the schema information from the database, but instead to use the incoming data from the source database and the translated THL.

      Issues: CONT-1495

3.7. Continuent Tungsten 4.0.2 GA (1 Oct 2015)

Continuent Tungsten 4.0.2 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration.

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration.

    • Use a firewall, such as iptables to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports.

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorization for the tools used in your deployment. See Deploying SSL Secured Replication and Administration for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.2 Manual]) script can now properly update a master/slave cluster to a composite (SOR) cluster without intervention. Follow the instructions for tpm upgrade (in [Tungsten Replicator 5.2 Manual]) and add the --replace-release option. The extra option is not required if you are upgrading to a new version.

      Issues: CONT-47

    • The tpm (in [Tungsten Replicator 5.2 Manual]) script will display a warning if NTP does not appear to be running.

      Issues: CONT-110

Bug Fixes

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.2 Manual]) script could lock tables trying to inspect information_schema for MyISAM tables. The script will now look for MyISAM files in the datadir if possible.

      Issues: CONT-938

  • Core Replicator

    • The replicator could incorrectly parse binary logs that start with a timestamp on 1/1/1970 and cause errors on systems that use STRICT_TRANS_TABLES.

      Issues: CONT-869

    • The replicator could hang when transitioning from ONLINE (in [Tungsten Replicator 5.2 Manual]) to OFFLINE:ERROR (in [Tungsten Replicator 5.2 Manual]). This could happen during the first attempt or following multiple repeated attempts.

      Issues: CONT-1055

  • Tungsten Connector

    • The connector would incorrectly connect to a master when processing the BEGIN command on a read-only connection.

      Issues: CONT-895

    • The connector would incorrectly parse statements that begin with use database;....

      Issues: CONT-949

    • The connector might not forward all request errors to the application, causing the application to wait indefinitely for a response.

      Issues: CONT-975

    • The connector could lose track of the cluster policy and cause the application to hang if it doesn't communicate with a manager.

      Issues: CONT-999

    • The mechanism that keeps idle connections active could become hung by long running transactions.

      Issues: CONT-1047

  • Tungsten Manager

    • The connector could temporarily stop processing requests during the upgrade of an SOR deployment or restarting all managers for a dataservice.

      Issues: CONT-1012

    • The failure of multiple slave replicators could result in only one replicator being put back ONLINE (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1051

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • The replicator can hit a MySQL lock wait timeout when processing large transactions.

      Issues: CONT-1106

    • The replicator can run into OutOfMemory when handling very large Row-Based replication events. This can be avoided by setting --optimize-row-events=false (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1115

  • Tungsten Manager

    • The manager may fail to read security.properties during startup. If this occurs, the manager will print a warning in tmsvc.log (in [Tungsten Replicator 5.2 Manual]).

      Issues: CONT-1070

3.8. Continuent Tungsten 4.0.1 GA (20 Jul 2015)

Continuent Tungsten 4.0.1 is a bugfix release that contains critical fixes and improvements.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration.

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration.

    • Use a firewall, such as iptables to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports.

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Core Replicator

    • EBS snapshots have been updated to support MySQL table locks during operation.

      Issues: CONT-89

  • Tungsten Manager

    • The manager would incorrectly shun the entire remote service when the site appeared to be unreachable, shunning the remote composite datasource including the physical datasources. This has been updated so that only the composite datasource, and not the underlying physical datasources, is shunned.

      Issues: CONT-199

    • The manager would not put relay replicators ONLINE (in [Tungsten Replicator 5.2 Manual]) after being restarted.

      Issues: CONT-545

Bug Fixes

  • Core Replicator

    • When running the trepctl reset (in [Tungsten Replicator 5.2 Manual]) command on a master, DDL statements could be placed into the binary log that would delete corresponding management tables within slaves. Binary logging is now suppressed for these operations.

      Issues: CONT-533

    • The timezone information for the trep_commit_seqno (in [Tungsten Replicator 5.2 Manual]) table would be incorrect when using parallel replication with a server timezone other than GMT.

      Issues: CONT-621

3.9. Continuent Tungsten 4.0.0 GA (17 Apr 2015)

Continuent Tungsten 4.0 is a major release designed to provide integration between Continuent Tungsten 4.0 and Tungsten Replicator 4.0, providing MySQL clustering support alongside replication for MySQL and Oracle, and out to data warehouses such as HP Vertica, Amazon Redshift and Hadoop.

For more information on replicating data out of a cluster, see Replicating Data Out of a Cluster.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration.

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration.

    • Use a firewall, such as iptables to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports.

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration for more information.

  • When using read-only connectors, and making use of explicit transactions (i.e. with autocommit disabled), queries may be routed to the master, rather than a slave.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.2 Manual]) now correctly checks the functionality of the 'echo' protocol during validation.

      Issues: CONT-90

    • tpm update (in [Tungsten Replicator 5.2 Manual]) now forces a new directory under /opt/continuent/releases if components are being added or removed.

      Issues: CONT-155

    • tpm now checks that the repl-thl-log-retention configuration setting has a valid unit.

      Issues: CONT-177

  • Tungsten Connector

    • A needless reverse DNS call at connection time could drastically affect performance; this call has been removed.

      Issues: CONT-86

    • Minimum and maximum values have been added to the Connector statistics.

      Issues: CONT-107

  • Tungsten Manager

    • Managers are now started serially per-dataservice rather than globally, preventing a race condition.

      Issues: CONT-27

    • A manager status command has been added to cctrl (in [Continuent Tungsten 4.0 Manual]).

      Issues: CONT-168

Bug Fixes

  • Installation and Deployment

    • Installing an RPM package could fail if the mysql user did not exist.

      Issues: CONT-43

    • tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to force the replication timezone to GMT.

      Issues: CONT-85

  • Command-line Tools

    • tungsten_set_position (in [Tungsten Replicator 5.2 Manual]) previously did not work within SOR deployments.

      Issues: CONT-24

    • The dsctl set (in [Tungsten Replicator 5.2 Manual]) command did not work properly for events with multiple fragments.

      Issues: CONT-194

  • Tungsten Connector

    • The MySQL Connector/J prerequisite has now been removed from all installations.

      Issues: CONT-48

    • The Connector could raise a Null Pointer Exception after upgrading from Continuent Tungsten 2.0.5.

      Issues: CONT-196

  • Tungsten Manager

    • The connector no longer allows a data source role change without an intermediate offline state.

      Issues: CONT-23

    • An isolated relay site would not resume replication correctly.

      Issues: CONT-26

    • The Java library call InetAddress.isReachable() could produce false positives.

      Issues: CONT-53

    • A switch operation now rolls back if the connector is unable to apply the change.

      Issues: CONT-105

    • The threshold for checking for manager memory leaks was too low.

      Issues: CONT-161

    • The 'last man standing' logic within the manager failed to identify the correct host.

      Issues: CONT-163

    • The manager would not set a datasource to failed when the host was isolated via ifdown.

      Issues: CONT-164

    • Attempting concurrent operations in cctrl (in [Continuent Tungsten 4.0 Manual]) generated an exception.

      Issues: CONT-165

    • The manager would hang on a JMX call if the interface it uses was taken down with ifdown.

      Issues: CONT-166

    • Non-isolated nodes would see the isolated node as online.

      Issues: CONT-169

    • The rule that checks for node liveness fired too frequently.

      Issues: CONT-170

    • Using the recover (in [Continuent Tungsten 4.0 Manual]) command caused a gratuitous change of policy.

      Issues: CONT-171

    • After isolation was removed from the master site, it was not recovered to online.

      Issues: CONT-173

    • The monitor attempted to query a non-existent table on the witness host.

      Issues: CONT-174

    • Composite failover could fail due to a replication handshake failure.

      Issues: CONT-175

    • In a composite dataservice, failover would not occur if the master site was isolated from the relay.

      Issues: CONT-188

    • A composite master would not return to online after a failover within the physical master service.

      Issues: CONT-403

3.10. Continuent Tungsten 2.2.0 NYR (Not Yet Released)

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Within composite clusters, TCP/IP port 7 connectivity is now required between managers on each site to confirm availability.

Bug Fixes

  • Installation and Deployment

    • To ensure that the correct number of the managers and witnesses are configured within the system, tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to check and identify potential issues with the configuration. The installation and checks operate as follows:

      • If there are an even number of members in the cluster (i.e. provided to --members (in [Tungsten Replicator 5.2 Manual]) option):

        • If witnesses are provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), continue normally.

        • If witnesses are not provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), an error is thrown and installation stops.

      • If there are an odd number of members in the cluster (i.e. provided to --members (in [Tungsten Replicator 5.2 Manual]) option):

        • If witnesses are provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), a warning is raised and the witness declaration is ignored.

        • If witnesses are not provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), installation continues as normal.

      The number of members is calculated as follows:

      • Explicitly through the --members (in [Tungsten Replicator 5.2 Manual]) option.

      • Implied, when --active-witnesses=false (in [Tungsten Replicator 5.2 Manual]), then the list of hosts declared in --master (in [Tungsten Replicator 5.2 Manual]) and --slaves (in [Tungsten Replicator 5.2 Manual]).

      • Implied, when --active-witnesses=true (in [Tungsten Replicator 5.2 Manual]), then the list of hosts declared in --master (in [Tungsten Replicator 5.2 Manual]) and --slaves (in [Tungsten Replicator 5.2 Manual]) and --witnesses (in [Tungsten Replicator 5.2 Manual]).

      Issues: TUC-2105
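The member/witness rules above can be summarized in a short sketch. This is an illustrative Python rendering of the checks as described, not tpm's actual implementation; the function name and return convention are assumptions:

```python
def validate_topology(members=None, master=None, slaves=None,
                      witnesses=None, active_witnesses=False):
    """Return (ok, message) following the even/odd member rules."""
    if members is None:
        # Implied member list: master plus slaves, plus witnesses
        # when --active-witnesses=true.
        members = [master] + list(slaves or [])
        if active_witnesses:
            members += list(witnesses or [])
    if len(members) % 2 == 0:
        # Even member count: witnesses are mandatory.
        if witnesses:
            return True, "ok"
        return False, "error: even member count requires --witnesses"
    # Odd member count: passive witness declarations are ignored.
    if witnesses and not active_witnesses:
        return True, "warning: witness declaration ignored"
    return True, "ok"
```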

    • If ping traffic was denied during installation, then installation could hang while the ping check was performed. A timeout has now been added to ensure that the operation completes successfully.

      Issues: TUC-2107

  • Backup and Restore

    • When using xtrabackup 2.2.x, backups would fail if the innodb_log_file_size option within my.cnf was not specified. tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to check the value and existence of this option during installation and to provide a warning if it is not set, or set to the default.

      Issues: TUC-2224

  • Tungsten Connector

    • The connector will now re-connect to a MySQL server in the event that an opened connection is found closed between two requests (generally following a wait_timeout expiration).

      Issues: TUC-2163
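The reconnect behavior can be pictured with a small sketch. This is a hypothetical illustration of the retry-once pattern; the pool and error types are invented for the example and are not connector APIs:

```python
class StaleConnectionError(Exception):
    """Raised when the server has closed the connection
    (e.g. after a wait_timeout expiration)."""

def execute_with_reconnect(pool, query):
    """Run a query; if the pooled connection turns out to be dead,
    discard it, fetch a fresh connection and retry once."""
    conn = pool.get()
    try:
        return conn.execute(query)
    except StaleConnectionError:
        pool.discard(conn)   # drop the dead connection
        conn = pool.get()    # transparently reconnect
        return conn.execute(query)
```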

    • When initially starting up, the connector would open a connection to the configured master to retrieve configuration information, but the connection would never be closed, leading to open unused connections.

      Issues: TUC-2166

    • The cluster status output by the tungsten cluster status (in [Continuent Tungsten 4.0 Manual]) within a multi-site cluster would fail to display the correct states of different data sources when an entire data service was offline.

      Issues: TUC-2185

    • When the connector has been configured into read-only mode, for example using --application-readonly-port=9999 (in [Tungsten Replicator 5.2 Manual]), the connector would mistakenly route statements starting with set autocommit=0 to the master instead of routing them to a slave.

      Issues: TUC-2198
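The routing fix can be illustrated with a toy classifier. This is not the connector's parser; the pattern below is a hypothetical sketch of why set autocommit=0 should stay on a slave when it arrives on a read-only port:

```python
import re

# Statements safe for a read-only connection; "set autocommit" must be
# matched here so it is not escalated to the master.
READ_ONLY_SAFE = re.compile(r"^\s*(select|show|set\s+autocommit)\b", re.I)

def route_read_only(statement):
    """Route a statement received on a read-only port."""
    return "slave" if READ_ONLY_SAFE.match(statement) else "master"
```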

    • When operating in bridge mode, the connector would retain the client connection when the server had closed the connection. The connector has been updated to close all client connections when the corresponding server connection is closed.

      Issues: TUC-2231

  • Tungsten Manager

    • The manager could enter a situation where after switching a relay on one physical service, remote site relay is incorrectly reconfigured to point at the new relay. This has been corrected so that reconfiguration no longer occurs in this situation.

      Issues: TUC-2164

    • Recovery from a composite cluster failover could create a composite split-brain situation.

      Issues: TUC-2178

    • A system of record (SOR) cluster would be unable to recover a failed dataservice.

      Issues: TUC-2194

    • A composite datasource would not go into failsafe mode if all the managers within the cluster were stopped.

      Issues: TUC-2206

    • If a composite datasource becomes isolated due to a network partition, the failed datasource would not go into failsafe mode correctly.

      Issues: TUC-2207

    • If a witness became isolated from the rest of the cluster, the rules would not exclude the failed witness and this could lead to memory exhaustion.

      Issues: TUC-2214

  • Documentation

    • The descriptions and definitions of the archive (in [Continuent Tungsten 4.0 Manual]) and standby (in [Continuent Tungsten 4.0 Manual]) roles has been clarified in the documentation.

      For more information, see Replicator Roles.

    • The documentation for the recovery of a multi-site multi-master installation has been updated to provide more information on the recovery process.

      Issues: TUC-2175

      For more information, see Resetting a single dataservice.

3.11. Continuent Tungsten 2.0.5 GA (24 Dec 2014)

Continuent Tungsten 2.0.5 is a bugfix release that contains critical improvements to the handling of times, dates, and timestamp values between servers, including during daylight savings time switches.

Improvements, new features and functionality

  • Installation and Deployment

    • An issue was discovered that altered the way different date and time values were extracted, stored in THL, and applied into target databases. The issue was related to the way the value was stored; the data was not normalized within Continuent Tungsten during replication, particularly if different timezones were used and applied across the replication deployment.

      Examples of the behaviour include:

      • MySQL converts TIMESTAMP values in statements to UTC. Tungsten did not replicate the master time zone, which meant that replicated statements would generate different TIMESTAMP values when replicated to a server with a different time zone from the master.

      • MySQL TIMESTAMP values are stored as UTC, which means that row changes are extracted in UTC. Tungsten did not set the Java VM or MySQL session time zone to UTC when applying such changes, which could result in inconsistent values being applied to replicas.

      • Changes between standard and daylight savings time (DST) result in a short period in which master DBMS servers have a different time zone from replicas. This resulted in errors in applying time-related data generated at the time of the switch.

      • Heterogeneous replication, for example from relational DBMS like MySQL to data warehouses, would result in unexpected conversions to time-related data, again due to inconsistencies in time zones.

      The replication has now been updated to normalize date and time values into UTC throughout the replication topology, including within the wrapper Java processes, databases and when storing the information in THL.

      • Replicator processes now default to UTC internally by setting the Java VM default time zone to UTC. This default can be changed by setting the replicator.time_zone property in the replicator services.properties file but is not recommended other than for problem diagnosis or specialized testing.

      • Replicators store a time zone on statements and row changes extracted from MySQL.

      • Replicators use UTC as the session time zone when applying to MySQL replicas.

      • Replicators similarly default to UTC when applying transactions to data warehouses like Hadoop, Vertica, or Amazon Redshift.

      • The thl (in [Tungsten Replicator 5.2 Manual]) utility prints time-related data using the default GMT time zone. This can be altered using the -timezone (in [Tungsten Replicator 5.2 Manual]) option.
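The effect of the UTC normalization described above can be demonstrated with standard library tools. A minimal sketch, with Python's zoneinfo standing in for the replicator's internal handling; this is not Tungsten code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def extract_to_utc(local_ts: datetime) -> datetime:
    """Normalize an extracted timestamp to UTC, as THL now stores it."""
    return local_ts.astimezone(timezone.utc)

# A commit made at 09:30 in New York is stored as 14:30 UTC; applying it
# in any other zone yields the same instant, so replicas stay consistent.
master_commit = datetime(2014, 12, 24, 9, 30,
                         tzinfo=ZoneInfo("America/New_York"))
stored = extract_to_utc(master_commit)
applied = stored.astimezone(ZoneInfo("Asia/Tokyo"))
assert applied == master_commit   # same instant regardless of time zone
```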

      Best Practices

      We recommend the following steps to ensure successful replication of time-related data.

      • Standardize all DBMS server and host time zones to UTC. This minimizes time zone inconsistencies between applications and data stores. The recommendation is particularly important when replicating between different DBMS types, such as MySQL to Hadoop.

      • Use the default time zone settings for Tungsten replicator. Do not change the time zones unless specifically recommended by VMware support.

      • If you cannot standardize on UTC at least ensure that time zones are set consistently on all hosts and applications.

      Arbitrary time zone settings create a number of corner cases for database management beyond replication. Standardizing on UTC helps minimize them, hence is strongly recommended.

      Upgrade from Older Replicator Versions

      New Tungsten replicators tag THL records with an option to show that the transaction was extracted from a time zone-aware replicator. If a replicator sees that this property is not available, it will automatically switch to the older behavior when applying such transactions to MySQL replicas. This ensures that there is a simple process to upgrade from older replicator versions, which is especially important for Continuent Tungsten clusters.

      There are two ways to upgrade a replication topology that extracts from MySQL to the new, time zone-aware behavior.

      • Put the master replicator offline, wait for slaves to catch up fully, then upgrade all replicators at once.

      • Upgrade slave replicators first, then upgrade the master. If the replicators are running in a Continuent Tungsten cluster, you must put the cluster in maintenance mode during the upgrade to prevent master failover.

      Important

      You should not upgrade a master Tungsten Replicator before the slave replicators. This can generate transactions that may not be correctly applied by the slaves, since they are not time zone-aware.

      For more information, see Understanding Replication of Date/Time Values.
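The ordering constraint above can be expressed as a simple guard. This is an illustrative sketch; the data layout and function name are assumptions, not part of the product:

```python
def master_may_upgrade(hosts):
    """hosts maps hostname -> (role, upgraded). The master may move to
    the time zone-aware version only once every slave already has."""
    return all(upgraded for role, upgraded in hosts.values()
               if role == "slave")
```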

3.12. Continuent Tungsten 2.0.4 GA (9 Sep 2014)

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

We recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent are aware of issues within Java 6 that cause memory leaks which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Improvements, new features and functionality

  • Tungsten Manager

    • Tungsten Manager: Improved monitoring fault-tolerance

      Under normal operating conditions, the Tungsten Manager on each DB server host will monitor the local Tungsten Replicator and the database server running on that host and relay the monitoring information thus collected to the other Tungsten Managers in the cluster. In previous releases, Continuent Tungsten was able to continue to monitor database servers even if a manager on a given DB server node was not running.

      With this release, this functionality has been generalized to handle the monitoring of both database servers and Tungsten replication such that any time a Tungsten Manager is not running on a given DB server host, the remaining Tungsten Managers in the cluster will take over the monitoring activities for both database servers and Tungsten Replicators until the manager on that host resumes operations. This activity takes place automatically and does not require any special configuration or intervention from an administrator.

      The new functionality means that if you have configured Tungsten to fence replication failures and stops, and you stop all Tungsten services on a given node, the rest of the cluster will respond by fencing the associated data source to an OFFLINE (in [Tungsten Replicator 5.2 Manual]) or FAILED (in [Continuent Tungsten 4.0 Manual]) state.

      Full recovery of a failed node requires that a Tungsten Manager be running on the node.

    • Tungsten Connector/Tungsten Manager: Full support for 'relative latency'

      Support for the use and display of the relativeLatency (in [Tungsten Replicator 5.2 Manual]) value has been expanded and improved. By default, absolute latency is used by the cluster; relative latency must be explicitly enabled through the configuration.

      When relative latency is used, the difference between the last commit time and the current time is displayed. This will show an increasing latency even on long running transactions, or in the event of a stalled replicator. To enable relative latency, use the --use-relative-latency=true (in [Tungsten Replicator 5.2 Manual]) option to tpm (in [Tungsten Replicator 5.2 Manual]) during configuration.

      The following changes to the operation of Continuent Tungsten have been added to this release when the use of relative latency is enabled:

      • The output of SHOW SLAVE STATUS has been updated to show the Seconds_Behind_Master value.

      • cctrl (in [Continuent Tungsten 4.0 Manual]) will output a new field, relative, showing the relative latency value.

      • The Tungsten Connector will use the value when the maxAppliedLatency option is used in the connection string to determine whether to route a connection to a master or a slave.

      For more information, see Latency or Relative Latency Display.
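The difference between the two latency measures can be sketched in a few lines. This is illustrative only; the function names and the routing rule are simplified assumptions based on the description above:

```python
from datetime import datetime, timedelta, timezone

def relative_latency(last_commit: datetime, now: datetime) -> float:
    """Seconds between the last commit and 'now'. Unlike applied
    latency, this keeps growing during a long-running transaction or
    while a replicator is stalled."""
    return (now - last_commit).total_seconds()

def route(last_commit, now, max_applied_latency):
    """Send reads to a slave only while its latency is under the limit,
    mirroring the maxAppliedLatency connection-string option."""
    if relative_latency(last_commit, now) <= max_applied_latency:
        return "slave"
    return "master"
```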

    • Tungsten Manager: Automated Data Source Fencing Due to Replication Faults

      Continuent Tungsten can now be configured to effectively isolate data sources for which replication has stopped or exhibits an error condition. See the updated documentation on Replicator Fencing for further information.

      Issues: TUC-2240

      For more information, see Replicator Fencing.

Bug Fixes

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.2 Manual]) command has been updated to support updated fencing mechanisms.

      Issues: TUC-2245

    • During an upgrade procedure, the process would mistake active witnesses for passive ones.

      Issues: TUC-2280

    • During an update using tpm (in [Tungsten Replicator 5.2 Manual]), the replicator could end up in the OFFLINE (in [Tungsten Replicator 5.2 Manual]) state.

      Issues: TUC-2282

    • When performing an update, particularly in environments such as Multi-Site, Multi-Master, the tpm (in [Tungsten Replicator 5.2 Manual]) command could fail to update the cluster correctly. This could leave the cluster in a diminished state, or fail to upgrade all the components. The tpm (in [Tungsten Replicator 5.2 Manual]) command has been updated as follows:

      • tpm (in [Tungsten Replicator 5.2 Manual]) will no longer attempt to upgrade a Tungsten Replicator with a Continuent Tungsten distribution, and vice versa.

      • When installing Tungsten Replicator, and the $CONTINUENT_PROFILES variable has been set, tpm (in [Tungsten Replicator 5.2 Manual]) will fail, warning that the $REPLICATOR_PROFILES variable should be set instead.

      Issues: TUC-2288, TUC-2292

  • Tungsten Connector

    • When changing connector properties and reloading the configuration, the updated values would not be applied.

    • When using mysqldump with option --flush-logs, the connector would fail with an Unsupported command error.

      Issues: TUC-2209

    • When the option showRelativeSlaveStatus=true had been specified, the connector's relative-latency check for read/write splitting would not be used; the appliedLatency (in [Tungsten Replicator 5.2 Manual]) figure would be used instead.

      Issues: TUC-2243

    • The connection.close.idle.timeout setting would not be taken into account when the connector was running in bridge mode.

      Issues: TUC-2255

    • When the connector was running in bridge mode and a connection was killed, the associated connections would not be correctly closed.

      Issues: TUC-2261

    • The Connector SmartScale would fail to round-robin through slaves when there was no discernible load on the cluster to provide load performance metrics.

      Issues: TUC-2272

    • SmartScale would wrongly load balance connections to a slave even during a switch operation.

      Issues: TUC-2273

    • The connector would update the high water setting before and after a write connection was used, generating additional query overhead for connections.

      Issues: TUC-2277

    • When using SmartScale, automatic sessions could be unnecessarily closed upon disconnection, causing slaves to miss valid queries.

      Issues: TUC-2286

  • Tungsten Manager

    • The checker.tungstenreplicator.properties and checker.mysqlserver.properties files would fail to be created correctly on active witnesses.

      Issues: TUC-2250, TUC-2251

    • The manager would fail to show the correct status for the replicator when getting status information by proxy.

      Issues: TUC-2254

    • Under some conditions, the manager would shut down the router gateway due to an invalid membership alarm but would not restart the connector. This would cause all new connections to hang indefinitely.

      Issues: TUC-2278

    • When performing a reset of the replicator service, recovery of the failed service would fail.

      Issues: TUC-2290

  • Other Issues

    • The check_tungsten.sh script could fail to locate the tungsten.cfg or read the correct values from the file.

      Issues: TUC-2263

3.13. Continuent Tungsten 2.0.3 GA (1 Aug 2014)

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

We recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent are aware of issues within Java 6 that cause memory leaks which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Within composite clusters, TCP/IP port 7 connectivity is now required between managers on each site to confirm availability.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • The default behavior of the manager is to not fence a datasource for which a replicator has stopped or gone into an error state. This was implemented to prevent reducing the overall availability of the deployed service. There are cases and deployments where clusters should not operate with replicators in stopped or error states. This can be configured by changing the following properties to true according to the master or slave role requirements:

    policy.fence.slaveReplicator=false 
    policy.fence.masterReplicator=false

    If they are set to true, the manager should fence the datasource by setting it to a 'failed' state. When this happens, and the datasource is a master, failover will occur. If the datasource is a slave, the datasource will just stay in the failed state indefinitely or until the replicator is back in the online state, in which case the datasource will be recovered to online.

    At present, the settings of these properties are not honored.

    Issues: TUC-2241
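The intended behavior of these properties can be sketched as follows. The property names come from the release note above; the surrounding decision logic is a hypothetical illustration:

```python
def fence_datasource(role, replicator_state, policy):
    """Return the action taken for a stopped or errored replicator."""
    if replicator_state not in ("STOPPED", "ERROR"):
        return "none"
    if role == "master" and policy.get("policy.fence.masterReplicator"):
        return "failover"   # fencing a master triggers failover
    if role == "slave" and policy.get("policy.fence.slaveReplicator"):
        return "failed"     # slave stays failed until replicator recovers
    return "none"           # default: do not fence
```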

Improvements, new features and functionality

  • Tungsten Connector

    • The default buffer sizes for bridge mode (see Using Bridge Mode) have been updated to 262144 (256KB).

Bug Fixes

  • Installation and Deployment

    • To ensure that the correct number of the managers and witnesses are configured within the system, tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to check and identify potential issues with the configuration. The installation and checks operate as follows:

      • If there are an even number of members in the cluster (i.e. provided to --members (in [Tungsten Replicator 5.2 Manual]) option):

        • If witnesses are provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), continue normally.

        • If witnesses are not provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), an error is thrown and installation stops.

      • If there are an odd number of members in the cluster (i.e. provided to --members (in [Tungsten Replicator 5.2 Manual]) option):

        • If witnesses are provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), a warning is raised and the witness declaration is ignored.

        • If witnesses are not provided through --witnesses (in [Tungsten Replicator 5.2 Manual]), installation continues as normal.

      The number of members is calculated as follows:

      • Explicitly through the --members (in [Tungsten Replicator 5.2 Manual]) option.

      • Implied, when --active-witnesses=false (in [Tungsten Replicator 5.2 Manual]), then the list of hosts declared in --master (in [Tungsten Replicator 5.2 Manual]) and --slaves (in [Tungsten Replicator 5.2 Manual]).

      • Implied, when --active-witnesses=true (in [Tungsten Replicator 5.2 Manual]), then the list of hosts declared in --master (in [Tungsten Replicator 5.2 Manual]) and --slaves (in [Tungsten Replicator 5.2 Manual]) and --witnesses (in [Tungsten Replicator 5.2 Manual]).

      Issues: TUC-2105
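      The even/odd rule above can be sketched as a small standalone shell check. This is illustrative only and not part of tpm itself; the hostnames and messages are assumptions:

```shell
# Sketch of tpm's witness rule: an even number of cluster members
# requires a witness host; an odd count makes witnesses redundant.
members="db1,db2,db3,db4"   # value passed to --members (example hosts)
witnesses=""                # value passed to --witnesses (empty here)

# Count the comma-separated member hosts
count=$(echo "$members" | tr ',' '\n' | grep -c .)

if [ $((count % 2)) -eq 0 ] && [ -z "$witnesses" ]; then
  result="ERROR: even member count requires --witnesses"
elif [ $((count % 2)) -ne 0 ] && [ -n "$witnesses" ]; then
  result="WARNING: odd member count; witness declaration ignored"
else
  result="OK"
fi
echo "$result"
```

      With four members and no witnesses, as above, the check reports the error condition and installation would stop.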

    • If ping traffic was denied during installation, then installation could hang while the ping check was performed. A timeout has now been added to ensure that the operation completes successfully.

      Issues: TUC-2107

  • Backup and Restore

    • When using xtrabackup 2.2.x, backups would fail if the innodb_log_file_size option within my.cnf was not specified. tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to check the value and existence of this option during installation and to provide a warning if it is not set, or set to the default.

      Issues: TUC-2224
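      As a sketch, declaring the option explicitly in my.cnf avoids the warning; the value shown is an example only, not a recommendation:

```ini
# my.cnf — declare innodb_log_file_size explicitly so xtrabackup 2.2.x
# can locate and size the redo logs (value is illustrative)
[mysqld]
innodb_log_file_size = 512M
```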

  • Tungsten Connector

    • The connector will now re-connect to a MySQL server in the event that an open connection is found to have been closed between two requests (generally following a wait_timeout expiration).

      Issues: TUC-2163

    • When initially starting up, the connector would open a connection to the configured master to retrieve configuration information, but the connection would never be closed, leading to open, unused connections.

      Issues: TUC-2166

    • The cluster status output by the tungsten cluster status (in [Continuent Tungsten 4.0 Manual]) within a multi-site cluster would fail to display the correct states of different data sources when an entire data service was offline.

      Issues: TUC-2185

    • When the connector has been configured into read-only mode, for example using --application-readonly-port=9999 (in [Tungsten Replicator 5.2 Manual]), the connector would mistakenly route statements starting with set autocommit=0 to the master instead of a slave.

      Issues: TUC-2198

    • When operating in bridge mode, the connector would retain the client connection when the server had closed the connection. The connector has been updated to close all client connections when the corresponding server connection is closed.

      Issues: TUC-2231

  • Tungsten Manager

    • The manager could enter a situation where, after switching a relay on one physical service, the remote site relay was incorrectly reconfigured to point at the new relay. This has been corrected so that reconfiguration no longer occurs in this situation.

      Issues: TUC-2164

    • Recovery from a composite cluster failover could create a composite split-brain situation.

      Issues: TUC-2178

    • A System of Record (SOR) cluster would be unable to recover a failed dataservice.

      Issues: TUC-2194

    • A composite datasource would not go into failsafe mode if all the managers within the cluster were stopped.

      Issues: TUC-2206

    • If a composite datasource becomes isolated due to a network partition, the failed datasource would not go into failsafe mode correctly.

      Issues: TUC-2207

    • If a witness became isolated from the rest of the cluster, the rules would not exclude the failed witness and this could lead to memory exhaustion.

      Issues: TUC-2214

  • Documentation

    • The descriptions and definitions of the archive (in [Continuent Tungsten 4.0 Manual]) and standby (in [Continuent Tungsten 4.0 Manual]) roles have been clarified in the documentation.

      For more information, see Replicator Roles.

    • The documentation for the recovery of a multi-site multi-master installation has been updated to provide more complete coverage of the recovery process.

      Issues: TUC-2175

      For more information, see Resetting a single dataservice.

3.14. Continuent Tungsten 2.0.2 GA (19 May 2014)

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

In addition, we recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent is aware of issues within Java 6 that cause memory leaks, which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Improvements, new features and functionality

  • Installation and Deployment

    • The default Java garbage collection (GC) used within the Connector, Replicator and Manager has been reconfigured to use parallel garbage collection. The default GC could produce CPU starvation issues during execution.

      Issues: TUC-2101

  • Tungsten Connector

    • Keep-alive functionality has been added to the Connector. When enabled, connections to the database server are kept alive, even when there is no client activity.

      Issues: TUC-2103

      For more information, see Connector Keepalive.

Bug Fixes

  • Tungsten Manager

    • The embedded JGroups service, which handles the communication between manager services, has been updated to the latest version. This improves the stability of the service and removes some of the memory leaks causing manager stalls.

    • A number of issues with memory management in the Manager service, particularly with respect to the included JGroups support, have been rectified. These issues caused the manager to use increasing amounts of memory, which could lead the manager to stall.

Continuent Tungsten 2.0.2 includes the following changes made in Tungsten Replicator 2.2.1

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • The tpm (in [Tungsten Replicator 5.2 Manual]) tool and configuration have been updated to support both older Oracle SIDs and the new JDBC URL format for Oracle service IDs. When configuring an Oracle service, use --datasource-oracle-sid (in [Tungsten Replicator 5.2 Manual]) for older service specifications, and --datasource-oracle-service (in [Tungsten Replicator 5.2 Manual]) for newer JDBC URL installations.

    Issues: 817

Improvements, new features and functionality

  • Installation and Deployment

    • When using the --enable-heterogeneous-master (in [Tungsten Replicator 5.2 Manual]) option to tpm (in [Tungsten Replicator 5.2 Manual]), the MySQL service is now checked to ensure that ROW-based replication has been enabled.

      Issues: 834

  • Command-line Tools

    • The thl (in [Tungsten Replicator 5.2 Manual]) command has been expanded to support an additional output format, -specs (in [Tungsten Replicator 5.2 Manual]), which adds the field specifications for row-based THL output.

      Issues: 801

      For more information, see thl list -specs Command.

  • Oracle Replication

    • Templates have been added to the suite of DDL translation templates supported by ddlscan (in [Tungsten Replicator 5.2 Manual]) to support Oracle to MySQL replication. Two templates are included:

      • ddl-oracle-mysql provides standard translation of DDL when replicating from Oracle to MySQL

      • ddl-oracle-mysql-pk-only provides standard translation of DDL including automatic selection of a primary key from the available unique indexes if no explicit primary key is defined within Oracle DDL when replicating to MySQL

      Issues: 787

    • ddlscan (in [Tungsten Replicator 5.2 Manual]) has been updated to support parsing of a file containing a list of tables to be parsed for DDL information. The file should be formatted as a CSV file, but only the first argument, table name, will be extracted. Lines starting with a # (hash) character are ignored.

      The file is in the same format as used by setupCDC.sh (in [Tungsten Replicator 5.2 Manual]).

      To use the file, supply the -tableFile (in [Tungsten Replicator 5.2 Manual]) parameter to the command.

      Issues: 832
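      A minimal sketch of such a table file, assuming a hypothetical tables.csv (the table names are examples):

```
# tables.csv — only the first column (the table name) is read;
# lines starting with # are ignored
EMPLOYEES
DEPARTMENTS,these,extra,columns,are,ignored
```

      The file would then be supplied via -tableFile tables.csv alongside the usual ddlscan connection and template options.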

  • Core Replicator

    • The replicator has been updated to support autorecovery from transient failures that would normally cause the replicator to go OFFLINE (in [Tungsten Replicator 5.2 Manual]) while in either the ONLINE (in [Tungsten Replicator 5.2 Manual]) or GOING-ONLINE:SYNCHRONIZING (in [Tungsten Replicator 5.2 Manual]) state. This enables the replicator to recover from errors such as MySQL restarts, or transient connection errors.

      The period, number of attempted recovery operations, and the delay before a recovery is considered successful are configurable through individual properties.

      Issues: 784

      For more information, see Deploying Automatic Replicator Recovery.
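      As a sketch, the relevant static properties might look like the following. The property names are assumptions drawn from later Tungsten Replicator documentation, so verify them against your release:

```
# Autorecovery tuning sketch (property names are assumptions)
replicator.autoRecoveryMaxAttempts=5      # attempted recovery operations (0 disables)
replicator.autoRecoveryDelayInterval=30s  # delay before attempting a recovery
replicator.autoRecoveryResetInterval=5m   # online time before recovery is considered successful
```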

    • The way VARCHAR values are stored and represented within the replicator has been updated, improving performance significantly.

      Issues: 804

    • If the binary logs for MySQL were flushed and purged (using FLUSH LOGS and PURGE BINARY LOGS) and the replicator was then restarted, the replicator would fail to identify and locate the newly created logs, raising a MySQLExtractException.

      Issues: 851

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.2 Manual]) would incorrectly identify options that accepted true/false values, which could cause incorrect interpretations, or subsequent options on the command-line to be used as true/false indications.

      Issues: 310

    • Removing an existing parallel replication configuration in [Tungsten Replicator 5.2 Manual] using tpm (in [Tungsten Replicator 5.2 Manual]) would cause the replicator to fail due to a mismatch in the status table and current configuration.

      Issues: 867

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) tool would fail to correctly re-provision a master within a fan-in or multi-master configuration. When re-provisioning, the service should be reset with trepctl reset (in [Tungsten Replicator 5.2 Manual]).

      Issues: 709

    • Errors when executing tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) that have been generated by the underlying mysqldump or xtrabackup are now redirected to STDOUT.

      Issues: 802

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) tool would re-provision using a slave in an OFFLINE:ERROR (in [Tungsten Replicator 5.2 Manual]) state, even though this could create a second, invalid, slave deployment. Reprovisioning from a slave in the ERROR state is now blocked, unless the -f (in [Tungsten Replicator 5.2 Manual]) or --force (in [Tungsten Replicator 5.2 Manual]) option is used.

      Issues: 860

      For more information, see The tungsten_provision_slave Script.

  • Oracle Replication

    • Tuning for the CDC extraction from Oracle has been updated to support both a minimum sleep time parameter, minSleepTime, and the increment value used when increasing the sleep time between updates, sleepAddition.

      Issues: 239

      For more information, see Tuning CDC Extraction.
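      A hedged sketch of the two parameters in a service properties file — the minSleepTime and sleepAddition names come from this note, but the property prefix shown is an assumption:

```
# CDC extractor polling tuning (prefix is an assumption; verify locally)
replicator.extractor.dbms.minSleepTime=1   # minimum sleep between CDC polls, in seconds
replicator.extractor.dbms.sleepAddition=1  # increment added to the sleep time between updates
```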

    • The URLs used for connecting to Oracle RAC SCAN addresses were not correct and were incompatible with non-RAC installations. The URL format has been updated to be compatible with both Oracle RAC and non-RAC installations.

      Issues: 479

  • Core Replicator

    • When a timeout occurred on the connection to MySQL for the channel assignment service (part of parallel applier), the replicator would go offline, rather than retrying the connection. The service has now been updated to retry the connection if a timeout occurs. The default reconnect timeout is 120 seconds.

      Issues: 783

    • A slave replicator would incorrectly set the restart sequence number when reading from a master if the slave THL directory was cleared. This would cause slave replicators to fail to restart correctly.

      Issues: 794

    • Unsigned integers were extracted from the source database in a platform-dependent manner. This would cause the Oracle applier to incorrectly attempt to apply negative values in place of their unsigned equivalents. The Oracle applier has been updated to correctly translate values for types identified as unsigned. When these values are viewed within the THL, they are still shown as negative values.

      Issues: 798

      For more information, see thl list Command.

    • Replication would fail when processing binlog entries containing the statement INSERT INTO ... WHERE... when operating in mixed mode.

      Issues: 807

  • Filters

    • The mysqlsessionsupport (in [Tungsten Replicator 5.2 Manual]) filter would cause replication to fail when the default thread_id was set to -1, for example when STRICT_ALL_TABLES SQL mode had been enabled. The replicator has been updated to interpret -1 as 0 to prevent this error.

      Issues: 821

    • The rename (in [Tungsten Replicator 5.2 Manual]) filter has been updated so that renaming of only the schema name is also supported for STATEMENT events. Previously, only ROW events were renamed by the filter.

      Issues: 842

3.15. Continuent Tungsten 2.0.1 GA (3 January 2014)

Important

The final approved build for Continuent Tungsten 2.0.1 is build 1003. Earlier builds do not have the full set of features and functionality; build 1003 includes a number of key fixes not present in earlier builds of the same release. In particular, updated support for passive witnesses was not available in earlier builds.

Continuent Tungsten 2.0.1 is the first generally available release of Continuent Tungsten 2.0, which offers major improvements to Continuent's industry-leading database-as-a-service offering. Continuent Tungsten 2.0.1 contains all the improvements incorporated in Version 1.5.4, and the fixes and new features included within Tungsten Replicator 2.2.0, as well as the following features:

  • Cluster Management

    • An improved manager that simplifies recovery of your cluster.

    • New tools to simplify provisioning and the recovery of replication issues.

    • An improved witness host and decision engine to provide better quorum for preventing split-brain situations and multiple live masters.

    • SSL-based encryption and authentication for cluster management through all command-line tools.

  • Connector

    • SSL support enables SSL and non-SSL clients, and SSL and non-SSL connectivity between the connector and database servers.

    • Support for setting the maximum latency for slaves when redirecting queries.

  • Installation and Deployment

    • Improved tpm installation tool that eases deployment and configuration of all clusters, including multi-master and multi-site/multi-master.

    • INI file based installation through tpm that enables easier installation, including through Puppet and other script-based solutions.

  • Core Replication

    • Includes all Tungsten Replicator 2.2.0 features, including low-impact, low-latency replication and advanced filtering.

    • Supports MySQL (5.0, 5.1, 5.5, 5.6), MariaDB (5.5) and Percona Server (5.5).

    • Supports replication to and from MySQL and Oracle, and Oracle to Oracle.

    • Data loading to Vertica and data warehouses, and real-time publishing to MongoDB.

    • SSL-based encryption for exchanging replication data.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • When using the xtrabackup method for performing backups, the default is to use the xtrabackup-full operation to perform a full backup.

    Issues: TUC-1327

  • The default load balancer used for load-balancing connections within the Connector has been updated to use the RO_RELAXED (in [Continuent Tungsten 4.0 Manual]) QoS balancer. This takes into account the HighWater mark when redirecting queries and compares the applied sequence number, rather than relying only on latency.

    Issues: TUC-1589

  • The previous strategy for preventing split-brain by using a witness host was not workable for many customers. The witness host configuration and checks have been changed to prevent these problems.

    Issues: TUC-1650

  • Failover could be rolled back because of a failure to release a Virtual IP. Such a failure now triggers a warning, rather than a rollback of the failover.

    Issues: TUC-1666

  • An 'UnknownHostException' would cause a failover. The behavior has been updated to result in a suspect DB server.

    Issues: TUC-1667

  • A new type of witness host has been added: an active witness, which supports a manager-only installation. The active witness is able to take part in decisions about failure in the event of datasource and/or network connectivity issues.

    As a result, the following changes apply for all witness host selection and installation:

    • Witnesses must be on the same network subnet as the existing managers.

    • Dataservices must have at least three managers to provide status checks during failure.

    • Active witnesses can be created; these install only the manager on target hosts, which act as witnesses to check network connectivity to the configured dataservers and connectors within the service.

    Issues: TUC-1854

    For more information, see Host Types.

  • Failover does not occur if the manager on the master host is not running before the database server is stopped.

    Issues: TUC-1900

  • Read-only MySQL slaves no longer work.

    Issues: TUC-1903

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to support configuration of the maximum applied latency for the connector using either the --connector-max-slave-latency (in [Tungsten Replicator 5.2 Manual]) or --connector-max-applied-latency (in [Tungsten Replicator 5.2 Manual]) options.

      Issues: TUC-733

    • Installer should provide a way to set up RO_RELAXED (in [Continuent Tungsten 4.0 Manual]) (read-only with no SQL checking) connectors.

      Issues: TUC-954

    • Post-installation notes do not specify hosts that can run cctrl (in [Continuent Tungsten 4.0 Manual]).

      Issues: TUC-1118

    • Create a tpm cook command that masks the tungsten-cookbook script

      Issues: TUC-1182

    • The tpm (in [Tungsten Replicator 5.2 Manual]) validation has been updated to provide warnings when the sync_binlog and innodb_flush_log_at_trx_commit MySQL options are set incorrectly.

      Issues: TUC-1656

    • A new tpm (in [Tungsten Replicator 5.2 Manual]) command has been added to list different connector connection commands/syntax.

      Issues: TUC-1661

    • Add default path to security files, to facilitate their retrieval.

      Issues: TUC-1676

    • Support a --dataservice-witnesses value of "none"

      Issues: TUC-1715

    • The tpm (in [Tungsten Replicator 5.2 Manual]) command should not be accessible on installed data sources.

      Issues: TUC-1717

    • Allow tpm configuration that is compatible with puppet/chef/etc

      Issues: TUC-1735

    • Auto-generated properties line should go at the top of the files.

      Issues: TUC-1739

    • Add tpm switch for rrIncludeMaster router properties.

      Issues: TUC-1744

    • During installation, the security.access_file.location property should be changed to security.rmi.jmxremote.access_file.location.

      Issues: TUC-1805

    • Split the cross machine checks out of MySQLPermissionsCheck.

      Issues: TUC-1838

    • The installation of Multi-Site Multi-Master deployments has been simplified.

      Issues: TUC-1923

      For more information, see Deploying Multisite/Multimaster Clusters.

  • Command-line Tools

    • A completion script for command-line completion within bash has been added to the installation. The file is located in tools/.tpm.complete within the installation directory.

      Issues: TUC-1591
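      For example, the completion script can be loaded into the current bash session as follows; the /opt/continuent installation path is an assumption and should be adjusted to your deployment:

```shell
# Load tpm tab-completion for this session; add to ~/.bashrc to persist
source /opt/continuent/tungsten/tools/.tpm.complete
```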

    • Write scripts to coordinate backups across an entire cluster.

      Issues: TUC-1641

    • cctrl (in [Continuent Tungsten 4.0 Manual]) should not report that recover is an expert command

      Issues: TUC-1839

    • An option, -a, --authenticate has been added to the tpasswd (in [Tungsten Replicator 5.2 Manual]) utility to validate an existing password entry.

      Issues: TUC-1916

  • Cookbook Utility

    • Tungsten cookbook should run manager|replicator|connector dump before collecting logs.

      Issues: TUC-1660

    • Cookbook has been updated to support both active and passive witnesses.

      Issues: TUC-1942

    • Cookbook has been updated to allow backups from masters to be used.

      Issues: TUC-1943

  • Backup and Restore

    • The datasource_backup.sh script has been updated to limit running only on the COORDINATOR and to find a non-MASTER datasource.

      Issues: TUC-1684

  • MySQL Replication

    • Add support for MySQL 5.6

      Issues: TUC-1624

  • Tungsten Connector

    • Support for MySQL 4.0 passwords within the connector has been included. This provides support for both old MySQL versions and older versions of the MySQL protocol used by some libraries and clients.

      Issues: TUC-784

    • Connector must forbid zero keepAliveTimeout.

      Issues: TUC-1714

    • In SOR deployments only, Connector logs show relay data service being added twice.

      Issues: TUC-1720

    • Change default delayBeforeOfflineIfNoManager router property to 30s and constrain it to max 60s in the code.

      Issues: TUC-1752

    • Router Manager connection timeout should be a property.

      Issues: TUC-1754

    • Add client IP and port when logging connector message.

      Issues: TUC-1810

    • Make tungsten cluster status (in [Continuent Tungsten 4.0 Manual]) more SQL-like and reduce the amount of information displayed.

      Issues: TUC-1814

    • Connector client side SSL support for MySQL

      Issues: TUC-1825

  • Tungsten Manager

    • cctrl (in [Continuent Tungsten 4.0 Manual]) should show if a given data source is secured.

      Issues: TUC-1816

    • The datasource hostname recover command should not invoke the expert warning.

      Issues: TUC-1840

  • Manager API

    • Smarter enabling of the Manager API

      Issues: TUC-1621

    • Support has been added to specify the addresses for the Manager API to listen on.

      Issues: TUC-1643

    • The Manager API has been updated with a method to list all the available dataservices.

      Issues: TUC-1674

    • Add DataServiceState and DataSource into the payload when applicable

      Issues: TUC-1701

    • Add classes to the Ruby libraries that handle API calls

      Issues: TUC-1707

    • Add an API call that prints the manager live properties

      Issues: TUC-1713

  • Platform Specific Deployments

    • Add Java wrapper support for FreeBSD.

      Issues: TUC-1632

    • Commit FreeBSD fixes to Java sockets and port binding.

      Issues: TUC-1633

  • Documentation

    • Document among the prerequisites that Tungsten installers do not support mysqld_multi.

      Issues: TUC-1679

  • Other Issues

    • Write a tpm test wrapper for the cookbook testing scripts.

      Issues: TUC-1396

    • Document the process of sending emails based on specific log4j messages

      Issues: TUC-1500

    • The check_tungsten.sh script has been updated to check and restart enterprise load balancers that use the xinetd service.

      Issues: TUC-1573

    • Expand zabbix monitoring to match nagios checks.

      Issues: TUC-1638

    • Turn SET NAMES log message into DEBUG.

      Issues: TUC-1644

    • Remove old/extra/redundant configuration files.

      Issues: TUC-1721

    • Backport critical 1.5.4 manager changes to 2.0.1

      Issues: TUC-1855

Bug Fixes

  • Installation and Deployment

    • Tungsten can't install if the 'mysql' client is not in the path.

      Issues: TUC-999

    • An extra -l flag would be added to the configuration when running the sudo command.

      Issues: TUC-1025

    • Installer will not easily work when installing SOR data services one host at a time.

      Issues: TUC-1036

    • The tpm (in [Tungsten Replicator 5.2 Manual]) did not verify that the permissions for the tungsten DB user allow for cross-database host access.

      Issues: TUC-1146

    • Specifying a Symbolic link for the Connector/J creates a circular reference.

      Issues: TUC-1567

    • Replication of DATETIME values across a Daylight Savings Time (DST) change would replicate incorrect values. Installation of a replication service where the Java environment and the MySQL environment use different timezones may cause incorrect replication.

      Issues: 542, TUC-1593

    • The replicator service would not be imported into the cluster directory, causing subsequent failures in switch and other operations.

      Issues: TUC-1594

    • tpm (in [Tungsten Replicator 5.2 Manual]) would fail to skip the GlobalHostAddressesCheck (in [Tungsten Replicator 5.2 Manual]) when performing a tpm configure (in [Tungsten Replicator 5.2 Manual]) followed by tpm validate (in [Tungsten Replicator 5.2 Manual]).

      Issues: TUC-1599

    • tpm (in [Tungsten Replicator 5.2 Manual]) does not recognize datasources when they start with a capital letter.

      Issues: TUC-1655

    • Installation of multiple replicators with tpm (in [Tungsten Replicator 5.2 Manual]) fails.

      Issues: TUC-1680

    • The check for Java version fails when OpenJDK does not say "java".

      Issues: TUC-1681

    • The installer did not make sure that witness servers were in the same network as the cluster.

      Issues: TUC-1705

    • tpm (in [Tungsten Replicator 5.2 Manual]) does not install if there is a Tungsten Replicator installer already running.

      Issues: TUC-1712

    • Errors during installation of composite dataservice.

      Issues: TUC-1726

    • The tpm (in [Tungsten Replicator 5.2 Manual]) command returns an ssh error when attempting to install a composite data service.

      Issues: TUC-1727

    • Running tpm (in [Tungsten Replicator 5.2 Manual]) with no arguments raises an error.

      Issues: TUC-1788

    • Installation fails with Ruby 1.9.

      Issues: TUC-1800

    • tpm (in [Tungsten Replicator 5.2 Manual]) will not throw an error if the user gives the connectorj-path as the path to a symlink instead of a real file.

      Issues: TUC-1815

    • tpm (in [Tungsten Replicator 5.2 Manual]) does not check dependencies of security options.

      Issues: TUC-1818

    • When checking process limits during installation, the check would fail the installation process instead of providing a warning.

      Issues: TUC-1822

    • During validation, tpm (in [Tungsten Replicator 5.2 Manual]) wrongly complained about a witness not being in the same subnet.

      Issues: TUC-1848

    • During installation, tpm (in [Tungsten Replicator 5.2 Manual]) could install SSL support for the connector even though the MySQL server has not been configured for SSL connectivity.

      Issues: TUC-1909

    • Running tpm update (in [Tungsten Replicator 5.2 Manual]) would cause the master replicator to become a slave during the update when the master had changed from the configuration applied using --dataservice-master-host (in [Tungsten Replicator 5.2 Manual]).

      Issues: TUC-1921

    • tpm (in [Tungsten Replicator 5.2 Manual]) could allow meaningless specifications of active witnesses.

      Issues: TUC-1941

    • tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to provide the correct link to the documentation for further information.

      Issues: TUC-1947

    • Performing tpm reset (in [Tungsten Replicator 5.2 Manual]) would remove all the files within the cluster-home/conf directories, instead of only the files for services tpm (in [Tungsten Replicator 5.2 Manual]) was aware of.

      Issues: TUC-1949

    • tpm (in [Tungsten Replicator 5.2 Manual]) would require the --active-witnesses (in [Tungsten Replicator 5.2 Manual]) or --enable-active-witnesses (in [Tungsten Replicator 5.2 Manual]) option, when other witness types are available for configuration.

      Issues: TUC-1951

    • tpm (in [Tungsten Replicator 5.2 Manual]) would check that active witnesses were on the same subnet, even though active witnesses do not need to be installed on the same subnet.

      Issues: TUC-1953

    • A tpm update (in [Tungsten Replicator 5.2 Manual]) operation would not recognize active witnesses properly.

      Issues: TUC-1975

    • A tpm uninstall operation would complain about missing databases in connector tests.

      Issues: TUC-1978

    • tpm (in [Tungsten Replicator 5.2 Manual]) would not remove the connector.ro.properties file if the configuration is updated to not have --application-readonly-port (in [Tungsten Replicator 5.2 Manual]).

      Issues: TUC-1981

    • tpm (in [Tungsten Replicator 5.2 Manual]) would enable installation when MariaDB 10.0 was installed, even though this is not a supported configuration.

      Issues: TUC-1987

    • The method used to compare whether hosts were on the same subnet would fail to identify hosts correctly.

      Issues: TUC-1995

  • Command-line Tools

    • Running cctrl (in [Continuent Tungsten 4.0 Manual]) on a host which only had the connector server would not report a useful error. This has now been updated to show a warning message.

      Issues: TUC-1642

    • The check_tungsten command had different command line arguments from check_tungsten.sh.

      Issues: TUC-1675

    • Nagios check scripts not picking up shunned datasources

      Issues: TUC-1689

    • cctrl (in [Continuent Tungsten 4.0 Manual]) could output the status of a host with a null value in place of the correct hostname.

      Issues: TUC-1893

    • Using the recover datasource command within a composite service would fail, even though datasource recover (in [Continuent Tungsten 4.0 Manual]) would work.

      Issues: TUC-1912

    • The check_tungsten_latency (in [Tungsten Replicator 5.2 Manual]) --perslave-perfdata (in [Tungsten Replicator 5.2 Manual]) option would not include information for relay hosts.

      Issues: TUC-1915

    • A large error message could be found included within the status block of ls (in [Continuent Tungsten 4.0 Manual]) output within cctrl (in [Continuent Tungsten 4.0 Manual]). The error message information has been redirected to the error log.

      Issues: TUC-1931

    • Performing switch (in [Continuent Tungsten 4.0 Manual]) operations within a composite service using active witnesses could raise an error and fail.

      Issues: TUC-1946

    • cctrl (in [Continuent Tungsten 4.0 Manual]) would be unable to create a composite datasource after dropping it.

      Issues: TUC-1956

    • Backwards compatibility for the recover using (in [Continuent Tungsten 4.0 Manual]) command has been incorporated.

      Issues: TUC-1971

  • Cookbook Utility

    • The tungsten-cookbook tests fail and do not print the current status.

      Issues: TUC-1623

    • The tungsten-cookbook uses resolveip instead of standard name resolution tools.

      Issues: TUC-1646

    • The tungsten-cookbook tool sometimes misunderstands the result of composite recovery.

      Issues: TUC-1662

    • Cookbook gets warnings when used with a MySQL 5.6 client.

      Issues: TUC-1673

    • The cookbook does not properly wait for a database server to go offline.

      Issues: TUC-1685

    • tungsten-cookbook does not check the status of the relay server after a composite recovery.

      Issues: TUC-1695

    • tungsten-cookbook does not check all the components of a datasource when testing a server.

      Issues: TUC-1696

    • tungsten-cookbook does not collect the configuration files under cluster-home.

      Issues: TUC-1697

    • Cookbook should not specify witness hosts in default configuration files etc.

      Issues: TUC-1734

    • Tungsten cookbook fails the replicator test.

      Issues: TUC-1827

    • Using a backup that has been copied across servers within cookbook could overwrite or replace existing backup files, which would then make the backup file appear as older than it should be, making it unavailable in restore operations.

      Issues: TUC-1936

  • Backup and Restore

    • The mysqldump backup option cannot restore if slow_query_log was on during the backup process.

      Issues: TUC-586

    • Using xtrabackup during restore fails if MySQL is running as user 'anything-but-mysql' and without root access.

      Issues: TUC-1005

    • When using mysqldump restore, the operation failed to disable slow and general logging before applying the restore.

      Issues: TUC-1330

    • Backup fails when using the xtrabackup-full agent.

      Issues: TUC-1612

    • Recovery hangs with composite data service.

      Issues: TUC-1657

    • Performing a restore with xtrabackup fails.

      Issues: TUC-1672

    • The datasource backup (in [Continuent Tungsten 4.0 Manual]) operation could fail due to a Ruby error.

      Issues: TUC-1686

    • Restore with xtrabackup fails.

      Issues: TUC-1716

    • Issues when recovering a failed physical dataservice.

      Issues: TUC-1793

    • Backup with xtrabackup fails if datadir is not defined in my.cnf.

      Issues: TUC-1821

    • When using xtrabackup, restore fails.

      Issues: TUC-1846

    • After a restore, datasource is welcomed and put online, but never gets to the online state.

      Issues: TUC-1861

    • A restore that occurs immediately after a recover from dataserver failure always fails.

      Issues: TUC-1870

    • Master datasource backup generates a superficial failure message but succeeds anyway.

      Issues: TUC-1896

    • Restoration of a full backup would fail due to the inclusion of the xtrabackup_incremental_basedir directory.

      Issues: TUC-1919

    • Backup using xtrabackup 1.6.5 would fail.

      Issues: TUC-1920

    • When using the backup files copied from another server, the replicator could mistakenly use the wrong backup files when performing a restore.

      Issues: TUC-1948

  • Core Replicator

    • Master failure causes partial commits on the slave with single channel parallel apply.

      Issues: TUC-1625

    • Slave applier can fail to log error when DBMS fails due to exception in cleanup.

      Issues: TUC-1626

    • Replication would fail on slave due to null characters created when inserting ___SERVICE___ comments.

      Issues: TUC-1627

    • LOAD (LOCAL) DATA INFILE would fail if the request starts with white spaces.

      Issues: TUC-1639

    • Datasource with a replicator in GOING-ONLINE:RESTORING (in [Tungsten Replicator 5.2 Manual]) shows up with a replicator state=UNKNOWN.

      Issues: TUC-1658

    • An insecure slave can replicate from a secure master.

      Issues: TUC-1677

    • Replicator does not drop client connection to master and reconnect within the same time frame as in previous releases.

      Issues: TUC-1688

  • Filters

    • Primary key filter should be able to renew its internal connection after some timeout.

      Issues: TUC-1803

  • Tungsten Connector

    • TSR Session not updated when the database name changes (with sessionId set to DATABASE).

      Issues: TUC-761

    • Router gateway can prevent manager startup if the connector is started before the manager.

      Issues: TUC-850

    • The Tungsten show processlist command would throw NPE errors.

      Issues: TUC-1136

    • Selective read/write splitting (SQL-Based routing) has been updated to ensure that it is backwards compatible with previous read/write splitting configurations.

      Issues: TUC-1489

    • Router must go into fail-safe mode if it loses connectivity to a manager during a critical command.

      Issues: TUC-1549

    • The SET NAMES command was not forwarded to attached read-only connections.

      Issues: TUC-1569

    • When using haproxy through a connector connection, the initial query would be rejected.

      Issues: TUC-1581

    • When the dataservices.properties (in [Tungsten Replicator 5.2 Manual]) file is empty, the connector would hang. The operation has now been updated to exit with an exception if the file cannot be found.

      Issues: TUC-1586

    • When in a SOR deployment, the Connector would never return connection requests with RO_RELAXED (in [Continuent Tungsten 4.0 Manual]) and affinity set to a relay-node-only site.

      Issues: TUC-1620

    • Affinity not honored when using direct connections.

      Issues: TUC-1628

    • Connector queries for SHOW SLAVE STATUS would intermittently return an incorrect slave latency of 0.

      Issues: TUC-1645

    • The Tungsten Connector does not know its PID following upgrade to JSW 3.5.17.

      Issues: TUC-1665

    • An attempt to load a driver listener class can cause the connector to hang at startup.

      Issues: TUC-1669

    • Read connections allocated by connector get 'stale' and are closed by MySQL server due to wait_timeout - causes app 'transparency' issues.

      Issues: TUC-1671

    • Broken connections returned to the c3p0 pool - further use of these will show errors.

      Issues: TUC-1683

    • Router disconnects from a manager in the middle of a switch (in [Continuent Tungsten 4.0 Manual]) command - writes continue to offline master.

      Issues: TUC-1692

    • Connector sessionId passed in database name not retained.

      Issues: TUC-1704

    • Using USE DB within a connector after the database had previously been dropped would be incorrectly ignored.

      Issues: TUC-1718

    • The connector tungsten flush privileges (in [Continuent Tungsten 4.0 Manual]) command causes a temporary outage (denies new connection requests).

      Issues: TUC-1730

    • Database context not changed to the correct database when qos=DATABASE is in use.

      Issues: TUC-1779

    • Connector should require a valid manager to operate even when in maintenance mode.

      Issues: TUC-1781

    • Connector allows connections to an offline/on-hold composite dataservice.

      Issues: TUC-1787

    • Router notifications are being sent to routers via GCS. This is unnecessary since a manager only updates routers that are connected to it.

      Issues: TUC-1790

    • Pass-through mode did not correctly handle multiple results in 1.5.4.

      Issues: TUC-1792

    • SmartScale would fail when creating a database and using it immediately.

      Issues: TUC-1836

    • The connector could hang during installation test.

      Issues: TUC-1847

    • Under certain circumstances, an SSL-configured Connector would be unable to start properly.

      Issues: TUC-1869

      For more information, see Configuring Connector SSL.

    • Specify where to load security properties from in the connector.

      Issues: TUC-1872

    • A SET NAMES operation would not survive a switch (in [Continuent Tungsten 4.0 Manual]) or failover (in [Continuent Tungsten 4.0 Manual]) operation.

      Issues: TUC-1879

    • The connector (in [Continuent Tungsten 4.0 Manual]) command within cctrl (in [Continuent Tungsten 4.0 Manual]) has been disabled unless the connector and manager are installed on the same host.

      To support the removed functionality, the following changes to the router (in [Continuent Tungsten 4.0 Manual]) command have been made:

      • The * wildcard can be used for connectors within the router (in [Continuent Tungsten 4.0 Manual]) command within cctrl (in [Continuent Tungsten 4.0 Manual]). For example, router * online will place all available connectors online.

      • The built-in command-line completion provides the names of the connectors in addition to the * (wildcard) character for the router (in [Continuent Tungsten 4.0 Manual]) command.

      Issues: TUC-1918

    • Using cursors within stored procedures through the connector would cause a hang in the connector service.

      Issues: TUC-1950

    • The connector would hang when working in a cluster with active witnesses.

      Issues: TUC-1954

    • When specifying the affinity within a connection, the maxAppliedLatency configuration would be ignored.

      Issues: TUC-1960

    • The connector would check for changes to the user.map (in [Continuent Tungsten 4.0 Manual]) frequently, causing lag on high-load servers. The configuration has been updated to allow checking only every 10s.

      Issues: TUC-1972

    • Passing the qos option within a database name would not work when smart scale was enabled.

      Issues: TUC-1982

  • Tungsten Manager

    • The datasource restore (in [Continuent Tungsten 4.0 Manual]) command may fail when using xtrabackup if the file ownership for the backup files is wrong.

      Issues: TUC-1226

    • Dataservice has different "composite" status depending on how its status is called.

      Issues: TUC-1614

    • The switch (in [Continuent Tungsten 4.0 Manual]) command does not validate command line correctly.

      Issues: TUC-1618

    • Composite recovery would fail because a replicator that was previously a master tries to re-apply a transaction that it had previously committed.

      Issues: TUC-1634

    • cctrl (in [Continuent Tungsten 4.0 Manual]) would let you shun the master datasource.

      Issues: TUC-1637

    • During a failover, the master could be left in read-only mode.

      Issues: TUC-1648

    • On occasion, the manager would fail to restart after being hung.

      Issues: TUC-1649

    • The ping command in cctrl (in [Continuent Tungsten 4.0 Manual]) wrongly identifies witness server as unreachable.

      Issues: TUC-1652

    • The failure of the primary data source could go unhandled due to a manager restart.

      Issues: TUC-1659

    • The manager reports composite recovery completion although the operation has failed.

      Issues: TUC-1663

    • A transient error can cause a confused state.

      Issues: TUC-1678

    • Composite recovery could fail, but the manager says it was complete.

      Issues: TUC-1694

    • The internal call to OpenReplicatorManager.status() during the transition from online to offline results in a NullPointerException.

      Issues: TUC-1708

    • Relay does not fail over when the database server is stopped.

      Issues: TUC-1711

    • cctrl (in [Continuent Tungsten 4.0 Manual]) would raise an error when running a backup from a master.

      Issues: TUC-1789

    • Tungsten manager may report false host failures due to a temporary problem with name resolution.

      Issues: TUC-1797

    • cctrl (in [Continuent Tungsten 4.0 Manual]) could report a manager as ONLINE (in [Tungsten Replicator 5.2 Manual]) even though the datasource would in fact be OFFLINE (in [Tungsten Replicator 5.2 Manual]).

      Issues: TUC-1804

    • The manager would not see a secured replicator.

      Issues: TUC-1806

    • Slave replicators never come online after a switch when using secure THL.

      Issues: TUC-1807

    • cctrl (in [Continuent Tungsten 4.0 Manual]) complains of missing security file when security is not enabled.

      Issues: TUC-1808

    • A switch in the relay site fails and takes all nodes offline.

      Issues: TUC-1809

    • A switch in the relay site sets the relay to replicate from itself.

      Issues: TUC-1811

    • In a composite deployment, a switch in the primary site is not propagated to the relay.

      Issues: TUC-1813

    • cctrl (in [Continuent Tungsten 4.0 Manual]) exposes security passwords unnecessarily.

      Issues: TUC-1817

    • The master datasource is not available following the failover (in [Continuent Tungsten 4.0 Manual]) command.

      Issues: TUC-1841

    • The manager does not support a non-standard replicator RMI port.

      Issues: TUC-1842

    • In a multi-site deployment, automatic failover does not happen in maintenance mode, due to replicator issues.

      Issues: TUC-1845

    • During the recovery of a composite dataservice, the restore of a shunned master could fail because the previous and current roles did not match.

      Issues: TUC-1857

    • A stopped dataserver would not be detected if cluster was in maintenance mode when it was stopped.

      Issues: TUC-1860

    • Manager attempts to get status of remote replicator from the local service - causes a failure to catch up from a relay.

      Issues: TUC-1864

    • A switch (in [Continuent Tungsten 4.0 Manual]) operation could fail in single site deployment.

      Issues: TUC-1867

    • In a configuration with a relay of a composite site, if all active datasources are unavailable, a switch (in [Continuent Tungsten 4.0 Manual]) operation would raise invalid exception messages.

      Issues: TUC-1875

    • recover using (in [Continuent Tungsten 4.0 Manual]) fails in the simplest case for 2.0.1.

      Issues: TUC-1876

    • Manager fails safe even if it is in the quorum set and primary partition.

      Issues: TUC-1878

    • Single command recover (in [Continuent Tungsten 4.0 Manual]) does not work - does not find datasources to recover even if they exist.

      Issues: TUC-1881

    • Failover causes old master node name to disappear from cctrl (in [Continuent Tungsten 4.0 Manual]) ls (in [Continuent Tungsten 4.0 Manual]) command.

      Issues: TUC-1894

    • ClusterManagementHandler can read/write datasources directly from the local disk - can cause cluster configuration information corruption.

      Issues: TUC-1899

    • Stopping managers does not cause membership validation rules to kick in. This can lead to an invalid group.

      Issues: TUC-1901

    • The manager rules could fail to fence a composite datasource for which all managers in the service are unreachable.

      Issues: TUC-1902

    • recover using (in [Continuent Tungsten 4.0 Manual]) in a master service could convert one of the datasources into a relay instead of a slave.

      Issues: TUC-1907

    • CREATE COMPOSITE DATASOURCE could result in an exception if the master datasource site was used.

      Issues: TUC-1911

    • The manager would throw a false alarm if the trep_commit_seqno (in [Tungsten Replicator 5.2 Manual]) table was empty. This was due to the manager being started before the replicator had created the required table.

      Issues: TUC-1917

    • Composite recovery within a cloud deployment could fail.

      Issues: TUC-1922

    • Errors could be raised when using the set master (in [Continuent Tungsten 4.0 Manual]) and recover using (in [Continuent Tungsten 4.0 Manual]) commands within cctrl (in [Continuent Tungsten 4.0 Manual]).

      Issues: TUC-1930

    • Composite recovery could fail in a site with multiple masters.

      Issues: TUC-1932

    • A failed master within a dataservice would cause the datasource names to disappear.

      Issues: TUC-1933

    • Running switch (in [Continuent Tungsten 4.0 Manual]) command after performing recovery could fail within a multi-site deployment.

      Issues: TUC-1934

    • Performing a switch (in [Continuent Tungsten 4.0 Manual]) operation when there are active witnesses could cause an error message indicating a fault, when in fact the operation completed successfully.

      Issues: TUC-1935

    • After performing a switch operation, a slave could report to the previous relay rather than the active one.

      Issues: TUC-1939

    • Running operations on active witness datasources would raise NullPointerException errors.

      Issues: TUC-1944, TUC-1945

    • Errors would be reported in the log when deserializing configuration information between the manager and connector.

      Issues: TUC-1963

    • Automatic failover would fail to run if an active witness was the coordinator for the dataservice.

      Issues: TUC-1964

    • Connectors would disappear after restarting the coordinator.

      Issues: TUC-1966

    • The coordinator would attempt to check database server liveness if a manager on a witness host goes away.

      Issues: TUC-1970

    • Composite recovery using a streaming backup results in a site with multiple masters.

      Issues: TUC-1992

    • Installing a composite dataservice would create two master services.

      Issues: TUC-1996

  • Manager API

    • API call for a single server does not report replicator status.

      Issues: TUC-1615

    • API "promote" command does not operate in a composite dataservice.

      Issues: TUC-1617

    • Some indispensable commands missing from manager API.

      Issues: TUC-1654

    • Manager API does not answer to /manager/status/svc_name without an Accept header.

      Issues: TUC-1690

    • The Manager API lets you shun a master.

      Issues: TUC-1706

    • The call to 'policy' API fails in composite dataservice.

      Issues: TUC-1725

  • Platform Specific Deployments

    • Windows service registration scripts would not work.

      Issues: TUC-1636

    • FreeBSD: Replicator hangs when going offline. Can cause switch to hang/abort.

      Issues: TUC-1668

  • Documentation

  • Other Issues

    • The shared libraries used by Continuent Tungsten have now been centralized in the cluster-home directory.

      Issues: TUC-1310

    • Some build warnings in Java 1.6 become errors in Java 1.7.

      Issues: TUC-1731

    • The test_connection_routing_and_isolation.rb test_tuc_98 test never selects the correct master.

      Issues: TUC-1780

    • During testing, a test that stops and restarts the replicator fails because a replicator that is actually running shows up, subsequently, as stopped.

      Issues: TUC-1895

    • The wrapper for the service was not honoring the configured wait period during a restart, which could cause a hang or failure when the service was restarted.

      Issues: TUC-1910, TUC-1913

Continuent Tungsten 2.0.1 Includes the following changes made in Tungsten Replicator 2.2.0

Tungsten Replicator 2.2.0 is a bug fix and feature release that contains a number of key improvements to the installation and management of the replicator:

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new configuration:

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • Installations for Amazon RDS must use tungsten-installer; support is not currently available for tpm (in [Tungsten Replicator 5.2 Manual]).

Improvements, new features and functionality

Bug Fixes

  • Installation and Deployment

    • When performing a Vertica deployment, tpm (in [Tungsten Replicator 5.2 Manual]) would fail to create the correct configuration parameters. In addition, error messages and warnings would be generated that did not apply to Vertica installations. tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to simplify the Vertica installation process.

      Issues: 688, 781

      For more information, see Installing Vertica Replication.

    • When configuring a single host to support a parallel, multi-channel deployment, tpm (in [Tungsten Replicator 5.2 Manual]) would report that this operation was not supported. tpm (in [Tungsten Replicator 5.2 Manual]) has now been updated to support single host parallel apply configurations.

      Issues: 737

    • Configuring an installation with a preferred path for MySQL deployments using the --preferred-path (in [Tungsten Replicator 5.2 Manual]) option would not set the PATH variable correctly; this would lead to tools from an incorrect directory being used when performing backup or restore operations. tpm (in [Tungsten Replicator 5.2 Manual]) has been updated to correctly set the environment during execution.

      Issues: 752

  • Command-line Tools

    • When using the -sql (in [Tungsten Replicator 5.2 Manual]) option to the thl (in [Tungsten Replicator 5.2 Manual]) command, additional metadata and options would be displayed. The tool has now been updated to output only the corresponding SQL.

      Issues: 264

    • DATETIME values could be displayed incorrectly in the THL when using the thl (in [Tungsten Replicator 5.2 Manual]) tool to show log contents.

      Issues: 676

    • An incorrect RMI port could be used within a deployment if a non-standard RMI port was specified during installation, affecting the operation of trepctl (in [Tungsten Replicator 5.2 Manual]). The precedence for selecting the RMI port has been updated to check, in order, the -port (in [Tungsten Replicator 5.2 Manual]) option, the system property, and then the service properties for the selected service and/or the trepctl (in [Tungsten Replicator 5.2 Manual]) executable.

      Issues: 695
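The precedence described above amounts to a simple first-match lookup. A minimal sketch of that selection order, with illustrative names only (this is not Continuent code, and the keys shown are not the actual Tungsten configuration properties):

```python
# Sketch of first-match port selection: command-line option, then a
# system/JVM property, then the per-service properties. The key name
# "rmi_port" and the default value are illustrative assumptions.
def select_rmi_port(cli_port=None, system_props=None, service_props=None,
                    default=10000):
    if cli_port is not None:              # the -port option wins outright
        return int(cli_port)
    for props in (system_props or {}, service_props or {}):
        port = props.get("rmi_port")      # first source that defines it
        if port is not None:
            return int(port)
    return default                        # fall back to the default port

# The command-line option overrides everything else:
assert select_rmi_port(cli_port=10010, system_props={"rmi_port": 10020}) == 10010
# Otherwise the system property beats the service properties:
assert select_rmi_port(system_props={"rmi_port": 10020},
                       service_props={"rmi_port": 10030}) == 10020
```
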

  • Backup and Restore

    • During installation, tpm (in [Tungsten Replicator 5.2 Manual]) would fail to check the version for Percona XtraBackup when working with built-in InnoDB support in MySQL. The check has now been updated and validation will fail if XtraBackup 2.1 or later is used with MySQL 5.1 and built-in InnoDB support.

      Issues: 671

    • When using xtrabackup during a restore operation, the restore would fail. The problem was due to a difference in the interface for XtraBackup 2.1.6.

      Issues: 778

  • Oracle Replication

    • When performing an Oracle deployment, tpm (in [Tungsten Replicator 5.2 Manual]) would apply incorrect parameters and filters and check MySQL specific environment information. The following changes have been made:

      • The colnames (in [Tungsten Replicator 5.2 Manual]) filter is no longer added to Oracle master (extractor) deployments.

      • Incorrect schema value would be defined for the replicator schema.

      The check for mysqldump is still performed on an Oracle master host; use --preferred-path (in [Tungsten Replicator 5.2 Manual]) to set a valid location, or disable the MySQLDumpCheck validation check.

      Issues: 685

  • Core Replicator

    • DECIMAL values could be extracted from the MySQL binary log incorrectly when using statement based logging.

      Issues: 650

    • A null pointer exception could be raised by the master, which would lead to the slave failing to connect to the master correctly. The slave will now retry the connection.

      Issues: 698

    • A slave replicator could fail when synchronizing the THL if the master goes offline. This was due to network interrupts during a failure not being recognised properly.

      Issues: 714

    • In certain circumstances, a replicator could apply transactions that had been generated by itself. This could happen during a failover, leading to events written to the THL, but without the trep_commit_seqno (in [Tungsten Replicator 5.2 Manual]) table having been updated. To fix this problem, consistency checks on the THL contents are now performed during startup. In addition, all replicators now write their currently assigned role to a file within the configuration directory of the running replication service, called static-servicename.role.

      When the replicator goes online, the static-servicename.role file is examined. If the role identified in that file was master, and the current role of the replicator is slave, then the THL consistency checks are enabled. These check for the following situations:

      • Whether trep_commit_seqno (in [Tungsten Replicator 5.2 Manual]) is out of sync with the contents of the THL, provided that the last THL record exists and matches the source-id of the transaction.

      • Whether the current log position is different from the THL position; assuming that the THL position exists, an error will be raised and the replicator will go offline. This behavior can be overridden by using the trepctl online -force (in [Tungsten Replicator 5.2 Manual]) command.

      Once the checks have been completed, the new role for the replicator is updated in the static-servicename.role file.

      Important

      The static-servicename.role file must be deleted, or the THL files must be deleted, when restoring a backup. This is to ensure that the correct current log position is identified.

      Issues: 735
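The role-file handling above reduces to a small state check on startup. The following is an illustrative sketch only (the real replicator is Java and its consistency checks are more involved); the file names and helper functions are assumptions for demonstration:

```python
# Sketch of the role-file check described above. The
# static-<servicename>.role file records the last assigned role, and
# the THL consistency checks run only on a master-to-slave transition.
import os
import tempfile

def needs_consistency_check(role_file, current_role):
    """True when the role recorded on disk is 'master' and the
    replicator is now coming online as 'slave'."""
    if not os.path.exists(role_file):
        return False                      # fresh install: nothing to check
    with open(role_file) as f:
        previous_role = f.read().strip()
    return previous_role == "master" and current_role == "slave"

def record_role(role_file, current_role):
    # Once any checks pass, the new role is written back to the file.
    with open(role_file, "w") as f:
        f.write(current_role)

# Demo: a node that was master and comes back as slave triggers the checks.
role_file = os.path.join(tempfile.mkdtemp(), "static-alpha.role")
record_role(role_file, "master")
assert needs_consistency_check(role_file, "slave")
record_role(role_file, "slave")           # role updated after the checks
assert not needs_consistency_check(role_file, "slave")
```

This also illustrates why the note above requires deleting the role file (or the THL) when restoring a backup: a stale "master" entry would trigger checks against a log position that no longer reflects the restored state.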

    • An UnsupportedEncodingException error could occur when extracting statement based replication events if the MySQL character set did not match a valid Java character set used by the replicator.

      Issues: 743

    • When using row-based replication, replicating into a table that did not exist on the slave would raise a NullPointerException. The replicator now correctly raises an SQL error indicating that the table does not exist.

      Issues: 747

    • During a master failure under load, not all transactions would reach the slave before the master replicator failed.

      Issues: 753

    • Upgrading a replicator and changing the hostname could cause the replicator to skip events in the THL. This was due to the way in which the slave replicator compared the source-id of events against the remote THL read from the master. This particularly affected standalone replicators. The fix adds a new property, replicator.repositionOnSourceIdChange. This is a boolean value, and specifies whether the replicator should try to reposition to the correct location in the THL when the source ID has been modified.

      Issues: 754
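As a sketch, the new property would be enabled in the replicator's service properties; the property name is as given above, while the file location and the value shown are illustrative and depend on your installation:

```
# Allow the replicator to reposition within the THL when the source ID
# (for example, after a hostname change) has been modified:
replicator.repositionOnSourceIdChange=true
```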

    • Running trepctl reset (in [Tungsten Replicator 5.2 Manual]) on a service deployed in a multi-master (all-master) configuration would not correctly remove the schema from the database.

      Issues: 758

    • Replication of temporary tables with the same name, but within different sessions would cause a conflict in the slave.

      Issues: 772

  • Filters

    • The PrimaryKeyFilter (in [Tungsten Replicator 5.2 Manual]) would not renew connections to the master to determine the primary key information. When replication had been running for a long time, the active connection would be dropped, but never renewed. The filter has been updated to re-connect on failure.

      Issues: 670

      For more information, see PrimaryKey Filter.

3.16. Continuent Tungsten 1.5.4 GA (Not yet released)

Continuent Tungsten 1.5.4 is a maintenance release that adds important bug fixes to the Tungsten 1.5.3 release currently in use by most Tungsten customers. It contains the following key improvements:

  • Introduces quorum into Tungsten clusters to help avoid split brain problems due to network partitions. Cluster members vote whenever a node becomes unresponsive and only continue operating if they are in the majority. This feature greatly reduces the chances of multiple live masters.

  • Enables automatic restart of managers after network hangs that disrupt communications between managers. This feature enables clusters to ride out transient problems with physical hosts such as storage becoming inaccessible or high CPU usage that would otherwise cause cluster members to lose contact with each other, thereby causing application outages or manager non-responsiveness.

  • Adds "witness-only managers" which replace the previous witness hosts. Witness-only managers participate in quorum computation but do not manage a DBMS. This feature allows 2 node clusters to operate reliably across Amazon availability zones and any environment where managers run on separate networks.

  • Numerous minor improvements to cluster configuration files to eliminate and/or document product settings for simpler and more reliable operation.
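The majority rule described in the quorum improvement above can be sketched as a simple predicate (illustrative only, not Continuent code):

```python
# Sketch of a majority-quorum test: a member keeps operating only if the
# set of members it can still reach (including itself) is a strict
# majority of the configured group. Witness-only managers count toward
# the vote even though they manage no DBMS.
def has_quorum(reachable_members, total_members):
    return len(reachable_members) > total_members / 2

# A 3-node group that loses one member keeps quorum...
assert has_quorum({"db1", "db2"}, 3)
# ...but a member isolated on its own side of a partition does not,
# which is what prevents multiple live masters:
assert not has_quorum({"db1"}, 3)
# A 2-node cluster plus a witness-only manager survives losing one DB node:
assert has_quorum({"db1", "witness"}, 3)
```

The strict-majority comparison is why an even split (for example, 2 of 4 members) never retains quorum: neither side can claim the majority.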

Continuent recommends that customers who are awaiting specific fixes for the 1.5.3 release consider upgrading to Continuent Tungsten 1.5.4 as soon as it is generally available. All other customers should consider upgrading to Continuent Tungsten 2.0.1 as soon as it is convenient. In addition, we recommend all new projects start out with version 2.0.1.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environments which make use of these tools should be checked and updated for the new configuration:

  • Failover could be rolled back because of a failure to release a Virtual IP. This failure has been updated to trigger a warning rather than a rollback of the failover.

    Issues: TUC-1666

  • An 'UnknownHostException' would cause a failover. The behavior has been updated to result in a suspect DB server.

    Issues: TUC-1667

  • Failover does not occur if the manager on the master host is not running before the database server is stopped.

    Issues: TUC-1900

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.2 Manual]) should validate connector defaults that would fail an upgrade.

      Issues: TUC-1850

    • Improve tpm (in [Tungsten Replicator 5.2 Manual]) error message when running from wrong directory.

      Issues: TUC-1853

  • Tungsten Connector

    • Add support for MySQL cursors in the connector.

      Issues: TUC-1411

    • Connector must forbid zero keepAliveTimeout.

      Issues: TUC-1714

    • In SOR deployments only, Connector logs show relay data service being added twice.

      Issues: TUC-1720

    • Change default delayBeforeOfflineIfNoManager router property to 30s and constrain it to max 60s in the code.

      Issues: TUC-1752

    • Router Manager connection timeout should be a property.

      Issues: TUC-1754

    • Reject server versions that don't start with a number.

      Issues: TUC-1776

    • Add the client IP and port when logging connector messages.

      Issues: TUC-1810

    • Make tungsten cluster status (in [Continuent Tungsten 4.0 Manual]) more SQL-like and reduce the amount of information displayed.

      Issues: TUC-1814

    • Allow connections without a schema name.

      Issues: TUC-1829

  • Other Issues

    • Remove old/extra/redundant configuration files.

      Issues: TUC-1721

Bug Fixes

  • Installation and Deployment

    • Within tpm (in [Tungsten Replicator 5.2 Manual]) the witness host was previously required and was not validated.

      Issues: TUC-1733

    • Ruby tests should abort if installation fails.

      Issues: TUC-1736

    • Test witness hosts on startup of the manager and have the manager exit if there are any invalid witness hosts.

      Issues: TUC-1773

    • Installation fails with Ruby 1.9.

      Issues: TUC-1800

    • When using tpm (in [Tungsten Replicator 5.2 Manual]) to start from a specific event, the correct directory would not be used for the selected method.

      Issues: TUC-1824

    • When specifying a witness host check with tpm (in [Tungsten Replicator 5.2 Manual]), the check works for IP addresses but fails when using host names.

      Issues: TUC-1833

    • Cluster members do not reliably form a group following installation.

      Issues: TUC-1852

    • Installation fails with Ruby 1.9.1.

      Issues: TUC-1868

  • Command-line Tools

    • Nagios check scripts not picking up shunned datasources.

      Issues: TUC-1689

  • Cookbook Utility

    • Cookbook should not specify witness hosts in default configuration files etc.

      Issues: TUC-1734

  • Backup and Restore

    • Restore with xtrabackup empties the data directory and then fails.

      Issues: TUC-1849

    • A recovered datasource does not always come online when in automatic policy mode

      Issues: TUC-1851

    • Restore on datasource in slave dataservice fails to reload.

      Issues: TUC-1856

    • After a restore, datasource is welcomed and put online, but never gets to the online state.

      Issues: TUC-1861

    • A restore that occurs immediately after a recover from dataserver failure always fails.

      Issues: TUC-1870

  • Core Replicator

    • LOAD (LOCAL) DATA INFILE would fail if the request starts with white spaces.

      Issues: TUC-1639
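Fixes of this kind usually come down to whitespace-tolerant matching when classifying an incoming request. A hypothetical sketch of the idea, not the replicator's actual parser:

```python
import re

# Match LOAD DATA [LOCAL] INFILE even when the request starts with whitespace.
_LOAD_DATA_RE = re.compile(r'^\s*LOAD\s+DATA\s+(?:LOCAL\s+)?INFILE\b', re.IGNORECASE)

def is_load_data_infile(sql: str) -> bool:
    """Classify a statement as LOAD (LOCAL) DATA INFILE, ignoring leading whitespace."""
    return bool(_LOAD_DATA_RE.match(sql))
```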

    • Null values are not correctly handled in keys for row events

      Issues: TUC-1823
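The underlying pitfall here is that `col = NULL` never matches any row in SQL; a row-event key containing a NULL value has to be rendered as `col IS NULL`. A minimal illustration with a hypothetical helper (not Tungsten's applier code):

```python
def where_clause(key: dict) -> str:
    """Build a WHERE clause from a row-event key, handling NULL key values."""
    parts = []
    for col, val in key.items():
        if val is None:
            parts.append(f"{col} IS NULL")  # `col = NULL` would match nothing
        else:
            parts.append(f"{col} = {val!r}")
    return " AND ".join(parts)
```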

  • Tungsten Connector

    • Connector fails to send back full result of stored procedure called by prepared statement (pass through mode on).

      Issues: TUC-36

    • Router gateway can prevent manager startup if the connector is started before the manager

      Issues: TUC-850

    • The Tungsten show processlist command would throw NPE errors.

      Issues: TUC-1136

    • The default SQL Router properties use the wrong load balancer.

      Issues: TUC-1437

    • Router must go into fail-safe mode if it loses connectivity to a manager during a critical command.

      Issues: TUC-1549

    • When in a SOR deployment, the Connector will never return connection requests with RO_RELAXED (in [Continuent Tungsten 4.0 Manual]) and affinity set to relay node only site.

      Issues: TUC-1620

    • Affinity not honored when using direct connections.

      Issues: TUC-1628

    • An attempt to load a driver listener class can cause the connector to hang at startup.

      Issues: TUC-1669

    • Broken connections returned to the c3p0 pool - further use of these will show errors.

      Issues: TUC-1683
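The general defence against this class of bug is to validate a connection before handing it out of the pool, and to discard it rather than reuse it if the check fails. A toy sketch of that pattern (not c3p0 itself; all names below are illustrative):

```python
class ValidatingPool:
    """Toy connection pool that tests each idle connection on checkout."""

    def __init__(self, factory, is_alive):
        self._factory = factory    # creates a new connection
        self._is_alive = is_alive  # cheap liveness check, e.g. a ping query
        self._idle = []

    def checkout(self):
        while self._idle:
            conn = self._idle.pop()
            if self._is_alive(conn):
                return conn        # reuse only verified-healthy connections
            # broken connection: drop it instead of returning it to a caller
        return self._factory()

    def checkin(self, conn):
        self._idle.append(conn)
```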

    • The connector tungsten flush privileges (in [Continuent Tungsten 4.0 Manual]) command causes a temporary outage (denies new connection requests).

      Issues: TUC-1730

    • Connector should require a valid manager to operate even when in maintenance mode.

      Issues: TUC-1781

    • Session variables support for row replication

      Issues: TUC-1784

    • Connector allows connections to an offline/on-hold composite dataservice.

      Issues: TUC-1787

    • Router notifications are being sent to routers via GCS. This is unnecessary since a manager only updates routers that are connected to it.

      Issues: TUC-1790

    • Pass-through mode did not correctly handle multiple results in 1.5.4.

      Issues: TUC-1792

    • SmartScale will fail to create a database and use it immediately.

      Issues: TUC-1836

  • Tungsten Manager

    • A manager that cannot see itself as a part of a group should fail safe and restart

      Issues: TUC-1722

    • Retry of tests for networking failure does not work in the manager/rules

      Issues: TUC-1723

    • The 'vip check' command produces a scary message in the manager log if a VIP is not defined

      Issues: TUC-1772

    • Restored slave did not change to the correct master.

      Issues: TUC-1794

    • If a manager leaves a group due to a brief outage and does not restart, it remains stranded from the rest of the group but 'thinks' it is still a member. This was a major contributor to hangs and restarts during operations.

      Issues: TUC-1830

    • Failover of relay aborts when relay host reboots, leaving data sources of slave service in shunned or offline state.

      Issues: TUC-1832

    • The recover (in [Continuent Tungsten 4.0 Manual]) command completes but cannot welcome the datasource, leading to a failure in tests.

      Issues: TUC-1837

    • After failover on primary master, relay datasource points to wrong master and has invalid role.

      Issues: TUC-1858

    • A stopped dataserver would not be detected if cluster was in maintenance mode when it was stopped.

      Issues: TUC-1860

    • Manager attempts to get status of remote replicator from the local service - causes a failure to catch up from a relay.

      Issues: TUC-1864

    • Using the recover using command can result in more than one service in a composite service having a master; if this happens, the composite service will have two masters.

      Issues: TUC-1874

    • Using the recover using command, the operation recovers a datasource to a master when it should recover it to a relay.

      Issues: TUC-1882

    • ClusterManagementHandler can read/write datasources directly from the local disk, which can corrupt cluster configuration information.

      Issues: TUC-1899

  • Platform Specific Deployments

    • FreeBSD: Replicator hangs when going offline. Can cause switch to hang/abort.

      Issues: TUC-1668