Release Notes

Continuent Ltd

Abstract

This document provides release notes for all released versions of Continuent software.

Build date: 2018-09-21 (7bbed01e)

Up to date builds of this document: Release Notes (Online), Release Notes (PDF)


Table of Contents

1. Tungsten Replicator Release Notes
1.1. Tungsten Replicator 6.0.3 GA (5 September 2018)
1.2. Tungsten Replicator 6.0.2 GA (27 June 2018)
1.3. Tungsten Replicator 6.0.1 GA (30 May 2018)
1.4. Tungsten Replicator 6.0.0 GA (4 April 2018)
1.5. Tungsten Replicator 5.3.3 GA (20 September 2018)
1.6. Tungsten Replicator 5.3.2 GA (4 June 2018)
1.7. Tungsten Replicator 5.3.1 GA (18 April 2018)
1.8. Tungsten Replicator 5.3.0 GA (12 December 2017)
1.9. Tungsten Replicator 5.2.2 GA (22 October 2017)
1.10. Tungsten Replicator 5.2.1 GA (21 September 2017)
1.11. Tungsten Replicator 5.2.0 GA (19 July 2017)
1.12. Tungsten Replicator 5.1.1 GA (23 May 2017)
1.13. Tungsten Replicator 5.1.0 GA (26 April 2017)
1.14. Tungsten Replicator 5.0.1 GA (23 February 2017)
1.15. Tungsten Replicator 5.0.0 GA (7 December 2015)
2. Tungsten Clustering Release Notes
2.1. Tungsten Clustering 6.0.3 GA (5 September 2018)
2.2. Tungsten Clustering 6.0.2 GA (27 June 2018)
2.3. Tungsten Clustering 6.0.1 GA (30 May 2018)
2.4. Tungsten Clustering 6.0.0 GA (4 April 2018)
2.5. Tungsten Clustering 5.3.3 GA (20 September 2018)
2.6. Tungsten Clustering 5.3.2 GA (4 June 2018)
2.7. Tungsten Clustering 5.3.1 GA (18 April 2018)
2.8. Tungsten Clustering 5.3.0 GA (12 December 2017)
2.9. Tungsten Clustering 5.2.2 GA (22 October 2017)
2.10. Tungsten Clustering 5.2.1 GA (21 September 2017)
2.11. Tungsten Clustering 5.2.0 GA (19 July 2017)
2.12. Tungsten Clustering 5.1.1 GA (23 May 2017)
2.13. Tungsten Clustering 5.1.0 GA (26 April 2017)
2.14. Tungsten Clustering 5.0.1 GA (23 February 2017)
2.15. Tungsten Clustering 5.0.0 GA (7 December 2015)
2.16. Tungsten Clustering 4.0.0 Not yet released (Not yet released)
3. Continuent Tungsten Release Notes
3.1. Continuent Tungsten 4.0.8 GA (22 May 2017)
3.2. Continuent Tungsten 4.0.7 GA (23 February 2017)
3.3. Continuent Tungsten 4.0.6 GA (8 December 2016)
3.4. Continuent Tungsten 4.0.5 GA (4 March 2016)
3.5. Continuent Tungsten 4.0.4 GA (24 February 2016)
3.6. Continuent Tungsten 4.0.3 Not Released (NA)
3.7. Continuent Tungsten 4.0.2 GA (1 October 2015)
3.8. Continuent Tungsten 4.0.1 GA (20 July 2015)
3.9. Continuent Tungsten 4.0.0 GA (17 April 2015)
3.10. Continuent Tungsten 2.2.0 NYR (Not Yet Released)
3.11. Continuent Tungsten 2.0.5 GA (24 Dec 2014)
3.12. Continuent Tungsten 2.0.4 GA (9 Sep 2014)
3.13. Continuent Tungsten 2.0.3 GA (1 Aug 2014)
3.14. Continuent Tungsten 2.0.2 GA (19 May 2014)
3.15. Continuent Tungsten 2.0.1 GA (3 January 2014)
3.16. Continuent Tungsten 1.5.4 GA (Not yet released)

1. Tungsten Replicator Release Notes

1.1. Tungsten Replicator 6.0.3 GA (5 September 2018)

Version End of Life. 5 September 2021

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Oracle Replication

    • Oracle connection strings can now be configured using the Oracle TNS name, rather than purely the Oracle service or SID names. To use this option, specify the TNS name using the --datasource-oracle-service (in [Tungsten Replicator 6.0 Manual]) option to tpm (in [Tungsten Replicator 6.0 Manual]). This will configure the connection using the service name or TNS name if this can be determined. If the TNS name cannot be resolved automatically, use the --oracle-redo-tnsadmin-home to specify the directory where the Oracle tnsnames.ora file is located.

      To use the JDBC listener rather than the TNS service, use the --datasource-oracle-sid (in [Tungsten Replicator 6.0 Manual]) option.
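
      For illustration only, the TNS-based form described above might be configured with a tpm fragment similar to the following; the service name ORCL_TNS and the tnsnames.ora directory are placeholders and should be replaced with values for your environment:

      shell> tpm configure alpha \
          --datasource-oracle-service=ORCL_TNS \
          --oracle-redo-tnsadmin-home=/etc/oracle/tns_admin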

      Issues: CT-380

    • Oracle support has been improved, adding support for Oracle TNS naming and support for extracting Oracle RAC using the Oracle Redo Reader functionality.

      Support has been added for extracting data from Oracle RAC hosts. Enabling extraction from Oracle RAC requires the use of the new Oracle service name (TNS) specification, and a different option to tpm (in [Tungsten Replicator 6.0 Manual]) to enable a different Redo Reader configuration.

      To enable extraction from an Oracle RAC instance, use the --oracle-redo-rac-enabled=true (in [Tungsten Replicator 6.0 Manual]) option to tpm (in [Tungsten Replicator 6.0 Manual]). In addition, you should specify the connection information to Oracle using the --datasource-oracle-service (in [Tungsten Replicator 6.0 Manual]) option to specify the TNS name, and optionally specify the location of the tnsnames.ora file using the --oracle-redo-tnsadmin-home option to tpm (in [Tungsten Replicator 6.0 Manual]).

      If your RAC environment uses a different edition of ASM than that used by the core Oracle deployment, the --oracle-redo-asm-home (in [Tungsten Replicator 6.0 Manual]) option can be used to specify the home directory for the ASM version in use.
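
      As a hypothetical illustration, the RAC-related options described above might be combined in a single tpm fragment; all of the values shown are placeholders:

      shell> tpm configure alpha \
          --oracle-redo-rac-enabled=true \
          --datasource-oracle-service=RACDB_TNS \
          --oracle-redo-tnsadmin-home=/etc/oracle/tns_admin \
          --oracle-redo-asm-home=/u01/app/grid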

      Currently, this includes an action script for use with Oracle RAC hosts to be used when switching RAC hosts during operation in the event of a failure. The action script can be found in support/oracle-rac-scripts/action_script.scr.

      Issues: CT-660, CT-666

      For more information, see Oracle Replication on Oracle RAC (in [Tungsten Replicator 6.0 Manual]).

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 6.0 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

    • The replicator has been updated to support the new character sets supported by MySQL 5.7 and MySQL 8.0, including the utf8mb4 series.

      Issues: CT-700

Bug Fixes

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 6.0 Manual]) attempts to find the system commands such as service and systemctl used to start and stop databases. If these were not in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) would fail to find a start/stop mechanism for the configured database. In addition to looking for these tools in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) now also explicitly looks in the /sbin, /bin, /usr/bin and /usr/sbin directories.

      Issues: CT-722

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 6.0 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 6.0 Manual]), the operation would fail if the /etc/mysql directory does not exist.

      Issues: CT-724

    • Due to the operation taking a long time or timing out, the capture of the output from lsof has been removed from running tpm diag (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-731

  • Oracle Replication

    • When performing an Oracle installation for applying data, tpm (in [Tungsten Replicator 6.0 Manual]) would report an issue with permissions that are not required for applying data into Oracle.

      Issues: CT-664

    • The prepare-offboard-fetcher.pl (in [Tungsten Replicator 6.0 Manual]) script has been updated to address an issue with one of the checks made during execution.

      Issues: CT-665

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without highlighting the individual services reporting the entries, making it difficult to identify individual log entries.

      Issues: CT-659

    • When replicating data that included timestamps, the replicator would update the timestamp value to the time within the commit from the incoming THL. When using statement-based replication, times would be correctly replicated, but when using a mixture of statement- and row-based replication, the timestamp value would not be set back to the default time when switching between statement- and row-based events. This would not cause problems on the applied host, except when log_slave_updates was enabled. In this case, all row-based events after a statement-based event would have the same timestamp value applied.

      Issues: CT-686

1.2. Tungsten Replicator 6.0.2 GA (27 June 2018)

Version End of Life. 27 June 2021

Release 6.0.2 is a bugfix release. No issues were fixed in the replicator release.

1.3. Tungsten Replicator 6.0.1 GA (30 May 2018)

Version End of Life. 30 May 2021

Release 6.0.1 is a bugfix release.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The tungsten_set_position (in [Tungsten Replicator 6.0 Manual]) and tungsten_get_position commands have been deprecated and will be removed in the 6.1.0 release. These commands only worked with MySQL datasources. Use the dsctl (in [Tungsten Replicator 6.0 Manual]) command, which works with a much wider range of datasources.
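
    As a rough illustration, a script that previously called tungsten_set_position can set the same position with dsctl; the position values below are examples only (they mirror the dsctl example given later in these notes):

    shell> dsctl set -seqno 17 -epoch 11 \
        -event-id "mysql-bin.000082:0000000014031577;-1" -source-id "ubuntu"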

    Issues: CT-517

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm diag (in [Tungsten Replicator 6.0 Manual]) command has been improved to include more information about the environment, including:

      • The output from the lsof command.

      • The output from the ps command.

      • The output from the show full processlist command within mysql.

      • Copies of all the .properties configuration files.

      • Copies of all the my.cnf files, including directory configurations.

      • Improvements to the clarity of some commands.

      • The INI files used by tpm (in [Tungsten Replicator 6.0 Manual]) (if using INI installs) are included.

      Issues: CT-530, CT-611, CT-615, CT-623

  • Command-line Tools

    • The trepctl services (in [Tungsten Replicator 6.0 Manual]) command has been updated to support the auto-refresh option using the -r command-line option.
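
      For example, to refresh the services output every five seconds:

      shell> trepctl services -r 5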

      Issues: CT-627

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command has been updated with a new command, servicetable (in [Tungsten Replicator 6.0 Manual]). This outputs the status information for multiple services in a tabular format to make it easier to identify the state for multi-service replicators. For example:

      shell> trepctl servicetable
      Processing servicetable command...
      Service | Status | Role | MasterConnectUri | SeqNo | Latency
      -------------------- | ------------------------------ | ---------- | ------------------------------ | ---------- | ----------
      alpha | ONLINE | slave | thl://trfiltera:2112/ | 322 | 0.00
      beta | ONLINE | slave | thl://ubuntuheterosrc:2112/ | 12 | 4658.59
      Finished servicetable command...

      The command also supports the auto-refresh option, -r.

      Issues: CT-637

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying the datatype to MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

    • When using Net::SSH within tpm (in [Tungsten Replicator 6.0 Manual]), more detailed information about any specific failures or errors is now provided.

      Issues: CT-523

    • tpm (in [Tungsten Replicator 6.0 Manual]) would mistakenly report issues with JSON columns during installation; this warning no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 6.0 Manual]) could hang within different scenarios, including being executed in the background, or as part of a background script or cronjob. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • The trepctl status (in [Tungsten Replicator 6.0 Manual]) would fail badly if the service name did not exist in the configuration, or if multiple services were configured.

      Issues: CT-545, CT-593

    • When using tpm (in [Tungsten Replicator 6.0 Manual]) with the INI method, the command would search multiple locations for suitable INI files. This could lead to multiple definitions of the same service, which could in turn lead to duplication of the installation process and occasional failures. If multiple INI files are found, a warning is now produced to highlight the potential for failures.

      Issues: CT-626

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBound errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 6.0 Manual]) where the sequence number could be larger than an INT would cause an error.

      Issues: CT-642

  • Oracle Replication

    • The prepare_offboard_fetcher script could fail due to the use of a command that may not exist on some platforms. Under some circumstances the script could also be installed as non-executable.

      Issues: CT-420, CT-421

  • Heterogeneous Replication

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for MySQL to Oracle did not escape field names correctly.

      Issues: CT-249

    • When replicating data into MongoDB, numeric values and date values would be represented in the target database as strings, not as their native values.

      Issues: CT-581, CT-582

    • The default partition method used when loading data through CSV files showed an incorrect example format. Previously it was advised to use:

      'commit_hour='yyyy-MM-dd-HH

      It should just show the date format:

      yyyy-MM-dd-HH

      Issues: CT-607

    • The Javascript batch loader for Redshift could generate an error when loading the object used to derive information during loading.

      Issues: CT-620

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for Oracle to Redshift failed to handle the NUMBER type correctly.

      Issues: CT-621

  • Core Replicator

    • Optimizing deletes in row-based replication could delete the wrong rows if the pkey (in [Tungsten Replicator 6.0 Manual]) filter had not been enabled.

      Issues: CT-557

    • The included Drizzle driver would incorrectly assign values to prepared statements if the fields in the prepared statement included a question mark.

      Issues: CT-608

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility with the metadata.

      Issues: CT-618

  • Filters

    • The truncatetext (in [Tungsten Replicator 6.0 Manual]) filter was not configurable within all topologies. The configuration has now been updated so that the filter can be used in MySQL and other database environments.

      Issues: CT-386

1.4. Tungsten Replicator 6.0.0 GA (4 April 2018)

Version End of Life. 4 April 2021

Release 6.0.0 is a feature and bugfix release. This release contains the following key fixes:

  • Added PostgreSQL applier support.

  • Added support for custom primary key field selection for source tables that cannot be configured with a primary key within the database.

  • Added a new filter for including whole of transaction metadata information into each event.

  • Added support for extended transaction information within the Kafka applier so that all the messages for a given transaction can be identified.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Support for using Java 7 with Continuent Tungsten has been removed. Java 8 or higher must be used for all deployments.

    Issues: CT-450

Improvements, new features and functionality

  • Heterogeneous Replication

    • The Kafka applier now supports the inclusion of transaction information into each Kafka message broadcast, including the list of schema/tables and row counts for the entire transaction, as well as information about whether the message is the first or last message/row within an overall transaction. The transaction information can also be sent as a separate message on an independent Kafka topic.

      Issues: CT-496, CT-586

      For more information, see Optional Configuration Parameters for Kafka (in [Tungsten Replicator 6.0 Manual]).

  • Core Replicator

    • Experimental support for writing row-based data through SQL into PostgreSQL has been added back to the replicator. This includes basic support for the replication of the data. Currently databases and tables must be created by hand. A future release will incorporate full support for DDL translation.

      Issues: CT-149

  • Filters

    • The pkey (in [Tungsten Replicator 6.0 Manual]) filter has been extended to support the specification of custom primary key fields. This enables fields in the source data to be marked as primary keys even if the source database does not have the keys specified. This is useful for heterogeneous loading of data where a unique key may exist, but cannot be defined due to the application or database that created the tables.

      Issues: CT-481

      For more information, see Setting Custom Primary Key Definitions (in [Tungsten Replicator 6.0 Manual]).

    • A new filter, rowaddtxninfo (in [Tungsten Replicator 6.0 Manual]) has been added which embeds row counts, both total and per schema/table, to the metadata for a THL event/transaction.

      Issues: CT-497

Bug Fixes

  • Installation and Deployment

    • When performing a tpm reverse (in [Tungsten Replicator 6.0 Manual]), the --replication-port (in [Tungsten Replicator 6.0 Manual]) setting would be replaced with its alias, --oracle-tns-port (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-597

  • Core Replicator

    • An internal optimization within the replicator that would attempt to optimise row-based information and operations has been removed. The effects of the optimization were actually seen in very few situations, and it duplicated work and operations performed by the pkey (in [Tungsten Replicator 6.0 Manual]) filter. Unfortunately the same optimization could also cause issues within heterogeneous deployments with the removal of information.

      Issues: CT-318

    • The internal storage of the MySQL server ID has been updated to support larger server IDs. This works with any MySQL deployment, but has been specifically expanded to work better with some cloud deployments where the server ID cannot be controlled.

      Issues: CT-439

    • The format of some errors and log entries would contain invalid characters.

      Issues: CT-493

1.5. Tungsten Replicator 5.3.3 GA (20 September 2018)

Version End of Life. 20 September 2021

Release 5.3.3 is a bug fix release.

Improvements, new features and functionality

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 5.3 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

Bug Fixes

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 5.3 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 5.3 Manual]), the operation would fail if the /etc/mysql directory does not exist.

      Issues: CT-724

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without highlighting the individual services reporting the entries, making it difficult to identify individual log entries.

      Issues: CT-659

1.6. Tungsten Replicator 5.3.2 GA (4 June 2018)

Version End of Life. 7 June 2019

Release 5.3.2 is a bug fix release.

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.3 Manual]) would mistakenly report issues with JSON columns during installation; this warning no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.3 Manual]) could hang within different scenarios, including being executed in the background, or as part of a background script or cronjob. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBound errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 5.3 Manual]) where the sequence number could be larger than an INT would cause an error.

      Issues: CT-642

  • Core Replicator

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility with the metadata.

      Issues: CT-618

1.7. Tungsten Replicator 5.3.1 GA (18 April 2018)

Version End of Life. 7 June 2019

Release 5.3.1 is a bug fix release that adds support for the GEOMETRY data type in MySQL 5.7 and above, and a number of bug fixes.

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying the datatype to MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

1.8. Tungsten Replicator 5.3.0 GA (12 December 2017)

Version End of Life. 7 June 2019

Release 5.3.0 is an important feature release that contains some key new functionality for replication. In particular:

  • JSON data type column extraction support for MySQL 5.7 and higher.

  • Generated column extraction support for MySQL 5.7 and higher.

  • DDL translation support for heterogeneous targets, initially supporting DDL translation for MySQL to MySQL, Vertica and Redshift targets.

  • Support for data concentration for replication into a single target schema (with additional source schema information added to each table) for both HPE Vertica and Amazon Redshift targets.

  • Rebranded and updated support for Oracle extraction with the Oracle Redo Reader, including improvements to offboard deployment, more configuration options, and support for the deployment and installation of multiple offboard replication services within a single replicator.

This release also contains a number of important bug fixes and minor improvements to the product.

Improvements, new features and functionality

  • Behavior Changes

    • The way that information is logged has been improved so that it should be easier to identify and find errors and the causes of errors when looking at the logs. To achieve this, logging is now provided into an additional file, one for each component, and the new files contain only errors at the WARNING or ERROR levels. The new file is replicator-user.log. The original file, trepsvc.log remains unchanged.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

    • Support for Java 7 (JDK or JRE 1.7) has been deprecated, and will be removed in the 6.0.0 release. The software is compiled using Java 8 with Java 7 compatibility.

      Issues: CT-252

    • Some Javascript filters had DOS style line breaks.

      Issues: CT-376

    • Support for JSON datatypes and generated columns within MySQL 5.7 and greater has been added to the MySQL extraction component of the replicator.

      Important

      Due to a MySQL bug in the way that JSON and generated columns are represented within the MySQL binary log, it is possible for the size of the data and the reported size to be different, and this could cause data corruption. To account for this behavior and to prevent data inconsistencies, the replicator can be configured to either ignore, warn, or stop if the mismatch occurs.

      This can be set by modifying the property replicator.extractor.dbms.json_length_mismatch_policy.

      Until this problem is addressed within MySQL, tpm (in [Tungsten Replicator 5.3 Manual]) will still generate a warning about the issue, which can be ignored during installation by using the --skip-validation-check=MySQLGeneratedColumnCheck (in [Tungsten Replicator 5.3 Manual]) option.
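
      As an illustration, the mismatch policy and the validation check might be set together during configuration; warn is one of the three settings described above (ignore, warn, stop):

      shell> tpm configure alpha \
          --property=replicator.extractor.dbms.json_length_mismatch_policy=warn \
          --skip-validation-check=MySQLGeneratedColumnCheck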

      For more information on the effects of the bug, see MySQL Bug #88791.

      Issues: CT-5, CT-468

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command has been updated to correctly operate with CentOS 7 and higher. Due to an underlying change in the way IP configuration information was sourced, the extraction of the IP address information has been updated to use the ip addr command.

      Issues: CT-35

    • The THL retention setting is now checked in more detail during installation. When --thl-log-retention (in [Tungsten Replicator 5.3 Manual]) is configured while extracting from MySQL, the value is compared to the binary log expiry setting in MySQL (expire_logs_days). If the value is lower, a warning is produced to highlight the potential for loss of data.
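
      For example, assuming a retention value of the form 7d (check the manual for the exact accepted units), the two settings being compared would be:

      shell> tpm configure alpha --thl-log-retention=7d
      mysql> SHOW VARIABLES LIKE 'expire_logs_days';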

      Issues: CT-91

    • A new option, --oracle-redo-temp-tablespace (in [Tungsten Replicator 5.3 Manual]) has been added to configure the temporary tablespace within Oracle redo reader extractor deployments.

      Issues: CT-321

  • Command-line Tools

    • The size outputs for the thl list (in [Tungsten Replicator 5.3 Manual]) command, such as -sizes (in [Tungsten Replicator 5.3 Manual]) or -sizesdetail (in [Tungsten Replicator 5.3 Manual]), now additionally output summary information for the selected THL events:

      Total ROW chunks: 8 with 7 updated rows (50%)
      Total STATEMENT chunks: 8 with 2552 bytes (50%)
      16 events processed

      A new option has also been added, -sizessummary (in [Tungsten Replicator 5.3 Manual]), that only outputs the summary information.
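
      For example, to display only the summary output shown above:

      shell> thl list -sizessummary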

      Issues: CT-433

      For more information, see thl list -sizessummary Command (in [Tungsten Replicator 5.3 Manual]).

  • Oracle Replication

    • A new option for tpm (in [Tungsten Replicator 5.3 Manual]) has been added, --oracle-tns-port (in [Tungsten Replicator 5.3 Manual]), which is an alias for --replication-port (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-274

    • The fetcher and miner ports can now be explicitly set. Previously they were fixed as port 7901 and 7902 respectively. Use the --oracle-redo-fetcher-port (in [Tungsten Replicator 5.3 Manual]) and --oracle-redo-miner-port (in [Tungsten Replicator 5.3 Manual]) options to set them.
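
      For example, to set the fetcher and miner ports explicitly to the previously fixed values:

      shell> tpm configure alpha \
          --oracle-redo-fetcher-port=7901 \
          --oracle-redo-miner-port=7902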

      Issues: CT-290

  • Heterogeneous Replication

    • The HPE Vertica applier has been updated and expanded so that data can be concentrated from multiple source schemas into a single schema, where all the source and target schemas share a common table structure. The new functionality relies on the new adddbrowname filter, and a new batch applier script that handles the concentration.

      This functionality also incorporates options to keep a long-term copy of all the CDC data generated by the replicator by copying the data to a secondary set of staging tables. Both this and the core target information are configurable during installation.

      Note

      Full documentation on using this feature is under production and will be available shortly.

      Issues: CT-95

    • Support has now been added for full DDL replication and translation, initially from MySQL sources through to Amazon Redshift and HPE Vertica targets. The functionality allows schemas and tables to be created, modified, and deleted, without the need to use ddlscan (in [Tungsten Replicator 5.3 Manual]), and without having to worry about making changes that stop replication until the structures can be changed.

      The DDL translation supports the following features:

      • Full replication of schema and table operations.

      • Configurable translation of data types, including size differences.

      • Automatically creates staging tables for batch-based appliers.

      • Support for centralized and long term schema replication.

      • Ability to add arbitrary columns to all replicated tables.

      • Ability to choose whether to apply different schema operations on specific schemas or tables. The following options can be controlled:

        • Creating schema

        • Creating table

        • Adding columns to existing table

        • Deleting columns from existing table

        • Modifying columns in existing table

        • Deleting table

        • Deleting schema

        For each operation, the operation can be applied, ignored, stop replication with an error, or applied with archiving. In the case of the last option, a copy of the table is kept, and changes are applied only to the active table. This enables you to retain existing data and structure so that analytics can continue on a known version of the table. The naming and format of the table can also be set.

        For operations that add or change columns, you can choose whether the value for the new column within the existing rows of the table is set to the default value or to an explicit value.

      • Data is automatically flushed and committed before table changes are made to ensure that replication does not stop. This process happens automatically, so replicating data, adding a column, and replicating further data does not stop replication, even if the data would normally fail because of table differences and batch applier timings.

      • Existing table schemas can be extracted and replicated automatically through to a target without requiring ddlscan (in [Tungsten Replicator 5.3 Manual]) to create the initial tables.

      Note

      Full documentation on using this feature is under production and will be available shortly.

      Issues: CT-131, CT-132

    • The Javascript files used for applying data into batch targets (Redshift, Hadoop, Cassandra, Vertica) have been updated and improved to ensure:

      • Field names are correctly escaped

      • Error messages now contain more information about the problem

      • Where relevant, the host database errors and CSV files are now kept in the event of an error to help identification of the underlying problem.

      These changes should make it easier to identify issues, and to prevent certain issues occurring during replication.

      Issues: CT-96, CT-235

    • The CSV writer module which is used in all batch-related appliers (Redshift, Hadoop, Vertica, Cassandra) has been updated so that it provides more information about the potential problem when a CSV write is identified as invalid.

      Issues: CT-236

    • Support for replicating into Hadoop environments where the underlying filesystem is protected by Kerberos security and authentication has been added to the Hadoop applier. A new file, hadoop_kerberos.js has been added to the distribution which should be edited and used in place of the normal hadoop.js batch file.
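
      A minimal sketch only: assuming the standard batch load template option is used to select the Kerberos-enabled script, the selection might look like the following. The option name and value are assumptions and should be verified against the manual for your release:

      shell> tpm configure alpha \
          --batch-load-template=hadoop_kerberos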

      Issues: CT-266

      For more information, see Replicating into Kerberos Secured HDFS (in [Tungsten Replicator 5.3 Manual]).

    • The Amazon Redshift applier has been updated and expanded so that data can be concentrated from multiple source schemas into a single schema, where all the source and target schemas share a common table structure. The new functionality relies on the new adddbrowname filter, and a new batch applier script that handles the concentration.

      Note

      Full documentation on using this feature is under production and will be available shortly.

      Issues: CT-408

  • Filters

    • A new filter, rowadddbname (in [Tungsten Replicator 6.0 Manual]), has been added to the replicator. This filter adds the incoming schema name, and optional numeric hash value of the schema, to every row of THL row-based changes. The filter is designed to be used with heterogeneous and analytics applications where data is being concentrated into a single schema and where the source schema name will be lost during the concentration and replication process.

      In particular, it is designed to work in harmony with the new Redshift and Vertica based single-schema appliers where data from multiple, identical, schemas are written into a single target schema for analysis.
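
      A hypothetical fragment showing how such a filter is typically enabled on the extraction stage; whether the filter should run on the extractor, thl, or applier stage for your topology, and any hash-related properties, should be confirmed in the manual:

      shell> tpm configure alpha \
          --svc-extractor-filters=rowadddbname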

      Issues: CT-98

    • A new filter has been added, rowadddbname (in [Tungsten Replicator 6.0 Manual]), which adds the source database name and optional database hash to every incoming row of data. This can be used to help identify source information when concentrating information into a single schema.

      Issues: CT-407

Bug Fixes

  • Installation and Deployment

    • An issue has been identified with the way certain operating systems now configure their open files limits, which can upset the checks within tpm (in [Tungsten Replicator 5.3 Manual]) that determine the open files limits configured for MySQL. To ensure that the open files limit has been set correctly, check the configuration of the service:

      1. Copy the system configuration:

        shell> sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
        shell> sudo vim /etc/systemd/system/mysql.service

      2. Add the following line to the end of the copied file:

        LimitNOFILE=infinity

      3. Reload the systemctl daemon:

        shell> sudo systemctl daemon-reload

      4. Restart MySQL:

        shell> service mysql restart

      That configures everything properly and MySQL should now take note of the open_files_limit config option.

      Issues: CT-148

    • The check to determine if triggers had been enabled within the MySQL data source would not get executed correctly, meaning that warnings about unsupported triggers would not trigger a notification.

      Issues: CT-185

    • When using tpm diag (in [Tungsten Replicator 5.3 Manual]) on a MySQL deployment, the MySQL error log would not be identified and included properly if the default datadir option was not /var/lib/mysql.

      Issues: CT-359

    • Installation when enabling security through SSL could fail intermittently during installation because the certificates would fail to get copied to the required directory during the installation process.

      Issues: CT-402

    • The Net::SSH libraries used by tpm (in [Tungsten Replicator 5.3 Manual]) have been updated to reflect the deprecation of the paranoid parameter.

      Issues: CT-426

    • Using a complex password, particularly one with single or double quotes, when specifying a password for tpm (in [Tungsten Replicator 5.3 Manual]), could cause checks and the installation to raise errors or fail, although the actual configuration would work properly. The problem was limited to internal checks by tpm (in [Tungsten Replicator 5.3 Manual]) only.

      Issues: CT-440

  • Command-line Tools

    • The startall (in [Tungsten Replicator 5.3 Manual]) command would fail to correctly start the Oracle redo reader process.

      Issues: CT-283

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command would fail to remove the Oracle redo reader user when using tpm uninstall.

      Issues: CT-299

    • The replicator stop (in [Tungsten Replicator 5.3 Manual]) command would not stop the Oracle redo reader process.

      Issues: CT-300

    • Within Vertica deployments, the internal identity of the applier was set incorrectly to PostgreSQL. This would make it difficult for certain internal processes to identify the true datasource type. The setting did not affect the actual operation.

      Issues: CT-452

  • Oracle Replication

    • Oracle deployments have been updated so that the replicator is always running in UTF-8 and the NLS_LANG setting is set correctly. This will affect primarily CDC and Oracle applier deployments.

      Issues: CT-251

    • The ddlscan (in [Tungsten Replicator 5.3 Manual]) templates for Oracle to MySQL would incorrectly map NUMBER types into DECIMAL with an invalid size definition. This has been updated so that anything larger than a 19-digit NUMBER is mapped to a MySQL BIGINT.

      Issues: CT-259

    • The Oracle redo reader component has been rebranded to Continuent, Ltd, and changed internally to be identified as simply 'oracle redo reader'. This has changed the following elements within the product:

      • All components and references to vmrr and vmrrd have been changed to orarr and orarrd (in [Tungsten Replicator 5.3 Manual]) respectively.

      • All tpm (in [Tungsten Replicator 5.3 Manual]) options that contain vmware have been replaced with oracle, including:

        install-vmware-redo-reader is now install-oracle-redo-reader (in [Tungsten Replicator 5.3 Manual])
        repl-install-vmware-redo-reader is now repl-install-oracle-redo-reader (in [Tungsten Replicator 5.3 Manual])
      • All internal references, including the configuration parameters for the redo reader, have been updated to use orarr.

      • The default username and password used with the redo reader have changed from vmrruser to orarruser, and vmrruserpwd to orarruserpwd.

      • The template files used to configure the redo reader have been changed from vmrr_response_file to orarr_response_file, and from offboard_vmrr_response_file to offboard_orarr_response_file.

      • The vmrrd_wrapper has been renamed to orarrd_wrapper.

      Issues: CT-19, CT-282, CT-367

    • When running the orarrd (in [Tungsten Replicator 5.3 Manual]) command to execute the console, the command would fail and report:

      tungsten@dbora1 alpha$ orarrd_alpha console
      orarr is already started

      Issues: CT-397

    • The orarrd script contained incorrect environment variables for testing the validity of the installation. This could cause access to the Redo Reader console to fail.

      Issues: CT-401

  • Heterogeneous Replication

    • The Redshift applier would use a relative directory for the AWS configuration reference, but would refer to the wrong location.

      Issues: CT-375

    • The sample configuration file for Redshift mistakenly contained $ characters to indicate variables. These dollar signs are not required.

      Issues: CT-406

  • Core Replicator

    • When parsing THL data it was possible for the internal THL processing to lead to a java.util.ConcurrentModificationException. This indicated that the underlying THL event metadata structure used internally had changed between uses.

      Issues: CT-355

1.9. Tungsten Replicator 5.2.2 GA (22 October 2017)

Version End of Life. 31 January 2019

Tungsten Replicator 5.2.2 is a minor bugfix release that addresses some bugs found in the previous 5.2.1 (in [Tungsten Replicator 5.2 Manual]) release. It is a recommended upgrade for all users making use of cluster to big data replication.

Bug Fixes

  • Installation and Deployment

    • The ConvertStringFromMySQL filter would fail with Null Pointer Exception when processing large multi-row transactions that contained a mixture of NULL and non-NULL values.

      Issues: CT-399

1.10. Tungsten Replicator 5.2.1 GA (21 September 2017)

Version End of Life. 31 January 2019

Tungsten Replicator 5.2.1 is a minor bugfix release that addresses some bugs found in the previous 5.2.0 (in [Tungsten Replicator 5.2 Manual]) release. It is a recommended upgrade for all users.

Improvements, new features and functionality

  • Installation and Deployment

    • The autocomplete information in env.sh has been updated to support newer trepctl (in [Tungsten Replicator 5.2 Manual]) and thl (in [Tungsten Replicator 5.2 Manual]) commands.

      Issues: CT-292

  • Oracle Replication

    • A new script, prepare-offboard-fetcher.pl (in [Tungsten Replicator 5.2 Manual]) has been written to aid with the configuration of offboard fetchers for Oracle deployments. Both the old and new scripts support the use of rsync and manually copying the PLOG files during deployment.

      Issues: CT-270, CT-273, CT-289

  • Documentation

    • Basic and experimental support for Solaris 11 has been added to the installation process with tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-160

    • CPU information has been added to the file generated by tpm diag (in [Tungsten Replicator 5.2 Manual]), using the information from /proc/cpu_info.

      Issues: CT-281

    • The Javadocs have been removed by default from all builds and releases.

      Issues: CT-353

Bug Fixes

  • Installation and Deployment

    • The MySQLMyISAMCheck (in [Tungsten Replicator 5.2 Manual]) could fail during a typical install, but the information given for how to correct or address the problem was incomplete. The message has now been updated to correctly identify the potential issue and how to ensure the check runs correctly.

      Issues: CT-198

    • The tpm (in [Tungsten Replicator 5.2 Manual]) command would mistakenly complain about 'backup' configuration files that may have been created or copied into the installation directory, which would prevent installation from completing. The tpm (in [Tungsten Replicator 5.2 Manual]) command now explicitly looks only for files ending in .properties.

      Issues: CT-324

    • The note provided by tpm (in [Tungsten Replicator 5.2 Manual]) to ensure that the release notes have been read and accepted has been removed.

      Issues: CT-325

    • Commercial builds were mistakenly not using the Tanuki service wrapper for deployments. The effect of this bug was minimal for standard deployments, but within Multi-site, Multi-master (MSMM) deployments it would cause the application not to start properly during boot time.

      Issues: CT-326

    • The information for fixing the error from tpm (in [Tungsten Replicator 5.2 Manual]) of multiple lines on ssh error has been updated. The additional situation where this can occur is when a trap has been set on the logout operation.

      Issues: CT-333

    • When using the thl list (in [Tungsten Replicator 5.2 Manual]) with the -last (in [Tungsten Replicator 5.2 Manual]) or -first (in [Tungsten Replicator 5.2 Manual]) options and an additional argument, an error could be raised and the command would fail.

      Issues: CT-337

    • The tpm diag (in [Tungsten Replicator 5.2 Manual]) could fail to complete properly when trying to get MySQL error log information from a remote host.

      Issues: CT-348

  • Core Replicator

    • The DDL templates for use with ddlscan (in [Tungsten Replicator 5.2 Manual]) for RedShift deployments have been updated so that they correctly translate BINARY types into VARCHAR rather than the non-existent BINARY types.

      Issues: CT-291

    • When a primary key field in a compound key is NULL it could cause an error in the Kafka and ElasticSearch appliers where a generated ID was created.

      Issues: CT-332

    • When parsing THL data it was possible for the internal THL processing to lead to a java.util.ConcurrentModificationException. This indicated that the underlying THL event metadata structure used internally had changed between uses.

      Issues: CT-355

1.11. Tungsten Replicator 5.2.0 GA (19 July 2017)

Version End of Life. 31 January 2019

Tungsten Replicator 5.2.0 is a new feature release that contains a combination of new features, specifically new replicator applier targets:

This release also provides improvements to the trepctl (in [Tungsten Replicator 5.2 Manual]) and thl (in [Tungsten Replicator 5.2 Manual]) commands, and bug fixes to improve stability.

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide clearer and more detailed information on certain aspects of its operation. Two new commands have been added, trepctl qs (in [Tungsten Replicator 5.2 Manual]) and trepctl perf (in [Tungsten Replicator 5.2 Manual]):

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide a simplified status output that is easier to understand, using the qs (in [Tungsten Replicator 5.2 Manual]) command. For example:

        shell> trepctl qs
        State: alpha Online for 1172.724s, running for 124280.671s
        Latency: 0.71s from source DB commit time on thl://ubuntuheterosrc:2112/ into target database
         7564.198s since last source commit
        Sequence: 4860 last applied, 0 transactions behind (0-4860 stored) estimate 0.00s before synchronization
      • The trepctl perf (in [Tungsten Replicator 5.2 Manual]) command provides detailed performance information on the operation and status of the replicator and individual stages. This can be useful to identify where any additional latency or performance issues lie:

        shell> trepctl perf
        Statistics since last put online 1360.141s ago
        Stage | Seqno | Latency | Events | Extraction | Filtering | Applying | Other | Total
        remote-to-thl | 4860 | 0.475s | 70 | 116713.145s | 0.000s | 2.920s | 0.000s | 116716.065s
         Avg time per Event | 1667.331s | 0.000s | 0.000s | 0.042s | 1667.372s
        thl-to-q | 4860 | 0.527s | 3180 | 113842.933s | 0.011s | 2873.039s | 0.102s | 116716.085s
         Avg time per Event | 35.800s | 0.000s | 0.000s | 0.903s | 36.703s
        q-to-dbms | 4860 | 0.536s | 3180 | 112989.667s | 0.010s | 3701.035s | 25.554s | 116716.266s
         Avg time per Event | 35.531s | 0.000s | 0.008s | 1.164s | 36.703s

      Issues: CT-29

    • A number of improvements have been made to the identification of long running transactions within the replicator:

      • A new field has been added to the output of trepctl status -name tasks (in [Tungsten Replicator 5.2 Manual]):

        timeInCurrentEvent : 6571.462

        This shows the time that the replicator has been processing the current event. For a long-running event, it helps to indicate that the replicator is still processing the current event. Note that this is just a counter for how long the current event has been running. For a replicator that is idle, this will show the time the replicator has spent both processing the original event and waiting to process the new event.

      • The thl list (in [Tungsten Replicator 5.2 Manual]) has been expanded to provide simple and detailed THL size information so that large transactions can be identified. Using the -sizes (in [Tungsten Replicator 5.2 Manual]) and -sizesdetail (in [Tungsten Replicator 5.2 Manual]) displays detailed information about the size of the SQL, number of rows, or both for each stored event. For example:

        shell> thl list -sizes
        SEQ# Frag# Tstamp
        ...
        12 0 2017-06-28 13:21:11.0 Event total: 1 chunks 73 bytes in SQL statements 0 rows
        13 0 2017-06-28 13:21:10.0 Event total: 1645 chunks 0 bytes in SQL statements 1645 rows
        14 0 2017-06-28 13:21:11.0 Event total: 1 chunks 36 bytes in SQL statements 0 rows

        For more information, see thl list -sizes Command (in [Tungsten Replicator 5.2 Manual]) and thl list -sizesdetail Command (in [Tungsten Replicator 5.2 Manual]).

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide more detailed information on the performance of the replicator, see trepctl perf (in [Tungsten Replicator 5.2 Manual]).

      • For easier navigation and selection of THL events, the thl (in [Tungsten Replicator 5.2 Manual]) has had two further command-line options added, -first (in [Tungsten Replicator 5.2 Manual]) and -last (in [Tungsten Replicator 5.2 Manual]) to select the first and last events in the THL. Both also take an optional number that shows the first N or last N events.

      Issues: CT-34

    • A new command, tungsten_send_diag (in [Tungsten Replicator 5.2 Manual]), has been added that provides a simplified method for sending a tpm diag (in [Tungsten Replicator 5.2 Manual]) output automatically through to the support team. The new command uploads the diagnostic information directly into Amazon S3 without requiring a separate upload to Zendesk.

      Issues: CT-158

    • A new command, clean_release_directory (in [Tungsten Replicator 5.2 Manual]) has been added to the distribution. This command removes old releases from the installation directory that have been created during either upgrades or configuration updates. The command removes all old entries except the current active one, and the last five entries.
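
      For example, assuming the command requires no arguments when run from within the installed environment:

      shell> clean_release_directory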

      Issues: CT-204

  • Heterogeneous Replication

    • A new applier has been added to Tungsten Replicator that applies data directly into Cassandra. Data is loaded using a batch applier that writes the data through staging tables into Cassandra.

      Issues: CT-43

      For more information, see Deploying MySQL to Cassandra Replication (in [Tungsten Replicator 5.2 Manual]).

    • A new applier has been added to Tungsten Replicator that applies data directly into Kafka. Incoming row data is converted into a JSON document which is then embedded into a Kafka message and sent on a topic using the schema and table name.

      Issues: CT-101

      For more information, see Deploying MySQL to Kafka Replication (in [Tungsten Replicator 5.2 Manual]).

    • Tungsten Replicator has been certified compatible with Vertica 8 using the existing vertica6.js (in [Tungsten Replicator 5.2 Manual]) batch-loading script.

      Issues: CT-152

    • A new applier has been added to Tungsten Replicator that applies data directly into Elasticsearch. Incoming row data is converted into a JSON document and then uploaded directly into an Elasticsearch index and type, according either to explicit settings, or automatically based on the schema and table name.

      Issues: CT-220

      For more information, see Deploying MySQL to Elasticsearch Replication (in [Tungsten Replicator 5.2 Manual]).

  • Filters

    • The filter functionality has been improved and standardised as a continuing effort to make the filters more usable. At the moment, the effect is embedded into the new filters in this release (SkipEventByType (in [Tungsten Replicator 5.2 Manual]) and ConvertStringFromMySQLFilter (in [Tungsten Replicator 5.2 Manual])). These new filters do make use of a new configuration file system and format based on JSON that will eventually become the standard method to configure all filters.

      Issues: CT-214

      For more information, see ConvertStringFromMySQL Filter (in [Tungsten Replicator 5.2 Manual]), SkipEventByType Filter (in [Tungsten Replicator 5.2 Manual]).

    • A new filter, SkipEventByType (in [Tungsten Replicator 5.2 Manual]), has been added. This allows for events to be skipped based on their operation type (INSERT, UPDATE, DELETE). This can be applied on a schema and/or table basis, alongside a default option that will be applied to all schema/table combinations not explicitly specified.

      Issues: CT-216

      For more information, see SkipEventByType Filter (in [Tungsten Replicator 5.2 Manual]).

    • A new filter, ConvertStringFromMySQLFilter (in [Tungsten Replicator 5.2 Manual]), has been added. This allows for conversion of data extracted and stored in the native MySQL environment (where --mysql-use-bytes-for-string=false (in [Tungsten Replicator 5.2 Manual])). This is particularly useful in situations where data is being replicated out of an existing cluster (where bytes are used by default), but the data is being replicated to a heterogeneous target.

      Issues: CT-217

      For more information, see ConvertStringFromMySQL Filter (in [Tungsten Replicator 5.2 Manual]).

  • Documentation

    • The documentation has been updated to clarify the use of the --property (in [Tungsten Replicator 5.2 Manual]) option to tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-180

Bug Fixes

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could hang during the execution of an external command which could cause the entire process to fail to complete properly.

      Issues: CT-82

    • When a replicator has been configured as a cluster slave, the masterListenUri (in [Tungsten Replicator 5.2 Manual]) would be blank. This was because a pure cluster-slave configuration did not correctly configure the necessary pipelines.

      Issues: CT-197

    • The query (in [Tungsten Replicator 5.2 Manual]) tool has been updated to provide better error handling and messages during an error. This particularly affects tools which embed the use of this command, such as tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-203

    • An auto-refresh option has been added to certain commands within trepctl (in [Tungsten Replicator 5.2 Manual]). Add the -r (in [Tungsten Replicator 5.2 Manual]) option and the number of seconds to the trepctl status (in [Tungsten Replicator 5.2 Manual]), trepctl qs (in [Tungsten Replicator 5.2 Manual]), or trepctl perf (in [Tungsten Replicator 5.2 Manual]) commands. For example, trepctl qs -r 5 (in [Tungsten Replicator 5.2 Manual]) would refresh the quick status command every 5 seconds.

      Issues: CT-209

1.12. Tungsten Replicator 5.1.1 GA (23 May 2017)

Version End of Life. 26 October 2018

Tungsten Replicator 5.1.1 is a minor bugfix release that addresses some bugs found in the previous 5.1.0 (in [Tungsten Replicator 5.1 Manual]) release. It is a recommended upgrade for all users.

Bug Fixes

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.1 Manual]) command has been updated:

      • The -ascmd (in [Tungsten Replicator 5.1 Manual]) option has been added to output the current position as a command that you can use verbatim to reset the status. For example:

        shell> dsctl get -ascmd
        dsctl set -seqno 17 -epoch 11 -event-id "mysql-bin.000082:0000000014031577;-1" -source-id "ubuntu"
      • The -reset (in [Tungsten Replicator 5.1 Manual]) option has been added so that the current position can be reset and then set using dsctl set -reset without having to run two separate commands.
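
        For example, combining the reset and set operations into a single command (the position values are illustrative, taken from the example above):

        shell> dsctl set -reset -seqno 17 -epoch 11 \
            -event-id "mysql-bin.000082:0000000014031577;-1" -source-id "ubuntu"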

      Issues: CT-24

    • The availability and default configuration of some filters has been changed so that certain filters are now available in all configurations. This does not affect existing filter deployments.

      Issues: CT-84

    • The tungsten_provision_slave (in [Tungsten Replicator 5.1 Manual]) command could fail to complete properly due to a problem with the threads created during the provision process.

      Issues: CT-202

  • Backup and Restore

    • The trepctl backup (in [Tungsten Replicator 5.1 Manual]) operation could fail if the system ran out of disk space, or if the storage.index file could not be written or became corrupted. The backup system will now recreate the file if the information cannot be read properly.

      Issues: CT-122

  • Heterogeneous Replication

    • When creating DDL from an Oracle source for Hadoop using ddlscan (in [Tungsten Replicator 5.1 Manual]), the template that is used to create the metadata file was missing.

      Issues: CT-206

1.13. Tungsten Replicator 5.1.0 GA (26 April 2017)

Version End of Life. 26 October 2018

Tungsten Replicator 5.1.0 is a minor feature release and contains some significant improvements to the compatibility and stability of Hadoop loading, JavaScript filters, and heterogeneous filter compatibility, as well as important bug fixes.

Improvements, new features and functionality

  • Installation and Deployment

    • The list of supported Ruby versions has been updated to support Ruby up to and including Ruby 2.4.0.

      Issues: CT-138

  • Heterogeneous Replication

    • The support for loading into Hadoop has been improved with better compatibility for recent Hadoop releases from the major Hadoop distributions.

      • MapR 5.2

      • Cloudera 5.8

      In addition to ensuring the basic compatibility of these tools, the continuent-tools-hadoop package has been updated to support the use of the beeline command as well as the hive command.

      Issues: CT-153, CT-155

      For more information, see The load-reduce-check Tool (in [Tungsten Replicator 5.1 Manual]).

    • The replicator and the load-reduce-check (in [Tungsten Replicator 5.1 Manual]) command that is part of the continuent-tools-hadoop repository have been updated to support loading and replication into Hadoop from Oracle. This includes creating suitable DDL templates and support for accessing Oracle via JDBC to load DDL information.

      Issues: CT-168

  • Filters

    • The JavaScript environment has been updated to include a standardized set of filter functionality. This is provided and loaded as standard into all JavaScript filters. The core utilities are provided in the coreutils.js file.

      The current file provides three functions:

      • load — which loads an external JavaScript file.

      • readJSONFile — which loads an external JSON file into a variable.

      • JSON — provides a JSON class, including the ability to dump a JavaScript variable into a JSON string.

      Issues: CT-99

    • The thl (in [Tungsten Replicator 5.1 Manual]) command has been improved to support -from (in [Tungsten Replicator 5.1 Manual]) and -to (in [Tungsten Replicator 5.1 Manual]) options for selecting a range of events. These act as synonyms for the existing -low (in [Tungsten Replicator 5.1 Manual]) and -high (in [Tungsten Replicator 5.1 Manual]) options and can be used with all commands.
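
      For example, the following two commands are equivalent (a sketch using arbitrary sequence numbers):

      shell> thl list -from 100 -to 110
      shell> thl list -low 100 -high 110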

      Issues: CT-111

    • A number of filters have been updated so that the THL metadata for the transaction records whether a specific filter has been applied to the transaction in question. This is designed to make it easier to determine whether the filter has been applied, particularly in heterogeneous replication, and also to determine whether the incoming transactions are suitable to be applied to a target that requires them. Currently the metadata is only added to the transactions and no enforcement is made.

      The following filters add this information:

      The format of the metadata is tungsten_filter_NAME=true.

      Issues: CT-157

Bug Fixes

  • Installation and Deployment

    • The rubygems extension to Ruby was not loaded correctly, causing some tools to fail to load correctly, or to fail to use the Net::SSH tools correctly.

      Issues: CT-143

    • One of the checks built into tpm (in [Tungsten Replicator 5.1 Manual]), MySQLUnsupportedDataTypesCheck (in [Tungsten Replicator 5.1 Manual]) was spelt incorrectly, which meant that it was difficult to bypass and ultimately did not always correctly run or get ignored.

      Issues: CT-147

    • The tpm update command could fail when using Ruby 1.8.7.

      Issues: CT-165

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.1 Manual]) command could fail if the innodb_log_home_dir and innodb_data_home_dir were set to a value different from the datadir option, and the --direct (in [Tungsten Replicator 5.1 Manual]) option was used.

      Issues: CT-83, CT-141

  • Heterogeneous Replication

    • The Hadoop loader would previously load CSV files directly into the /users/tungsten directory within HDFS, completely ignoring the setting of the replication user within the replicator. This has been corrected so that data is loaded into the directory for the configured replication user.

      Issues: CT-134

    • By default, the Hadoop loader used a directory structure that matched SERVICENAME/SCHEMANAME/TABLENAME. This caused problems with the default DDL templates and the continuent-tools-hadoop tools, which use only the schema and table name.

      Issues: CT-135

1.14. Tungsten Replicator 5.0.1 GA (23 February 2017)

Version End of Life. 30 June 2018

Tungsten Replicator 5.0.1 is a bugfix release that contains critical fixes and improvements from the Tungsten Replicator 5.0.0 release. Specifically, it changes the default security and other settings to make upgrades from previous releases easier, and other fixes and improvements to the Oracle support and command-line tools.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The default security configuration for new installations is for security, including SSL and TLS and authentication, to be disabled. In 5.0.0 the default was to enable full security on all components which could lead to problems and difficulty when upgrading.

    Issues: CT-18

  • The Ruby Net::SSH module, which has been bundled with Tungsten Replicator in past releases, is no longer included. This is due to the wide range of Ruby versions and deployment environments that we support, and differences in the Net::SSH module supported and used with different Ruby versions. In order to simplify the process and ensure that the platforms we support operate correctly, the Net::SSH module has been removed and will now need to be installed before deployment.

    To ensure you have the correct environment before deployment, ensure both the Net::SSH and Net::SCP Ruby modules are installed using gem:

    shell> gem install net-ssh
    shell> gem install net-scp

    Depending on your environment, you may also need to install the io-console module:

    shell> gem install io-console

    If during installation you get an error similar to this:

    mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

    It indicates that you do not have the Ruby development headers installed. Use your native package management interface (for example, yum or apt) and install the ruby-dev package. For example:

    shell> sudo apt install ruby-dev

    Issues: CT-88

  • The replicator (in [Tungsten Replicator 5.0 Manual]) is no longer restarted when updating the configuration with tpm (in [Tungsten Replicator 5.0 Manual]) when using the --replace-tls-certificate (in [Tungsten Replicator 5.0 Manual]) option.

    Issues: CT-120

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

Improvements, new features and functionality

  • Installation and Deployment

    • Tungsten Replicator is now certified for deployment on systems running Java 8.

      Issues: CT-27

  • Core Replicator

    • The replicator will now generate a detailed heap dump in the event of a failure. This will help during debugging and identifying any issues.

      Issues: CT-11

  • Filters

    • The Rhino JS, which is incorporated for use by the filtering and batch loading mechanisms, has been updated to Rhino 1.7R4. This addresses a number of different issues with the embedded library, including a performance issue that could lead to increased latency during filter operations.

      Issues: CT-21

Bug Fixes

  • Installation and Deployment

    • The Ruby Net::SSH libraries used by tpm (in [Tungsten Replicator 5.0 Manual]) have been updated to the latest version. This addresses issues with SSH and staging based deployments, including KEX algorithm errors.

      Issues: CT-16

    • On some platforms the keytool command could fail to be found, causing an error within the installation when generating certificates.

      Issues: CT-73

  • Command-line Tools

    • The tpasswd (in [Tungsten Replicator 5.0 Manual]) could create a log file with the wrong permissions.

      Issues: CT-117

  • Core Replicator

    • Checksums in MySQL could cause problems when parsing the MySQL binary log due to a change in the way the checksum information is recorded within the binary log. This would cause the replicator to become unable to come online.

      Issues: CT-72

Known Issues

  • Behavior Changes

    • Due to new requirements of the embedded and included Ruby Net::SSH module, the Ruby io-console module may need to be installed before installation or upgrade. This can be achieved using:

      shell> gem install io-console

1.15. Tungsten Replicator 5.0.0 GA (7 December 2015)

Version End of Life. 30 June 2018

VMware Continuent for Replication 5.0.0 is a major release that incorporates the following changes:

  • The software release has been renamed. For most users of VMware Continuent for Replication, the filename will start with vmware-continuent-replication. If you are using an Oracle DBMS as the source and have purchased support for the latest version, the filename will start with vmware-continuent-replication-oracle-source.

    The documentation has not been updated to reflect this change. While reading these examples you will see references to tungsten-replicator, which apply equally to your software release.

  • A new Oracle Extraction module that reads the Oracle Redo logs, providing a faster, more compatible, and more efficient method for extracting data from Oracle databases. For more information, see Oracle Replication using Redo Reader (in [Tungsten Replicator 5.0 Manual]).

  • Security, including file permissions and TLS/SSL, is now enabled by default. For more information, see Deployment Security (in [Tungsten Replicator 5.0 Manual]).

  • License keys are now required during installation. For more information, see Deploy License Keys (in [Tungsten Replicator 5.0 Manual]).

  • Support for RHEL 7 and CentOS 7.

  • Basic support for MySQL 5.7.

  • Cleaner and simpler directory layout.

Upgrading from previous versions should be fully tested before being attempted in a production environment. The changes listed below affect tpm (in [Tungsten Replicator 5.0 Manual]) output and the requirements for operation.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Tungsten Replicator now requires license keys in order to operate.

    License keys are provided to all customers with an active support contract. Log in to my.vmware.com to identify your support contract and the associated license keys. After collecting the license keys, they should be placed into /etc/tungsten/continuent.licenses or /opt/continuent/share/continuent.licenses. The /opt/continuent (in [Tungsten Replicator 5.0 Manual]) path should be replaced with your value for --install-directory (in [Tungsten Replicator 5.0 Manual]). Place each license on a new line in the file and make sure it is readable by the tungsten system user.
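
    For example, a minimal sketch of adding a key and making the file readable by the tungsten user (the key value is a placeholder; adjust the path and ownership to your installation):

    shell> echo "YOUR-LICENSE-KEY" | sudo tee -a /etc/tungsten/continuent.licenses
    shell> sudo chown tungsten: /etc/tungsten/continuent.licenses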

    If you are testing VMware Continuent or don't have your license key, talk with your sales contact for assistance. You may enable a trial-mode by using the license key TRIAL. This will not affect the runtime operation of VMware Continuent but may impact your ability to get rapid support.

    The tpm (in [Tungsten Replicator 5.0 Manual]) script will display a warning if license keys are not provided or if the provided license keys are not valid.

  • Tungsten Replicator now enables security by default. Security includes:

    • Authentication between command-line tools (trepctl (in [Tungsten Replicator 5.0 Manual])) and background services.

    • SSL/TLS between command-line tools and background services.

    • SSL/TLS between Tungsten Replicator and datasources.

    • File permissions and access by all components.

    The security changes require a certificate file to be generated prior to operation. The tpm (in [Tungsten Replicator 5.0 Manual]) command can do that during upgrade if you are using a staging directory. Alternatively, you can create the certificate (in [Tungsten Replicator 5.0 Manual]) and update your configuration with the corresponding argument. This is required if you are installing from an INI file. See Installing from a Staging Host with Manually Generated Certificates (in [Tungsten Replicator 5.0 Manual]) or Installing via INI File with Manually Generated Certificates (in [Tungsten Replicator 5.0 Manual]) for more information. This functionality may be disabled by adding --disable-security-controls (in [Tungsten Replicator 5.0 Manual]) to your configuration.

    If you would like tpm (in [Tungsten Replicator 5.0 Manual]) to generate the necessary certificate from the staging directory, run tpm update (in [Tungsten Replicator 5.0 Manual]) with the --replace-tls-certificate (in [Tungsten Replicator 5.0 Manual]) option:

    staging-shell> ./tools/tpm update --replace-tls-certificate

    For more information, see Deployment Security (in [Tungsten Replicator 5.0 Manual]).

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

  • Tungsten Replicator now includes RELEASE_NOTES in the package and displays a warning if they have not been reviewed.

    During some tpm (in [Tungsten Replicator 5.0 Manual]) commands, the script will check to see if the release notes have been reviewed and accepted. This may be done by running tools/accept_release_notes from the staging directory. The script will display the information and prompt the user for acceptance. A hidden file will be created on the staging server to mark the release notes have been accepted and the warning will not be displayed.

    This process may be automated by calling tools/accept_release_notes -y prior to installation. The script will mark the release notes as accepted and the warning will not be displayed.
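
    For example, to accept the release notes non-interactively from the staging directory (a sketch of the form named above):

    staging-shell> ./tools/accept_release_notes -y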

    Issues: CONT-1122

Improvements, new features and functionality

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 5.0 Manual]) writes the configuration log to /tmp/tungsten-configure.log. If the file exists but is owned by a different user, the operation would fail with a Permission Denied error. The operation has now been updated to create a directory within /tmp (in [Tungsten Replicator 5.0 Manual]) with the name of the current user, where the configuration log will be stored. For example, if the user is tungsten, the log will be written to /tmp/tungsten/tungsten-configure.log.

      Issues: CONT-1402

  • Oracle Replication

    • The replicator will automatically determine if the Oracle JDBC driver is available within $ORACLE_HOME/jdbc/lib or the current path, and will copy it into the distribution directory during installation if available.

      Issues: CONT-1344

Bug Fixes

  • Installation and Deployment

    • Following a failed installation by tpm (in [Tungsten Replicator 5.0 Manual]), running tpm uninstall could also fail. The command now correctly uninstalls even a partial installation.

      Issues: CONT-1359

Known Issues

  • Oracle Replication

    • The user configuration for Oracle users required when enabling Oracle extraction has a number of rules that must be followed to ensure valid replication:

      The replication user (configured with --replication-user (in [Tungsten Replicator 5.0 Manual])) is subject to the following rules:

      • The user should not contain data that will be replicated to other hosts.

      • If the user contains replicated data and filters are used, the results of replication cannot be guaranteed.

      • A different replication user must be used for each service extracting from the same Oracle Database.

      Issues: CONT-1403

Tungsten Replicator 5.0.0 Includes the following changes made in Tungsten Replicator 5.0.0

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The Bristlecone load generator toolkit is no longer included with Tungsten Replicator by default.

    Issues: CONT-903

  • The scripts previously located within the scripts directory have now been relocated to the standard bin directory. This does not affect their availability if the env.sh (in [Tungsten Replicator 5.0 Manual]) script has been used to update your path. This includes, but is not limited to, the following commands:

    • ebs_snapshot.sh

    • file_copy_snapshot.sh

    • multi_trepctl

    • tungsten_get_position

    • tungsten_provision_slave

    • tungsten_provision_thl

    • tungsten_read_master_events

    • tungsten_set_position

    • xtrabackup.sh

    • xtrabackup_to_slave

    Issues: CONT-904

  • The backup (in [Tungsten Replicator 5.0 Manual]) and restore (in [Tungsten Replicator 5.0 Manual]) functionality in trepctl (in [Tungsten Replicator 5.0 Manual]) has been deprecated and will be removed in a future release.

    Issues: CONT-906

  • The batch loading scripts used by HP Vertica, Hadoop and Amazon Redshift appliers have been moved to the appliers/batch directory.

    Issues: CONT-907

  • The location of the JavaScript filters has been moved to a new location in keeping with the rest of the configuration:

    • samples/extensions/javascript has moved to support/filters-javascript

    • samples/scripts/javascript-advanced has moved to support/filters-javascript

    The use of these filters has not changed but the default location for some filter configuration files has moved to support/filters-config. Check your current configuration before upgrading.

    Issues: CONT-908

  • The ddlscan (in [Tungsten Replicator 5.0 Manual]) templates have been moved to the support/ddlscan directory.

    Issues: CONT-909

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

  • The Vertica applier now writes exceptions to a temporary file during replication.

    The applier statements will include the EXCEPTIONS attribute in each statement to assist in debugging. Review the replicator log or trepctl status (in [Tungsten Replicator 5.0 Manual]) output for more details.

    Issues: CONT-1169

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • Use of LOAD DATA commands requires the correct permissions to be given to the mysql user. One of the following must be done (a sketch of one approach is shown after this list):

      • The tungsten system user must have the same default group as the mysql system user.

      • The mysql system user must be a member of the default group for the tungsten system user.

      • The --file-protection-level (in [Tungsten Replicator 5.0 Manual]) option must be set to none to allow full visibility to all temporary files.
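
      As a sketch of the second approach, adding the mysql system user to the tungsten user's default group (the group and user names here assume the defaults; adjust for your systems):

      shell> sudo usermod -a -G tungsten mysql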

    • The replicator can hit a MySQL lock wait timeout when processing large transactions.

      Issues: CONT-1106

    • The replicator can run into OutOfMemory errors when handling very large row-based replication events. This can be avoided by setting --optimize-row-events=false (in [Tungsten Replicator 5.0 Manual]).

      Issues: CONT-1115

    • The replicator can fail during LOAD DATA commands or Vertica loading if the system permissions are not set correctly. If this is encountered, make sure the MySQL or Vertica system users are a member of the Tungsten system group. The issue may also be avoided by removing system file protections with --file-protection-level=none (in [Tungsten Replicator 5.0 Manual]).

      Issues: CONT-1460

Improvements, new features and functionality

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.0 Manual]) command has been updated to provide help output when specifically requested with the -h or -help options.

      Issues: CONT-1003

      For more information, see dsctl help Command (in [Tungsten Replicator 5.0 Manual]).

Bug Fixes

  • Core Replicator

    • A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing.

      Issues: CONT-1163

    • A slave replicator could fail to come ONLINE (in [Tungsten Replicator 6.0 Manual]) if the last THL file is empty.

      Issues: CONT-1164

    • The replicator applier and filters may fail with ORA-955 because the replicator did not check for metadata tables using uppercase table names.

      Issues: CONT-1375

    • The replicator incorrectly assigned LOAD DATA statements to the #UNKNOWN shard. This could happen when the entire statement length was above 200 characters.

      Issues: CONT-1431

2. Tungsten Clustering Release Notes

2.1. Tungsten Clustering 6.0.3 GA (5 September 2018)

Version End of Life. 5 September 2021

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Tungsten Clustering 6.0 Manual]) now outputs a note and recommendation for performing backups of your cluster when installation has been completed.

      Issues: CT-730

  • Command-line Tools

    • The tungsten_prep_upgrade (in [Tungsten Clustering 6.0 Manual]) command has been updated to support an explicit host definition for the MySQL host in place of defaulting to the localhost (127.0.0.1). Use the --host (in [Tungsten Clustering 6.0 Manual]) option.
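
      For example (a sketch only; db1 is a hypothetical MySQL host, and any other options should match those already used in your upgrade procedure):

      shell> tungsten_prep_upgrade --host=db1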

      Issues: CT-656

    • A new Nagios compatible check script has been added to the release, check_tungsten_policy (in [Tungsten Clustering 6.0 Manual]), which returns the currently active policy mode.
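
      For example, invoked without arguments as a Nagios-style check (a sketch; the exact status text depends on the currently active policy):

      shell> check_tungsten_policy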

      Issues: CT-675

      For more information, see The check_tungsten_policy Command (in [Tungsten Clustering 6.0 Manual]).

  • Tungsten Connector

    • When receiving an error within MySQLPacket, the Connector now prints out the full content of the underlying error message.

      Issues: CT-636

    • The connector has been updated so that dataservice selection can be configured to be deterministic and ordered rather than random. The updated configuration enables the connector to be set to use an ordered list of clusters within a composite or multimaster solution.

      To set the order of the services selected during operation, the information must be set within the user.map (in [Tungsten Clustering 6.0 Manual]) file. The configuration is based on an ordered, comma-separated list of services to use, which are then selected in order. The specification operates on the following rules:

      • List of service names in order

      • If the service name has a dash prefix it is always explicitly excluded from the list of available datasources

      • If a datasource is not specified, it will always be picked last

      For example, in a setup made of three data services, usa, asia and europe, using the affinity usa,asia,-europe will select data sources in data service usa first. If usa is not available, data sources in asia are selected. If asia is not available, the connection will not succeed, since europe has been negated.
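
      A hypothetical user.map entry illustrating this ordering, assuming the conventional user, password, dataservice and affinity field layout (all names here are placeholders):

      app_user secret global usa,asia,-europe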

      Issues: CT-650

      For more information, see the user.map file documentation (in [Tungsten Clustering 6.0 Manual]).

  • Tungsten Manager

    • The router gateway, which provides communication between the manager and connector, could shut down even when quorum was available in a two-node cluster.

      Issues: CT-676

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Clustering 6.0 Manual]) would fail during installation if the current directory was not writable by the current user.

      Issues: CT-564

    • When performing an update of a cluster with tpm (in [Tungsten Clustering 6.0 Manual]), the cluster would be switched to MAINTENANCE (in [Tungsten Clustering 6.0 Manual]) but would remain in this policy after the update. The original policy is now retained during the update.

      Issues: CT-595

    • Multimaster cluster installations would fail if the hostname contained two or more hyphens or periods.

      Issues: CT-682, CT-695

    • tpm (in [Tungsten Clustering 6.0 Manual]) would fail to set properties within the defaults section of the configuration within multimaster clusters.

      Issues: CT-683

  • Command-line Tools

    • Using tpm diag (in [Tungsten Clustering 6.0 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Clustering 6.0 Manual]).

      Issues: CT-610

    • Using tpm connector (in [Tungsten Clustering 6.0 Manual]) at the command-line would fail if the core MySQL configuration file (i.e. /etc/my.cnf) did not exist.

      Issues: CT-641

  • Tungsten Connector

    • The connector would fail to set reusable network addresses during configuration, which could delay or slow startup until the address/port became available again.

      Issues: CT-694

    • When operating in bridge mode, the connector would fail to check whether the driver was in enabled/disabled mode, which could cause upgrades to fail as part of a graceful shutdown/update operation.

      Issues: CT-696

    • Multiple connectors within a cluster could all connect to the same manager within a given service, increasing the load on the single manager.

      Issues: CT-717

    • The Tungsten Connector could mistakenly get the master data source of the wrong data service within composite multimaster deployments during configuration.

      Issues: CT-719

  • Tungsten Manager

    • Performing a switch operation within a multimaster cluster with three or more clusters when the cluster was in MAINTENANCE (in [Tungsten Clustering 6.0 Manual]) mode and the cross-site replicators are offline would lead to an unrecoverable cluster failure.

      Issues: CT-589

    • During a switch operation on a multi-master cluster with the cluster in maintenance mode, the manager would incorrectly put the cross-site replicators back into the online state.

      Issues: CT-591

    • When using the connector, the connector --cluster-status --json command would output header and footer information in place of bare JSON, which would then cause JSON parsing to fail.

      Issues: CT-685

    • A memory leak within the manager, particularly in multimaster deployments, could cause the Java VM to consume more and more CPU cycles and then restart.

      Issues: CT-673, CT-691

    • During a relay failover within a composite or composite multimaster deployment, if the communications had also failed between sites when the failover occurred, the manager would be unable to determine the correct master of the remote site.

      Issues: CT-703

    • Within composite multimaster deployments, during a cascading MySQL failure and switch operation across sites, the secondary site could misconfigure the cross-site relay.

      Issues: CT-713

    • A memory leak was identified in the router manager component that manages the communication between the manager and the connector.

      Issues: CT-715

    • In a deployment, whether a single cluster or composite multimaster, where there is either the potential for high latency across sites, or high latency within a site due to high load on the connectors, the manager could misidentify this high latency as a failure. This would trigger a quorum validation. These events would be reported as network hangs, even though the result of the quorum check would be valid.

      To address this, the processing of router notifications handled by the connector has been separated from all other operations. This reduces the chance of a heartbeat gap between hosts, and therefore the connectors remain available to the managers even under high load or latency.

      Issues: CT-725

Tungsten Clustering 6.0.3 Includes the following changes made in Tungsten Replicator 6.0.3

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 6.0 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

    • The replicator has been updated to support the new character sets supported by MySQL 5.7 and MySQL 8.0, including the utf8mb4 series.

      Issues: CT-700

Bug Fixes

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 6.0 Manual]) attempts to find the system commands, such as service and systemctl, used to start and stop databases. If these were not in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) would fail to find a start/stop mechanism for the configured database. In addition to looking for these tools in the PATH, tpm (in [Tungsten Replicator 6.0 Manual]) now also explicitly looks in the /sbin, /bin, /usr/bin and /usr/sbin directories.

      Issues: CT-722

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 6.0 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 6.0 Manual]), the operation would fail if the /etc/mysql directory does not exist.

      Issues: CT-724

    • Due to the operation taking a long time or timing out, the capture of the output from lsof has been removed from tpm diag (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-731

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without highlighting the individual services reporting the entries, making it difficult to identify individual log entries.

      Issues: CT-659

    • When replicating data that included timestamps, the replicator would update the timestamp value to the time within the commit from the incoming THL. When using statement-based replication, times would be correctly replicated, but when using a mixture of statement- and row-based replication, the timestamp value would not be set back to the default time when switching between statement- and row-based events. This would not cause problems on the host being applied to, except when log_slave_updates was enabled. In this case, all row-based events after a statement-based event would have the same timestamp value applied.

      Issues: CT-686

2.2. Tungsten Clustering 6.0.2 GA (27 June 2018)

Version End of Life. 27 June 2021

This is a bugfix release.

Bug Fixes

  • Tungsten Manager

    • Within a multimaster cluster, the manager could set the master to read-only when performing a switch operation.

      Issues: CT-672

2.3. Tungsten Clustering 6.0.1 GA (30 May 2018)

Version End of Life. 30 May 2021

This is a bugfix release.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • It was previously impossible to change from a non-SSL installation to an SSL installation using self-generated certificates if an INI style configuration was being used. This can now be achieved by using the following command-line:

    shell> tools/tpm update --replace-release --replace-jgroups-certificate --replace-tls-certificate

    Issues: CT-442

  • Previously, the system had been configured to dump heap files by default when the system ran out of memory, which was useful for debugging by the development team. This has now been disabled.

    Issues: CT-604

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm diag (in [Tungsten Clustering 6.0 Manual]) command has been improved to include more information about the environment, including:

      • The output from the lsof command.

      • The output from the ps command.

      • The output from the show full processlist command within mysql.

      • Copies of all the .properties configuration files.

      • Copies of all the cluster configuration and .properties files.

      • Copies of all the my.cnf files, including directory configurations.

      • The output from the connector cluster-status (in [Tungsten Clustering 6.0 Manual]) command.

      • The output from all services in multimaster clustering deployments.

      • Improvements to the clarity of some commands.

      • The INI files used by tpm (in [Tungsten Clustering 6.0 Manual]) (if using INI installs) are included.

      Issues: CT-530, CT-611, CT-615, CT-623

  • Tungsten Manager

    • The REASON FOR MAINTENANCE MODE message has been updated when a failover has occurred to specifically indicate a failover rather than a switch.

      Issues: CT-624

Bug Fixes

  • Tungsten Manager

    • A script used internally by the manager to determine the status of replication, called mysql_checker_query.sql, had been identified as providing bad information under certain complex circumstances. The effects of the bad script could include out of memory failures. The script and query have been rewritten.

      Issues: CT-457

    • The first execution of ls (in [Tungsten Clustering 6.0 Manual]) within cctrl (in [Tungsten Clustering 6.0 Manual]) within multimaster clusters could fail to provide the cluster status information at the top (world) level.

      Issues: CT-551

    • Performing a switch in a two-cluster multimaster deployment could fail if the cross-site replicators were not accessible.

      Issues: CT-592

    • An error executing the query checker script would not get identified and trapped properly.

      Issues: CT-632

    • Within a running cluster, managers on different hosts with a composite cluster could show different cluster state information after a switch operation.

      Issues: CT-633, CT-634

    • The API has been updated to improve compatibility with the Tungsten Dashboard.

      Issues: CT-639

Tungsten Clustering 6.0.1 Includes the following changes made in Tungsten Replicator 6.0.1

Release 6.0.1 is a bugfix release.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The tungsten_set_position (in [Tungsten Replicator 6.0 Manual]) and tungsten_get_position commands have been deprecated and will be removed in the 6.1.0 release. These commands only worked with MySQL datasources. Use the dsctl (in [Tungsten Replicator 6.0 Manual]) command, which works with a much wider range of datasources.

    Issues: CT-517

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl services (in [Tungsten Replicator 6.0 Manual]) command has been updated to support the auto-refresh option using the -r command-line option.
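
      For example (a sketch; the services output is simply refreshed at the given interval):

      shell> trepctl services -r 5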

      Issues: CT-627

    • The trepctl (in [Tungsten Replicator 6.0 Manual]) command has been updated with a new command, servicetable (in [Tungsten Replicator 6.0 Manual]). This outputs the status information for multiple services in a tabular format to make it easier to identify the state for multi-service replicators. For example:

      shell> trepctl servicetable
      Processing servicetable command...
      Service              | Status                         | Role       | MasterConnectUri               | SeqNo      | Latency
      -------------------- | ------------------------------ | ---------- | ------------------------------ | ---------- | ----------
      alpha                | ONLINE                         | slave      | thl://trfiltera:2112/          | 322        | 0.00
      beta                 | ONLINE                         | slave      | thl://ubuntuheterosrc:2112/    | 12         | 4658.59
      Finished servicetable command...

      The command also supports the auto-refresh option, -r.

      Issues: CT-637

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying the datatype with MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

    • When using Net::SSH within tpm (in [Tungsten Replicator 6.0 Manual]), more detailed information about any specific failures or errors is now provided.

      Issues: CT-523

    • tpm (in [Tungsten Replicator 6.0 Manual]) would mistakenly report issues with JSON columns during installation; this no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 6.0 Manual]) command could hang within different scenarios, including being executed in the background, or as part of a background script or cron job. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • The trepctl status (in [Tungsten Replicator 6.0 Manual]) command would fail badly if the service name did not exist in the configuration, or if multiple services were configured.

      Issues: CT-545, CT-593

    • When using tpm (in [Tungsten Replicator 6.0 Manual]) with the INI method, the command would search multiple locations for suitable INI files. This could lead to multiple definitions of the same service, which could in turn lead to duplication of the installation process and occasional failures. If multiple INI files are found, a warning is now produced to highlight the potential for failures.

      Issues: CT-626

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBound errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 6.0 Manual]) where the sequence number could be larger than an INT would cause an error.

      Issues: CT-642

  • Oracle Replication

    • The prepare_offboard_fetcher script could fail due to the use of a command that may not exist on some platforms. Under some circumstances the script could also be installed as non-executable.

      Issues: CT-420, CT-421

  • Heterogeneous Replication

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for MySQL to Oracle did not escape field names correctly.

      Issues: CT-249

    • When replicating data into MongoDB, numeric values and date values would be represented in the target database as strings, not as their native values.

      Issues: CT-581, CT-582

    • The default partition method used when loading data through CSV files showed an incorrect example format. Previously it was advised to use:

      'commit_hour='yyyy-MM-dd-HH

      It should just show the date format:

      yyyy-MM-dd-HH

      Issues: CT-607

    • The Javascript batch loader for Redshift could generate an error when loading the object used to derive information during loading.

      Issues: CT-620

    • The templates for ddlscan (in [Tungsten Replicator 6.0 Manual]) for Oracle to Redshift failed to handle the NUMBER type correctly.

      Issues: CT-621

  • Core Replicator

    • Optimizing deletes in row-based replication could delete the wrong rows if the pkey (in [Tungsten Replicator 6.0 Manual]) filter had not been enabled.

      Issues: CT-557

    • The included Drizzle driver would incorrectly assign values to prepared statements if the fields in the prepared statement included a question mark.

      Issues: CT-608

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility with the metadata.

      Issues: CT-618

  • Filters

    • The truncatetext (in [Tungsten Replicator 6.0 Manual]) filter was not configurable within all topologies. The configuration has now been updated so that the filter can be used in MySQL and other database environments.

      Issues: CT-386

2.4. Tungsten Clustering 6.0.0 GA (4 April 2018)

Version End of Life. 4 April 2021

Continuent Tungsten 6.0.0 is a major update to the operation and deployment of multi-master and composite clusters. Within the new framework, a composite multi-master cluster is configured as follows:

  • Clusters within a composite cluster are now managed in a unified fashion, including the overall replication progress across clusters.

  • Cross-site replicators are configured as additional services within the main replicator.

  • Cross-site replicators are managed by the manager as part of a complete composite cluster solution.

  • A new global progress counter displays the current progress for the local and cross-site replication.

  • Connectors are configured by default to provide affinity for the local, and then the remote cluster.

The cluster package name has been changed, and upgrades from older versions to the new configuration and layout are supported.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • A new unified cluster deployment is available, the Composite Multimaster. This is an updated version of the Multi-site/Multi-master deployment in previous releases. It encompasses a number of significant changes and improvements:

    • Single, cluster-based, deployment using the new deployment type of composite-multi-master.

    • Unified multimaster cluster status within cctrl (in [Tungsten Clustering 6.0 Manual]).

    • Global progress counter indicating the current cluster and cross-cluster performance.

    Issues: CT-105, CT-313, CT-431, CT-467

  • The name of the cluster deployment package for Continuent Tungsten has changed. Packages are now named to match the product, for example, release-notes-1-99.tar.gz.

    Issues: CT-271, CT-438

  • Support for using Java 7 with Continuent Tungsten has been removed. Java 8 or higher must be used for all deployments.

    Issues: CT-450

  • The behavior of cctrl (in [Tungsten Clustering 6.0 Manual]) has changed to operate better within the new composite deployments. Without the -multi (in [Tungsten Clustering 6.0 Manual]) argument, cctrl (in [Tungsten Clustering 6.0 Manual]) will cd (in [Tungsten Clustering 6.0 Manual]) into the local standalone service. This matches the behavior of previous releases of cctrl (in [Tungsten Clustering 6.0 Manual]), but all services are still accessible without needing to use the -multi (in [Tungsten Clustering 6.0 Manual]) option. With the -multi (in [Tungsten Clustering 6.0 Manual]) argument, cctrl (in [Tungsten Clustering 6.0 Manual]) will not automatically cd (in [Tungsten Clustering 6.0 Manual]) into the local standalone service but will show all available services.

    Issues: CT-524

  • Due to the change in the nature of the services and clustering within SOR and multimaster configurations, the tungsten_provision_slave (in [Tungsten Clustering 6.0 Manual]) command has been updated to support cross-cluster provisioning. Because there would now be a conflict of service names, a cross cluster provision should use the --force (in [Tungsten Clustering 6.0 Manual]) option. The --service (in [Tungsten Clustering 6.0 Manual]) option should still be set to the local service being reset. For example:

    shell> tungsten_provision_slave --source=db4 --service=east --direct --force

    Issues: CT-567

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • During an upgrade installation from a v4 or v4 MSMM deployment, you may get additional, empty, schemas created within your MySQL database. These schemas are harmless and can safely be removed. For example, if you have two services in your MSMM deployment, east and west, during the upgrade you will get two empty schemas, tungsten_east_from_west and tungsten_west_from_east.

    This will be addressed in a future release.

    Issues: CT-559

  • During a switch operation on a multi-master cluster when the cluster has been put into maintenance mode, the manager will put the cross-site replicators back into the online state.

    This will be addressed in a future release.

    Issues: CT-591

  • When performing a tpm update (in [Tungsten Clustering 6.0 Manual]) operation to change the configuration and the cluster is in AUTOMATIC (in [Tungsten Clustering 6.0 Manual]) mode, the update will complete correctly but the cluster may be left in MAINTENANCE (in [Tungsten Clustering 6.0 Manual]) mode instead of being placed back into AUTOMATIC (in [Tungsten Clustering 6.0 Manual]) mode.

    This will be addressed in a future release.
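
    Until then, the policy can be returned to AUTOMATIC manually after the update completes (a sketch, assuming the standard cctrl set policy command; alpha is a placeholder service name):

    shell> cctrl
    [LOGICAL] /alpha > set policy automatic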

    Issues: CT-595

  • When performing a tpm update (in [Tungsten Clustering 6.0 Manual]) in a cluster with an active witness, the host with the witness will not be restarted correctly resulting in the witness being down on that host.

    This will be addressed in a future release.

    Issues: CT-596

  • In a composite multimaster cluster deployment with three or more clusters, a failure in the MySQL server in one node in a cluster could fail to be identified, ultimately causing the failover within the environment to fail, either within the cluster or across clusters.

    This will be addressed in a future release.

    Issues: CT-619

Improvements, new features and functionality

  • Installation and Deployment

    • A new utility script, tungsten_prep_upgrade (in [Tungsten Clustering 6.0 Manual]) has been provided as part of the standard installation. The script is specifically designed to assist during the upgrade of a multi-site/multi-master deployment from 5.3.0 and earlier to the new Multimaster 6.0.0 deployment.

      Issues: CT-104

  • Command-line Tools

    • The cctrl (in [Tungsten Clustering 6.0 Manual]) command now includes a show topology command, which outputs the current topology for the cluster or component being viewed.

      Issues: CT-429

    • The tpm diag (in [Tungsten Clustering 6.0 Manual]) command has been extended to include multimaster cluster status information, one for each configured service and cross-site service.

      Issues: CT-594

  • Tungsten Connector

    • By default, within composite multi-master clusters, the affinity for the connector is configured so that it connects first to the master for the site on which the connector lives and then, if that master is not available, connects to the other site.

      Issues: CT-448

Bug Fixes

  • Command-line Tools

    • The mm_tpm diag command could complain that an extra replicator is configured and running, even though it would be valid as part of a multi-master deployment.

      Issues: CT-396

    • The mm_trepctl (in [Tungsten Clustering 6.0 Manual]) command could fail to display any status information while obtaining the core statistic information from each host.

      Issues: CT-437

  • Tungsten Manager

    • When performing a recover or switch operation within maintenance mode, the cluster would automatically revert to automatic mode just before and immediately after a switch, which could lead to problems correctly recovering a cluster.

      Issues: CT-472

Tungsten Clustering 6.0.0 Includes the following changes made in Tungsten Replicator 6.0.0

Release 6.0.0 is a feature and bugfix release. This release contains the following key fixes:

  • Added PostgreSQL applier support.

  • Added support for custom primary key field selection for source tables that cannot be configured with a primary key within the database.

  • Added a new filter for including whole of transaction metadata information into each event.

  • Added support for extended transaction information within the Kafka applier so that all the messages for a given transaction can be identified.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Support for using Java 7 with Continuent Tungsten has been removed. Java 8 or higher must be used for all deployments.

    Issues: CT-450

Improvements, new features and functionality

  • Heterogeneous Replication

    • The Kafka applier now supports the inclusion of transaction information into each Kafka message broadcast, including the list of schema/tables and row counts for the entire transaction, as well as information about whether the message is the first or last message/row within an overall transaction. The transaction information can also be sent as a separate message on an independent Kafka topic.

      Issues: CT-496, CT-586

      For more information, see Optional Configuration Parameters for Kafka (in [Tungsten Replicator 6.0 Manual]).

  • Core Replicator

    • Experimental support for writing row-based data through SQL into PostgreSQL has been added back to the replicator. This includes basic support for the replication of the data. Currently, databases and tables must be created by hand. A future release will incorporate full support for DDL translation.

      Issues: CT-149

  • Filters

    • The pkey (in [Tungsten Replicator 6.0 Manual]) filter has been extended to support the specification of custom primary key fields. This enables fields in the source data to be marked as primary keys even if the source database does not have the keys specified. This is useful for heterogeneous loading of data where a unique key may exist, but cannot be defined due to the application or database that created the tables.

      Issues: CT-481

      For more information, see Setting Custom Primary Key Definitions (in [Tungsten Replicator 6.0 Manual]).

    • A new filter, rowaddtxninfo (in [Tungsten Replicator 6.0 Manual]), has been added which embeds row counts, both total and per schema/table, into the metadata for a THL event/transaction.

      Issues: CT-497

Bug Fixes

  • Installation and Deployment

    • When performing a tpm reverse (in [Tungsten Replicator 6.0 Manual]), the --replication-port (in [Tungsten Replicator 6.0 Manual]) setting would be replaced with its alias, --oracle-tns-port (in [Tungsten Replicator 6.0 Manual]).

      Issues: CT-597

  • Core Replicator

    • An internal optimization within the replicator that would attempt to optimise row-based information and operations has been removed. The effects of the optimization were actually seen in very few situations, and it duplicated work and operations performed by the pkey (in [Tungsten Replicator 6.0 Manual]) filter. Unfortunately the same optimization could also cause issues within heterogeneous deployments with the removal of information.

      Issues: CT-318

    • The internal storage of the MySQL server ID has been updated to support larger server IDs. This works with any MySQL deployment, but has been specifically expanded to work better with some cloud deployments where the server ID cannot be controlled.

      Issues: CT-439

    • The format of some errors and log entries would contain invalid characters.

      Issues: CT-493

2.5. Tungsten Clustering 5.3.3 GA (20 September 2018)

Version End of Life. 7 June 2019

This is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Tungsten Clustering for MySQL 5.3 Manual]) now outputs a note and recommendation for performing backups of your cluster when installation has been completed.

      Issues: CT-730

  • Command-line Tools

    • The tungsten_prep_upgrade command has been updated to support an explicit host definition for the MySQL host in place of defaulting to the localhost (127.0.0.1). Use the --host option.

      Issues: CT-656

    • A new Nagios compatible check script has been added to the release, check_tungsten_policy (in [Tungsten Clustering for MySQL 5.3 Manual]), which returns the currently active policy mode.

      Issues: CT-675

      For more information, see The check_tungsten_policy Command (in [Tungsten Clustering for MySQL 5.3 Manual]).

  • Tungsten Connector

    • When receiving an error within MySQLPacket, the Connector now prints out the full content of the underlying error message.

      Issues: CT-636

  • Tungsten Manager

    • The router gateway which provides communication between the manager and connector could shutdown even when quorum was available in a two-node cluster.

      Issues: CT-676

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Clustering for MySQL 5.3 Manual]) would fail during installation if the current directory was not writable by the current user.

      Issues: CT-564

    • When performing a tpm update (in [Tungsten Clustering for MySQL 5.3 Manual]) in a cluster with an active witness, the host with the witness will not be restarted correctly resulting in the witness being down on that host.

      Issues: CT-596

  • Command-line Tools

    • Using tpm diag (in [Tungsten Clustering for MySQL 5.3 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Clustering for MySQL 5.3 Manual]).

      Issues: CT-610

    • Using tpm connector (in [Tungsten Clustering for MySQL 5.3 Manual]) at the command-line would fail if the core MySQL configuration file (i.e. /etc/my.cnf) did not exist.

      Issues: CT-641

  • Tungsten Connector

    • The connector would fail to set reusable network addresses during configuration, which could delay or slow startup until the address/port became available again.

      Issues: CT-694

    • When operating in bridge mode, the connector would fail to check whether the driver was in enabled/disabled mode, which could cause upgrades to fail as part of a graceful shutdown/update operation.

      Issues: CT-696

    • Multiple connectors within a cluster could all connect to the same manager within a given service, increasing the load on the single manager.

      Issues: CT-717

  • Tungsten Manager

    • When using the connector, the connector --cluster-status --json command would output header and footer information in place of bare JSON, which would then cause JSON parsing to fail.

      Issues: CT-685

    • A memory leak within the manager, particularly in multimaster deployments, could cause the Java VM to consume more and more CPU cycles and then restart.

      Issues: CT-673, CT-691

    • During a relay failover within a composite or multi-site multi-master deployment, if communications had also failed between sites when the failover occurred, the manager would be unable to determine the correct master of the remote site.

      Issues: CT-703

    • A memory leak was identified in the router manager component that manages the communication between the manager and the connector.

      Issues: CT-715

    • In a deployment, whether single cluster or composite multimaster, where there is either the potential for high latency across sites, or high latency within a site due to high load on the connectors, the manager could misidentify this high latency as a failure. This would trigger a quorum validation. These events would be reported as network hangs, even though the result of the quorum check would be valid.

      To address this, the processing of router notifications handled by the connector has been separated from all other operations. This reduces the chance of a heartbeat gap between hosts, so the connectors remain available to the managers even under high load or latency.

      Issues: CT-725

Tungsten Clustering 5.3.3 Includes the following changes made in Tungsten Replicator 5.3.3

Release 5.3.3 is a bug fix release.

Improvements, new features and functionality

  • Core Replicator

    • The output from thl list (in [Tungsten Replicator 5.3 Manual]) now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

Bug Fixes

  • Command-line Tools

    • Using tpm diag (in [Tungsten Replicator 5.3 Manual]), the command would ignore options on the command-line, including --net-ssh-option (in [Tungsten Replicator 5.3 Manual]).

      Issues: CT-610

    • When running tpm diag (in [Tungsten Replicator 5.3 Manual]), the operation would fail if the /etc/mysql directory did not exist.

      Issues: CT-724

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log file displayed information without highlighting the individual services reporting the entries, making it difficult to identify individual log entries.

      Issues: CT-659

2.6. Tungsten Clustering 5.3.2 GA (4 June 2018)

Version End of Life. 7 June 2019

This is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm diag (in [Tungsten Clustering for MySQL 5.3 Manual]) command has been improved to include more information about the environment, including:

      • The output from the lsof command.

      • The output from the ps command.

      • The output from the show full processlist command within mysql.

      • Copies of all the .properties configuration files.

      • Copies of all the cluster configuration and .properties files.

      • Copies of all the my.cnf files, including directory configurations.

      • The output from the connector cluster-status (in [Tungsten Clustering for MySQL 5.3 Manual]) command.

      • The output from all services in multimaster clustering deployments.

      • Improvements to the clarity of some commands.

      • The INI files used by tpm (in [Tungsten Clustering for MySQL 5.3 Manual]) (if using INI installs) are included.

      Issues: CT-530, CT-611, CT-615, CT-623

Bug Fixes

  • Tungsten Manager

    • A script used internally by the manager to determine the status of replication, called mysql_checker_query.sql, had been identified as providing bad information under certain complex circumstances. The effects of the bad script could include out of memory failures. The script and query have been rewritten.

      Issues: CT-457

    • The first execution of ls (in [Tungsten Clustering for MySQL 5.3 Manual]) within cctrl (in [Tungsten Clustering for MySQL 5.3 Manual]) within multimaster clusters could fail to provide the cluster status information at the top (world) level.

      Issues: CT-551

    • An error executing the query checker script would not get identified and trapped properly.

      Issues: CT-632

    • Within a running cluster, managers on different hosts within a composite cluster could show different cluster state information after a switch operation.

      Issues: CT-633, CT-634

Tungsten Clustering 5.3.2 Includes the following changes made in Tungsten Replicator 5.3.2

Release 5.3.2 is a bug fix release.

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 5.3 Manual]) would mistakenly report issues with JSON columns during installation; this warning no longer applies, as JSON support for MySQL 5.7 was added in 6.0.0.

      Issues: CT-635

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.3 Manual]) command could hang in different scenarios, including being executed in the background, or as part of a background script or cron job. The script could also fail to restart MySQL correctly.

      Issues: CT-319, CT-572

    • When setting optimizeRowEvents back to false (it is enabled by default), the replicator could fail with IndexOutOfBound errors.

      Issues: CT-631

    • Using trepctl qs (in [Tungsten Replicator 5.3 Manual]) where the sequence number could be larger than an INT would cause an error.

      Issues: CT-642

  • Core Replicator

    • During replication, the replicator could raise the java.util.ConcurrentModificationException error intermittently.

      Warning

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility with the metadata.

      Issues: CT-618

2.7. Tungsten Clustering 5.3.1 GA (18 April 2018)

Version End of Life. 7 June 2019

Release 5.3.1 is a bug fix release that adds support for the GEOMETRY data type in MySQL 5.7 and above, and a number of bug fixes.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • It was previously impossible to change from a non-SSL installation to an SSL installation using self-generated certificates if an INI style configuration was being used. This can now be achieved by using the following command-line:

    shell> tools/tpm update --replace-release --replace-jgroups-certificate --replace-tls-certificate

    Issues: CT-442

  • Previously the system had been configured to dump heap files by default when the system ran out of memory which was useful for debugging by the development team. This has now been disabled.

    Issues: CT-604

Tungsten Clustering 5.3.1 Includes the following changes made in Tungsten Replicator 5.3.1

Release 5.3.1 is a bug fix release that adds support for the GEOMETRY data type in MySQL 5.7 and above, and a number of bug fixes.

Bug Fixes

  • Installation and Deployment

    • Support for the GEOMETRY data type within MySQL 5.7 and above has been added. This provides full support for both extracting and applying the datatype to MySQL.

      This change is not backwards compatible; when upgrading, you should upgrade slaves first and then the master to ensure compatibility. Once you have extracted data with the GEOMETRY type into THL, the THL will no longer be compatible with any version of the replicator that does not support the GEOMETRY datatype.

      Issues: CT-403

2.8. Tungsten Clustering 5.3.0 GA (12 December 2017)

Version End of Life. 7 June 2019

Release 5.3.0 is a new feature release that contains improvements to the core replicator and manager, including adding new functionality in preparation for the next major release (6.0.0) and future functionality.

Key improvements include:

  • Improved and simplified user-focused logging, to make it easier to identify issues and problems.

  • Easier access to the overall cluster status from the command-line through the Connector cluster-status command.

  • Many fixes and stabilisation improvements to the Connector.

Improvements, new features and functionality

  • Tungsten Connector

    • The connector (in [Tungsten Clustering for MySQL 5.3 Manual]) has been extended to provide cluster status information, and also to provide this information encapsulated in a JSON format. To get the cluster status through the connector (in [Tungsten Clustering for MySQL 5.3 Manual]) command:

      shell> connector cluster-status

      To get the information in JSON format:

      shell> connector cluster-status -json

      Issues: CONT-630

      For more information, see Connector connector cluster status on the Command-line (in [Tungsten Clustering for MySQL 5.3 Manual]).

Bug Fixes

  • Behavior Changes

    • The way that information is logged has been improved so that it should be easier to identify and find errors and the causes of errors when looking at the logs. To achieve this, logging is now provided in an additional file, one for each component, and the new files contain only entries at the WARNING or ERROR levels. These files are:

      • manager-user.log

      • connector-user.log

      • replicator-user.log

      These files should be much smaller, and much simpler to read and digest in the event of a problem. Currently the information and warnings added to the logs are being adjusted so that the new log files do not contain unnecessary entries.

      The original log files (tmsvc.log, connector.log, trepsvc.log) remain unchanged in terms of the information logged to them.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

  • Command-line Tools

    • Backups using datasource backup (in [Tungsten Clustering for MySQL 5.3 Manual]) could fail to complete properly when using xtrabackup.

      Issues: CT-352

    • The tpm diag (in [Tungsten Clustering for MySQL 5.3 Manual]) command would fail to get manager logs from hosts that were configured without a replicator, for example standalone connector or witness hosts.

      Issues: CT-360

  • Tungsten Connector

    • If the MySQL server returns a 'too many open connections' error when connecting through the Drizzle driver, the connector could hang with a log message about BufferUnderFlow.

      Issues: CT-86

    • Support for complex passwords within user.map (in [Tungsten Clustering for MySQL 5.3 Manual]) that may include one or more single or double quotes has been updated. The following rules now apply for passwords in user.map (in [Tungsten Clustering for MySQL 5.3 Manual]):

      • Quotes ' and double quotes " are now supported in the user.map password.

      • If there's a space in the password, the password needs to be surrounded with " or ':

        "password with space"
      • If there are one or more " or ' characters in the password, but no spaces, the password does not need to be surrounded by quotes:

        my"pas'w'or"d"
      • If the password itself starts and ends with the same quote (" or '), it needs to be surrounded by quotes.

        "'mypassword'" so that the actual password is 'mypassword'.

      As a general rule, if the password is enclosed in either single or double quotes, these are not included as part of the password during authentication.

      Issues: CONT-239

    • When starting up, the Connector would connect to the first master in the first data service within its own internal list; it now uses the first entry of the user.map (in [Tungsten Clustering for MySQL 5.3 Manual]) configuration.

      Issues: CT-385

    • When a connection gets its channel updated by a read/write split (either automatically because Smartscale has been enabled, or manually with selective read/write splitting), the channel left in the background would be wrongly set as "in use", so the keepalive task would no longer be able to ping it.

      Issues: CT-388

    • The bridgeServerToClientForcedCloseTimeout property default value has been reduced from 500ms to 50ms.

      Issues: CT-392

      For more information, see Adjusting the Bridge Mode Forced Client Disconnect Timeout (in [Tungsten Clustering for MySQL 5.3 Manual]).

    • Under certain circumstances it would be possible for the Connector, when configured to choose a slave based on the slave latency (i.e. using the --connector-max-slave-latency (in [Tungsten Clustering for MySQL 5.3 Manual]) configuration option), to select the wrong slave. Rather than choosing the most advanced slave in terms of the latency, the slave with the highest latency could be selected instead.

      Issues: CONT-421

    • The connector would log a message each time a connection disappeared without being properly closed. For connections through load balancers this is standard behavior, and could lead to a large number of log entries that would make it difficult to find other errors. The default setting has been changed so the connection warnings are no longer produced by default. This can be changed by setting the printConnectionWarnings property to true.

      Issues: CT-456
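      A hedged sketch of re-enabling the warnings through the tpm --property mechanism; the service name alpha is illustrative, and the property may require a component prefix in your configuration:

      shell> tpm configure alpha --property=printConnectionWarnings=true   # alpha: illustrative service name
      shell> tpm update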

  • Tungsten Manager

    • If the manager is on the same host as the coordinator, and there was an error writing information to the disk, and a failover situation occurred, the failover would not take place. Since a disk write failure is a possible scenario for the failure to occur, it could lead to the cluster being in an unstable state.

      Issues: CT-364

    • Within a composite deployment, switching a node in a local cluster would cause all relays within the entire composite cluster to point to that node as a master datasource.

      Issues: CT-378

Tungsten Clustering 5.3.0 Includes the following changes made in Tungsten Replicator 5.3.0

Release 5.3.0 is an important feature release that contains some key new functionality for replication. In particular:

  • JSON data type column extraction support for MySQL 5.7 and higher.

  • Generated column extraction support for MySQL 5.7 and higher.

  • DDL translation support for heterogeneous targets, initially supporting DDL translation from MySQL to MySQL, Vertica and Redshift targets.

  • Support for data concentration for replication into a single target schema (with additional source schema information added to each table) for both HPE Vertica and Amazon Redshift targets.

  • Rebranded and updated support for Oracle extraction with the Oracle Redo Reader, including improvements to offboard deployment, more configuration options, and support for the deployment and installation of multiple offboard replication services within a single replicator.

This release also contains a number of important bug fixes and minor improvements to the product.

Improvements, new features and functionality

  • Behavior Changes

    • The way that information is logged has been improved so that it should be easier to identify and find errors and the causes of errors when looking at the logs. To achieve this, logging is now provided in an additional file, one for each component, and the new files contain only entries at the WARNING or ERROR levels. The new file is replicator-user.log. The original file, trepsvc.log, remains unchanged.

      All log files have been updated to ensure that where relevant the service name for the corresponding entry is included. This should further help to identify and pinpoint issues by making it clearer what service triggered a particular logging event.

      Issues: CT-30, CT-69

    • Support for Java 7 (JDK or JRE 1.7) has been deprecated, and will be removed in the 6.0.0 release. The software is compiled using Java 8 with Java 7 compatibility.

      Issues: CT-252

    • Some JavaScript filters had DOS-style line breaks.

      Issues: CT-376

    • Support for JSON datatypes and generated columns within MySQL 5.7 and greater has been added to the MySQL extraction component of the replicator.

      Important

      Due to a MySQL bug in the way that JSON and generated columns are represented within the MySQL binary log, it is possible for the size of the data and the reported size to differ, which could cause data corruption. To account for this behavior and to prevent data inconsistencies, the replicator can be configured to either ignore, warn, or stop if the mismatch occurs.

      This can be set by modifying the property replicator.extractor.dbms.json_length_mismatch_policy.

      Until this problem is addressed within MySQL, tpm (in [Tungsten Replicator 5.3 Manual]) will still generate a warning about the issue which can be ignored during installation by using the --skip-validation-check=MySQLGeneratedColumnCheck (in [Tungsten Replicator 5.3 Manual]).

      For more information on the effects of the bug, see MySQL Bug #88791.

      Issues: CT-5, CT-468
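      As a hedged example of setting the policy during configuration; the service name alpha is illustrative, and warn is one of the three documented values:

      shell> tpm configure alpha \
          --property=replicator.extractor.dbms.json_length_mismatch_policy=warn   # alpha: illustrative service name
      shell> tpm update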

  • Installation and Deployment

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command has been updated to correctly operate with CentOS 7 and higher. Due to an underlying change in the way IP configuration information was sourced, the extraction of the IP address information has been updated to use the ip addr command.

      Issues: CT-35

    • The THL retention setting is now checked in more detail during installation. When --thl-log-retention (in [Tungsten Replicator 5.3 Manual]) is configured when extracting from MySQL, the value is compared to the binary log expiry setting in MySQL (expire_logs_days). If the THL retention is lower, a warning is produced to highlight the potential for loss of data.

      Issues: CT-91
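      For illustration, a hedged sketch of checking the MySQL expiry manually and setting a matching THL retention during configuration; the service name alpha and the 7d value are placeholders, not recommendations:

      mysql> SHOW VARIABLES LIKE 'expire_logs_days';
      shell> tpm configure alpha --thl-log-retention=7d   # alpha and 7d are illustrative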

    • A new option, --oracle-redo-temp-tablespace (in [Tungsten Replicator 5.3 Manual]) has been added to configure the temporary tablespace within Oracle redo reader extractor deployments.

      Issues: CT-321

  • Command-line Tools

    • The size-related outputs for the thl list (in [Tungsten Replicator 5.3 Manual]) command, such as the -sizes (in [Tungsten Replicator 5.3 Manual]) or -sizesdetail (in [Tungsten Replicator 5.3 Manual]) options, now additionally include summary information for the selected THL events:

      Total ROW chunks: 8 with 7 updated rows (50%)
      Total STATEMENT chunks: 8 with 2552 bytes (50%)
      16 events processed

      A new option has also been added, -sizessummary (in [Tungsten Replicator 5.3 Manual]), that only outputs the summary information.

      Issues: CT-433

      For more information, see thl list -sizessummary Command (in [Tungsten Replicator 5.3 Manual]).
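      A minimal usage sketch of the new option, which prints only the summary lines shown above:

      shell> thl list -sizessummary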

  • Filters

    • A new filter, rowadddbname (in [Tungsten Replicator 6.0 Manual]), has been added to the replicator. This filter adds the incoming schema name, and optional numeric hash value of the schema, to every row of THL row-based changes. The filter is designed to be used with heterogeneous and analytics applications where data is being concentrated into a single schema and where the source schema name will be lost during the concentration and replication process.

      In particular, it is designed to work in harmony with the new Redshift and Vertica based single-schema appliers where data from multiple, identical, schemas are written into a single target schema for analysis.

      Issues: CT-98

    • A new filter has been added, rowadddbname (in [Tungsten Replicator 6.0 Manual]), which adds the source database name and optional database hash to every incoming row of data. This can be used to help identify source information when concentrating information into a single schema.

      Issues: CT-407
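      A hedged sketch of enabling the filter for the extractor during configuration; the --svc-extractor-filters option name and the alpha service name are assumptions based on the usual tpm filter configuration pattern, not taken from this note:

      shell> tpm configure alpha --svc-extractor-filters=rowadddbname   # option name and alpha are assumptions
      shell> tpm update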

Bug Fixes

  • Installation and Deployment

    • An issue has been identified with the way certain operating systems now configure their open files limits, which can upset the checks within tpm (in [Tungsten Replicator 5.3 Manual]) that determine the open files limits configured for MySQL. To ensure that the open files limit has been set correctly, check the configuration of the service:

      1. Copy the system configuration:

        shell> sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
        shell> sudo vim /etc/systemd/system/mysql.service
      2. Add the following line to the end of the copied file:

        LimitNOFILE=infinity
      3. Reload the systemctl daemon:

        shell> sudo systemctl daemon-reload
      4. Restart MySQL:

        shell> service mysql restart

      That configures everything properly and MySQL should now take note of the open_files_limit config option.

      Issues: CT-148

    • The check to determine if triggers had been enabled within the MySQL data source would not get executed correctly, meaning that warnings about unsupported triggers would not trigger a notification.

      Issues: CT-185

    • When using tpm diag (in [Tungsten Replicator 5.3 Manual]) on a MySQL deployment, the MySQL error log would not be identified and included properly if the default datadir option was not /var/lib/mysql.

      Issues: CT-359

    • Installation with SSL security enabled could fail intermittently because the certificates would fail to be copied to the required directory during the installation process.

      Issues: CT-402

    • The Net::SSH libraries used by tpm (in [Tungsten Replicator 5.3 Manual]) have been updated to reflect the deprecation of the paranoid parameter.

      Issues: CT-426

    • Using a complex password, particularly one with single or double quotes, when specifying a password for tpm (in [Tungsten Replicator 5.3 Manual]), could cause checks and the installation to raise errors or fail, although the actual configuration would work properly. The problem was limited to internal checks by tpm (in [Tungsten Replicator 5.3 Manual]) only.

      Issues: CT-440

  • Command-line Tools

    • The startall (in [Tungsten Replicator 5.3 Manual]) command would fail to correctly start the Oracle redo reader process.

      Issues: CT-283

    • The tpm (in [Tungsten Replicator 5.3 Manual]) command would fail to remove the Oracle redo reader user when using tpm uninstall.

      Issues: CT-299

    • The replicator stop (in [Tungsten Replicator 5.3 Manual]) command would not stop the Oracle redo reader process.

      Issues: CT-300

    • Within Vertica deployments, the internal identity of the applier was set incorrectly to PostgreSQL. This would make it difficult for certain internal processes to identify the true datasource type. The setting did not affect the actual operation.

      Issues: CT-452

  • Core Replicator

    • When parsing THL data it was possible for the internal THL processing to lead to a java.util.ConcurrentModificationException. This indicated that the underlying THL event metadata structure used internally had changed between uses.

      Issues: CT-355

2.9. Tungsten Clustering 5.2.2 GA (22 October 2017)

Version End of Life. 31 January 2019

Release 5.2.2 is a bug fix release that addresses a specific issue with high-volume concurrent connections through Tungsten Connector.

Bug Fixes

  • Command-line Tools

    • The tungsten_send_diag (in [Tungsten Clustering for MySQL 5.2 Manual]) command could fail with the error Can't locate Digest/HMAC_SHA1.pm.

      Issues: CT-389

  • Tungsten Connector

    • A bug was identified in the performance optimization made as part of CT-340, which could cause the Connector to start dropping connections during periods of heavy load.

      Issues: CT-398

Tungsten Clustering 5.2.2 Includes the following changes made in Tungsten Replicator 5.2.2

Tungsten Replicator 5.2.2 is a minor bugfix release that addresses some bugs found in the previous 5.2.1 (in [Tungsten Replicator 5.2 Manual]) release. It is a recommended upgrade for all users making use of cluster to big data replication.

Bug Fixes

2.10. Tungsten Clustering 5.2.1 GA (21 September 2017)

Version End of Life. 31 January 2019

Release 5.2.1 is a bug fix release.

Improvements, new features and functionality

  • Tungsten Connector

    • Host-based read-write splitting is now also available in bridge mode. The solution can work either by using a modified /etc/hosts (in [Tungsten Clustering for MySQL 5.2 Manual]), or by using multiple localhost entries in user.map (in [Tungsten Clustering for MySQL 5.2 Manual]):

      @hostoption 127.0.0.2 qos=RO_RELAXED

      Any other IP will get the default configuration (generally RW_STRICT (in [Tungsten Clustering for MySQL 5.2 Manual])).

      Issues: CT-341

Bug Fixes

  • Installation and Deployment

    • The tpm connector (in [Tungsten Clustering for MySQL 5.2 Manual]) command would fail to import any local configuration options.

      Issues: CT-137

  • Command-line Tools

    • The tpm connector (in [Tungsten Clustering for MySQL 5.2 Manual]) would fail in MySQL 5.7 deployments because MySQL expects to use SSL by default.

      Issues: CT-363

  • Tungsten Connector

    • A small optimisation has been made to the way the connector reads packets from MySQL.

      Issues: CT-340

    • The logging configuration for the Connector had been badly configured, with a check time on the logging file of 30ms in place of the desired 30s. This introduced a significant performance deficit due to over-checking the file. This has now been updated to 30s.

      Issues: CT-342

    • When running in bridge mode, the Connector would not disconnect ongoing connections after losing contact with the managers.

      Issues: CT-371

Tungsten Clustering 5.2.1 Includes the following changes made in Tungsten Replicator 5.2.1

Tungsten Replicator 5.2.1 is a minor bugfix release that addresses some bugs found in the previous 5.2.0 (in [Tungsten Replicator 5.2 Manual]) release. It is a recommended upgrade for all users.

Improvements, new features and functionality

Bug Fixes

2.11. Tungsten Clustering 5.2.0 GA (19 July 2017)

Version End of Life. 31 January 2019

Release 5.2.0 is a new feature release that contains improvements to the trepctl (in [Tungsten Clustering for MySQL 5.2 Manual]) and thl (in [Tungsten Clustering for MySQL 5.2 Manual]) commands for better understanding of replication state, particularly with larger transactions, and provides support for new appliers in the Tungsten Replicator.

Bug Fixes

  • Tungsten Manager

    • Due to an issue with the manager, timeouts, and the time taken to perform a switch when restarting the replicator, upgrades and switches between different versions of Continuent Tungsten could fail. The timings have been adjusted to address the issue.

      Issues: CT-192

    • A memory leak in the manager could cause the manager to restart after exhausting memory. The issue was most often seen when monitoring the system, where status information was frequently updated.

      Issues: CT-211

Tungsten Clustering 5.2.0 Includes the following changes made in Tungsten Replicator 5.2.0

Tungsten Replicator 5.2.0 is a new feature release that contains a combination of new features, specifically new replicator applier targets.

This release also provides improvements to the trepctl (in [Tungsten Replicator 5.2 Manual]) and thl (in [Tungsten Replicator 5.2 Manual]) commands, and bug fixes to improve stability.

Improvements, new features and functionality

  • Command-line Tools

    • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide clearer and more detailed information on certain aspects of its operation. Two new commands have been added, trepctl qs (in [Tungsten Replicator 5.2 Manual]) and trepctl perf (in [Tungsten Replicator 5.2 Manual]):

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide a simplified status output that provides an easier to understand status, using the qs (in [Tungsten Replicator 5.2 Manual]) command. For example:

        shell> trepctl qs
        State: alpha Online for 1172.724s, running for 124280.671s
        Latency: 0.71s from source DB commit time on thl://ubuntuheterosrc:2112/ into target database
         7564.198s since last source commit
        Sequence: 4860 last applied, 0 transactions behind (0-4860 stored) estimate 0.00s before synchronization
      • The trepctl perf (in [Tungsten Replicator 5.2 Manual]) command provides detailed performance information on the operation and status of the replicator and individual stages. This can be useful to identify where any additional latency or performance issues lie:

        shell> trepctl perf
        Statistics since last put online 1360.141s ago
        Stage | Seqno | Latency | Events | Extraction | Filtering | Applying | Other | Total
        remote-to-thl | 4860 | 0.475s | 70 | 116713.145s | 0.000s | 2.920s | 0.000s | 116716.065s
         Avg time per Event | 1667.331s | 0.000s | 0.000s | 0.042s | 1667.372s
        thl-to-q | 4860 | 0.527s | 3180 | 113842.933s | 0.011s | 2873.039s | 0.102s | 116716.085s
         Avg time per Event | 35.800s | 0.000s | 0.000s | 0.903s | 36.703s
        q-to-dbms | 4860 | 0.536s | 3180 | 112989.667s | 0.010s | 3701.035s | 25.554s | 116716.266s
         Avg time per Event | 35.531s | 0.000s | 0.008s | 1.164s | 36.703s

      Issues: CT-29

    • A number of improvements have been made to the identification of long running transactions within the replicator:

      • A new field has been added to the output of trepctl status -name tasks (in [Tungsten Replicator 5.2 Manual]):

        timeInCurrentEvent : 6571.462

        This shows the time that the replicator has been processing the current event. For a long-running event, it helps to indicate that the replicator is still processing the current event. Note that this is just a counter for how long the current event has been running. For a replicator that is idle, this will show the time the replicator has spent both processing the original event and waiting to process the new event.

      • The thl list (in [Tungsten Replicator 5.2 Manual]) has been expanded to provide simple and detailed THL size information so that large transactions can be identified. Using the -sizes (in [Tungsten Replicator 5.2 Manual]) and -sizesdetail (in [Tungsten Replicator 5.2 Manual]) options displays detailed information about the size of the SQL, the number of rows, or both for each stored event. For example:

        shell> thl list -sizes
        SEQ# Frag# Tstamp
        ...
        12 0 2017-06-28 13:21:11.0 Event total: 1 chunks 73 bytes in SQL statements 0 rows
        13 0 2017-06-28 13:21:10.0 Event total: 1645 chunks 0 bytes in SQL statements 1645 rows
        14 0 2017-06-28 13:21:11.0 Event total: 1 chunks 36 bytes in SQL statements 0 rows

        For more information, see thl list -sizes Command (in [Tungsten Replicator 5.2 Manual]) and thl list -sizesdetail Command (in [Tungsten Replicator 5.2 Manual]).

      • The trepctl (in [Tungsten Replicator 5.2 Manual]) command has been updated to provide more detailed information on the performance of the replicator, see trepctl perf (in [Tungsten Replicator 5.2 Manual]).

      • For easier navigation and selection of THL events, the thl (in [Tungsten Replicator 5.2 Manual]) command has had two further command-line options added, -first (in [Tungsten Replicator 5.2 Manual]) and -last (in [Tungsten Replicator 5.2 Manual]), to select the first and last events in the THL. Both also take an optional number that shows the first N or last N events; see the usage sketch after this list.

      Issues: CT-34
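      A brief usage sketch of the new selection options, assuming they are combined with the list command; the event count is illustrative:

      shell> thl list -first
      shell> thl list -last 5   # 5 is an illustrative count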

    • A new command, tungsten_send_diag (in [Tungsten Replicator 5.2 Manual]), has been added that provides a simplified method for sending tpm diag (in [Tungsten Replicator 5.2 Manual]) output automatically through to the support team. The new command uploads the diagnostic information directly to Amazon S3 without requiring a separate upload to Zendesk.

      Issues: CT-158

    • A new command, clean_release_directory (in [Tungsten Replicator 5.2 Manual]), has been added to the distribution. This command removes old releases from the installation directory that have been created during either upgrades or configuration updates. The command removes all old entries except the current active one and the last five entries.

      Issues: CT-204

  • Documentation

    • The documentation has been updated to clarify the use of the --property (in [Tungsten Replicator 5.2 Manual]) option to tpm (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-180

Bug Fixes

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]) command could hang during the execution of an external command which could cause the entire process to fail to complete properly.

      Issues: CT-82

    • When a replicator has been configured as a cluster slave, the masterListenUri (in [Tungsten Replicator 5.2 Manual]) would be blank. This was because a pure cluster-slave configuration did not correctly configure the necessary pipelines.

      Issues: CT-197

    • The query (in [Tungsten Replicator 5.2 Manual]) tool has been updated to provide better error handling and messages during an error. This particularly affects tools which embed the use of this command, such as tungsten_provision_slave (in [Tungsten Replicator 5.2 Manual]).

      Issues: CT-203

    • An auto-refresh option has been added to certain commands within trepctl (in [Tungsten Replicator 5.2 Manual]) by adding the -r (in [Tungsten Replicator 5.2 Manual]) option and the number of seconds to the trepctl status (in [Tungsten Replicator 5.2 Manual]), trepctl qs (in [Tungsten Replicator 5.2 Manual]), or trepctl perf (in [Tungsten Replicator 5.2 Manual]) commands. For example, trepctl qs -r 5 (in [Tungsten Replicator 5.2 Manual]) would refresh the quick status command every 5 seconds.

      Issues: CT-209

2.12. Tungsten Clustering 5.1.1 GA (23 May 2017)

Version End of Life. 26 October 2018

Bug Fixes

  • Tungsten Manager

    • A memory leak in the manager could cause the manager to restart after exhausting memory. The issue was most often seen when monitoring the system, where status information was frequently updated.

      Issues: CT-211

Tungsten Clustering 5.1.1 Includes the following changes made in Tungsten Replicator 5.1.1

Tungsten Replicator 5.1.1 is a minor bugfix release that addresses some bugs found in the previous 5.1.0 (in [Tungsten Replicator 5.1 Manual]) release. It is a recommended upgrade for all users.

Bug Fixes

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.1 Manual]) command has been updated:

      • The -ascmd (in [Tungsten Replicator 5.1 Manual]) option has been added to output the current position as a command that you can use verbatim to reset the status. For example:

        shell> dsctl get -ascmd
        dsctl set -seqno 17 -epoch 11 -event-id "mysql-bin.000082:0000000014031577;-1" -source-id "ubuntu"
      • The -reset (in [Tungsten Replicator 5.1 Manual]) option has been added so that the current position can be reset and then set using dsctl set -reset without having to run two separate commands.

      Issues: CT-24

    • The availability and default configuration of some filters has been changed so that certain filters are now available in all configurations. This does not affect existing filter deployments.

      Issues: CT-84

    • The tungsten_provision_slave (in [Tungsten Replicator 5.1 Manual]) command could fail to complete properly due to a problem with the threads created during the provision process.

      Issues: CT-202

  • Backup and Restore

    • The trepctl backup (in [Tungsten Replicator 5.1 Manual]) operation could fail if the system ran out of disk space, or if the storage.index file could not be written or became corrupted. The backup system will now recreate the file if the information could be read properly.

      Issues: CT-122

  • Heterogeneous Replication

    • When creating DDL from an Oracle source for Hadoop using ddlscan (in [Tungsten Replicator 5.1 Manual]), the template that is used to create the metadata file was missing.

      Issues: CT-206

2.13. Tungsten Clustering 5.1.0 GA (26 April 2017)

Version End of Life. 26 October 2018

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • When SSL is enabled, the Connector automatically advertises the ports and itself as SSL capable. With some clients, this triggers them to use SSL even if SSL has not been configured, causing the connections to fail and not operate correctly.

    The configuration can be controlled by using the --connector-ssl-capable (in [Tungsten Clustering for MySQL 5.1 Manual]) option to tpm (in [Tungsten Clustering for MySQL 5.1 Manual]). By default, the connector will advertise as SSL capable.

    Issues: CT-140
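    As a hedged configuration sketch, the advertisement can be disabled explicitly; the false value and the alpha service name are illustrative:

    shell> tpm configure alpha --connector-ssl-capable=false   # alpha: illustrative service name
    shell> tpm update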

Improvements, new features and functionality

  • Installation and Deployment

    • The list of supported Ruby versions has been updated to support Ruby up to and including Ruby 2.4.0.

      Issues: CT-138

Bug Fixes

  • Installation and Deployment

    • The rubygems extension to Ruby was not loaded correctly, causing some tools to fail to load correctly, or to fail to use the Net/SSH tools correctly.

      Issues: CT-143

    • The tpm update command could fail when using Ruby 1.8.7.

      Issues: CT-165

Tungsten Clustering 5.1.0 Includes the following changes made in Tungsten Replicator 5.1.0

Tungsten Replicator 5.1.0 is a minor feature release and contains some significant improvements in the compatibility and stability of Hadoop loading, JavaScript filters and heterogeneous filter compatibility, plus important bug fixes.

Improvements, new features and functionality

  • Installation and Deployment

    • The list of supported Ruby versions has been updated to support Ruby up to and including Ruby 2.4.0.

      Issues: CT-138

  • Heterogeneous Replication

    • The support for loading into Hadoop has been improved with better compatibility for recent Hadoop releases from the major Hadoop distributions.

      • MapR 5.2

      • Cloudera 5.8

      In addition to ensuring the basic compatibility of these tools, the continuent-tools-hadoop tools have been updated to support the use of the beeline command as well as the hive command.

      Issues: CT-153, CT-155

      For more information, see The load-reduce-check Tool (in [Tungsten Replicator 5.1 Manual]).

    • The replicator and load-reduce-check (in [Tungsten Replicator 5.1 Manual]) command that is part of the continuent-tools-hadoop repository has been updated so that it can support loading and replication into Hadoop from Oracle. This includes creating suitable DDL templates and support for accessing Oracle via JDBC to load DDL information.

      Issues: CT-168

  • Filters

    • The JavaScript environment has been updated to include a standardized set of filter functionality. This is provided and loaded as standard into all JavaScript filters. The core utilities are provided in the coreutils.js file.

      The current file provides three functions:

      • load — which loads an external JavaScript file.

      • readJSONFile — which loads an external JSON file into a variable.

      • JSON — provides a JSON class, including the ability to dump a JavaScript variable into a JSON string.

      Issues: CT-99

    • The thl (in [Tungsten Replicator 5.1 Manual]) command has been improved to support the -from (in [Tungsten Replicator 5.1 Manual]) and -to (in [Tungsten Replicator 5.1 Manual]) options for selecting a range. These act as synonyms for the existing -low (in [Tungsten Replicator 5.1 Manual]) and -high (in [Tungsten Replicator 5.1 Manual]) options and can be used with all commands.

      Issues: CT-111
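      For example, a hedged sketch of selecting a range of events with the new synonyms; the sequence numbers are illustrative:

      shell> thl list -from 100 -to 120   # 100 and 120 are illustrative sequence numbers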

    • A number of filters have been updated so that the THL metadata for the transaction includes whether a specific filter has been applied to the transaction in question. This is designed to make it easier to determine whether the filter has been applied, particularly in heterogeneous replication, and also to determine whether the incoming transactions are suitable to be applied to a target that requires them. Currently the metadata is only added to the transactions and no enforcement is made.

      The following filters add this information:

      The format of the metadata is tungsten_filter_NAME=true.

      Issues: CT-157

Bug Fixes

  • Installation and Deployment

    • The rubygems extension to Ruby was not loaded correctly, causing some tools to fail to load correctly, or to fail to use the Net/SSH tools correctly.

      Issues: CT-143

    • One of the checks built into tpm (in [Tungsten Replicator 5.1 Manual]), MySQLUnsupportedDataTypesCheck (in [Tungsten Replicator 5.1 Manual]) was spelt incorrectly, which meant that it was difficult to bypass and ultimately did not always correctly run or get ignored.

      Issues: CT-147

    • The tpm update command could fail when using Ruby 1.8.7.

      Issues: CT-165

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 5.1 Manual]) could fail if the innodb_log_home_dir and innodb_data_home_dir were set to a value different from the datadir option, and the --direct (in [Tungsten Replicator 5.1 Manual]) option was used.

      Issues: CT-83, CT-141

  • Heterogeneous Replication

    • The Hadoop loader would previously load CSV files directly into the /users/tungsten directory within HDFS, completely ignoring the setting of the replication user within the replicator. This has been corrected so that data can be loaded into the directory configured for the replication user.

      Issues: CT-134

    • The Hadoop loader would default to a directory structure that matched SERVICENAME/SCHEMANAME/TABLENAME. This caused problems with the default DDL templates and the continuent-tools-hadoop tools, which used only the schema and table name.

      Issues: CT-135

2.14. Tungsten Clustering 5.0.1 GA (23 February 2017)

Version End of Life. 30 June 2018

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • In previous releases, a client PING command would open a new connection to the MySQL server, execute a SELECT 1, and then return the OK (or failure) to the client. This could introduce additional load and also affect the metrics if statement execution counts and connections were being monitored.

    This has been updated so that the PING request is sent verbatim through to the server by the connector.

    Issues: CT-1

  • The default security configuration for new installations is for security, including SSL and TLS and authentication, to be disabled. In 5.0.0 the default was to enable full security on all components which could lead to problems and difficulty when upgrading.

    Issues: CT-18

  • The manager (in [Tungsten Clustering for MySQL 5.0 Manual]) is no longer restarted when updating the configuration with tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) when using the --replace-tls-certificate (in [Tungsten Clustering for MySQL 5.0 Manual]) option.

    Issues: CT-120

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • When performing an upgrade of MySQL 5.6 to MySQL 5.7, and after running mysql_upgrade, the MySQL server must be restarted. Failure to do this could cause switch or failover operations to fail.

    Issues: CT-70

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Installation and Deployment

    • Support has been improved for CentOS 7, addressing some issues regarding the startup and deployment scripts used to manage MySQL and Continuent Tungsten.

      Issues: CONT-211

    • tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) has been updated to cope with changes in the configuration and operation of MySQL 5.7.

      Issues: CONT-1060

    • When performing a permissions check within tpm (in [Tungsten Clustering for MySQL 5.0 Manual]), the way password and other information is confirmed has been updated to work correctly with MySQL 5.7. In particular, due to the way passwords are now stored and used, tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) will confirm the configured user and password by checking that login functions correctly.

      Issues: CONT-1578

    • During installation, tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) will no longer check the connector credentials if the connector has been configured to operate in bridge mode (in [Tungsten Clustering for MySQL 5.0 Manual]) if application specific credentials are not supplied. If the --application-user (in [Tungsten Clustering for MySQL 5.0 Manual]) and --application-password (in [Tungsten Clustering for MySQL 5.0 Manual]) options are provided, tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) will run the same checks even if bridge mode has been selected.

      Issues: CONT-1580, CONT-1581

  • Tungsten Connector

    • The connector has been updated to provide an acknowledgement to the MySQL protocol COM_CHANGE_USER command. This allows client connections that use connection pooling (such as PHP) and the change user command as a verification of an open connection to correctly receive an acknowledgement that the connection is available.

      The option is disabled by default. To enable, set the treat.com.change.user.as.ping property to true during configuration with tpm (in [Tungsten Clustering for MySQL 5.0 Manual]).

      Issues: CONT-1380

      For more information, see Connector Change User as Ping (in [Tungsten Clustering for MySQL 5.0 Manual]).
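      A hedged sketch of enabling the option during configuration; the alpha service name is illustrative:

      shell> tpm configure alpha --property=treat.com.change.user.as.ping=true   # alpha: illustrative service name
      shell> tpm update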

  • Tungsten Manager

    • All the core tools now generate a detailed heap dump in the event of a failure. This will help during debugging and identifying any issues.

      Issues: CT-11

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

    • The Nagios tungsten_nagios_online (in [Tungsten Clustering for MySQL 5.0 Manual]) command would report nodes in the standby (in [Tungsten Clustering for MySQL 5.0 Manual]) role that were in the OFFLINE (in [Tungsten Replicator 6.0 Manual]) state as being in a warning state.

      Issues: CONT-1487

    • The Zabbix related monitoring tools, zabbix_tungsten_services (in [Tungsten Clustering for MySQL 5.0 Manual]), zabbix_tungsten_progress (in [Tungsten Clustering for MySQL 5.0 Manual]), zabbix_tungsten_online (in [Tungsten Clustering for MySQL 5.0 Manual]), and zabbix_tungsten_latency (in [Tungsten Clustering for MySQL 5.0 Manual]) were not marked as executable.

      Issues: CONT-1493

    • The tpm update (in [Tungsten Clustering for MySQL 5.0 Manual]) would fail if the installation directory had been specified with a trailing slash.

      Issues: CONT-1499

    • If the cluster is put into maintenance mode, but the coordinator node, or the terminal session that put the cluster into maintenance mode, fails, the cluster would stay in maintenance mode. The node is now tracked, and if the node goes away for any reason, the cluster will be returned to the mode it was in before being placed into maintenance mode.

      Issues: CONT-1535

    • Running tpm connector (in [Tungsten Clustering for MySQL 5.0 Manual]) while multi_trepctl (in [Tungsten Clustering for MySQL 5.0 Manual]) is running on the same host would fail with the error:

      ERROR >> db2 >> There is already another Tungsten installation script running

      Issues: CONT-1572

  • Core Replicator

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

  • Tungsten Connector

    • The connector (in [Tungsten Clustering for MySQL 5.0 Manual]) would not retry and/or reconnect transactions that were automatically redirected to a slave. This has been corrected so that all slave-targeted requests are retried or reconnected and retried in the event of an error.

      Issues: CT-22

    • Automatic retry of a query could fail due to interference from a keepalive request while re-executing the query.

      Issues: CONT-1512

    • The Tungsten Connector would sometimes retry connectivity on connections that had been killed. The logic has been updated. The default behavior remains the same:

      • Reconnect closed connections

      • Retry autocommitted reads

      The behavior can be modified by using the --connector-autoreconnect-killed-connections (in [Tungsten Clustering for MySQL 5.0 Manual]) option. Setting it to false disables the reconnection or retry of a connection outside of a planned switch or automatic failover. The default is true, reconnecting and retrying all connections.

      Issues: CONT-1514
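      As a hedged sketch, the reconnection/retry behavior could be disabled during configuration; the alpha service name is illustrative:

      shell> tpm configure alpha --connector-autoreconnect-killed-connections=false   # alpha: illustrative service name
      shell> tpm update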

  • Tungsten Manager

    • When deployed within a composite service, a race condition within the manager could cause the master replicator to start up in a shunned state.

      Issues: CT-2

    • The show slave status command when used through a Tungsten Connector connection could fail with the error Data truncation: BIGINT UNSIGNED value is out of range.

      Issues: CT-85

    • An entity called POLICY_MANAGER would appear in the output of ls resources (in [Tungsten Clustering for MySQL 5.0 Manual]). This could cause problems with monitoring tools which parsed the output. The check script has now been updated to ignore the resource in the output.

      Issues: CT-90

    • In the event of a mysqld restart, the cluster could recover into a state with multiple masters.

      Issues: CONT-1482

    • Recovering a standby (in [Tungsten Clustering for MySQL 5.0 Manual]) node would switch the role of the node once recovered to be a slave (in [Tungsten Clustering for MySQL 5.0 Manual]), instead of remaining as a standby (in [Tungsten Clustering for MySQL 5.0 Manual]).

      Issues: CONT-1486

    • The embedded Drools libraries have been updated to Drools 6.3. This addresses an issue in Drools which could lead to a memory leak.

      Issues: CONT-1547

    • The generated mysql_read_only script would use the password on the command line, and could execute a query that returned multiple rows. Both issues could cause problems during execution, particularly for MySQL 5.6 and later.

      Issues: CONT-1570

Tungsten Clustering 5.0.1 Includes the following changes made in Tungsten Replicator 5.0.1

Tungsten Replicator 5.0.1 is a bugfix release that contains critical fixes and improvements from the Tungsten Replicator 5.0.0 release. Specifically, it changes the default security and other settings to make upgrades from previous releases easier, and other fixes and improvements to the Oracle support and command-line tools.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The default security configuration for new installations is for security, including SSL and TLS and authentication, to be disabled. In 5.0.0 the default was to enable full security on all components which could lead to problems and difficulty when upgrading.

    Issues: CT-18

  • The Ruby Net::SSH module, which has been bundled with Tungsten Replicator in past releases, is no longer included. This is due to the wide range of Ruby versions and deployment environments that we support, and differences in the Net::SSH module supported and used with different Ruby versions. In order to simplify the process and ensure that the platforms we support operate correctly, the Net::SSH module has been removed and will now need to be installed before deployment.

    To ensure you have the correct environment before deployment, ensure both the Net::SSH and Net::SCP Ruby modules are installed using gem:

    shell> gem install net-ssh
    shell> gem install net-scp

    Depending on your environment, you may also need to install the io-console module:

    shell> gem install io-console

    If during installation you get an error similar to this:

    mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

    It indicates that you do not have the Ruby development headers installed. Use your native package management interface (for example, yum or apt) and install the ruby-dev package. For example:

    shell> sudo apt install ruby-dev

    Issues: CT-88

  • The replicator (in [Tungsten Replicator 5.0 Manual]) is no longer restarted when updating the configuration with tpm (in [Tungsten Replicator 5.0 Manual]) when using the --replace-tls-certificate (in [Tungsten Replicator 5.0 Manual]) option.

    Issues: CT-120

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

Improvements, new features and functionality

  • Installation and Deployment

    • Tungsten Replicator is now certified for deployment on systems running Java 8.

      Issues: CT-27

  • Core Replicator

    • The replicator will now generate a detailed heap dump in the event of a failure. This will help during debugging and identifying any issues.

      Issues: CT-11

  • Filters

    • The Rhino JS engine, which is incorporated for use by the filtering and batch loading mechanisms, has been updated to Rhino 1.7R4. This addresses a number of different issues with the embedded library, including a performance issue that could lead to increased latency during filter operations.

      Issues: CT-21

Bug Fixes

  • Installation and Deployment

    • The Ruby Net::SSH libraries used by tpm (in [Tungsten Replicator 5.0 Manual]) have been updated to the latest version. This addresses issues with SSH and staging based deployments, including KEX algorithm errors.

      Issues: CT-16

    • On some platforms the keytool command could fail to be found, causing an error within the installation when generating certificates.

      Issues: CT-73

  • Command-line Tools

    • The tpasswd (in [Tungsten Replicator 5.0 Manual]) command could create a log file with the wrong permissions.

      Issues: CT-117

  • Core Replicator

    • Checksums in MySQL could cause problems when parsing the MySQL binary log due to a change in the way the checksum information is recorded within the binary log. This would cause the replicator to become unable to come online.

      Issues: CT-72

Known Issues

  • Behavior Changes

    • Due to new requirements of the embedded and included Ruby Net::SSH module, the Ruby io-console module may need to be installed before installation or upgrade. This can be achieved using:

      shell> gem install io-console

2.15. Tungsten Clustering 5.0.0 GA (7 December 2015)

Version End of Life. 30 June 2018

VMware Continuent for Clustering 5.0.0 is a major release that incorporates the following changes:

  • The software release has been renamed. The filename now starts with vmware-continuent-clustering.

    The documentation has not been updated to reflect this change. While reading these examples you will see references to tungsten-replicator which will apply to your software release.

  • The connector now uses bridge-mode (in [Tungsten Clustering for MySQL 5.0 Manual]) by default for all new installations and upgrades that do not have read-write splitting configured.

  • Security, including file permissions and TLS/SSL is now enabled by default. For more information, see Deployment Security (in [Tungsten Clustering for MySQL 5.0 Manual]).

  • TLS/SSL is supported as the default encrypted communication channel. TLS v1.1 or v1.2 is used, depending on the Java environment available for execution. For TLS v1.2, use Java 8 or higher.
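
    To confirm which TLS version will be available, check the Java version in use on each host; Java 8 or higher is required for TLS v1.2. For example:

    shell> java -version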

  • License keys are now required during installation. For more information, see Deploy License Keys (in [Tungsten Clustering for MySQL 5.0 Manual]).

  • Support for RHEL 7 and CentOS 7.

  • Basic support for MySQL 5.7.

  • Cleaner and simpler directory layout for the replicator.

Upgrading from previous versions should be fully tested before being attempted in a production environment. The changes listed below affect tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) output and the requirements for operation.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Continuent Tungsten now enables security by default. Security includes:

    • Authentication between command-line tools (cctrl (in [Tungsten Clustering for MySQL 5.0 Manual])) and background services.

    • SSL/TLS between command-line tools and background services.

    • SSL/TLS between Tungsten Replicator and datasources.

    • SSL/TLS between Tungsten Connector and datasources.

    • File permissions and access by all components.

    The security changes require certificate files to be generated prior to operation. The tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) command can do that during upgrade if you are using a staging directory. Alternatively, you can create the certificates (in [Tungsten Clustering for MySQL 5.0 Manual]) and update your configuration with the corresponding argument. This is required if you are installing from an INI file. See Installing from a Staging Host with Manually Generated Certificates (in [Tungsten Clustering for MySQL 5.0 Manual]) or Installing via INI File with Manually Generated Certificates (in [Tungsten Clustering for MySQL 5.0 Manual]) for more information. This functionality may be disabled by adding --disable-security-controls (in [Tungsten Clustering for MySQL 5.0 Manual]) to your configuration.

    If you would like tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) to generate the necessary certificates from the staging directory, run tpm update (in [Tungsten Clustering for MySQL 5.0 Manual]) with the --replace-tls-certificate (in [Tungsten Clustering for MySQL 5.0 Manual]) and --replace-jgroups-certificate options.

    staging-shell> ./tools/tpm update --replace-tls-certificate --replace-jgroups-certificate

    For more information, see Deployment Security (in [Tungsten Clustering for MySQL 5.0 Manual]).

  • Continuent Tungsten now requires license keys in order to operate.

    License keys are provided to all customers with an active support contract. Log in to my.vmware.com to identify your support contract and the associated license keys. After collecting the license keys, they should be placed into /etc/tungsten/continuent.licenses or /opt/continuent/share/continuent.licenses. The /opt/continuent (in [Tungsten Clustering for MySQL 5.0 Manual]) path should be replaced with your value for --install-directory (in [Tungsten Clustering for MySQL 5.0 Manual]). Place each license on a new line in the file and make sure it is readable by the tungsten system user (see the example below).

    If you are testing VMware Continuent or don't have your license key, talk with your sales contact for assistance. You may enable trial mode by using the license key TRIAL. This will not affect the runtime operation of VMware Continuent but may impact your ability to get rapid support.

    The tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) script will display a warning if license keys are not provided or if the provided license keys are not valid.
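
    For example, a minimal sketch of preparing the license file, assuming the /etc/tungsten location, the tungsten system user, and a placeholder key value:

    shell> sudo mkdir -p /etc/tungsten
    shell> echo "XXXXX-XXXXX-XXXXX-XXXXX" | sudo tee -a /etc/tungsten/continuent.licenses
    shell> sudo chmod 644 /etc/tungsten/continuent.licenses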

  • The connector will now use bridge-mode by default. This change will improve the transparency and performance of the connector. Bridge-mode does not use the user.map (in [Tungsten Clustering for MySQL 5.0 Manual]) file, reflecting other changes designed to provide a more secure default deployment. A warning will be displayed during the validation process to tell you if bridge-mode is being enabled. It will not be enabled in the following cases:

    • The --connector-smartscale (in [Tungsten Clustering for MySQL 5.0 Manual]) option is set to true.

    • The user.map (in [Tungsten Clustering for MySQL 5.0 Manual]) file contains @direct (in [Tungsten Clustering 6.0 Manual]) entries.

    • The user.map (in [Tungsten Clustering for MySQL 5.0 Manual]) file contains @hostoption (in [Tungsten Clustering 6.0 Manual]) entries.

    • The --property=selective.rwsplitting (in [Tungsten Clustering for MySQL 5.0 Manual]) connector option is set to true.

    This change may be disabled by adding --connector-bridge-mode=false (in [Tungsten Clustering for MySQL 5.0 Manual]) to your configuration (see the example below).

    Issues: CONT-1033

    For more information, see Using Bridge Mode (in [Tungsten Clustering for MySQL 5.0 Manual]).
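
    For example, a sketch of keeping the previous connector behavior by disabling bridge-mode from a staging directory before running the update; the service name alpha is an assumption:

    staging-shell> ./tools/tpm configure alpha --connector-bridge-mode=false
    staging-shell> ./tools/tpm update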

  • Continuent Tungsten now includes RELEASE_NOTES in the package and displays a warning if they have not been reviewed.

    During some tpm (in [Tungsten Clustering for MySQL 5.0 Manual]) commands, the script will check to see if the release notes have been reviewed and accepted. This may be done by running tools/accept_release_notes from the staging directory. The script will display the information and prompt the user for acceptance. A hidden file will be created on the staging server to mark that the release notes have been accepted, and the warning will not be displayed.

    This process may be automated by calling tools/accept_release_notes -y prior to installation. The script will mark the release notes as accepted and the warning will not be displayed.

    Issues: CONT-1122

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Tungsten Connector

    • The SSL support within the Connector has been improved to support multiple aliases, enabling different certificates to be used for different components of the communication. For example, the certificate used between MySQL and the Connector can differ from the certificate used for Connector to client communication.

      Issues: CONT-1126

      For more information, see Deployment Security (in [Tungsten Clustering for MySQL 5.0 Manual]).

Bug Fixes

  • Core Replicator

    • During installation, a replicator source ID could be misconfigured causing problems during switch and failover operations.

      Issues: CONT-1002

  • Tungsten Connector

    • Following an automatic reconnection, the connector could retry a pending statement regardless of whether it was a read or a write.

      The connector will now distinguish between reads and writes and only retry the statement if it is a read. Any writes will raise an error to be handled by the application.

      Issues: CONT-1461

  • Tungsten Manager

    • The manager could fail to read security.properties during startup. If this occurred, the manager would print a warning in tmsvc.log (in [Tungsten Clustering for MySQL 5.0 Manual]).

      A race condition was resolved to ensure the manager reads configuration files in the correct order.

      Issues: CONT-1070

Tungsten Clustering 5.0.0 Includes the following changes made in Tungsten Replicator 5.0.0

VMware Continuent for Replication 5.0.0 is a major release that incorporates the following changes:

  • The software release has been renamed. For most users of VMware Continuent for Replication, the filename will start with vmware-continuent-replication. If you are using an Oracle DBMS as the source and have purchased support for the latest version, the filename will start with vmware-continuent-replication-oracle-source.

    The documentation has not been updated to reflect this change. While reading these examples you will see references to tungsten-replicator which will apply to your software release.

  • A new Oracle Extraction module that reads the Oracle Redo logs provides a faster, more compatible, and more efficient method for extracting data from Oracle databases. For more information, see Oracle Replication using Redo Reader (in [Tungsten Replicator 5.0 Manual]).

  • Security, including file permissions and TLS/SSL is now enabled by default. For more information, see Deployment Security (in [Tungsten Replicator 5.0 Manual]).

  • License keys are now required during installation. For more information, see Deploy License Keys (in [Tungsten Replicator 5.0 Manual]).

  • Support for RHEL 7 and CentOS 7.

  • Basic support for MySQL 5.7.

  • Cleaner and simpler directory layout.

Upgrading from previous versions should be fully tested before being attempted in a production environment. The changes listed below affect tpm (in [Tungsten Replicator 5.0 Manual]) output and the requirements for operation.

Behavior Changes

The following changes have been made to Continuent Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Tungsten Replicator now requires license keys in order to operate.

    License keys are provided to all customers with an active support contract. Log in to my.vmware.com to identify your support contract and the associated license keys. After collecting the license keys, they should be placed into /etc/tungsten/continuent.licenses or /opt/continuent/share/continuent.licenses. The /opt/continuent (in [Tungsten Replicator 5.0 Manual]) path should be replaced with your value for --install-directory (in [Tungsten Replicator 5.0 Manual]). Place each license on a new line in the file and make sure it is readable by the tungsten system user.

    If you are testing VMware Continuent or don't have your license key, talk with your sales contact for assistance. You may enable trial mode by using the license key TRIAL. This will not affect the runtime operation of VMware Continuent but may impact your ability to get rapid support.

    The tpm (in [Tungsten Replicator 5.0 Manual]) script will display a warning if license keys are not provided or if the provided license keys are not valid.

  • Tungsten Replicator now enables security by default. Security includes:

    • Authentication between command-line tools (trepctl (in [Tungsten Replicator 5.0 Manual])) and background services.

    • SSL/TLS between command-line tools and background services.

    • SSL/TLS between Tungsten Replicator and datasources.

    • File permissions and access by all components.

    The security changes require a certificate file to be generated prior to operation. The tpm (in [Tungsten Replicator 5.0 Manual]) command can do that during upgrade if you are using a staging directory. Alternatively, you can create the certificate (in [Tungsten Replicator 5.0 Manual]) and update your configuration with the corresponding argument. This is required if you are installing from an INI file. See Installing from a Staging Host with Manually Generated Certificates (in [Tungsten Replicator 5.0 Manual]) or Installing via INI File with Manually Generated Certificates (in [Tungsten Replicator 5.0 Manual]) for more information. This functionality may be disabled by adding --disable-security-controls (in [Tungsten Replicator 5.0 Manual]) to your configuration.

    If you would like tpm (in [Tungsten Replicator 5.0 Manual]) to generate the necessary certificate from the staging directory, run tpm update (in [Tungsten Replicator 5.0 Manual]) with the --replace-tls-certificate (in [Tungsten Replicator 5.0 Manual]) option.

    staging-shell> ./tools/tpm update --replace-tls-certificate

    For more information, see Deployment Security (in [Tungsten Replicator 5.0 Manual]).

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

  • Tungsten Replicator now includes RELEASE_NOTES in the package and displays a warning if they have not been reviewed.

    During some tpm (in [Tungsten Replicator 5.0 Manual]) commands, the script will check to see if the release notes have been reviewed and accepted. This may be done by running tools/accept_release_notes from the staging directory. The script will display the information and prompt the user for acceptance. A hidden file will be created on the staging server to mark that the release notes have been accepted, and the warning will not be displayed.

    This process may be automated by calling tools/accept_release_notes -y prior to installation. The script will mark the release notes as accepted and the warning will not be displayed.

    Issues: CONT-1122

Improvements, new features and functionality

  • Installation and Deployment

    • During installation, tpm (in [Tungsten Replicator 5.0 Manual]) writes the configuration log to /tmp/tungsten-configure.log. If the file exists but is owned by a different user, the operation will fail with a Permission Denied error. The operation has now been updated to create a directory within /tmp (in [Tungsten Replicator 5.0 Manual]) with the name of the current user, where the configuration log will be stored. For example, if the user is tungsten, the log will be written to /tmp/tungsten/tungsten-configure.log.

      Issues: CONT-1402

Bug Fixes

  • Installation and Deployment

    • Following a failed installation by tpm (in [Tungsten Replicator 5.0 Manual]), running tpm uninstall could also fail. The command now correctly uninstalls even a partial installation.

      Issues: CONT-1359

Known Issues

Tungsten Replicator 5.0.0 Includes the following changes made in Tungsten Replicator 5.0.0

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The Bristlecone load generator toolkit is no longer included with Tungsten Replicator by default.

    Issues: CONT-903

  • The scripts previously located within the scripts directory have now been relocated to the standard bin directory. This does not affect their availability if the env.sh (in [Tungsten Replicator 5.0 Manual]) script has been used to update your path (see the example after this list). This includes, but is not limited to, the following commands:

    • ebs_snapshot.sh

    • file_copy_snapshot.sh

    • multi_trepctl

    • tungsten_get_position

    • tungsten_provision_slave

    • tungsten_provision_thl

    • tungsten_read_master_events

    • tungsten_set_position

    • xtrabackup.sh

    • xtrabackup_to_slave

    Issues: CONT-904
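
    For example, a minimal sketch of making the relocated commands available by sourcing env.sh, assuming the default /opt/continuent installation directory:

    shell> source /opt/continuent/share/env.sh
    shell> which tungsten_provision_slave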

  • The backup (in [Tungsten Replicator 5.0 Manual]) and restore (in [Tungsten Replicator 5.0 Manual]) functionality in trepctl (in [Tungsten Replicator 5.0 Manual]) has been deprecated and will be removed in a future release.

    Issues: CONT-906

  • The batch loading scripts used by HP Vertica, Hadoop and Amazon Redshift appliers have been moved to the appliers/batch directory.

    Issues: CONT-907

  • The JavaScript filters have been moved to a new location in keeping with the rest of the configuration:

    • samples/extensions/javascript has moved to support/filters-javascript

    • samples/scripts/javascript-advanced has moved to support/filters-javascript

    The use of these filters has not changed but the default location for some filter configuration files has moved to support/filters-config. Check your current configuration before upgrading.

    Issues: CONT-908

  • The ddlscan (in [Tungsten Replicator 5.0 Manual]) templates have been moved to the support/ddlscan directory.

    Issues: CONT-909

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 5.0 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

  • The Vertica applier now writes exceptions to a temporary file during replication.

    The applier statements will include the EXCEPTIONS attribute in each statement to assist in debugging. Review the replicator log or trepctl status (in [Tungsten Replicator 5.0 Manual]) output for more details.

    Issues: CONT-1169

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • Use of LOAD DATA commands requires the correct permissions to be given to the mysql user. One of the following must be done (see the sketch after this list):

      • The tungsten system user must have the same default group as the mysql system user.

      • The mysql system user must be a member of the default group for the tungsten system user.

      • The --file-protection-level (in [Tungsten Replicator 5.0 Manual]) option must be set to none to allow full visibility to all temporary files.
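
      As an illustration of the second option, a sketch assuming the system users are named tungsten and mysql, and that tungsten is also the name of the tungsten user's default group:

      shell> sudo usermod -a -G tungsten mysql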

    • The replicator can hit a MySQL lock wait timeout when processing large transactions.

      Issues: CONT-1106

    • The replicator can run into an OutOfMemory error when handling very large row-based replication events. This can be avoided by setting --optimize-row-events=false (in [Tungsten Replicator 5.0 Manual]), as shown in the sketch below.

      Issues: CONT-1115
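
      For installations managed through an INI file, the equivalent setting might look like the following sketch; the section name alpha is an assumption:

      [alpha]
      optimize-row-events=false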

    • The replicator can fail during LOAD DATA commands or Vertica loading if the system permissions are not set correctly. If this is encountered, make sure the MySQL or Vertica system users are a member of the Tungsten system group. The issue may also be avoided by removing system file protections with --file-protection-level=none (in [Tungsten Replicator 5.0 Manual]).

      Issues: CONT-1460

Improvements, new features and functionality

  • Command-line Tools

    • The dsctl (in [Tungsten Replicator 5.0 Manual]) command has been updated to provide help output when specifically requested with the -h or -help options.

      Issues: CONT-1003

      For more information, see dsctl help Command (in [Tungsten Replicator 5.0 Manual]).

Bug Fixes

  • Core Replicator

    • A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing.

      Issues: CONT-1163

    • A slave replicator could fail to come ONLINE (in [Tungsten Replicator 6.0 Manual]) if the last THL file is empty.

      Issues: CONT-1164

    • The replicator applier and filters may fail with ORA-955 because the replicator did not check for metadata tables using uppercase table names.

      Issues: CONT-1375

    • The replicator incorrectly assigns LOAD DATA statements to the #UNKNOWN shard. This can happen when the entire statement is longer than 200 characters.

      Issues: CONT-1431

2.16. Tungsten Clustering 4.0.0 Not yet released (Not yet released)

Improvements, new features and functionality

  • Command-line Tools

    • The dsctl (in [Continuent Tungsten 4.0 Manual]) command has been added. This enables easy getting, setting, and resetting of the current replication status information stored in the datasource.

      Issues: CONT-34

    • The tpm (in [Continuent Tungsten 4.0 Manual]) command has been updated to correctly configure clusters and replicators to support replication from a cluster directly to a datawarehouse.

      Issues: CONT-51

Bug Fixes

  • Installation and Deployment

    • During an update or upgrade where components are being added or removed, older configuration could remain, leading to services and components being configured even though the service or component had been removed.

      Issues: CONT-155

    • The validation of values supplied to tpm (in [Continuent Tungsten 4.0 Manual]) for the --thl-log-retention (in [Continuent Tungsten 4.0 Manual]) has been updated. The option now requires a single letter suffix value (the first letter of day, hour, minute, or seconds) to specify the quantifier for the value. The default value is 5d.

      Issues: CONT-177

    • The validation of values supplied to tpm (in [Continuent Tungsten 4.0 Manual]) for the --svc-applier-block-commit-interval (in [Continuent Tungsten 4.0 Manual]) has been updated. The option now accepts single letter suffix values (the first letter of day, hour, minute, or seconds) to specify the quantifier for the value. Values of 1000 or greater are assumed to be in seconds. The default value is 15s if batch-enabled (in [Continuent Tungsten 4.0 Manual]) is true, or 0 otherwise.

      Issues: CONT-181

    • tpm (in [Continuent Tungsten 4.0 Manual]) has been updated to confirm that row-based replication has been enabled when a heterogeneous cluster has been configured.

      Issues: CONT-193

  • Command-line Tools

    • The tungsten_set_position (in [Continuent Tungsten 4.0 Manual]) command would fail when executed between dataservices if the service names were different.

      Issues: CONT-24

    • Managers are now started serially per dataservice, rather than serially across all dataservices.

      Issues: CONT-27

  • Core Replicator

    • A RENAME TABLE operation within MySQL would not cause the metadata caches to be invalidated during replication. This could lead to invalid metadata being used during processing and filtering.

      Issues: CONT-158

  • Tungsten Connector

    • The requirement for Oracle MySQL Connector/J to be used as the MySQL JDBC connector has been removed. The JDBC interface now uses the Drizzle driver by default.

      Issues: CONT-48

  • Tungsten Manager

    • The built-in Drools library has been updated to resolve an issue with memory consumption.

      Issues: CONT-28

    • The network connectivity checks using either the echo or ping protocols have been updated, and additional checks are now performed by the tpm (in [Continuent Tungsten 4.0 Manual]) command during installation to ensure that one or the other of the methods is available, configuring the appropriate method during installation. If neither method is confirmed to work, installation will now fail with a warning.

      Issues: CONT-53, CONT-90

    • Concurrent operations within cctrl (in [Continuent Tungsten 4.0 Manual]) could generate an exception.

      Issues: CONT-165

    • Within a composite dataservice, failover would not be triggered if the master site was isolated from the relay.

      Issues: CONT-188

3. Continuent Tungsten Release Notes

3.1. Continuent Tungsten 4.0.8 GA (22 May 2017)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0.8 is a bugfix release that addresses a specific memory leak issue in the manager.

Bug Fixes

  • Tungsten Manager

    • A memory leak in the manager could cause the manager to restart after exhausting memory. The issue was most often seen when monitoring the system, where frequent updates of status information occurred.

      Issues: CT-211

3.2. Continuent Tungsten 4.0.7 GA (23 February 2017)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0.7 is a bugfix release that contains a specific correction for the deployment with respect to the use of the Ruby Net::SSH module.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • In previous releases, a client PING command would open a new connection to the MySQL server, execute a SELECT 1, and then return the OK (or failure) to the client. This could introduce additional load and also affect the metrics if statement execution counts and connections were being monitored.

    This has been updated so that the PING request is sent verbatim through to the server by the connector.

    Issues: CT-1

  • The Ruby Net::SSH module, which has been bundled with Continuent Tungsten in past releases, is no longer included. This is due to the wide range of Ruby versions and deployment environments that we support, and differences in the Net::SSH module supported and used with different Ruby versions. In order to simplify the process and ensure that the platforms we support operate correctly, the Net::SSH module has been removed and will now need to be installed before deployment.

    To ensure you have the correct environment before deployment, ensure both the Net::SSH and Net::SCP Ruby modules are installed using gem:

    shell> gem install net-ssh
    shell> gem install net-scp

    Depending on your environment, you may also need to install the io-console module:

    shell> gem install io-console

    If during installation you get an error similar to this:

    mkmf.rb can't find header files for ruby at /usr/lib/ruby/include/ruby.h

    It indicates that you do not have the Ruby development headers installed. Use your native package management interface (such as yum or apt) to install the ruby-dev package. For example:

    shell> sudo apt install ruby-dev

    Issues: CT-88

3.3. Continuent Tungsten 4.0.6 GA (8 December 2016)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0.6 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration (in [Continuent Tungsten 4.0 Manual]).

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration (in [Continuent Tungsten 4.0 Manual]).

    • Use a firewall, such as iptables, to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports (in [Continuent Tungsten 4.0 Manual]).

    • If possible, use authentication and SSL connectivity between hosts to protect your data and provide authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration (in [Continuent Tungsten 4.0 Manual]) for more information.

Improvements, new features and functionality

  • Installation and Deployment

    • The release has been updated to correctly operate with CentOS v7.0 and higher. This was related to the changes made to the operation of the systemd tool used to manage startup and shutdown scripts.

      Issues: CONT-211, CONT-1552

    • When performing a permissions check within tpm (in [Continuent Tungsten 4.0 Manual]), the way password and other information is confirmed has been updated to work correctly with MySQL 5.7. In particular, due to the way passwords are now stored and used, tpm (in [Continuent Tungsten 4.0 Manual]) will confirm the configured user and password by checking that login functions correctly.

      Issues: CONT-1578

    • During installation, tpm (in [Continuent Tungsten 4.0 Manual]) will no longer check the connector credentials if the connector has been configured to operate in bridge mode (in [Continuent Tungsten 4.0 Manual]) and application-specific credentials are not supplied. If the --application-user (in [Continuent Tungsten 4.0 Manual]) and --application-password (in [Continuent Tungsten 4.0 Manual]) options are provided, tpm (in [Continuent Tungsten 4.0 Manual]) will run the same checks even if bridge mode has been selected.

      Issues: CONT-1580

Bug Fixes

  • Installation and Deployment

    • If the cluster is put into maintenance mode, but the coordinator node or the terminal session that put the cluster into maintenance mode fails, the cluster would stay in maintenance mode. The node is now tracked, and if the node goes away for any reason, the cluster will be returned to the mode it was in before being placed into maintenance mode.

      Issues: CONT-1535

    • Running tpm connector (in [Continuent Tungsten 4.0 Manual]) while multi_trepctl (in [Continuent Tungsten 4.0 Manual]) is running on the same host would fail with the error:

      ERROR >> db2 >> There is already another Tungsten installation script running

      Issues: CONT-1572

  • Tungsten Connector

    • In the event of a statement being explicitly requested to execute on a slave and an error occurring, it was possible that the Connector would not retry the statement. The behaviour has been updated to retry and/or reconnect to execute the statement on the slave.

      Issues: CT-22

  • Tungsten Manager

    • It was possible for a race condition within the manager to create a cluster that starts up with a shunned master service.

      Issues: CT-2

    • The generated mysql_read_only script would use the password on the command line, and could execute a query that returned multiple rows. Both could cause problems during execution, particularly for MySQL 5.6 and later.

      Issues: CONT-1570

Continuent Tungsten 4.0.6 Includes the following changes made in Tungsten Replicator 4.0.6

Continuent Tungsten 4.0.6 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.5 release.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 4.0 Manual]) command will now check for the super_read_only setting and warn if this setting is enabled.

    Issues: CONT-1039

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 4.0 Manual]) command will use the authentication_string field for validating passwords.

    Issues: CONT-1058

  • For compatibility with MySQL 5.7, the tpm (in [Tungsten Replicator 4.0 Manual]) command will now ignore the sys schema.

    Issues: CONT-1059

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • When running tpm update (in [Tungsten Replicator 4.0 Manual]), properties set during the initial install could be reset or changed to their default value.

      Issues: CONT-1579

  • Command-line Tools

    • Running multi_trepctl (in [Tungsten Replicator 4.0 Manual]) in a multi-site, multi-master (MSMM) deployment could fail to report all of the running replication processes.

      Issues: CONT-1585

  • Core Replicator

    • There is a limit in the communication protocol for the replicator which limits the number of fragments within a single transaction in the THL to 32768. Although this is not a limit in the THL format, it is a limit in the protocol used to exchange the THL information between replicators.

      The size of this value, and therefore, the maximum number of fragments cannot be increased without creating an incompatible change within the replicator. This creates a limit to the maximum size of a single transaction that can be replicated. Although this figure cannot be altered, the size of each individual fragment can be increased. The default setting is 1,000,000, creating a limit of approximately 32GB.

      To increase the fragment size, set the value of the property replicator.extractor.dbms.transaction_frag_size (in [Tungsten Replicator 4.0 Manual]), as shown in the sketch below. For example, increasing the value to 2,000,000 would increase the maximum THL transaction size to approximately 64GB.

      Care should be taken when increasing this value, as it also increases the amount of memory required to handle the transaction.

      Issues: CONT-1574
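
      For example, a sketch of raising the fragment size to 2,000,000 during configuration with tpm; the staging directory and the service name alpha are assumptions:

      staging-shell> ./tools/tpm configure alpha \
          --property=replicator.extractor.dbms.transaction_frag_size=2000000
      staging-shell> ./tools/tpm update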

  • Filters

    • There is a known issue with the fixmysqlstrings.js filter. When translating BINARY or VARBINARY datatypes into a hex value, if the encoding set for the MySQL and replicator instance is not UTF-8, an implied character set conversion can take place. This leads to a corruption of the information when it is turned into a hex string. This is due to limitations of the internal datatypes available within the JavaScript environment used for the translation.

      Issues: CONT-1508

Improvements, new features and functionality

  • Installation and Deployment

    • Due to changes in the datatypes available in MySQL 5.7 and the supported datatypes within Continuent Tungsten, and coinciding with changes to the way this information is available, the tpm (in [Tungsten Replicator 4.0 Manual]) checks for compatibility may no longer highlight important option changes. For example, virtual columns and JSON columns in MySQL 5.7 are not replicated. During installation, if tpm (in [Tungsten Replicator 4.0 Manual]) identifies that MySQL 5.7 is in use, the following message will be reported:

      IMPORTANT: The replicator is unable to replicate tables that have
      columns defined as type JSON or that utilise VIRTUAL GENERATED values!
      The use of these features will cause replication to fail. If you want
      tpm to check for these add --mysql-allow-intensive-checks to the
      configuration. Be aware that the checks will query the
      information_schema and if you have thousands of tables this may affect
      other queries while the check runs. Otherwise, if you have confirmed
      manually that JSON or VIRTUAL GENERATED columns are not being used,
      you can skip this check by
      adding --skip-validation-check=MySQLUnsopportedDataTypesCheck to your
      configuration.

      To address this issue, when using tpm (in [Tungsten Replicator 4.0 Manual]) during an installation, more intensive checks for tables with unsupported types can be performed. For example, when checking the special column types used in all tables within an existing installation, tpm (in [Tungsten Replicator 4.0 Manual]) must check each table individually. As this can increase the load on the server during installation, tpm (in [Tungsten Replicator 4.0 Manual]) by default does not perform these checks. Instead, these checks can be enabled by using the --mysql-allow-intensive-checks option during configuration. Enabling this option provides a much more detailed check, but may cause the installation process to take longer (see the sketch below).

      Issues: CONT-1551, CONT-1576
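
      For example, a sketch of enabling the intensive checks during configuration from a staging directory; the service name alpha is an assumption:

      staging-shell> ./tools/tpm configure alpha --mysql-allow-intensive-checks=true
      staging-shell> ./tools/tpm update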

  • Core Replicator

    • If the slave THL file ends with an event that was ultimately filtered, and the replicator master and slave roles are then switched, the new master could generate an incorrect sequence number.

      Issues: CONT-1545

Bug Fixes

  • Installation and Deployment

    • The Ruby Net::SSH libraries used by tpm (in [Tungsten Replicator 4.0 Manual]) have been updated to the latest version. This addresses issues with SSH and staging based deployments, including KEX algorithm errors.

      Issues: CT-16

    • The built-in check for InnoDB did not work for MySQL 5.6 and could fail to identify InnoDB support on the MySQL server.

      Issues: CONT-1577

  • Core Replicator

    • Extraction from the MySQL binary log would fail if the binary log event ID is bigger than a Java Int. This could be triggered if a large (greater than 2B) transaction is inserted into the binary log.

      Issues: CONT-1541

3.4. Continuent Tungsten 4.0.5 GA (4 March 2016)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0.5 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration (in [Continuent Tungsten 4.0 Manual]).

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration (in [Continuent Tungsten 4.0 Manual]).

    • Use a firewall, such as iptables, to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports (in [Continuent Tungsten 4.0 Manual]).

    • If possible, use authentication and SSL connectivity between hosts to protect your data and provide authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration (in [Continuent Tungsten 4.0 Manual]) for more information.

Continuent Tungsten 4.0.5 Includes the following changes made in Tungsten Replicator 4.0.5

Continuent Tungsten 4.0.5 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.4 release.

Bug Fixes

  • Core Replicator

    • When incorporating user variables with an empty string as values into an SQL query using statement based replication, the replicator would fail to apply the statement and go offline.

      Issues: CONT-1555

3.5. Continuent Tungsten 4.0.4 GA (24 February 2016)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0.4 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration (in [Continuent Tungsten 4.0 Manual]).

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration (in [Continuent Tungsten 4.0 Manual]).

    • Use a firewall, such as iptables, to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports (in [Continuent Tungsten 4.0 Manual]).

    • If possible, use authentication and SSL connectivity between hosts to protect your data and provide authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration (in [Continuent Tungsten 4.0 Manual]) for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Tungsten Connector

    • The connector has been updated to provide an acknowledgement to the MySQL protocol COM_CHANGE_USER command. This allows client connections that use connection pooling (such as PHP) and the change user command as a verification of an open connection to correctly receive an acknowledgement that the connection is available.

      The option is disabled by default. To enable it, set the treat.com.change.user.as.ping property to true during configuration with tpm (in [Continuent Tungsten 4.0 Manual]), as shown in the sketch below.

      Issues: CONT-1380

      For more information, see Connector Change User as Ping (in [Continuent Tungsten 4.0 Manual]).
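
      For example, a sketch of enabling the property during configuration; passing it via tpm's --property option, the staging directory, and the service name alpha are assumptions:

      staging-shell> ./tools/tpm configure alpha \
          --property=treat.com.change.user.as.ping=true
      staging-shell> ./tools/tpm update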

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Continuent Tungsten 4.0 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

  • Core Replicator

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

  • Tungsten Connector

    • A connector running in bridge mode with auto reconnect enabled could try to reconnect to MySQL and attempt additional writes.

      Issues: CONT-1461

    • Automatic retry of query could fail due to interference of keep alive request while re-executing the query.

      Issues: CONT-1512

    • The Tungsten Connector would sometimes retry connectivity on connections that had been killed. The logic has been updated. The default behavior remains the same:

      • Reconnect closed connections

      • Retry autocommitted reads

      The behavior can be modified by using the --connector-autoreconnect-killed-connections option, as shown in the sketch below. Setting it to false disables the reconnection or retry of a connection outside of a planned switch or automatic failover. The default is true, reconnecting and retrying all connections.

      Issues: CONT-1514
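
      A sketch of disabling the reconnect/retry behavior using the option named above; the staging directory and the service name alpha are assumptions:

      staging-shell> ./tools/tpm configure alpha \
          --connector-autoreconnect-killed-connections=false
      staging-shell> ./tools/tpm update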

  • Tungsten Manager

    • A cluster could go into a panic after a failover if mysqld failed and then immediately became available again, causing multiple masters to exist.

      Issues: CONT-1482

    • When recovering a node that had been marked as a standby (in [Continuent Tungsten 4.0 Manual]), the node would be recovered as a standard slave, not a standby.

      Issues: CONT-1486

    • The cluster would fail to failover if the interface was down on the master.

      Issues: CONT-1537

    • The embedded Drools libraries have been updated to Drools 6.3. This addresses an issue in Drools which could lead to a memory leak.

      Issues: CONT-1547

Continuent Tungsten 4.0.4 Includes the following changes made in Tungsten Replicator 4.0.4

Continuent Tungsten 4.0.4 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.3 release.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • Due to a bug within the Drizzle JDBC driver when communicating with MySQL, using the optimizeRowEvents option could lead to significant memory usage and subsequent failure. To alleviate the problem, disable the option by setting optimizeRowEvents to false. For more information, see Drizzle JDBC Issue 38.

      Issues: CONT-1115

Bug Fixes

  • Core Replicator

    • When events are filtered on a master, and a slave replicator reconnects to the master, it is possible to get the error server does not have seqno expected by client. The replicator has been updated to correctly supply the sequence number during reconnection.

      Issues: CONT-1384, CONT-1525

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

    • In some situations, statements that would be unsafe for parallel execution were not serializing into a single threaded execution properly during the applier phase of the target connection.

      Issues: CONT-1489

    • CSV files generated during batch loading into datawarehouses would be created within a directory structure within the /tmp directory. On long-running replicators, automated processes that would clean up the /tmp directory could delete the files, causing replication to fail temporarily due to the missing directory.

      The location where staging CSV files are created has now been updated. Files are now stored within the $CONTINUENT_HOME/tmp/staging/$SERVICE directory, following the same naming structure. For example, if Continuent Tungsten has been installed in /opt/continuent (in [Tungsten Replicator 4.0 Manual]), then the CSV files for the first active applier channel of the service alpha will be stored in /opt/continuent/tmp/staging/alpha/staging0.

      Issues: CONT-1500

    • The timeout used to read information from the MySQL binary logs has been changed from a fixed period of 120 seconds to a configurable parameter. This can be set by using the --property=replicator.extractor.dbms.binlogReadTimeout=180 (in [Tungsten Replicator 4.0 Manual]) property during configuration with tpm (in [Tungsten Replicator 4.0 Manual]), as shown in the sketch below.

      Issues: CONT-1528
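
      For example, a sketch of raising the timeout to 180 seconds during configuration; the staging directory and the service name alpha are assumptions:

      staging-shell> ./tools/tpm configure alpha \
          --property=replicator.extractor.dbms.binlogReadTimeout=180
      staging-shell> ./tools/tpm update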

    • When reconnecting within a multi-site multi-master deployment, the session level logging of updates would not be configured correctly in the re-opened session.

      Issues: CONT-1544

    • Within an SOR cluster, an isolated relay site would not resume replication correctly.

      Issues: CONT-1549

3.6. Continuent Tungsten 4.0.3 Not Released (NA)

Continuent Tungsten 4.0.3 is a bugfix release that contains critical fixes and improvements.

Due to an internal bug identified shortly before release, Continuent Tungsten 4.0.3 was never released to customers.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration (in [Continuent Tungsten 4.0 Manual]).

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration (in [Continuent Tungsten 4.0 Manual]).

    • Use a firewall, such as iptables, to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports (in [Continuent Tungsten 4.0 Manual]).

    • If possible, use authentication and SSL connectivity between hosts to protect your data and provide authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration (in [Continuent Tungsten 4.0 Manual]) for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Tungsten Connector

    • The connector has been updated to provide an acknowledgement to the MySQL protocol COM_CHANGE_USER command. This allows client connections that use connection pooling (such as PHP) and the change user command as a verification of an open connection to correctly receive an acknowledgement that the connection is available.

      The option is disabled by default. To enable, set the treat.com.change.user.as.ping property to true during configuration with tpm (in [Continuent Tungsten 4.0 Manual]).

      Issues: CONT-1380

      For more information, see Connector Change User as Ping (in [Continuent Tungsten 4.0 Manual]).

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Continuent Tungsten 4.0 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

  • Core Replicator

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

  • Tungsten Connector

    • A connector running in bridge mode with auto reconnect enabled could try to reconnect to MySQL and attempt additional writes.

      Issues: CONT-1461

    • Automatic retry of query could fail due to interference of keep alive request while re-executing the query.

      Issues: CONT-1512

    • The Tungsten Connector would sometimes retry connectivity on connections that had been killed. The logic has been updated. The default behavior remains the same:

      • Reconnect closed connections

      • Retry autocommitted reads

      The behavior can be modified by using the --connector-autoreconnect-killed-connections option. Setting it to false disables the reconnection or retry of a connection outside of a planned switch or automatic failover. The default is true, reconnecting and retrying all connections.

      Issues: CONT-1514

  • Tungsten Manager

    • A cluster could go into a panic after a failover if mysqld failed and then immediately became available again, causing multiple masters to exist.

      Issues: CONT-1482

    • When recovering a node that had been marked as a standby (in [Continuent Tungsten 4.0 Manual]), the node would be recovered as a standard slave, not a standby.

      Issues: CONT-1486

    • The cluster would fail to failover if the interface was down on the master.

      Issues: CONT-1537

Continuent Tungsten 4.0.3 Includes the following changes made in Tungsten Replicator 4.0.3

Continuent Tungsten 4.0.3 is a bugfix release that contains critical fixes and improvements to the Continuent Tungsten 4.0.2 release.

Due to an internal bug identified shortly before release, Continuent Tungsten 4.0.3 was never released to customers.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

      The error is transient and non-specific and deployment should be retried.

      Issues: CONT-1343

  • Core Replicator

    • Due to a bug within the Drizzle JDBC driver when communicating with MySQL, using the optimizeRowEvents option could lead to significant memory usage and subsequent failure. To alleviate the problem, disable the option by setting optimizeRowEvents to false. For more information, see Drizzle JDBC Issue 38.

      Issues: CONT-1115

Bug Fixes

  • Installation and Deployment

    • When validating the existence of MyISAM tables within a MySQL database, tpm (in [Tungsten Replicator 4.0 Manual]) would use an incorrect method for identifying MyISAM tables. This could lead to MyISAM tables not being located, or legitimate system-related MyISAM tables triggering the alert.

      Issues: CONT-938

  • Command-line Tools

    • The tungsten_provision_thl (in [Tungsten Replicator 4.0 Manual]) command would not use the user specified --java-file-encoding (in [Tungsten Replicator 4.0 Manual]) setting, which could lead to data corruption during provisioning.

      Issues: CONT-1479

  • Core Replicator

    • A master replicator could fail to finish extracting a fragmented transaction if disconnected during processing.

      Issues: CONT-1163

    • A slave replicator could fail to come ONLINE (in [Tungsten Replicator 6.0 Manual]) if the last THL file is empty.

      Issues: CONT-1164

    • Binary data contained within an SQL variable and inserted into a table would not be converted correctly during replication.

      Issues: CONT-1412

    • The replicator incorrectly assigns LOAD DATA statements to the #UNKNOWN shard. This can happen when the entire statement is longer than 200 characters.

      Issues: CONT-1431

    • In some situations, statements that would be unsafe for parallel execution were not serializing into a single threaded execution properly during the applier phase of the target connection.

      Issues: CONT-1489

    • CSV files generated during batch loading into datawarehouses would be created within a directory structure within the /tmp directory. On long-running replicators, automated processes that would clean up the /tmp directory could delete the files, causing replication to fail temporarily due to the missing directory.

      The location where staging CSV files are created has now been updated. Files are now stored within the $CONTINUENT_HOME/tmp/staging/$SERVICE directory, following the same naming structure. For example, if Continuent Tungsten has been installed in /opt/continuent (in [Tungsten Replicator 4.0 Manual]), then the CSV files for the first active applier channel of the service alpha will be stored in /opt/continuent/tmp/staging/alpha/staging0.

      Issues: CONT-1500

  • Filters

    • The pkey (in [Tungsten Replicator 6.0 Manual]) filter could force table metadata to be updated when the update was not required.

      Issues: CONT-1162

    • When using the dropcolumn (in [Tungsten Replicator 6.0 Manual]) filter in combination with the colnames (in [Tungsten Replicator 6.0 Manual]) filter, an issue could arise where differences between the incoming schema and the target schema could result in incorrect SQL statements. The solution is to reconfigure the colnames (in [Tungsten Replicator 6.0 Manual]) filter on the slave not to extract the schema information from the database, but instead to use the incoming data from the source database and the translated THL.

      Issues: CONT-1495

3.7. Continuent Tungsten 4.0.2 GA (1 October 2015)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0.2 is a bugfix release that contains critical fixes and improvements.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration (in [Continuent Tungsten 4.0 Manual]).

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration (in [Continuent Tungsten 4.0 Manual]).

    • Use a firewall, such as iptables, to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports (in [Continuent Tungsten 4.0 Manual]).

    • If possible, use authentication and SSL connectivity between hosts to protect your data and provide authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration (in [Continuent Tungsten 4.0 Manual]) for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Installation and Deployment

    • The tpm (in [Continuent Tungsten 4.0 Manual]) script can now properly update a master/slave cluster to a composite (SOR) cluster without intervention. Follow the instructions for tpm upgrade (in [Continuent Tungsten 4.0 Manual]) and add the --replace-release option. The extra option is not required if you are upgrading to a new version.

      Issues: CONT-47
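
      A hedged sketch of adding the option described above; this assumes a staging-directory deployment where the upgrade is applied with tpm update from the directory containing the new software (the exact command form may differ in your deployment):

        ./tools/tpm update --replace-release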

    • The tpm (in [Continuent Tungsten 4.0 Manual]) script will display a warning if NTP does not appear to be running.

      Issues: CONT-110

Bug Fixes

  • Installation and Deployment

    • The tpm (in [Continuent Tungsten 4.0 Manual]) script could lock tables trying to inspect information_schema for MyISAM tables. The script will now look for MyISAM files in the datadir if possible.

      Issues: CONT-938

  • Core Replicator

    • The replicator could incorrectly parse binary logs that start with a timestamp on 1/1/1970 and cause errors on systems that use STRICT_TRANS_TABLES.

      Issues: CONT-869

    • The replicator could hang when transitioning from ONLINE (in [Tungsten Replicator 6.0 Manual]) to OFFLINE:ERROR (in [Tungsten Replicator 6.0 Manual]). This could happen during the first attempt or following multiple repeated attempts.

      Issues: CONT-1055

  • Tungsten Connector

    • The connector would incorrectly connect to a master when processing the BEGIN command on a read-only connection.

      Issues: CONT-895

    • The connector would incorrectly parse statements that begin with use database;....

      Issues: CONT-949

    • The connector might not forward all request errors to the application, which would in this case wait indefinitely for a response.

      Issues: CONT-975

    • The connector could lose track of the cluster policy and cause the application to hang if it doesn't communicate with a manager.

      Issues: CONT-999

    • The mechanism that keeps idle connections active could become hung by long running transactions.

      Issues: CONT-1047

  • Tungsten Manager

    • The connector could temporarily stop processing requests during the upgrade of an SOR deployment or restarting all managers for a dataservice.

      Issues: CONT-1012

    • The failure of multiple slave replicators could result in only one replicator being put back ONLINE (in [Tungsten Replicator 6.0 Manual]).

      Issues: CONT-1051

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Core Replicator

    • The replicator can hit a MySQL lock wait timeout when processing large transactions.

      Issues: CONT-1106

    • The replicator can run into OutOfMemory when handling very large Row-Based replication events. This can be avoided by setting --optimize-row-events=false (in [Continuent Tungsten 4.0 Manual]).

      Issues: CONT-1115
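
      For example (a sketch only; the service name alpha is hypothetical), the workaround above could be applied with:

        tpm update alpha --optimize-row-events=false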

  • Tungsten Manager

    • The manager fails to read security.properties during startup. If this occurs, the manager will print a warning in tmsvc.log (in [Continuent Tungsten 4.0 Manual]).

      Issues: CONT-1070

3.8. Continuent Tungsten 4.0.1 GA (20 July 2015)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0.1 is a bugfix release that contains critical fixes and improvements.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration (in [Continuent Tungsten 4.0 Manual]).

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration (in [Continuent Tungsten 4.0 Manual]).

    • Use a firewall, such as iptables, to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports (in [Continuent Tungsten 4.0 Manual]).

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration (in [Continuent Tungsten 4.0 Manual]) for more information.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Core Replicator

    • EBS snapshots have been updated to support MySQL table locks during operation.

      Issues: CONT-89

  • Tungsten Manager

    • The manager would incorrectly shun the entire remote service when the site appeared to be unreachable, shunning the remote composite datasource including the physical datasources. This has been updated so that only the composite datasource, and not the underlying physical datasources, is shunned.

      Issues: CONT-199

    • The manager would not put relay replicators ONLINE (in [Tungsten Replicator 6.0 Manual]) after being restarted.

      Issues: CONT-545

Bug Fixes

  • Core Replicator

    • When running the trepctl reset (in [Continuent Tungsten 4.0 Manual]) command on a master, DDL statements could be placed into the binary log that would delete corresponding management tables within slaves. Binary logging is now suppressed for these operations.

      Issues: CONT-533

    • The timezone information for the trep_commit_seqno (in [Tungsten Replicator 6.0 Manual]) table would be incorrect when using parallel replication with a server timezone other than GMT.

      Issues: CONT-621

3.9. Continuent Tungsten 4.0.0 GA (17 April 2015)

Version End of Life. 31 October 2018

Continuent Tungsten 4.0 is a major release designed to provide integration between Continuent Tungsten 4.0 and Tungsten Replicator 4.0, providing MySQL clustering support together with replication for MySQL and Oracle, and out to data warehouses such as HP Vertica, Amazon Redshift, and Hadoop.

For more information on replicating data out of a cluster, see Replicating Data from a Cluster into MySQL (in [Continuent Tungsten 4.0 Manual]).

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • For security purposes you should ensure that you secure the following areas of your deployment:

    • Ensure that you create a unique installation and deployment user, such as tungsten, and set the correct file permissions on installed directories. See Directory Locations and Configuration (in [Continuent Tungsten 4.0 Manual]).

    • When using ssh and/or SSL, ensure that the ssh key or certificates are suitably protected. See SSH Configuration (in [Continuent Tungsten 4.0 Manual]).

    • Use a firewall, such as iptables, to protect the network ports that you need to use. The best solution is to ensure that only known hosts can connect to the required ports for Continuent Tungsten. For more information on the network ports required for Continuent Tungsten operation, see Network Ports (in [Continuent Tungsten 4.0 Manual]).

    • If possible, use authentication and SSL connectivity between hosts to protect your data and authorisation for the tools used in your deployment. See Deploying SSL Secured Replication and Administration (in [Continuent Tungsten 4.0 Manual]) for more information.

  • When using read-only connectors, and making use of explicit transactions (i.e. with autocommit disabled), queries may be routed to the master, rather than a slave.

  • Under certain circumstances, the rsync process can randomly fail during the installation/deployment process when using the staging method of deployment. The error code returned by rsync may be 12 or 23.

    The error is transient and non-specific and deployment should be retried.

    Issues: CONT-1343

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Continuent Tungsten 4.0 Manual]) now correctly checks the functionality of the 'echo' protocol during validation.

      Issues: CONT-90

    • tpm update (in [Continuent Tungsten 4.0 Manual]) now forces a new directory under /opt/continuent/releases if components are being added or removed.

      Issues: CONT-155

    • tpm now checks that the repl-thl-log-retention configuration setting specifies a valid unit.

      Issues: CONT-177

  • Tungsten Connector

    • An unnecessary reverse DNS call at connection time, which could drastically affect performance, has been removed.

      Issues: CONT-86

    • Minimum and maximum values have been added to the Connector statistics.

      Issues: CONT-107

  • Tungsten Manager

    • Managers are now started serially per dataservice, rather than globally, which prevents a race condition.

      Issues: CONT-27

    • Add manager status command to cctrl (in [Continuent Tungsten 4.0 Manual]).

      Issues: CONT-168

Bug Fixes

  • Installation and Deployment

    • Installing an RPM package can fail if the mysql user doesn't exist.

      Issues: CONT-43

    • Update tpm (in [Continuent Tungsten 4.0 Manual]) to force the replication timezone to GMT.

      Issues: CONT-85

  • Command-line Tools

    • tungsten_set_position (in [Continuent Tungsten 4.0 Manual]) previously did not work within SOR deployments.

      Issues: CONT-24

    • The dsctl set (in [Continuent Tungsten 4.0 Manual]) command does not work properly for events with multiple fragments.

      Issues: CONT-194

  • Tungsten Connector

    • The MySQL Connector/J prerequisite has now been removed from all installations.

      Issues: CONT-48

    • The Connector could raise a Null Pointer Exception after upgrading from Release Notes 2.0.5.

      Issues: CONT-196

  • Tungsten Manager

    • The connector no longer allows a data source role change without an intermediate offline state.

      Issues: CONT-23

    • Isolated relay site does not resume replication correctly.

      Issues: CONT-26

    • The Java library call InetAddress.isReachable() can produce false positives.

      Issues: CONT-53

    • A switch operation is now rolled back if the connector is unable to apply the change.

      Issues: CONT-105

    • The threshold for checking for manager memory leaks was too low.

      Issues: CONT-161

    • The 'last man standing' logic within the manager fails to identify the correct host.

      Issues: CONT-163

    • The manager would not set a datasource to the failed state when the host was isolated via ifdown.

      Issues: CONT-164

    • Attempting concurrent operations in cctrl (in [Continuent Tungsten 4.0 Manual]) generates an exception.

      Issues: CONT-165

    • The manager would hang on a JMX call if the network interface that it uses was taken down with ifdown.

      Issues: CONT-166

    • Non-isolated nodes would see the isolated node as online.

      Issues: CONT-169

    • The rule that checks for node liveness was firing too frequently.

      Issues: CONT-170

    • Using the recover (in [Continuent Tungsten 4.0 Manual]) command caused a gratuitous change of policy.

      Issues: CONT-171

    • After isolation was removed from the master site, it was not recovered to the online state.

      Issues: CONT-173

    • The monitor would attempt to query a non-existent table on the witness host.

      Issues: CONT-174

    • Composite failover does not succeed - replication handshake failure.

      Issues: CONT-175

    • In a composite dataservice, failover does not happen if the master site is isolated from the relay.

      Issues: CONT-188

    • Composite master does not return to online after a failover within the physical master service.

      Issues: CONT-403

3.10. Continuent Tungsten 2.2.0 NYR (Not Yet Released)

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Within composite clusters, TCP/IP port 7 connectivity is now required between managers on each site to confirm availability.

Bug Fixes

  • Installation and Deployment

    • To ensure that the correct number of managers and witnesses is configured within the system, tpm has been updated to check for and identify potential issues with the configuration. The installation checks operate as follows:

      • If there are an even number of members in the cluster (i.e. provided to --members option):

        • If witnesses are provided through --witnesses, continue normally.

        • If witnesses are not provided through --witnesses, an error is thrown and installation stops.

      • If there are an odd number of members in the cluster (i.e. provided to --members option):

        • If witnesses are provided through --witnesses, a warning is raised and the witness declaration is ignored.

        • If witnesses are not provided through --witnesses, installation continues as normal.

      The number of members is calculated as follows:

      • Explicitly through the --members option.

      • Implied, when --active-witnesses=false, from the list of hosts declared in --master and --slaves.

      • Implied, when --active-witnesses=true, from the list of hosts declared in --master, --slaves, and --witnesses.

      Issues: TUC-2105
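
      A hedged illustration of these checks (the host names below are hypothetical tpm configuration fragments, not complete commands):

        --members=host1,host2,host3,host4 --witnesses=host5    # even member count: witnesses required
        --members=host1,host2,host3                             # odd member count: no witnesses needed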

    • If ping traffic was denied during installation, then installation could hang while the ping check was performed. A timeout has now been added to ensure that the operation completes successfully.

      Issues: TUC-2107

  • Backup and Restore

    • When using xtrabackup 2.2.x, backups would fail if the innodb_log_file_size option within my.cnf was not specified. tpm has been updated to check the value and existence of this option during installation and to provide a warning if it is not set, or set to the default.

      Issues: TUC-2224

  • Tungsten Connector

    • The connector will now re-connect to a MySQL server in the event that an opened connection is found closed between two requests (generally following a wait_timeout expiration).

      Issues: TUC-2163

    • When initially starting up, the connector would open a connection to the configured master to retrieve configuration information, but the connection would never be closed, leading to open unused connections.

      Issues: TUC-2166

    • The cluster status output by the tungsten cluster status command within a multi-site cluster would fail to display the correct states of different data sources when an entire data service was offline.

      Issues: TUC-2185

    • When the connector has been configured into read-only mode, for example using --application-readonly-port=9999, the connector would mistakenly route statements starting set autocommit=0 to the master, instead of being routed to a slave.

      Issues: TUC-2198

    • When operating in bridge mode, the connector would retain the client connection when the server had closed the connection. The connector has been updated to close all client connections when the corresponding server connection is closed.

      Issues: TUC-2231

  • Tungsten Manager

    • The manager could enter a situation where, after switching a relay on one physical service, the remote site relay was incorrectly reconfigured to point at the new relay. This has been corrected so that reconfiguration no longer occurs in this situation.

      Issues: TUC-2164

    • Recovery from a composite cluster failover could create a composite split-brain situation.

      Issues: TUC-2178

    • A statement of record (SOR) cluster would be unable to recover a failed dataservice.

      Issues: TUC-2194

    • A composite datasource would not go into failsafe mode if all the managers within the cluster were stopped.

      Issues: TUC-2206

    • If a composite datasource becomes isolated due to a network partition, the failed datasource would not go into failsafe mode correctly.

      Issues: TUC-2207

    • If a witness became isolated from the rest of the cluster, the rules would not exclude the failed witness and this could lead to memory exhaustion.

      Issues: TUC-2214

  • Documentation

    • The descriptions and definitions of the archive and standby roles have been clarified in the documentation.

      For more information, see Replicator Roles (in [Tungsten Replicator 6.0 Manual]).

    • The documentation for the recovery of a multi-site multi-master installation has been updated to provide more information on the recovery process.

      Issues: TUC-2175

      For more information, see Resetting a single dataservice (in [Tungsten Clustering for MySQL 5.3 Manual]).

3.11. Continuent Tungsten 2.0.5 GA (24 Dec 2014)

Version End of Life. 31 October 2018

Continuent Tungsten 2.0.5 is a bugfix release that contains critical improvements to the handling of times, dates, and timestamp values between servers, including during daylight savings time switches.

Improvements, new features and functionality

  • Installation and Deployment

    • An issue was discovered in the way different date and time values were extracted, stored in THL, and applied into target databases. The issue was related to the way the values were stored; the data was not normalized within Continuent Tungsten during replication, particularly if different timezones were used across the replication deployment.

      Examples of the behaviour include:

      • MySQL converts TIMESTAMP values in statements to UTC. Tungsten did not replicate the master time zone, which meant that replicated statements would generate different TIMESTAMP values when replicated to a server with a different time zone from the master.

      • MySQL TIMESTAMP values are stored as UTC, which means that row changes are extracted in UTC. Tungsten did not set the Java VM or MySQL session time zone to UTC when applying such changes, which could result in inconsistent values being applied to replicas.

      • Changes between standard and daylight savings time (DST) result in a short period in which master DBMS servers have a different time zone from replicas. This resulted in errors in applying time-related data generated at the time of the switch.

      • Heterogeneous replication, for example from relational DBMS like MySQL to data warehouses, would result in unexpected conversions to time-related data, again due to inconsistencies in time zones.

      The replication has now been updated to normalize date and time values into UTC throughout the replication topology, including within the wrapper Java processes, databases and when storing the information in THL.

      • Replicator processes now default to UTC internally by setting the Java VM default time zone to UTC. This default can be changed by setting the replicator.time_zone property in the replicator services.properties file, but this is not recommended other than for problem diagnosis or specialized testing (a sketch appears after this list).

      • Replicators store a time zone on statements and row changes extracted from MySQL.

      • Replicators use UTC as the session time zone when applying to MySQL replicas.

      • Replicators similarly default to UTC when applying transactions to data warehouses like Hadoop, Vertica, or Amazon Redshift.

      • The thl (in [Continuent Tungsten 2.0 Manual]) utility prints time-related data using the default GMT time zone. This can be altered using the -timezone (in [Continuent Tungsten 2.0 Manual]) option.
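
      A hedged sketch relating to the first and last points above (the values shown are illustrative only, and changing the replicator default is not recommended):

        # services.properties (sketch only):
        replicator.time_zone=GMT
        # thl utility (sketch only): print time-related data in another zone
        thl list -timezone America/New_York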

      Best Practices

      We recommend the following steps to ensure successful replication of time-related data.

      • Standardize all DBMS server and host time zones to UTC. This minimizes time zone inconsistencies between applications and data stores. The recommendation is particularly important when replicating between different DBMS types, such as MySQL to Hadoop.

      • Use the default time zone settings for Tungsten replicator. Do not change the time zones unless specifically recommended by VMware support.

      • If you cannot standardize on UTC at least ensure that time zones are set consistently on all hosts and applications.

      Arbitrary time zone settings create a number of corner cases for database management beyond replication. Standardizing on UTC helps minimize them, hence is strongly recommended.

      Upgrade from Older Replicator Versions

      New Tungsten replicators tag THL records with an option to show that the transaction was extracted from a time zone-aware replicator. If a replicator sees that this property is not available, it will automatically switch to the older behavior when applying such transactions to MySQL replicas. This ensures that there is a simple process to upgrade from older replicator versions, which is especially important for Continuent Tungsten clusters.

      There are two ways to upgrade a replication topology that extracts from MySQL to the new, time zone-aware behavior.

      • Put the master replicator offline, wait for slaves to catch up fully, then upgrade all replicators at once.

      • Upgrade slave replicators first, then upgrade the master. If the replicators are running in a Continuent Tungsten cluster, you must put the cluster in maintenance mode during the upgrade to prevent master failover.

      Important

      You should not upgrade a master Tungsten Replicator before the slave replicas. This can generate transactions that may not be correctly applied by the slaves, since they are not time zone-aware.

      For more information, see Understanding Replication of Date/Time Values (in [Continuent Tungsten 2.0 Manual]).

3.12. Continuent Tungsten 2.0.4 GA (9 Sep 2014)

Version End of Life. 31 October 2018

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

We recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent are aware of issues within Java 6 that cause memory leaks which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Improvements, new features and functionality

  • Tungsten Manager

    • Tungsten Manager: Improved monitoring fault-tolerance

      Under normal operating conditions, the Tungsten Manager on each DB server host will monitor the local Tungsten Replicator and the database server running on that host and relay the monitoring information thus collected to the other Tungsten Managers in the cluster. In previous releases, Continuent Tungsten was able to continue to monitor database servers even if a manager on a given DB server node was not running.

      With this release, this functionality has been generalized to handle the monitoring of both database servers and Tungsten replication such that any time a Tungsten Manager is not running on a given DB server host, the remaining Tungsten Managers in the cluster will take over the monitoring activities for both database servers and Tungsten Replicators until the manager on that host resumes operations. This activity takes place automatically and does not require any special configuration or intervention from an administrator.

      The new functionality means that if you have configured Tungsten to fence replication failures and stops, and you stop all Tungsten services on a given node, the rest of the cluster will respond by fencing the associated data source to an OFFLINE (in [Tungsten Replicator 6.0 Manual]) or FAILED (in [Tungsten Clustering 6.0 Manual]) state.

      Full recovery of a failed node requires that a Tungsten Manager be running on the node.

    • Tungsten Connector/Tungsten Manager: Full support for 'relative latency'

      Support for the use and display of the relativeLatency (in [Continuent Tungsten 2.0 Manual]) has been expanded and improved. By default, absolute latency is used by the cluster to determine the configuration.

      When relative latency is used, the difference between the last commit time and the current time is displayed. This will show an increasing latency even on long running transactions, or in the event of a stalled replicator. To enable relative latency, use the --use-relative-latency=true (in [Continuent Tungsten 2.0 Manual]) option to tpm (in [Continuent Tungsten 2.0 Manual]) during configuration.

      The following changes to the operation of Continuent Tungsten have been added to this release when the use of relative latency is enabled:

      • The output of SHOW SLAVE STATUS has been updated to show the Seconds_Behind_Master value.

      • cctrl (in [Continuent Tungsten 2.0 Manual]) will output a new field, relative, showing the relative latency value.

      • The Tungsten Connector will use the value when the maxAppliedLatency option is used in the connection string to determine whether to route a connection to a master or a slave.

      For more information, see Latency or Relative Latency Display (in [Continuent Tungsten 2.0 Manual]).
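
      A hedged example of enabling this behaviour at configuration time (the service name alpha is hypothetical):

        tpm update alpha --use-relative-latency=true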

    • Tungsten Manager: Automated Data Source Fencing Due to Replication Faults

      Continuent Tungsten can now be configured to effectively isolate data sources for which replication has stopped or exhibits an error condition. See the updated documentation on Replicator Fencing (in [Continuent Tungsten 2.0 Manual]) for further information.

      Issues: TUC-2240

      For more information, see Replicator Fencing (in [Continuent Tungsten 2.0 Manual]).

Bug Fixes

  • Installation and Deployment

    • The tpm (in [Continuent Tungsten 2.0 Manual]) command has been updated to support updated fencing mechanisms.

      Issues: TUC-2245

    • During an upgrade procedure, the process would mistake active witnesses for passive ones.

      Issues: TUC-2280

    • During an update using tpm (in [Continuent Tungsten 2.0 Manual]), the replicator could end up in the OFFLINE (in [Tungsten Replicator 6.0 Manual]) state.

      Issues: TUC-2282

    • When performing an update, particularly in environments such as Multi-Site, Multi-Master, the tpm (in [Continuent Tungsten 2.0 Manual]) command could fail to update the cluster correctly. This could leave the cluster in a diminished state, or fail to upgrade all the components. The tpm (in [Continuent Tungsten 2.0 Manual]) command has been updated as follows:

      • tpm (in [Continuent Tungsten 2.0 Manual]) will no longer attempt to upgrade a Tungsten Replicator with a Continuent Tungsten distribution, and vice versa.

      • When installing Tungsten Replicator, and the $CONTINUENT_PROFILES variable has been set, tpm (in [Continuent Tungsten 2.0 Manual]) will fail, warning that the $REPLICATOR_PROFILES variable should be set instead.

      Issues: TUC-2288, TUC-2292

  • Tungsten Connector

    • When changing connector properties and reloading the configuration, the updated values would not be applied.

    • When using mysqldump with option --flush-logs, the connector would fail with an Unsupported command error.

      Issues: TUC-2209

    • When the option showRelativeSlaveStatus=true had been specified, the connector would not use relative latency when checking latency for read/write splitting; the appliedLatency (in [Continuent Tungsten 2.0 Manual]) figure would be used instead.

      Issues: TUC-2243

    • The connection.close.idle.timeout would fail to be taken into account when the connector was running in bridge mode.

      Issues: TUC-2255

    • When the connector was running in bridge mode, and the connection was killed, the connections would not be correctly closed.

      Issues: TUC-2261

    • The Connector SmartScale would fail to round-robin through slaves when there was no discernible load on the cluster to provide load performance metrics.

      Issues: TUC-2272

    • SmartScale would wrongly load balance connections to a slave even during a switch operation.

      Issues: TUC-2273

    • The connector would update the high water setting before and after a write connection was used, creating additional overhead for connections and generating additional query load.

      Issues: TUC-2277

    • When using SmartScale, automatic sessions could be unnecessarily closed upon disconnection, causing slaves to miss valid queries.

      Issues: TUC-2286

  • Tungsten Manager

    • The checker.tungstenreplicator.properties and checker.mysqlserver.properties files would fail to be created correctly on active witnesses.

      Issues: TUC-2250, TUC-2251

    • The manager would fail to show the correct status for the replicator when getting status information by proxy.

      Issues: TUC-2254

    • Under some conditions, the manager would shut down the router gateway due to an invalid membership alarm but would not restart the connector. This would cause all new connections to hang indefinitely.

      Issues: TUC-2278

    • When performing a reset of the replicator service, recovery of the failed service would fail.

      Issues: TUC-2290

  • Other Issues

    • The check_tungsten.sh script could fail to locate the tungsten.cfg or read the correct values from the file.

      Issues: TUC-2263

3.13. Continuent Tungsten 2.0.3 GA (1 Aug 2014)

Version End of Life. 31 October 2018

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

We recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent are aware of issues within Java 6 that cause memory leaks which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Within composite clusters, TCP/IP port 7 connectivity is now required between managers on each site to confirm availability.

Known Issue

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • The default behavior of the manager is to not fence a datasource for which a replicator has stopped or gone into an error state. This was implemented to prevent reducing the overall availability of the deployed service. There are cases and deployments where clusters should not operate with replicators in stopped or error states. This can be configured by changing the following properties to true according to the master or slave role requirements:

    policy.fence.slaveReplicator=false 
    policy.fence.masterReplicator=false

    If they are set to true, the manager should fence the datasource by setting it to a 'failed' state. When this happens, and the datasource is a master, failover will occur. If the datasource is a slave, the datasource will just stay in the failed state indefinitely or until the replicator is back in the online state, in which case the datasource will be recovered to online.

    At present, the settings of these properties are not honored.

    Issues: TUC-2241

Improvements, new features and functionality

  • Tungsten Connector

    • The default buffer sizes for the Using Bridge Mode (in [Continuent Tungsten 2.0 Manual]) have been updated to 262144 (256KB).

Bug Fixes

  • Installation and Deployment

    • To ensure that the correct number of managers and witnesses is configured within the system, tpm (in [Continuent Tungsten 2.0 Manual]) has been updated to check for and identify potential issues with the configuration. The installation checks operate as follows:

      • If there are an even number of members in the cluster (i.e. provided to --members (in [Continuent Tungsten 2.0 Manual]) option):

        • If witnesses are provided through --witnesses (in [Continuent Tungsten 2.0 Manual]), continue normally.

        • If witnesses are not provided through --witnesses (in [Continuent Tungsten 2.0 Manual]), an error is thrown and installation stops.

      • If there are an odd number of members in the cluster (i.e. provided to --members (in [Continuent Tungsten 2.0 Manual]) option):

        • If witnesses are provided through --witnesses (in [Continuent Tungsten 2.0 Manual]), a warning is raised and the witness declaration is ignored.

        • If witnesses are not provided through --witnesses (in [Continuent Tungsten 2.0 Manual]), installation continues as normal.

      The number of members is calculated as follows:

      • Explicitly through the --members (in [Continuent Tungsten 2.0 Manual]) option.

      • Implied, when --active-witnesses=false (in [Continuent Tungsten 2.0 Manual]), from the list of hosts declared in --master (in [Continuent Tungsten 2.0 Manual]) and --slaves (in [Continuent Tungsten 2.0 Manual]).

      • Implied, when --active-witnesses=true (in [Continuent Tungsten 2.0 Manual]), from the list of hosts declared in --master (in [Continuent Tungsten 2.0 Manual]), --slaves (in [Continuent Tungsten 2.0 Manual]), and --witnesses (in [Continuent Tungsten 2.0 Manual]).

      Issues: TUC-2105

    • If ping traffic was denied during installation, then installation could hang while the ping check was performed. A timeout has now been added to ensure that the operation completes successfully.

      Issues: TUC-2107

  • Backup and Restore

    • When using xtrabackup 2.2.x, backups would fail if the innodb_log_file_size option within my.cnf was not specified. tpm (in [Continuent Tungsten 2.0 Manual]) has been updated to check the value and existence of this option during installation and to provide a warning if it is not set, or set to the default.

      Issues: TUC-2224

  • Tungsten Connector

    • The connector will now re-connect to a MySQL server in the event that an opened connection is found closed between two requests (generally following a wait_timeout expiration).

      Issues: TUC-2163

    • When initially starting up, the connector would open a connection to the configured master to retrieve configuration information, but the connection would never be closed, leading to open unused connections.

      Issues: TUC-2166

    • The cluster status output by the tungsten cluster status (in [Continuent Tungsten 2.0 Manual]) command within a multi-site cluster would fail to display the correct states of different data sources when an entire data service was offline.

      Issues: TUC-2185

    • When the connector has been configured into read-only mode, for example using --application-readonly-port=9999 (in [Continuent Tungsten 2.0 Manual]), the connector would mistakenly route statements starting set autocommit=0 to the master, instead of being routed to a slave.

      Issues: TUC-2198

    • When operating in bridge mode, the connector would retain the client connection when the server had closed the connection. The connector has been updated to close all client connections when the corresponding server connection is closed.

      Issues: TUC-2231

  • Tungsten Manager

    • The manager could enter a situation where, after switching a relay on one physical service, the remote site relay was incorrectly reconfigured to point at the new relay. This has been corrected so that reconfiguration no longer occurs in this situation.

      Issues: TUC-2164

    • Recovery from a composite cluster failover could create a composite split-brain situation.

      Issues: TUC-2178

    • A statement of record (SOR) cluster would be unable to recover a failed dataservice.

      Issues: TUC-2194

    • A composite datasource would not go into failsafe mode if all the managers within the cluster were stopped.

      Issues: TUC-2206

    • If a composite datasource becomes isolated due to a network partition, the failed datasource would not go into failsafe mode correctly.

      Issues: TUC-2207

    • If a witness became isolated from the rest of the cluster, the rules would not exclude the failed witness and this could lead to memory exhaustion.

      Issues: TUC-2214

  • Documentation

    • The descriptions and definitions of the archive (in [Continuent Tungsten 2.0 Manual]) and standby (in [Continuent Tungsten 2.0 Manual]) roles have been clarified in the documentation.

      For more information, see Understanding Datasource Roles (in [Continuent Tungsten 2.0 Manual]).

    • The documentation for the recovery of a multi-site multi-master installation has been updated to provide more information on the recovery process.

      Issues: TUC-2175

      For more information, see Resetting a single dataservice (in [Continuent Tungsten 2.0 Manual]).

3.14. Continuent Tungsten 2.0.2 GA (19 May 2014)

Version End of Life. 31 October 2018

This is a recommended release for all customers as it contains important updates and improvements to the stability of the manager component, specifically with respect to stalls and memory usage that would cause manager failures.

In addition, we recommend Java 7 for all Continuent Tungsten 2.0 installations. Continuent are aware of issues within Java 6 that cause memory leaks which may lead to excessive memory usage within the manager. This can cause the manager to run out of memory and restart, without affecting the operation of the dataservice. These problems do not exist within Java 7.

Improvements, new features and functionality

  • Installation and Deployment

    • The default Java garbage collection (GC) used within the Connector, Replicator and Manager has been reconfigured to use parallel garbage collection. The default GC could produce CPU starvation issues during execution.

      Issues: TUC-2101

  • Tungsten Connector

    • Keep-alive functionality has been added to the Connector. When enabled, connections to the database server are kept alive, even when there is no client activity.

      Issues: TUC-2103

      For more information, see Connector Keepalive (in [Continuent Tungsten 2.0 Manual]).

Bug Fixes

  • Tungsten Manager

    • The embedded JGroups service, which manages the communication and management of the manager service, has been updated to the latest version. This improves the stability of the service, and removes some of the memory leaks causing manager stalls.

    • A number of issues in the memory management of the Manager service, particularly with respect to the included JGroups support, have been rectified. These issues caused the manager to use increasing amounts of memory, which could lead the manager to stall.

Continuent Tungsten 2.0.2 includes the following changes made in Tungsten Replicator 2.2.1

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The tpm (in [Tungsten Replicator 2.2 Manual]) tool and configuration have been updated to support both older Oracle SIDs and the new JDBC URL format for Oracle service IDs. When configuring an Oracle service, use --datasource-oracle-sid (in [Tungsten Replicator 2.2 Manual]) for older service specifications, and --datasource-oracle-service (in [Tungsten Replicator 2.2 Manual]) for newer JDBC URL installations.

    Issues: 817
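
    As a hedged illustration (the SID and service names are hypothetical), the two forms might be configured as:

      tpm configure alpha --datasource-oracle-sid=ORCL
      tpm configure alpha --datasource-oracle-service=ORCL.example.com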

Improvements, new features and functionality

  • Installation and Deployment

    • When using the --enable-heterogeneous-master (in [Tungsten Replicator 2.2 Manual]) option to tpm (in [Tungsten Replicator 2.2 Manual]), the MySQL service is now checked to ensure that ROW-based replication has been enabled.

      Issues: 834

  • Command-line Tools

    • The thl (in [Tungsten Replicator 2.2 Manual]) command has been expanded to support an additional output format, -specs (in [Tungsten Replicator 2.2 Manual]), which adds the field specifications for row-based THL output.

      Issues: 801

      For more information, see thl list -specs Command (in [Tungsten Replicator 2.2 Manual]).
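
      For example (a sketch; the sequence number is arbitrary), the field specifications for a single event could be listed with:

        thl list -seqno 45 -specs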

  • Oracle Replication

    • Templates have been added to the suite of DDL translation templates supported by ddlscan (in [Tungsten Replicator 2.2 Manual]) to support Oracle to MySQL replication. Two templates are included:

      • ddl-oracle-mysql provides standard translation of DDL when replicating from Oracle to MySQL

      • ddl-oracle-mysql-pk-only provides standard translation of DDL including automatic selection of a primary key from the available unique indexes if no explicit primary key is defined within Oracle DDL when replicating to MySQL

      Issues: 787

    • ddlscan (in [Tungsten Replicator 2.2 Manual]) has been updated to support parsing of a file containing a list of tables to be parsed for DDL information. The file should be formatted as a CSV file, but only the first argument, table name, will be extracted. Lines starting with a # (hash) character are ignored.

      The file is in the same format as used by setupCDC.sh (in [Tungsten Replicator 2.2 Manual]).

      To use the file, supply the -tableFile (in [Tungsten Replicator 2.2 Manual]) parameter to the command.

      Issues: 832
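
      A hedged sketch combining the new template and table file support (connection details, the .vm suffix, and file names are illustrative assumptions):

        ddlscan -user tungsten -pass secret -url jdbc:oracle:thin:@//oraclehost:1521/ORCL \
            -template ddl-oracle-mysql.vm -db DEMO -tableFile tables.csv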

  • Core Replicator

    • The replicator has been updated to support autorecovery from transient failures that would normally cause the replicator to go OFFLINE (in [Tungsten Replicator 6.0 Manual]) while in either the ONLINE (in [Tungsten Replicator 6.0 Manual]) or GOING-ONLINE:SYNCHRONIZING (in [Tungsten Replicator 6.0 Manual]) state. This enables the replicator to recover from errors such as MySQL restarts, or transient connection errors.

      The period, number of attempted recovery operations, and the delay before a recovery is considered successful are configurable through individual properties.

      Issues: 784

      For more information, see Deploying Automatic Replicator Recovery (in [Tungsten Replicator 2.2 Manual]).
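
      A hedged sketch of tuning these properties through tpm (the option names below are assumptions drawn from the referenced documentation, not from this note):

        tpm update alpha --auto-recovery-max-attempts=5 \
            --auto-recovery-delay-interval=30s --auto-recovery-reset-interval=300s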

    • The way VARCHAR values were stored and represented within the replicator has been updated, which improves performance significantly.

      Issues: 804

    • If the binary logs for MySQL were flushed and purged (using FLUSH LOGS and PURGE BINARY LOGS), and the replicator was then restarted, the replicator would fail to identify and locate the newly created logs, raising a MySQLExtractException.

      Issues: 851

  • Documentation

    • The deployment and recovery procedures for Multi-site/Multi-master deployments have been documented.

      Issues: 797

      For more information, see Deploying Multisite/Multimaster Clusters (in [Tungsten Clustering for MySQL 5.3 Manual]).

Bug Fixes

  • Installation and Deployment

    • tpm (in [Tungsten Replicator 2.2 Manual]) would incorrectly identify options that accepted true/false values, which could cause incorrect interpretations, or subsequent options on the command-line to be used as true/false indications.

      Issues: 310

    • Removing an existing parallel replication configuration (in [Tungsten Replicator 2.2 Manual]) using tpm (in [Tungsten Replicator 2.2 Manual]) would cause the replicator to fail due to a mismatch in the status table and current configuration.

      Issues: 867

  • Command-line Tools

    • The tungsten_provision_slave (in [Tungsten Replicator 2.2 Manual]) tool would fail to correctly re-provision a master within a fan-in or multi-master configuration. When re-provisioning, the service should be reset with trepctl reset (in [Tungsten Replicator 2.2 Manual]).

      Issues: 709
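
      For example (a sketch; the service name alpha is hypothetical), the reset could be performed with:

        trepctl -service alpha reset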

    • Errors when executing tungsten_provision_slave (in [Tungsten Replicator 2.2 Manual]) that have been generated by the underlying mysqldump or xtrabackup are now redirected to STDOUT.

      Issues: 802

    • The tungsten_provision_slave (in [Tungsten Replicator 2.2 Manual]) tool would re-provision using a slave in an OFFLINE:ERROR (in [Tungsten Replicator 6.0 Manual]) state, even though this could create a second, invalid slave deployment. Reprovisioning from a slave in the ERROR state is now blocked, unless the -f (in [Tungsten Replicator 2.2 Manual]) or --force (in [Tungsten Replicator 2.2 Manual]) option is used.

      Issues: 860

      For more information, see The tungsten_provision_slave Script (in [Tungsten Replicator 2.2 Manual]).

  • Oracle Replication

    • Tuning for the CDC extraction from Oracle has been updated to support both a minimum sleep time parameter, minSleepTime, and the increment value used when increasing the sleep time between updates, sleepAddition.

      Issues: 239

      For more information, see Tuning CDC Extraction (in [Tungsten Replicator 2.2 Manual]).

    • The URLs used for connecting to Oracle RAC SCAN addresses were not correct and were incompatible with non-RAC installations. The URL format has been updated to use a URL format that is compatible with both Oracle RAC and non-RAC installations.

      Issues: 479

  • Core Replicator

    • When a timeout occurred on the connection to MySQL for the channel assignment service (part of parallel applier), the replicator would go offline, rather than retrying the connection. The service has now been updated to retry the connection if a timeout occurs. The default reconnect timeout is 120 seconds.

      Issues: 783

    • A slave replicator would incorrectly set the restart sequence number when reading from a master if the slave THL directory was cleared. This would cause slave replicators to fail to restart correctly.

      Issues: 794

    • Unsigned integers were extracted from the source database in a non-platform-independent manner. This would cause the Oracle applier to incorrectly attempt to apply negative values in place of their unsigned equivalents. The Oracle applier has been updated to translate values for types identified as unsigned to the correct value. When these values are viewed within the THL, they will still be shown as negative values.

      Issues: 798

      For more information, see thl list Command (in [Tungsten Replicator 2.2 Manual]).

    • Replication would fail when processing binlog entries containing the statement INSERT INTO ... WHERE... when operating in mixed mode.

      Issues: 807

  • Filters

    • The mysqlsessionsupport (in [Tungsten Replicator 6.0 Manual]) filter would cause replication to fail when the default thread_id was set to -1, for example when STRICT_ALL_TABLES SQL mode had been enabled. The replicator has been updated to interpret -1 as 0 to prevent this error.

      Issues: 821

    • The rename (in [Tungsten Replicator 6.0 Manual]) filter has been updated so that renaming only the schema name is supported for STATEMENT events. Previously, only ROW events would be renamed by the filter.

      Issues: 842

3.15. Continuent Tungsten 2.0.1 GA (3 January 2014)

Version End of Life. 31 October 2018

Important

The final approved build for Continuent Tungsten 2.0.1 is build 1003. Earlier builds do not have the full set of features and functionality; the final build includes a number of key fixes not present in earlier builds of the same release. In particular, updated support for passive witnesses was not available in earlier builds.

Continuent Tungsten 2.0.1 is the first generally available release of Continuent Tungsten 2.0, which offers major improvements to Continuent's industry-leading database-as-a-service offering. Continuent Tungsten 2.0.1 contains all the improvements incorporated in Version 1.5.4, and the fixes and new features included within Tungsten Replicator 2.2.0, as well as the following features:

  • Cluster Management

    • An improved manager that simplifies recovery of your cluster.

    • New tools to simplify provisioning and the recovery of replication issues.

    • Improved witness host and decision engine to provide better quorum for preventing split-brain situations and multiple live masters.

    • SSL-based encryption and authentication for cluster management through all command-line tools.

  • Connector

    • SSL support enables SSL and non-SSL clients, and SSL and non-SSL connectivity between the connector and database servers.

    • Support for setting the maximum latency for slaves when redirecting queries.

  • Installation and Deployment

    • Improved tpm installation tool that eases deployment and configuration of all clusters, including multi-master and multi-site/multi-master.

    • INI file based installation through tpm that enables easier installation, including through Puppet and other script-based solutions.

  • Core Replication

    • Includes all Tungsten Replicator 2.2.0 features, including low-impact, low-latency replication and advanced filtering.

    • Supports MySQL (5.0, 5.1, 5.5, 5.6), MariaDB (5.5) and Percona Server (5.5).

    • Supports replication to and from MySQL and Oracle, and Oracle to Oracle.

    • Data loading to Vertica and data warehouses, and real-time publishing to MongoDB.

    • SSL-based encryption for exchanging replication data.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • When using the xtrabackup method for performing backups, the default is to use the xtrabackup-full operation to perform a full backup.

    Issues: TUC-1327
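
    For example (a hedged sketch), a full backup using this method could be requested explicitly with:

      trepctl backup -backup xtrabackup-full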

  • The default load balancer used for load-balancing connections within the Connector has been updated to use the RO_RELAXED (in [Continuent Tungsten 2.0 Manual]) QoS balancer. This takes account of the HighWater mark when redirecting queries and compares the applied sequence number rather than relying only on the latency.

    Issues: TUC-1589

  • The previous strategy for preventing split-brain by using a witness host was not workable for many customers. The witness host configuration and checks have been changed to prevent these problems.

    Issues: TUC-1650

  • Failover could be rolled back because of a failure to release a Virtual IP. This failure now triggers a warning rather than a rollback of the failover.

    Issues: TUC-1666

  • An 'UnknownHostException' would cause a failover. The behavior has been updated to result in a suspect DB server.

    Issues: TUC-1667

  • A new type of witness host has been added. A new active witness supports a manager-only based installation. The active witness is able to take part in decisions about failure in the event of datasource and/or network connectivity issues.

    As a result, the following changes apply for all witness host selection and installation:

    • Witnesses must be on the same network subnet as the existing managers.

    • Dataservices must have at least three managers to provide status check during failure.

    • Active witnesses can be created; these install only the manager on target hosts to act as witnesses, checking network connectivity to the configured dataservers and connectors within the service.

    Issues: TUC-1854

    For more information, see Host Types (in [Continuent Tungsten 2.0 Manual]).

  • Failover does not occur if the manager is not running on the master host at the time the database server is stopped.

    Issues: TUC-1900

  • Read-only MySQL slaves no longer work.

    Issues: TUC-1903

Improvements, new features and functionality

  • Installation and Deployment

    • tpm (in [Continuent Tungsten 2.0 Manual]) has been updated to support configuration of the maximum applied latency for the connector using either the --connector-max-slave-latency (in [Continuent Tungsten 2.0 Manual]) or --connector-max-applied-latency (in [Continuent Tungsten 2.0 Manual]) options.

      Issues: TUC-733
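
      A hedged example (the service name and latency value are illustrative):

        tpm update alpha --connector-max-applied-latency=30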

    • The installer now provides a way to set up RO_RELAXED (in [Continuent Tungsten 2.0 Manual]) (read-only with no SQL checking) connectors.

      Issues: TUC-954

    • Post-installation notes do not specify hosts that can run cctrl (in [Continuent Tungsten 2.0 Manual]).

      Issues: TUC-1118

    • A tpm cook command has been created that masks the tungsten-cookbook script.

      Issues: TUC-1182

    • The tpm (in [Continuent Tungsten 2.0 Manual]) validation has been updated to provide warnings when the sync_binlog and innodb_flush_log_at_trx_commit MySQL options are set incorrectly.

      Issues: TUC-1656

    • A new tpm (in [Continuent Tungsten 2.0 Manual]) command has been added to list different connector connection commands/syntax.

      Issues: TUC-1661

    • Add default path to security files, to facilitate their retrieval.

      Issues: TUC-1676

    • Support a --dataservice-witnesses value of "none"

      Issues: TUC-1715

    • The tpm (in [Continuent Tungsten 2.0 Manual]) command should not be accessible on installed data sources.

      Issues: TUC-1717

    • Allow tpm configuration that is compatible with puppet/chef/etc

      Issues: TUC-1735

    • Auto-generated properties line should go at the top of the files.

      Issues: TUC-1739

    • Add tpm switch for rrIncludeMaster router properties.

      Issues: TUC-1744

    • During installation, the security.access_file.location property should be changed to security.rmi.jmxremote.access_file.location.

      Issues: TUC-1805

    • Split the cross machine checks out of MySQLPermissionsCheck.

      Issues: TUC-1838

    • The installation of Multi-Site Multi-Master deployments has been simplified.

      Issues: TUC-1923

      For more information, see Deploying Multisite/Multimaster Clusters (in [Continuent Tungsten 2.0 Manual]).

  • Command-line Tools

    • A completion script for command-line completion within bash has been added to the installation. The file is located in tools/.tpm.complete within the installation directory.

      Issues: TUC-1591
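
      For example (a sketch assuming the default /opt/continuent installation directory and that the current release is linked as tungsten), the completion script could be loaded into the current shell with:

        source /opt/continuent/tungsten/tools/.tpm.complete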

    • Write scripts to coordinate backups across an entire cluster.

      Issues: TUC-1641

    • cctrl (in [Continuent Tungsten 2.0 Manual]) should not report that recover is an expert command

      Issues: TUC-1839

    • An option, -a, --authenticate has been added to the tpasswd (in [Continuent Tungsten 2.0 Manual]) utility to validate an existing password entry.

      Issues: TUC-1916
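
      A hedged sketch only; the exact argument form is an assumption, with only the --authenticate flag taken from the note above:

        tpasswd --authenticate -u tungsten -p secret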

  • Cookbook Utility

    • Tungsten cookbook should run manager|replicator|connector dump before collecting logs.

      Issues: TUC-1660

    • Cookbook has been updated to support both active and passive witnesses.

      Issues: TUC-1942

    • Cookbook has been updated to allow backups from masters to be used.

      Issues: TUC-1943

  • Backup and Restore

    • The datasource_backup.sh script has been updated to limit running only on the COORDINATOR and to find a non-MASTER datasource.

      Issues: TUC-1684

  • MySQL Replication

    • Add support for MySQL 5.6

      Issues: TUC-1624

  • Tungsten Connector

    • Support for MySQL 4.0 passwords within the connector has been included. This provides support for both old MySQL versions and older versions of the MySQL protocol used by some libraries and clients.

      Issues: TUC-784

    • Connector must forbid zero keepAliveTimeout.

      Issues: TUC-1714

    • In SOR deployments only, Connector logs show relay data service being added twice.

      Issues: TUC-1720

    • Change default delayBeforeOfflineIfNoManager router property to 30s and constrain it to max 60s in the code.

      Issues: TUC-1752

    • Router Manager connection timeout should be a property.

      Issues: TUC-1754

    • Add client IP and port when logging connector message.

      Issues: TUC-1810

    • Make tungsten cluster status (in [Continuent Tungsten 2.0 Manual]) more sql-like and reduce the amount of information displayed.

      Issues: TUC-1814

    • Connector client side SSL support for MySQL

      Issues: TUC-1825

  • Tungsten Manager

    • cctrl (in [Continuent Tungsten 2.0 Manual]) should show if a given data source is secured.

      Issues: TUC-1816

    • The datasource hostname recover command should not invoke the expert warning.

      Issues: TUC-1840

  • Manager API

    • Smarter enabling of the Manager API

      Issues: TUC-1621

    • Support has been added to specify the addresses for the Manager API to listen on.

      Issues: TUC-1643

    • The Manager API has been updated with a method to list all the available dataservices.

      Issues: TUC-1674

    • Add DataServiceState and DataSource into the payload when applicable

      Issues: TUC-1701

    • Add classes to the Ruby libraries that handle API calls

      Issues: TUC-1707

    • Add an API call that prints the manager live properties

      Issues: TUC-1713

  • Platform Specific Deployments

    • Add Java wrapper support for FreeBSD.

      Issues: TUC-1632

    • Commit FreeBSD fixes to Java sockets and port binding.

      Issues: TUC-1633

  • Documentation

    • Document among the prerequisites that Tungsten installers do not support mysqld_multi.

      Issues: TUC-1679

  • Other Issues

    • Write a tpm test wrapper for the cookbook testing scripts.

      Issues: TUC-1396

    • Document the process of sending emails based on specific log4j messages

      Issues: TUC-1500

    • The check_tungsten.sh script has been updated to check and restart enterprise load balancers that use the xinetd service.

      Issues: TUC-1573

    • Expand zabbix monitoring to match nagios checks.

      Issues: TUC-1638

    • Turn SET NAMES log message into DEBUG.

      Issues: TUC-1644

    • Remove old/extra/redundant configuration files.

      Issues: TUC-1721

    • Backport critical 1.5.4 manager changes to 2.0.1

      Issues: TUC-1855

Bug Fixes

  • Installation and Deployment

    • Tungsten can't install if the 'mysql' client is not in the path.

      Issues: TUC-999

    • An extra -l flag would be added to the configuration when running the sudo command.

      Issues: TUC-1025

    • Installer will not easily work when installing SOR data services one host at a time.

      Issues: TUC-1036

    • The tpm (in [Continuent Tungsten 2.0 Manual]) command did not verify that the permissions for the tungsten DB user allow for cross-database host access.

      Issues: TUC-1146

    • Specifying a Symbolic link for the Connector/J creates a circular reference.

      Issues: TUC-1567

    • Replication of DATETIME values affected by a Daylight Savings Time (DST) change would produce incorrect values. Installing a replication service where the Java environment and the MySQL environment use different timezones may also cause incorrect replication.

      Issues: 542, TUC-1593

    • The replicator service would not be imported into the cluster directory - causes subsequent failures in switch and other operations.

      Issues: TUC-1594

    • tpm (in [Continuent Tungsten 2.0 Manual]) would fail to skip the GlobalHostAddressesCheck (in [Continuent Tungsten 2.0 Manual]) when performing a tpm configure (in [Continuent Tungsten 2.0 Manual]) followed by tpm validate (in [Continuent Tungsten 2.0 Manual]).

      Issues: TUC-1599

    • tpm (in [Continuent Tungsten 2.0 Manual]) does not recognize datasources when they start with a capital letter.

      Issues: TUC-1655

    • Installation of multiple replicators with tpm (in [Continuent Tungsten 2.0 Manual]) fails.

      Issues: TUC-1680

    • The check for Java version fails when OpenJDK does not say "java".

      Issues: TUC-1681

    • The installer did not make sure that witness servers are in the same network as the cluster.

      Issues: TUC-1705

    • tpm (in [Continuent Tungsten 2.0 Manual]) does not install if there is a Tungsten Replicator installer already running.

      Issues: TUC-1712

    • Errors during installation of composite dataservice.

      Issues: TUC-1726

    • The tpm (in [Continuent Tungsten 2.0 Manual]) command returns an ssh error when attempting to install a composite data service.

      Issues: TUC-1727

    • Running tpm (in [Continuent Tungsten 2.0 Manual]) with no arguments raises an error.

      Issues: TUC-1788

    • Installation fails with Ruby 1.9.

      Issues: TUC-1800

    • tpm (in [Continuent Tungsten 2.0 Manual]) will not throw an error if the user gives the connectorj-path as the path to a symlink instead of a real file.

      Issues: TUC-1815

    • tpm (in [Continuent Tungsten 2.0 Manual]) does not check dependencies of security options.

      Issues: TUC-1818

    • When checking process limits during installation, the check would fail the installation process instead of providing a warning.

      Issues: TUC-1822

    • During validation, tpm (in [Continuent Tungsten 2.0 Manual]) would wrongly complain about a witness not being in the same subnet.

      Issues: TUC-1848

    • During installation, tpm (in [Continuent Tungsten 2.0 Manual]) could install SSL support for the connector even though the MySQL server has not been configured for SSL connectivity.

      Issues: TUC-1909

    • Running tpm update (in [Continuent Tungsten 2.0 Manual]) would cause the master replicator to become a slave during the update when the master had changed from the configuration applied using --dataservice-master-host (in [Continuent Tungsten 2.0 Manual]).

      Issues: TUC-1921

    • tpm (in [Continuent Tungsten 2.0 Manual]) could allow meaningless specifications of active witnesses.

      Issues: TUC-1941

    • tpm (in [Continuent Tungsten 2.0 Manual]) has been updated to provide the correct link to the documentation for further information.

      Issues: TUC-1947

    • Performing tpm reset (in [Continuent Tungsten 2.0 Manual]) would remove all the files within the cluster-home/conf directories, instead of only the files for services tpm (in [Continuent Tungsten 2.0 Manual]) was aware of.

      Issues: TUC-1949

    • tpm (in [Continuent Tungsten 2.0 Manual]) would require the --active-witnesses (in [Continuent Tungsten 2.0 Manual]) or --enable-active-witnesses (in [Continuent Tungsten 2.0 Manual]) option, when other witness types are available for configuration.

      Issues: TUC-1951

    • tpm (in [Continuent Tungsten 2.0 Manual]) would check the same witness subnet when using active witnesses, which do not need to be installed on the same subnet.

      Issues: TUC-1953

    • A tpm update (in [Continuent Tungsten 2.0 Manual]) operation would not recognize active witnesses properly.

      Issues: TUC-1975

    • A tpm uninstall operation would complain about missing databases in connector tests.

      Issues: TUC-1978

    • tpm (in [Continuent Tungsten 2.0 Manual]) would not remove the connector.ro.properties file if the configuration is updated to not have --application-readonly-port (in [Continuent Tungsten 2.0 Manual]).

      Issues: TUC-1981

    • tpm (in [Continuent Tungsten 2.0 Manual]) would enable installation when MariaDB 10.0 was installed, even though this is not a supported configuration.

      Issues: TUC-1987

    • The method used to compare whether hosts were on the same subnet would fail to identify hosts correctly.

      Issues: TUC-1995

  • Command-line Tools

    • Running cctrl (in [Continuent Tungsten 2.0 Manual]) on a host which only had the connector server would not report a useful error. This has now been updated to show a warning message.

      Issues: TUC-1642

    • The check_tungsten command had different command line arguments from check_tungsten.sh.

      Issues: TUC-1675

    • Nagios check scripts not picking up shunned datasources

      Issues: TUC-1689

    • cctrl (in [Continuent Tungsten 2.0 Manual]) could output the status of a host with a null value in place of the correct hostname.

      Issues: TUC-1893

    • Using the recover datasource command within a composite service would fail, even though datasource recover (in [Continuent Tungsten 2.0 Manual]) would work.

      Issues: TUC-1912

    • The check_tungsten_latency (in [Continuent Tungsten 2.0 Manual]) --perslave-perfdata (in [Continuent Tungsten 2.0 Manual]) option would not include information for relay hosts.

      Issues: TUC-1915

    • A large error message could be found included within the status block of ls (in [Continuent Tungsten 2.0 Manual]) output within cctrl (in [Continuent Tungsten 2.0 Manual]). The error message information has been redirected to the error log.

      Issues: TUC-1931

    • Performing switch (in [Continuent Tungsten 2.0 Manual]) operations within a composite service using active witnesses could raise an error and fail.

      Issues: TUC-1946

    • cctrl (in [Continuent Tungsten 2.0 Manual]) would be unable to create a composite datasource after dropping it.

      Issues: TUC-1956

    • Backwards compatibility for the recover using (in [Continuent Tungsten 2.0 Manual]) command has been incorporated.

      Issues: TUC-1971

  • Cookbook Utility

    • The tungsten-cookbook tests fail and do not print the current status.

      Issues: TUC-1623

    • The tungsten-cookbook uses resolveip instead of standard name resolution tools.

      Issues: TUC-1646

    • The tungsten-cookbook tool sometimes misunderstands the result of composite recovery.

      Issues: TUC-1662

    • Cookbook gets warnings when used with a MySQL 5.6 client.

      Issues: TUC-1673

    • The cookbook does not wait for a database server to be offline properly.

      Issues: TUC-1685

    • tungsten-cookbook does not check the status of the relay server after a composite recovery.

      Issues: TUC-1695

    • tungsten-cookbook does not check all the components of a datasource when testing a server.

      Issues: TUC-1696

    • tungsten-cookbook does not collect the configuration files under cluster-home.

      Issues: TUC-1697

    • Cookbook should not specify witness hosts in default configuration files etc.

      Issues: TUC-1734

    • Tungsten cookbook fails the replicator test.

      Issues: TUC-1827

    • Using a backup that has been copied across servers within cookbook could overwrite or replace existing backup files, which would then make the backup file appear as older than it should be, making it unavailable in restore operations.

      Issues: TUC-1936

  • Backup and Restore

    • The mysqldump backup option cannot restore if slow_query_log was on during the backup process.

      Issues: TUC-586

    • Using xtrabackup during restore fails if MySQL is running as user 'anything-but-mysql' and without root access.

      Issues: TUC-1005

    • When using mysqldump restore, the operation failed to disable slow and general logging before applying the restore.

      Issues: TUC-1330

    • Backup fails when using the xtrabackup-full agent.

      Issues: TUC-1612

    • Recovery hangs with composite data service.

      Issues: TUC-1657

    • Performing a restore with xtrabackup fails.

      Issues: TUC-1672

    • The datasource backup (in [Continuent Tungsten 2.0 Manual]) operation could fail due to a Ruby error.

      Issues: TUC-1686

    • Restore with xtrabackup fails.

      Issues: TUC-1716

    • Issues when recovering a failed physical dataservice.

      Issues: TUC-1793

    • Backup with xtrabackup fails if datadir is not defined in my.cnf.

      Issues: TUC-1821

    • When using xtrabackup, restore fails.

      Issues: TUC-1846

    • After a restore, datasource is welcomed and put online, but never gets to the online state.

      Issues: TUC-1861

    • A restore that occurs immediately after a recover from dataserver failure always fails.

      Issues: TUC-1870

    • Master datasource backup generates superficial failure message but succeeds anyway.

      Issues: TUC-1896

    • Restoration of a full backup would fail due to the inclusion of the xtrabackup_incremental_basedir directory.

      Issues: TUC-1919

    • Backup using xtrabackup 1.6.5 would fail.

      Issues: TUC-1920

    • When using the backup files copied from another server, the replicator could mistakenly use the wrong backup files when performing a restore.

      Issues: TUC-1948

  • Core Replicator

    • Master failure causes partial commits on the slave with single channel parallel apply.

      Issues: TUC-1625

    • Slave applier can fail to log error when DBMS fails due to exception in cleanup.

      Issues: TUC-1626

    • Replication would fail on slave due to null characters created when inserting ___SERVICE___ comments.

      Issues: TUC-1627

    • LOAD (LOCAL) DATA INFILE would fail if the request starts with white spaces.

      Issues: TUC-1639

    • Datasource with a replicator in GOING-ONLINE:RESTORING (in [Tungsten Replicator 6.0 Manual]) shows up with a replicator state=UNKNOWN.

      Issues: TUC-1658

    • An insecure slave can replicate from secure master.

      Issues: TUC-1677

    • Replicator does not drop client connection to master and reconnect within the same time frame as in previous releases.

      Issues: TUC-1688

  • Filters

    • Primary key filter should be able to renew its internal connection after some timeout.

      Issues: TUC-1803

  • Tungsten Connector

    • TSR Session not updated when the database name changes (with sessionId set to DATABASE)

      Issues: TUC-761

    • Router gateway can prevent manager startup if the connector is started before the manager

      Issues: TUC-850

    • The Tungsten show processlist command would throw NPE errors.

      Issues: TUC-1136

    • Selective read/write splitting (SQL-Based routing) has been updated to ensure that it is backwards compatible with previous read/write splitting configurations.

      Issues: TUC-1489

    • Router must go into fail-safe mode if it loses connectivity to a manager during a critical command.

      Issues: TUC-1549

    • Use of the SET NAMES command was not forwarded to attached read-only connections.

      Issues: TUC-1569

    • When using haproxy through a connector connection, the initial query would be rejected.

      Issues: TUC-1581

    • When the dataservices.properties (in [Continuent Tungsten 2.0 Manual]) file is empty, the connector would hang. The operation has now been updated to exit with an exception if the file cannot be found.

      Issues: TUC-1586

    • When in a SOR deployment, the Connector will never return connection requests with RO_RELAXED (in [Continuent Tungsten 2.0 Manual]) and affinity set to relay node only site.

      Issues: TUC-1620

    • Affinity not honored when using direct connections.

      Issues: TUC-1628

    • Connector queries for SHOW SLAVE STATUS return incorrect slave latency of 0 intermittently.

      Issues: TUC-1645

    • The Tungsten Connector does not know its PID following upgrade to JSW 3.5.17.

      Issues: TUC-1665

    • An attempt to load a driver listener class can cause the connector to hang, at startup.

      Issues: TUC-1669

    • Read connections allocated by connector get 'stale' and are closed by MySQL server due to wait_timeout - causes app 'transparency' issues.

      Issues: TUC-1671

    • Broken connections returned to the c3p0 pool - further use of these will show errors.

      Issues: TUC-1683

    • Router disconnects from a manager in the middle of a switch (in [Continuent Tungsten 2.0 Manual]) command - writes continue to offline master.

      Issues: TUC-1692

    • Connector sessionId passed in database name not retained

      Issues: TUC-1704

    • A USE DB statement issued through the connector would be incorrectly ignored if the database had previously been dropped.

      Issues: TUC-1718

    • The connector tungsten flush privileges (in [Continuent Tungsten 2.0 Manual]) command causes a temporary outage (denies new connection requests).

      Issues: TUC-1730

    • Database context not changed to the correct database when qos=DATABASE is in use.

      Issues: TUC-1779

    • Connector should require a valid manager to operate even when in maintenance mode.

      Issues: TUC-1781

    • Connector allows connections to an offline/on-hold composite dataservice.

      Issues: TUC-1787

    • Router notifications are being sent to routers via GCS. This is unnecessary since a manager only updates routers that are connected to it.

      Issues: TUC-1790

    • Pass through not handling correctly multiple results in 1.5.4.

      Issues: TUC-1792

    • SmartScale would fail when creating a database and using it immediately.

      Issues: TUC-1836

    • The connector could hang during installation test.

      Issues: TUC-1847

    • Under certain circumstances, a Connector configured for SSL would be unable to start properly.

      Issues: TUC-1869

      For more information, see Configuring Connector SSL (in [Continuent Tungsten 2.0 Manual]).

    • Specify where to load security properties from in the connector.

      Issues: TUC-1872

    • A SET NAMES operation would not survive a switch (in [Continuent Tungsten 2.0 Manual]) or failover (in [Continuent Tungsten 2.0 Manual]) operation.

      Issues: TUC-1879

    • The connector command within cctrl (in [Continuent Tungsten 2.0 Manual]) has been disabled unless the connector and manager are installed on the same host.

      To support the removed functionality, the following changes to the router (in [Continuent Tungsten 2.0 Manual]) command have been made:

      • The * wildcard can be used for connectors within the router (in [Continuent Tungsten 2.0 Manual]) command within cctrl (in [Continuent Tungsten 2.0 Manual]). For example, router * online will place all available connectors online.

      • The built-in command-line completion provides the names of the connectors in addition to the * (wildcard) character for the router (in [Continuent Tungsten 2.0 Manual]) command.

      Issues: TUC-1918

    • Using cursors within stored procedures through the connector would cause a hang in the connector service.

      Issues: TUC-1950

    • The connector would hang when working in a cluster with active witnesses.

      Issues: TUC-1954

    • When specifying the affinity within a connection, the maxAppliedLatency configuration would be ignored.

      Issues: TUC-1960

    • The connector would check the user.map (in [Continuent Tungsten 2.0 Manual]) file for changes frequently, causing lag on high-load servers. The configuration has been updated so that the check occurs only every 10s.

      Issues: TUC-1972
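
      As a minimal sketch of the throttled check described above (in Python, purely illustrative; the class name and the reload() callback are hypothetical, not part of the connector), the watched file is only examined once the configured interval has elapsed since the previous check:

        import os
        import time

        class ThrottledFileWatcher:
            """Only examine the watched file every interval_s seconds,
            mirroring the change described above for user.map checking."""

            def __init__(self, path, reload, interval_s=10.0):
                self.path = path
                self.reload = reload          # hypothetical callback that applies the new contents
                self.interval_s = interval_s
                self.last_check = 0.0
                self.last_mtime = None

            def maybe_reload(self):
                now = time.monotonic()
                if now - self.last_check < self.interval_s:
                    return                    # too soon; skip the stat() to avoid per-request overhead
                self.last_check = now
                mtime = os.path.getmtime(self.path)
                if mtime != self.last_mtime:
                    self.last_mtime = mtime
                    self.reload()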

    • Passing the qos option within a database name would not work when smart scale was enabled.

      Issues: TUC-1982

  • Tungsten Manager

    • The datasource restore (in [Continuent Tungsten 2.0 Manual]) command may fail when using xtrabackup if the file ownership for the backup files is wrong.

      Issues: TUC-1226

    • Dataservice has different "composite" status depending on how its status is called.

      Issues: TUC-1614

    • The switch (in [Continuent Tungsten 2.0 Manual]) command does not validate command line correctly.

      Issues: TUC-1618

    • Composite recovery would fail because a replicator that was previously a master tries to re-apply a transaction that it had previously committed.

      Issues: TUC-1634

    • cctrl (in [Continuent Tungsten 2.0 Manual]) would let you shun the master datasource.

      Issues: TUC-1637

    • During a failover, the master could be left in read-only mode.

      Issues: TUC-1648

    • On occasion, the manager would fail to restart after being hung.

      Issues: TUC-1649

    • The ping command in cctrl (in [Continuent Tungsten 2.0 Manual]) wrongly identifies witness server as unreachable.

      Issues: TUC-1652

    • The failure of the primary data source could go unhandled due to a manager restart.

      Issues: TUC-1659

    • The manager reports composite recovery completion although the operation has failed.

      Issues: TUC-1663

    • A transient error can cause a confused state.

      Issues: TUC-1678

    • Composite recovery could fail, but the manager says it was complete.

      Issues: TUC-1694

    • The internal call to OpenReplicatorManager.status() during the transition from online to offline results in a NullPointerException.

      Issues: TUC-1708

    • Relay does not fail over when the database server is stopped.

      Issues: TUC-1711

    • The cctrl (in [Continuent Tungsten 2.0 Manual]) would raise an error when running a backup from a master.

      Issues: TUC-1789

    • Tungsten manager may report false host failures due to a temporary problem with name resolution.

      Issues: TUC-1797

    • cctrl (in [Continuent Tungsten 2.0 Manual]) could report a manager as ONLINE (in [Tungsten Replicator 6.0 Manual]) even though the datasource would in fact be OFFLINE (in [Tungsten Replicator 6.0 Manual]).

      Issues: TUC-1804

    • The manager would not see a secured replicator.

      Issues: TUC-1806

    • Slave replicators never come online after a switch when using secure thl.

      Issues: TUC-1807

    • cctrl (in [Continuent Tungsten 2.0 Manual]) complains of missing security file when security is not enabled.

      Issues: TUC-1808

    • Switch in relay site fails and takes offline all nodes.

      Issues: TUC-1809

    • A switch in the relay site sets the relay to replicate from itself.

      Issues: TUC-1811

    • In a composite deployment, a switch in the primary site is not propagated to the relay.

      Issues: TUC-1813

    • cctrl (in [Continuent Tungsten 2.0 Manual]) exposes security passwords unnecessarily.

      Issues: TUC-1817

    • The master datasource is not available following the failover (in [Continuent Tungsten 2.0 Manual]) command.

      Issues: TUC-1841

    • The manager does not support a non-standard replicator RMI port.

      Issues: TUC-1842

    • In a multi-site deployment, automatic failover does not happen in maintenance mode, due to replicator issues.

      Issues: TUC-1845

    • During the recovery of a composite dataservice, the restore of a shunned master could fail because the previous and current roles did not match.

      Issues: TUC-1857

    • A stopped dataserver would not be detected if cluster was in maintenance mode when it was stopped.

      Issues: TUC-1860

    • Manager attempts to get status of remote replicator from the local service - causes a failure to catch up from a relay.

      Issues: TUC-1864

    • A switch (in [Continuent Tungsten 2.0 Manual]) operation could fail in single site deployment.

      Issues: TUC-1867

    • In a configuration with a relay of a composite site, if all active datasources are unavailable, a switch (in [Continuent Tungsten 2.0 Manual]) operation would raise invalid exception messages.

      Issues: TUC-1875

    • recover using (in [Continuent Tungsten 2.0 Manual]) fails in the simplest case for 2.0.1.

      Issues: TUC-1876

    • Manager fails safe even if it is in the quorum set and primary partition.

      Issues: TUC-1878

    • Single command recover (in [Continuent Tungsten 2.0 Manual]) does not work - does not find datasources to recover even if they exist.

      Issues: TUC-1881

    • Failover causes old master node name to disappear from cctrl (in [Continuent Tungsten 2.0 Manual]) ls (in [Continuent Tungsten 2.0 Manual]) command.

      Issues: TUC-1894

    • ClusterManagementHandler can read/write datasources directly from the local disk - can cause cluster configuration information corruption.

      Issues: TUC-1899

    • Stopping managers does not cause membership validation rules to kick in. This can lead to an invalid group.

      Issues: TUC-1901

    • The manager rules could fail to fence a composite datasource for which all managers in the service are unreachable.

      Issues: TUC-1902

    • recover using (in [Continuent Tungsten 2.0 Manual]) in a master service could convert one of the datasources into a relay instead of a slave.

      Issues: TUC-1907

    • CREATE COMPOSITE DATASOURCE could result in an exception if the master datasource site was used.

      Issues: TUC-1911

    • The manager would throw a false alarm if the trep_commit_seqno (in [Tungsten Replicator 6.0 Manual]) table was empty. This was due to the manager being started before the replicator had created the required table.

      Issues: TUC-1917

    • Composite recovery within a cloud deployment could fail.

      Issues: TUC-1922

    • Errors could be raised when using the set master (in [Continuent Tungsten 2.0 Manual]) and recover using (in [Continuent Tungsten 2.0 Manual]) commands within cctrl (in [Continuent Tungsten 2.0 Manual]).

      Issues: TUC-1930

    • Composite recovery could fail in a site with multiple masters.

      Issues: TUC-1932

    • A failed master within a dataservice would cause the datasource names to disappear.

      Issues: TUC-1933

    • Running switch (in [Continuent Tungsten 2.0 Manual]) command after performing recovery could fail within a multi-site deployment.

      Issues: TUC-1934

    • Performing a switch (in [Continuent Tungsten 2.0 Manual]) operation when there are active witnesses could cause an error message indicating a fault, when in fact the operation completed successfully.

      Issues: TUC-1935

    • After performing a switch operation, a slave could report to the previous relay instead of the active one.

      Issues: TUC-1939

    • Running operations on active witness datasources would raise NullPointerException errors.

      Issues: TUC-1944, TUC-1945

    • Errors would be reported in the log when deserializing configuration information between the manager and connector.

      Issues: TUC-1963

    • Automatic failover would fail to run if an active witness was the coordinator for the dataservice.

      Issues: TUC-1964

    • Connectors would disappear after restarting the coordinator.

      Issues: TUC-1966

    • The coordinator would attempt to check database server liveness if a manager on a witness host goes away.

      Issues: TUC-1970

    • Composite recovery using a streaming backup results in a site with multiple masters.

      Issues: TUC-1992

    • Installing a composite dataservice would create two master services.

      Issues: TUC-1996

  • Manager API

    • API call for a single server does not report replicator status.

      Issues: TUC-1615

    • API "promote" command does not operate in a composite dataservice.

      Issues: TUC-1617

    • Some indispensable commands missing from manager API.

      Issues: TUC-1654

    • Manager API does not answer to /manager/status/svc_name without Accept header

      Issues: TUC-1690

    • The Manager API lets you shun a master.

      Issues: TUC-1706

    • The call to 'policy' API fails in composite dataservice.

      Issues: TUC-1725

  • Platform Specific Deployments

    • Windows service registration scripts won't work.

      Issues: TUC-1636

    • FreeBSD: Replicator hangs when going offline. Can cause switch to hang/abort.

      Issues: TUC-1668

  • Documentation

  • Other Issues

    • The shared libraries used by Continuent Tungsten have now been centralized in the cluster-home directory.

      Issues: TUC-1310

    • Some build warnings in Java 1.6 become errors in Java 1.7.

      Issues: TUC-1731

    • The test_connection_routing_and_isolation.rb test_tuc_98 test never selects the correct master.

      Issues: TUC-1780

    • During testing, a test that stops and restarts the replicator fails because a replicator that is actually running shows up, subsequently, as stopped.

      Issues: TUC-1895

    • The wrapper for the service was not honoring the configured wait period during a restart, which could cause a hang or failure when the service was restarted.

      Issues: TUC-1910, TUC-1913

Continuent Tungsten 2.0.1 includes the following changes made in Tungsten Replicator 2.2.0

Tungsten Replicator 2.2.0 is a bug fix and feature release that contains a number of key improvements to the installation and management of the replicator:

  • tpm (in [Tungsten Replicator 2.2 Manual]) is now the default installation and deployment tool; use of tungsten-installer, configure, configure-service, and update are deprecated.

  • tpm (in [Tungsten Replicator 2.2 Manual]) incorporates support for both INI file and staging directory deployments. See tpm INI File Configuration (in [Tungsten Replicator 2.2 Manual]).

  • Deployments are possible using standard Linux RPM and PKG deployments. See Using the RPM and DEB package files (in [Tungsten Replicator 2.2 Manual]).

  • tpm (in [Tungsten Replicator 2.2 Manual]) has been improved to handle heterogeneous deployments more easily.

  • New command-line tools have been added to make recovery easier during a failure. See The tungsten_provision_slave Script (in [Tungsten Replicator 2.2 Manual]), The tungsten_read_master_events Script (in [Tungsten Replicator 2.2 Manual]), The tungsten_set_position Script (in [Tungsten Replicator 2.2 Manual]).

  • Improvements to the core replicator, including identification and recovery from failure.

  • New multi_trepctl (in [Tungsten Replicator 2.2 Manual]) tool for monitoring multiple hosts/services.

Behavior Changes

The following changes have been made to Tungsten Replicator and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • The thl info (in [Tungsten Replicator 2.2 Manual]) command has been updated so that the output also displays the lowest and highest THL files, together with their sizes and dates.

    Issues: 471

    For more information, see thl info Command (in [Tungsten Replicator 2.2 Manual]).

  • A number of trepctl (in [Tungsten Replicator 2.2 Manual]) commands have been deprecated and will be removed in a future release.

    Issues: 672

    For more information, see trepctl load Command (in [Tungsten Replicator 2.2 Manual]), trepctl unload Command (in [Tungsten Replicator 2.2 Manual]), Starting and Stopping Tungsten Replicator (in [Tungsten Replicator 2.2 Manual]).

  • The tpm (in [Tungsten Replicator 2.2 Manual]) command has been updated to be the default method for installing deployments using the cookbook. To use the old tungsten-installer command, set the USE_OLD_INSTALLER environment variable.

    Issues: 691

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • Installations for Amazon RDS must use tungsten-installer; support is not currently available for tpm (in [Tungsten Replicator 2.2 Manual]).

Improvements, new features and functionality

  • Installation and Deployment

  • Command-line Tools

    • A new command-line tool, tungsten_set_position (in [Tungsten Replicator 2.2 Manual]), has been created. This enables the position of either a master or slave to be set with respect to reading local or remote events. This provides easier control during the recovery of a slave or master in the event of a failure.

      Issues: 684

      For more information, see The tungsten_set_position Script (in [Tungsten Replicator 2.2 Manual]), Managing Transaction Failures (in [Tungsten Replicator 2.2 Manual]).

    • A new command-line tool, tungsten_provision_slave (in [Tungsten Replicator 2.2 Manual]), has been created. This allows for an automated backup of an existing host and restore of that data to a new host. The script can be used to provision new slaves based on existing slave configurations, or to recover a slave that has failed.

      Issues: 689

      For more information, see The tungsten_provision_slave Script (in [Tungsten Replicator 2.2 Manual]), Managing Transaction Failures (in [Tungsten Replicator 2.2 Manual]).

    • A new command-line tool, tungsten_read_master_events (in [Tungsten Replicator 2.2 Manual]), has been created. This enables events from the MySQL binary log to be viewed based on the THL event ID.

      Issues: 694

      For more information, see The tungsten_read_master_events Script (in [Tungsten Replicator 2.2 Manual]), Managing Transaction Failures (in [Tungsten Replicator 2.2 Manual]).

    • The trepctl properties (in [Tungsten Replicator 2.2 Manual]) command has been updated to support a -values option that outputs only the values for filtered properties.

      Issues: 719

      For more information, see trepctl properties Command (in [Tungsten Replicator 2.2 Manual]).

    • The multi_trepctl (in [Tungsten Replicator 2.2 Manual]) command has been added. The tool enables status and other output to be collected from multiple hosts and/or services, providing a simpler way of monitoring a typical Continuent Tungsten installation.

      Issues: 756

      For more information, see The multi_trepctl Command (in [Tungsten Replicator 2.2 Manual]).

  • Oracle Replication

    • The ddlscan (in [Tungsten Replicator 2.2 Manual]) tool and the ddl-mysql-oracle.vm template have been modified to support custom included templates on a per table basis.

      The tool has also been updated to support additional paths for searching for velocity templates using the -path (in [Tungsten Replicator 2.2 Manual]) option.

      Issues: 723

  • Core Replicator

    • The block commit process has been updated to support different configurations. Two new parameters have been added that affect the block commit size, enabling transactions to be committed to a slave in blocks based either on the number of events or on the time interval since the last commit.

      Issues: 677, 699

      For more information, see Block Commit (in [Tungsten Replicator 2.2 Manual]).
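
      The note above does not list the new parameter names; purely as an illustration of the behaviour it describes, the following sketch commits a block either when a configurable number of events has accumulated or when a configurable interval has elapsed since the last commit. The names max_events, max_interval_s and the commit() callback are hypothetical stand-ins, not the replicator's actual configuration keys:

        import time

        class BlockCommitter:
            """Illustrative only: commit a pending block when either the
            event-count threshold or the time-interval threshold is reached,
            whichever comes first."""

            def __init__(self, commit, max_events=100, max_interval_s=5.0):
                self.commit = commit                  # hypothetical callback that flushes the block
                self.max_events = max_events          # stand-in for the block commit size setting
                self.max_interval_s = max_interval_s  # stand-in for the commit interval setting
                self.pending = 0
                self.last_commit = time.monotonic()

            def on_event(self, event):
                self.pending += 1
                now = time.monotonic()
                if (self.pending >= self.max_events
                        or now - self.last_commit >= self.max_interval_s):
                    self.commit()
                    self.pending = 0
                    self.last_commit = now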

  • Filters

    • The dropcolumn (in [Tungsten Replicator 6.0 Manual]) JavaScript filter has been added. The filter enables individual columns to be removed from the THL so that personal identification information (PII) can be removed on a slave.

      Issues: 716

      For more information, see dropcolumn.js Filter (in [Tungsten Replicator 2.2 Manual]).
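
      The dropcolumn.js filter itself is JavaScript running inside the replicator's filter framework; the fragment below is only a conceptual restatement, using plain dictionaries rather than real THL row events, of the transformation it performs, removing named (PII) columns before the data reaches the slave:

        def drop_columns(rows, pii_columns):
            """Return row dictionaries with the named columns removed.
            Conceptual stand-in for the dropcolumn filter's effect on row data."""
            return [{col: val for col, val in row.items() if col not in pii_columns}
                    for row in rows]

        # Example: strip personal data before it is applied on the slave.
        rows = [{"id": 1, "email": "a@example.com", "name": "Alice"}]
        print(drop_columns(rows, pii_columns={"email", "name"}))   # [{'id': 1}]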

Bug Fixes

  • Installation and Deployment

    • When performing a Vertica deployment, tpm (in [Tungsten Replicator 2.2 Manual]) would fail to create the correct configuration parameters. In addition, error messages and warnings would be generated that did not apply to Vertica installations. tpm (in [Tungsten Replicator 2.2 Manual]) has been updated to simplify the Vertica installation process.

      Issues: 688, 781

      For more information, see Installing Vertica Replication (in [Tungsten Replicator 2.2 Manual]).

    • tpm (in [Tungsten Replicator 2.2 Manual]) would allow parallel replication to be configured in heterogeneous environments where parallel replication was not supported. During deployment, tpm (in [Tungsten Replicator 2.2 Manual]) now reports an error if parallel configuration parameters are supplied for datasource types other than MySQL or Oracle.

      Issues: 733

    • When configuring a single host to support a parallel, multi-channel deployment, tpm (in [Tungsten Replicator 2.2 Manual]) would report that this operation was not supported. tpm (in [Tungsten Replicator 2.2 Manual]) has now been updated to support single host parallel apply configurations.

      Issues: 737

    • Configuring an installation with a preferred path for MySQL deployments using the --preferred-path (in [Tungsten Replicator 2.2 Manual]) option would not set the PATH variable correctly; this could lead to tools from an incorrect directory being used when performing backup or restore operations. tpm (in [Tungsten Replicator 2.2 Manual]) has been updated to correctly set the environment during execution.

      Issues: 752

  • Command-line Tools

    • When using the -sql (in [Tungsten Replicator 2.2 Manual]) option to the thl (in [Tungsten Replicator 2.2 Manual]) command, additional metadata and options would be displayed. The tool has now been updated to output only the corresponding SQL.

      Issues: 264

    • DATETIME values could be displayed incorrectly in the THL when using the thl (in [Tungsten Replicator 2.2 Manual]) tool to show log contents.

      Issues: 676

    • An incorrect RMI port could be used within a deployment if a non-standard RMI port was specified during installation, affecting the operation of trepctl (in [Tungsten Replicator 2.2 Manual]). The precedence for selecting the RMI port has been updated to use the -port (in [Tungsten Replicator 2.2 Manual]) option first, then the system property, and then the service properties for the selected service and/or the trepctl (in [Tungsten Replicator 2.2 Manual]) executable.

      Issues: 695
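
      As a rough illustration of the precedence described above (the -port option first, then the system property, then the service properties), the helper below resolves a port from three optional sources in that order; the argument names and the default value are illustrative only:

        def resolve_rmi_port(cli_port=None, system_property=None,
                             service_property=None, default=10000):
            """Pick the RMI port using the precedence described above:
            explicit command-line option, then the system property, then the
            service properties, then an (illustrative) default."""
            for candidate in (cli_port, system_property, service_property):
                if candidate is not None:
                    return int(candidate)
            return default

        # Example: the command-line option wins over the other sources.
        print(resolve_rmi_port(cli_port=11000, system_property=10001))   # 11000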

  • Backup and Restore

    • During installation, tpm (in [Tungsten Replicator 2.2 Manual]) would fail to check the version for Percona XtraBackup when working with built-in InnoDB support in MySQL. The check has now been updated and validation will fail if XtraBackup 2.1 or later is used with MySQL 5.1 and built-in InnoDB support.

      Issues: 671

    • When using xtrabackup during a restore operation, the restore would fail. The problem was due to a difference in the interface for XtraBackup 2.1.6.

      Issues: 778

  • Oracle Replication

    • When performing an Oracle deployment, tpm (in [Tungsten Replicator 2.2 Manual]) would apply incorrect parameters and filters and check MySQL specific environment information. The following changes have been made:

      • The colnames (in [Tungsten Replicator 6.0 Manual]) filter is no longer added to Oracle master (extractor) deployments.

      • Incorrect schema value would be defined for the replicator schema.

      The check for mysqldump is still performed on an Oracle master host; use --preferred-path (in [Tungsten Replicator 2.2 Manual]) to set a valid location, or disable the MySQLDumpCheck validation check.

      Issues: 685

  • Core Replicator

    • DECIMAL values could be extracted from the MySQL binary log incorrectly when using statement based logging.

      Issues: 650

    • A null pointer exception could be raised by the master, which would lead to the slave failing to connect to the master correctly. The slave will now retry the connection.

      Issues: 698

    • A slave replicator could fail when synchronizing the THL if the master goes offline. This was due to network interrupts during a failure not being recognised properly.

      Issues: 714

    • In certain circumstances, a replicator could apply transactions that had been generated by itself. This could happen during a failover, leading to events written to the THL, but without the trep_commit_seqno (in [Tungsten Replicator 6.0 Manual]) table having been updated. To fix this problem, consistency checks on the THL contents are now performed during startup. In addition, all replicators now write their currently assigned role to a file within the configuration directory of the running replication service, called static-servicename.role.

      When the replicator goes online, the static-servicename.role file is examined. If the role identified in that file was master, and the current role of the replicator is slave, then the THL consistency checks are enabled. These checks cover the following situations:

      • If the trep_commit_seqno (in [Tungsten Replicator 6.0 Manual]) table is out of sync with the contents of the THL, provided that the last THL record exists and matches the source-id of the transaction.

      • If the current log position is different to the THL position, and assuming that THL position exists, then an error will be raised and the replicator will go offline. This behavior can be overridden by using the trepctl online -force (in [Tungsten Replicator 2.2 Manual]) command.

      Once the checks have been completed, the new role for the replicator is updated in the static-servicename.role file.

      Important

      The static-servicename.role file must be deleted, or the THL files must be deleted, when restoring a backup. This is to ensure that the correct current log position is identified.

      Issues: 735
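
      The following sketch (Python, purely illustrative; the replicator itself is Java and its file handling may differ) restates the behaviour described above: the previously recorded role is read from the static-servicename.role file, the THL consistency checks run only for a master-to-slave transition, and the file is rewritten with the newly assigned role once the checks complete:

        import os

        def go_online(conf_dir, service, current_role, run_thl_consistency_checks):
            """Illustrative restatement of the role-file logic described above."""
            role_file = os.path.join(conf_dir, "static-%s.role" % service)

            previous_role = None
            if os.path.exists(role_file):
                with open(role_file) as f:
                    previous_role = f.read().strip()

            # Checks are only needed when a former master comes back as a slave,
            # since its THL may contain events never recorded in trep_commit_seqno.
            if previous_role == "master" and current_role == "slave":
                run_thl_consistency_checks()

            # Record the role now assigned to this replicator.
            with open(role_file, "w") as f:
                f.write(current_role)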

    • An UnsupportedEncodingException error could occur when extracting statement based replication events if the MySQL character set did not match a valid Java character set used by the replicator.

      Issues: 743

    • When using row-based replication, replicating into a table that did not exist on the slave would raise a Null-Pointer Exception. The replicator now correctly raises an SQL error indicating that the table does not exist.

      Issues: 747

    • During a master failure under load, not all transactions would make it to the slave before the master replicator failed.

      Issues: 753

    • Upgrading a replicator and changing the hostname could cause the replicator to skip events in the THL. This was due to the way in which the slave replicator compared the source-id of events against the remote THL read from the master. This particularly affected standalone replicators. The fix adds a new property, replicator.repositionOnSourceIdChange. This is a boolean value, and specifies whether the replicator should try to reposition to the correct location in the THL when the source ID has been modified.

      Issues: 754
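
      In rough pseudologic only (the reposition() callback and argument names are hypothetical), the property gates whether a repositioning attempt is made when the source ID changes:

        def on_source_id_change(stored_source_id, master_source_id,
                                reposition_on_change, reposition):
            """Illustrative decision for replicator.repositionOnSourceIdChange:
            when the source ID has changed and the property is enabled, attempt
            to reposition to the correct THL location; otherwise do nothing here."""
            if stored_source_id != master_source_id and reposition_on_change:
                reposition()      # hypothetical callback locating the correct THL position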

    • Running trepctl reset (in [Tungsten Replicator 2.2 Manual]) on a service deployed in a multi-master (all-master) configuration would not correctly remove the schema from the database.

      Issues: 758

    • Replication of temporary tables with the same name, but within different sessions, would cause a conflict on the slave.

      Issues: 772

  • Filters

    • The pkey (in [Tungsten Replicator 6.0 Manual]) filter would not renew connections to the master to determine the primary key information. When replication had been running for a long time, the active connection would be dropped, but never renewed. The filter has been updated to re-connect on failure.

      Issues: 670

      For more information, see PrimaryKey Filter (in [Tungsten Replicator 2.2 Manual]).
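
      A minimal sketch of the reconnect-on-failure behaviour described above, assuming a hypothetical connect() factory returning a DB-API connection; it is not the filter's actual implementation, which lives inside the replicator's Java filter framework:

        class ReconnectingMetadataLookup:
            """Re-establish the metadata connection if it has gone stale,
            mirroring the behaviour described for the primary-key filter."""

            def __init__(self, connect):
                self.connect = connect        # hypothetical factory returning a DB connection
                self.conn = connect()

            def primary_key_columns(self, schema, table):
                try:
                    return self._query(schema, table)
                except Exception:
                    # The long-lived connection was dropped (for example by the
                    # server's wait_timeout); open a fresh one and retry once.
                    self.conn = self.connect()
                    return self._query(schema, table)

            def _query(self, schema, table):
                cur = self.conn.cursor()
                cur.execute(
                    "SELECT column_name FROM information_schema.key_column_usage "
                    "WHERE constraint_name = 'PRIMARY' "
                    "AND table_schema = %s AND table_name = %s",
                    (schema, table))
                return [row[0] for row in cur.fetchall()]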

3.16. Continuent Tungsten 1.5.4 GA (Not yet released)

Continuent Tungsten 1.5.4 is a maintenance release that adds important bug fixes to the Tungsten 1.5.3 release currently in use by most Tungsten customers. It contains the following key improvements:

  • Introduces quorum into Tungsten clusters to help avoid split brain problems due to network partitions. Cluster members vote whenever a node becomes unresponsive and only continue operating if they are in the majority. This feature greatly reduces the chances of multiple live masters.

  • Enables automatic restart of managers after network hangs that disrupt communications between managers. This feature enables clusters to ride out transient problems with physical hosts such as storage becoming inaccessible or high CPU usage that would otherwise cause cluster members to lose contact with each other, thereby causing application outages or manager non-responsiveness.

  • Adds "witness-only managers" which replace the previous witness hosts. Witness-only managers participate in quorum computation but do not manage a DBMS. This feature allows 2 node clusters to operate reliably across Amazon availability zones and any environment where managers run on separate networks.

  • Numerous minor improvements to cluster configuration files to eliminate and/or document product settings for simpler and more reliable operation.

Continuent recommends that customers who are awaiting specific fixes for the 1.5.3 release consider upgrading to Continuent Tungsten 1.5.4 as soon as it is generally available. All other customers should consider upgrading to Continuent Tungsten 2.0.1 as soon as it is convenient. In addition, we recommend that all new projects start out with version 2.0.1.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environment which make use of these tools should check and update for the new configuration:

  • Failover could be rolled back because of a failure to release a Virtual IP. The failure has been updated to trigger a warning, not a rollback of failover.

    Issues: TUC-1666

  • An 'UnknownHostException' would cause a failover. The behavior has been updated to result in a suspect DB server.

    Issues: TUC-1667

  • Failover does not occur if the manager is not running on the master host before the time that the database server is stopped.

    Issues: TUC-1900

Improvements, new features and functionality

  • Installation and Deployment

    • tpm should validate connector defaults that would fail an upgrade.

      Issues: TUC-1850

    • Improve tpm error message when running from wrong directory.

      Issues: TUC-1853

  • Tungsten Connector

    • Add support for MySQL cursors in the connector.

      Issues: TUC-1411

    • Connector must forbid zero keepAliveTimeout.

      Issues: TUC-1714

    • In SOR deployments only, Connector logs show relay data service being added twice.

      Issues: TUC-1720

    • Change default delayBeforeOfflineIfNoManager router property to 30s and constrain it to max 60s in the code.

      Issues: TUC-1752

    • Router Manager connection timeout should be a property.

      Issues: TUC-1754

    • Reject server versions that don't start with a number.

      Issues: TUC-1776

    • Add client IP and port when logging connector message.

      Issues: TUC-1810

    • Make tungsten cluster status more sql-like and reduce the amount of information displayed.

      Issues: TUC-1814

    • Allow connections without a schema name.

      Issues: TUC-1829

  • Other Issues

    • Remove old/extra/redundant configuration files.

      Issues: TUC-1721

Bug Fixes

  • Installation and Deployment

    • Within tpm the witness host was previously required and was not validated

      Issues: TUC-1733

    • Ruby tests should abort if installation fails

      Issues: TUC-1736

    • Test witness hosts on startup of the manager and have the manager exit if there are any invalid witness hosts.

      Issues: TUC-1773

    • Installation fails with Ruby 1.9.

      Issues: TUC-1800

    • When using tpm to start from a specific event, the correct directory would not be used for the selected method.

      Issues: TUC-1824

    • When specifying a witness host check with tpm, the check works for IP addresses but fails when using host names.

      Issues: TUC-1833

    • Cluster members do not reliably form a group following installation.

      Issues: TUC-1852

    • Installation fails with Ruby 1.9.1.

      Issues: TUC-1868

  • Command-line Tools

    • Nagios check scripts not picking up shunned datasources

      Issues: TUC-1689

  • Cookbook Utility

    • Cookbook should not specify witness hosts in default configuration files etc.

      Issues: TUC-1734

  • Backup and Restore

    • Restore with xtrabackup empties the data directory and then fails.

      Issues: TUC-1849

    • A recovered datasource does not always come online when in automatic policy mode

      Issues: TUC-1851

    • Restore on datasource in slave dataservice fails to reload.

      Issues: TUC-1856

    • After a restore, datasource is welcomed and put online, but never gets to the online state.

      Issues: TUC-1861

    • A restore that occurs immediately after a recover from dataserver failure always fails.

      Issues: TUC-1870

  • Core Replicator

    • LOAD (LOCAL) DATA INFILE would fail if the request starts with white spaces.

      Issues: TUC-1639

    • Null values are not correctly handled in keys for row events

      Issues: TUC-1823

  • Tungsten Connector

    • Connector fails to send back full result of stored procedure called by prepared statement (pass through mode on).

      Issues: TUC-36

    • Router gateway can prevent manager startup if the connector is started before the manager

      Issues: TUC-850

    • The Tungsten show processlist command would throw NPE errors.

      Issues: TUC-1136

    • The default SQL Router properties use the wrong load balancer

      Issues: TUC-1437

    • Router must go into fail-safe mode if it loses connectivity to a manager during a critical command.

      Issues: TUC-1549

    • When in a SOR deployment, the Connector will never return connection requests with RO_RELAXED and affinity set to relay node only site.

      Issues: TUC-1620

    • Affinity not honored when using direct connections.

      Issues: TUC-1628

    • An attempt to load a driver listener class can cause the connector to hang, at startup.

      Issues: TUC-1669

    • Broken connections returned to the c3p0 pool - further use of these will show errors.

      Issues: TUC-1683

    • The connector tungsten flush privileges command causes a temporary outage (denies new connection requests).

      Issues: TUC-1730

    • Connector should require a valid manager to operate even when in maintenance mode.

      Issues: TUC-1781

    • Session variables support for row replication

      Issues: TUC-1784

    • Connector allows connections to an offline/on-hold composite dataservice.

      Issues: TUC-1787

    • Router notifications are being sent to routers via GCS. This is unnecessary since a manager only updates routers that are connected to it.

      Issues: TUC-1790

    • Pass through not handling correctly multiple results in 1.5.4.

      Issues: TUC-1792

    • SmartScale would fail when creating a database and using it immediately.

      Issues: TUC-1836

  • Tungsten Manager

    • A manager that cannot see itself as a part of a group should fail safe and restart

      Issues: TUC-1722

    • Retry of tests for networking failure does not work in the manager/rules

      Issues: TUC-1723

    • The 'vip check' command produces a scary message in the manager log if a VIP is not defined

      Issues: TUC-1772

    • Restored Slave did not change to correct master

      Issues: TUC-1794

    • If a manager leaves a group due to a brief outage, and does not restart, it remains stranded from the rest of the group but 'thinks' it's still a part of the group. This contributed to the main cause of hanging/restarts during operations.

      Issues: TUC-1830

    • Failover of relay aborts when relay host reboots, leaving data sources of slave service in shunned or offline state.

      Issues: TUC-1832

    • The recover command completes but cannot welcome the datasource, leading to a failure in tests.

      Issues: TUC-1837

    • After failover on primary master, relay datasource points to wrong master and has invalid role.

      Issues: TUC-1858

    • A stopped dataserver would not be detected if cluster was in maintenance mode when it was stopped.

      Issues: TUC-1860

    • Manager attempts to get status of remote replicator from the local service - causes a failure to catch up from a relay.

      Issues: TUC-1864

    • Using the recover using command could result in more than one service within a composite service having a master; if this happens, the composite service will have two masters.

      Issues: TUC-1874

    • Using the recover using command, the operation recovers a datasource to a master when it should recover it to a relay.

      Issues: TUC-1882

    • ClusterManagementHandler can read/write datasources directly from the local disk - can cause cluster configuration information corruption.

      Issues: TUC-1899

  • Platform Specific Deployments

    • FreeBSD: Replicator hangs when going offline. Can cause switch to hang/abort.

      Issues: TUC-1668