1.29. Tungsten Clustering 6.0.3 GA (5 September 2018)

Version End of Life. 31 July 2020

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Installation and Deployment

    • tpm now outputs a note recommending that you perform a backup of your cluster once installation has completed.

      Issues: CT-730

  • Command-line Tools

    • The tungsten_prep_upgrade command has been updated to support an explicit host definition for the MySQL host instead of defaulting to localhost (127.0.0.1). Use the --host option.

      Issues: CT-656

    • A new Nagios-compatible check script, check_tungsten_policy, has been added to the release; it reports the currently active policy mode.

      Issues: CT-675

      For more information, see The check_tungsten_policy Command.
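
      Like the other check_tungsten_* scripts, the new check can be registered with Nagios through a standard command definition. A minimal sketch follows; the installation path shown is an assumption, not taken from these release notes, so adjust it to your deployment:

      # Hypothetical Nagios commands.cfg entry; the path below is an assumed
      # default installation location for the check script
      define command {
          command_name  check_tungsten_policy
          command_line  /opt/continuent/tungsten/cluster-home/bin/check_tungsten_policy
      }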

  • Tungsten Connector

    • When receiving an error within MySQLPacket, the Connector now prints out the full content of the underlying error message.

      Issues: CT-636

    • The connector has been updated so that dataservice selection can be configured to be deterministic and ordered rather than random. The updated configuration enables the connector to use an ordered list of clusters within a composite solution.

      To set the order in which services are selected during operation, the information must be set within the user.map. The configuration is an ordered, comma-separated list of services, which are then selected in order. The specification operates on the following rules:

      • List of service names in order

      • If a service name has a dash (-) prefix, it is always explicitly excluded from the list of available datasources

      • If a service is not included in the list, its datasources will always be picked last

      For example, in a setup made of three data services, usa, asia and europe, the affinity usa,asia,-europe selects data sources in data service usa first; if usa is not available, in asia. If asia is also not available, the connection will not succeed, since europe has been explicitly excluded.

      Issues: CT-650
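
      As a sketch, a user.map entry carrying the ordered affinity list might look like the following; the user name, password, and composite service name here are placeholder assumptions:

      # Hypothetical user.map line: connect app_user to the composite service,
      # preferring usa, then asia; europe is excluded by the dash prefix
      app_user secret global usa,asia,-europe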

  • Tungsten Manager

    • The router gateway, which provides communication between the manager and connector, could shut down even when quorum was available in a two-node cluster.

      Issues: CT-676

Bug Fixes

  • Installation and Deployment

    • tpm would fail during installation if the current directory was not writable by the current user.

      Issues: CT-564

    • Composite Active/Active cluster installations would fail if the hostname contained two or more hyphens or periods.

      Issues: CT-682, CT-695

    • tpm would fail to set properties within the defaults section of the configuration within Composite Active/Active clusters.

      Issues: CT-683

  • Command-line Tools

    • The tpm diag command would ignore options provided on the command line, including --net-ssh-option.

      Issues: CT-610

    • Running tpm connector at the command line would fail if the core MySQL configuration file (e.g. /etc/my.cnf) did not exist.

      Issues: CT-641

  • Tungsten Connector

    • The connector would fail to set reusable network addresses during configuration, which could delay or slow startup until the address/port became available again.

      Issues: CT-694

    • When operating in bridge mode, the connector would fail to check whether the driver was in enabled/disabled mode, which could cause upgrades to fail as part of a graceful shutdown/update operation.

      Issues: CT-696

    • Multiple connectors within a cluster could all connect to the same manager within a given service, increasing the load on the single manager.

      Issues: CT-717

    • The Tungsten Connector could mistakenly get the Primary data source of the wrong data service within Composite Active/Active deployments during configuration.

      Issues: CT-719

  • Tungsten Manager

    • Performing a switch operation within a Composite Active/Active cluster with three or more clusters, when the cluster was in MAINTENANCE mode and the cross-site replicators were offline, would lead to an unrecoverable cluster failure.

      Issues: CT-589

    • During a switch operation on a Composite Active/Active cluster that had been put into maintenance mode, the manager would incorrectly put the cross-site replicators back into the online state.

      Issues: CT-591

    • The connector --cluster-status --json command would output header and footer information in addition to the bare JSON, which would then cause JSON parsing to fail.

      Issues: CT-685

    • A memory leak within the manager, particularly in Composite Active/Active deployments, could cause the Java VM to consume more and more CPU cycles and then restart.

      Issues: CT-673, CT-691

    • During a relay failover within a Composite Active/Passive or Composite Active/Active deployment, if communications had also failed between the sites when the failover occurred, the manager would be unable to determine the correct Primary of the remote site.

      Issues: CT-703

    • Within Composite Active/Active deployments, during a cascading MySQL failure and switch operation across sites, the secondary site could misconfigure the cross-site relay.

      Issues: CT-713

    • A memory leak was identified in the router manager component that manages the communication between the manager and the connector.

      Issues: CT-715

    • In a deployment, whether a single cluster or Composite Active/Active, where there is either the potential for high latency across sites, or high latency within a site due to high load on the connectors, the manager could misidentify this high latency as a failure. This would trigger a quorum validation. These events would be reported as network hangs, even though the result of the quorum check would be valid.

      To address this, the processing of router notifications by the connector has been separated from all other operations. This reduces the chance of a heartbeat gap between hosts, so the connectors remain available to the managers even under high load or latency.

      Issues: CT-725

Tungsten Clustering 6.0.3 includes the following changes made in Tungsten Replicator 6.0.3

Release 6.0.3 is a bugfix release.

Improvements, new features and functionality

  • Core Replicator

    • The output from thl list now includes the name of the file for the corresponding THL event. For example:

      SEQ# = 0 / FRAG# = 0 (last frag)
      - FILE = thl.data.0000000001
      - TIME = 2018-08-29 12:40:57.0
      - EPOCH# = 0
      - EVENTID = mysql-bin.000050:0000000000000508;-1
      - SOURCEID = demo-c11
      - METADATA = [mysql_server_id=5;dbms_type=mysql;tz_aware=true;is_metadata=true;service=alpha;shard=tungsten_alpha;heartbeat=MASTER_ONLINE]
      - TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
      - OPTIONS = [foreign_key_checks = 1, unique_checks = 1, time_zone = '+00:00', ##charset = US-ASCII]

      Issues: CT-550

    • The replicator has been updated to support the new character sets supported by MySQL 5.7 and MySQL 8.0, including the utf8mb4 series.

      Issues: CT-700, CT-970

Bug Fixes

  • Installation and Deployment

    • During installation, tpm attempts to find the system commands, such as service and systemctl, used to start and stop databases. If these were not in the PATH, tpm would fail to find a start/stop mechanism for the configured database. In addition to looking for these tools in the PATH, tpm now also explicitly looks in the /sbin, /bin, /usr/bin and /usr/sbin directories.

      Issues: CT-722

  • Command-line Tools

    • The tpm diag command would ignore options provided on the command line, including --net-ssh-option.

      Issues: CT-610

    • When running tpm diag, the operation would fail if the /etc/mysql directory did not exist.

      Issues: CT-724

    • Because the operation could take a long time or time out, the capture of lsof output has been removed from tpm diag.

      Issues: CT-731

  • Core Replicator

    • LOAD DATA INFILE statements would fail to be executed and replicated properly.

      Issues: CT-10, CT-652

    • The trepsvc.log displayed information without identifying the individual service reporting each entry, making it difficult to distinguish individual log entries.

      Issues: CT-659

    • When replicating data that included timestamps, the replicator would update the timestamp value using the commit time from the incoming THL. With statement-based replication, times were replicated correctly, but with a mixture of statement- and row-based replication, the timestamp value was not reset to the default when switching between statement- and row-based events. This caused no problems on the applied host, except when log_slave_updates was enabled; in that case, all row-based events after a statement-based event would have the same timestamp value applied.

      Issues: CT-686