B.5. Continuent Tungsten 2.0.1 GA (3 January 2014)

Important

The final approved build for Continuent Tungsten 2.0.1 is build 1003. It includes the full set of features and functionality, as well as a number of key fixes that are not present in earlier builds of the same release. In particular, updated support for passive witnesses was not available in earlier builds.

Continuent Tungsten 2.0.1 is the first generally available release of Continuent Tungsten 2.0, which offers major improvements to Continuent's industry-leading database-as-a-service offering. Continuent Tungsten 2.0.1 contains all the improvements incorporated in Version 1.5.4, and the fixes and new features included within Tungsten Replicator 2.2.0, as well as the following features:

  • Cluster Management

    • An improved manager that simplifies recovery of your cluster.

    • New tools that simplify provisioning and the recovery of replication issues.

    • An improved witness host and decision engine that provide better quorum, preventing split-brain situations and multiple live masters.

    • SSL-based encryption and authentication for cluster management through all command-line tools.

  • Connector

    • SSL support for both SSL and non-SSL clients, and for SSL and non-SSL connectivity between the connector and database servers.

    • Support for setting the maximum latency for slaves when redirecting queries.

  • Installation and Deployment

    • Improved tpm installation tool that eases deployment and configuration of all clusters, including multi-master and multi-site/multi-master.

    • INI-file-based installation through tpm, which enables easier installation, including through Puppet and other script-based solutions.
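
      A minimal sketch of an INI-based configuration (the host names, service name alpha, and credentials below are illustrative examples, not values from this release):

        # /etc/tungsten/tungsten.ini -- example path; all values are illustrative
        [defaults]
        application-user=app_user
        application-password=secret
        replication-user=tungsten
        replication-password=secret

        [alpha]
        topology=clustered
        master=host1
        members=host1,host2,host3
        connectors=host1,host2,host3

      With a file like this in place, tpm reads the configuration during installation instead of requiring the options on the command line.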

  • Core Replication

    • Includes all Tungsten Replicator 2.2.0 features, including low-impact, low-latency replication and advanced filtering.

    • Supports MySQL (5.0, 5.1, 5.5, 5.6), MariaDB (5.5) and Percona Server (5.5).

    • Supports replication to and from MySQL and Oracle, and Oracle to Oracle.

    • Data loading to Vertica and data warehouses, and real-time publishing to MongoDB.

    • SSL-based encryption for exchanging replication data.

Behavior Changes

The following changes have been made to Continuent Tungsten and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:

  • When using the xtrabackup method for performing backups, the default is to use the xtrabackup-full operation to perform a full backup.

    Issues: TUC-1327
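
    For example, both of the following cctrl commands now result in a full backup (a sketch; host2 is an example hostname):

      cctrl> datasource host2 backup xtrabackup
      cctrl> datasource host2 backup xtrabackup-full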

  • The default load balancer used for load-balancing connections within the Connector has been updated to use the RO_RELAXED QoS balancer. This takes into account the HighWater mark when redirecting queries, comparing the applied sequence number rather than relying only on latency.

    Issues: TUC-1589

  • The previous strategy for preventing split-brain by using a witness host was not workable for many customers. The witness host configuration and checks have been changed to prevent these problems.

    Issues: TUC-1650

  • Failover could be rolled back because of a failure to release a Virtual IP. Such a failure now triggers a warning instead of a rollback of the failover.

    Issues: TUC-1666

  • An 'UnknownHostException' would cause a failover. The behavior has been updated so that the DB server is instead marked as suspect.

    Issues: TUC-1667

  • A new type of witness host has been added. The new active witness supports a manager-only installation and is able to take part in decisions about failure in the event of datasource and/or network connectivity issues.

    As a result, the following changes apply for all witness host selection and installation:

    • Witnesses must be on the same network subnet as the existing managers.

    • Dataservices must have at least three managers to provide status checks during failure.

    • Active witnesses can be created; these install only the manager on the target hosts, which act as witnesses by checking network connectivity to the dataservers and connectors configured within the service.

    Issues: TUC-1854

    For more information, see Section 2.1, “Host Types”.
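
    As a sketch, an active witness can be requested during configuration using the options described elsewhere in these notes (the host names and the service name alpha are examples):

      # 'alpha' and hosts host1..host4 are illustrative
      shell> tpm configure alpha \
          --enable-active-witnesses=true \
          --dataservice-witnesses=host4 \
          --members=host1,host2,host3,host4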

  • Failover does not occur if the manager on the master host is not running before the database server is stopped.

    Issues: TUC-1900

  • Read-only MySQL slaves no longer work.

    Issues: TUC-1903

Improvements, new features and functionality

  • Installation and Deployment

    • tpm has been updated to support configuration of the maximum applied latency for the connector using either the --connector-max-slave-latency or --connector-max-applied-latency options.

      Issues: TUC-733
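
      For example (a sketch; the service name alpha and the 30-second value are illustrative):

        # 'alpha' and the value 30 are examples
        shell> tpm configure alpha --connector-max-applied-latency=30
        shell> tpm update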

    • The installer now provides a way to set up RO_RELAXED (read-only with no SQL checking) connectors.

      Issues: TUC-954

    • The post-installation notes now specify the hosts that can run cctrl.

      Issues: TUC-1118

    • A tpm cook command has been added that wraps the tungsten-cookbook script.

      Issues: TUC-1182

    • The tpm validation has been updated to provide warnings when the sync_binlog and innodb_flush_log_at_trx_commit MySQL options are set incorrectly.

      Issues: TUC-1656

    • A new tpm command has been added to list different connector connection commands/syntax.

      Issues: TUC-1661

    • A default path for the security files has been added, to facilitate their retrieval.

      Issues: TUC-1676

    • A --dataservice-witnesses value of "none" is now supported.

      Issues: TUC-1715

    • The tpm command is no longer accessible on installed data sources.

      Issues: TUC-1717

    • tpm configuration is now compatible with Puppet, Chef, and similar tools.

      Issues: TUC-1735

    • The auto-generated properties line is now placed at the top of the files.

      Issues: TUC-1739

    • A tpm switch has been added for the rrIncludeMaster router property.

      Issues: TUC-1744

    • During installation, the security.access_file.location property is now changed to security.rmi.jmxremote.access_file.location.

      Issues: TUC-1805

    • The cross-machine checks have been split out of MySQLPermissionsCheck.

      Issues: TUC-1838

    • The installation of Multi-Site Multi-Master deployments has been simplified.

      Issues: TUC-1923

      For more information, see Section 3.2, “Deploying Multisite/Multimaster Clusters”.

  • Command-line Tools

    • A completion script for command-line completion within bash has been added to the installation. The file is located in tools/.tpm.complete within the installation directory.

      Issues: TUC-1591
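
      To enable completion in the current shell (a sketch; /opt/continuent is an example installation directory):

        # installation directory is an example
        shell> source /opt/continuent/tungsten/tools/.tpm.complete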

    • Scripts have been added to coordinate backups across an entire cluster.

      Issues: TUC-1641

    • cctrl no longer reports that recover is an expert command.

      Issues: TUC-1839

    • An option, -a (--authenticate), has been added to the tpasswd utility to validate an existing password entry.

      Issues: TUC-1916

  • Cookbook Utility

    • The Tungsten cookbook now runs manager|replicator|connector dump before collecting logs.

      Issues: TUC-1660

    • Cookbook has been updated to support both active and passive witnesses.

      Issues: TUC-1942

    • Cookbook has been updated to allow backups from masters to be used.

      Issues: TUC-1943

  • Backup and Restore

    • The datasource_backup.sh script has been updated so that it runs only on the COORDINATOR and finds a non-MASTER datasource.

      Issues: TUC-1684

  • MySQL Replication

    • Support for MySQL 5.6 has been added.

      Issues: TUC-1624

  • Tungsten Connector

    • Support for MySQL 4.0 passwords within the connector has been included. This provides support for both old MySQL versions and older versions of the MySQL protocol used by some libraries and clients.

      Issues: TUC-784

    • The Connector now forbids a zero keepAliveTimeout.

      Issues: TUC-1714

    • In SOR deployments, the Connector logs no longer show the relay data service being added twice.

      Issues: TUC-1720

    • The default for the delayBeforeOfflineIfNoManager router property has been changed to 30s, and the value is constrained to a maximum of 60s in the code.

      Issues: TUC-1752

    • The Router Manager connection timeout is now a configurable property.

      Issues: TUC-1754

    • The client IP and port are now included when logging Connector messages.

      Issues: TUC-1810

    • The tungsten cluster status output has been made more SQL-like, and the amount of information displayed has been reduced.

      Issues: TUC-1814

    • Client-side SSL support for MySQL has been added to the Connector.

      Issues: TUC-1825

  • Tungsten Manager

    • cctrl now shows whether a given data source is secured.

      Issues: TUC-1816

    • The datasource hostname recover command no longer invokes the expert warning.

      Issues: TUC-1840

  • Manager API

    • Enabling of the Manager API has been made smarter.

      Issues: TUC-1621

    • Support has been added to specify the addresses for the Manager API to listen on.

      Issues: TUC-1643

    • The Manager API has been updated with a method to list all the available dataservices.

      Issues: TUC-1674

    • DataServiceState and DataSource are now included in the payload when applicable.

      Issues: TUC-1701

    • Classes that handle API calls have been added to the Ruby libraries.

      Issues: TUC-1707

    • An API call has been added that prints the manager live properties.

      Issues: TUC-1713

  • Platform Specific Deployments

    • Java wrapper support has been added for FreeBSD.

      Issues: TUC-1632

    • Fixes for Java sockets and port binding on FreeBSD have been committed.

      Issues: TUC-1633

  • Documentation

    • The prerequisites now document that the Tungsten installers do not support mysqld_multi.

      Issues: TUC-1679

  • Other Issues

    • A tpm test wrapper has been written for the cookbook testing scripts.

      Issues: TUC-1396

    • The process of sending emails based on specific log4j messages has been documented.

      Issues: TUC-1500

    • The check_tungsten.sh script has been updated to check and restart enterprise load balancers that use the xinetd service.

      Issues: TUC-1573

    • Zabbix monitoring has been expanded to match the Nagios checks.

      Issues: TUC-1638

    • The SET NAMES log message has been changed to DEBUG level.

      Issues: TUC-1644

    • Old, extra, and redundant configuration files have been removed.

      Issues: TUC-1721

    • Critical 1.5.4 manager changes have been backported to 2.0.1.

      Issues: TUC-1855

Bug Fixes

  • Installation and Deployment

    • Tungsten cannot install if the mysql client is not in the path.

      Issues: TUC-999

    • An extra -l flag would be added to the configuration when running the sudo command.

      Issues: TUC-1025

    • The installer would not easily work when installing SOR data services one host at a time.

      Issues: TUC-1036

    • tpm did not verify that the permissions for the tungsten DB user allowed cross-database host access.

      Issues: TUC-1146

    • Specifying a Symbolic link for the Connector/J creates a circular reference.

      Issues: TUC-1567

    • Replication of DATETIME values across a Daylight Savings Time (DST) change would produce incorrect values. Installation of a replication service where the Java environment and the MySQL environment have different timezones may cause incorrect replication.

      Issues: 542, TUC-1593

    • The replicator service would not be imported into the cluster directory, causing subsequent failures in switch and other operations.

      Issues: TUC-1594

    • tpm would fail to skip the GlobalHostAddressesCheck when performing a tpm configure followed by tpm validate.

      Issues: TUC-1599

    • tpm does not recognize datasources when they start with a capital letter.

      Issues: TUC-1655

    • Installation of multiple replicators with tpm fails.

      Issues: TUC-1680

    • The Java version check fails when the OpenJDK version string does not say "java".

      Issues: TUC-1681

    • The installer did not make sure that witness servers were in the same network as the cluster.

      Issues: TUC-1705

    • tpm does not install if there is a Tungsten Replicator installer already running.

      Issues: TUC-1712

    • Errors could occur during installation of a composite dataservice.

      Issues: TUC-1726

    • The tpm command returns an ssh error when attempting to install a composite data service.

      Issues: TUC-1727

    • Running tpm with no arguments raises an error.

      Issues: TUC-1788

    • Installation fails with Ruby 1.9.

      Issues: TUC-1800

    • tpm will not throw an error if the user gives the connectorj-path as the path to a symlink instead of a real file.

      Issues: TUC-1815

    • tpm does not check dependencies of security options.

      Issues: TUC-1818

    • When checking process limits during installation, the check would fail the installation process instead of providing a warning.

      Issues: TUC-1822

    • During validation, tpm wrongly complains about a witness not being in the same subnet.

      Issues: TUC-1848

    • During installation, tpm could install SSL support for the connector even though the MySQL server has not been configured for SSL connectivity.

      Issues: TUC-1909

    • Running tpm update would cause the master replicator to become a slave during the update when the master had changed from the configuration applied using --dataservice-master-host.

      Issues: TUC-1921

    • tpm could allow meaningless specifications of active witnesses.

      Issues: TUC-1941

    • tpm has been updated to provide the correct link to the documentation for further information.

      Issues: TUC-1947

    • Performing tpm reset would remove all the files within the cluster-home/conf directories, instead of only the files for services tpm was aware of.

      Issues: TUC-1949

    • tpm would require the --active-witnesses or --enable-active-witnesses option, when other witness types are available for configuration.

      Issues: TUC-1951

    • tpm would apply the same-subnet witness check when using active witnesses, even though active witnesses do not need to be installed on the same subnet.

      Issues: TUC-1953

    • A tpm update operation would not recognize active witnesses properly.

      Issues: TUC-1975

    • A tpm uninstall operation would complain about missing databases in connector tests.

      Issues: TUC-1978

    • tpm would not remove the connector.ro.properties file if the configuration is updated to not have --application-readonly-port.

      Issues: TUC-1981

    • tpm would enable installation when MariaDB 10.0 was installed, even though this is not a supported configuration.

      Issues: TUC-1987

    • The method used to compare whether hosts were on the same subnet would fail to identify hosts correctly.

      Issues: TUC-1995

  • Command-line Tools

    • Running cctrl on a host which only had the connector server would not report a useful error. This has now been updated to show a warning message.

      Issues: TUC-1642

    • The check_tungsten command had different command line arguments from check_tungsten.sh.

      Issues: TUC-1675

    • The Nagios check scripts would not pick up shunned datasources.

      Issues: TUC-1689

    • cctrl could output the status of a host with a null value in place of the correct hostname.

      Issues: TUC-1893

    • Using the recover datasource command within a composite service would fail, even though datasource recover would work.

      Issues: TUC-1912

    • The check_tungsten_latency --perslave-perfdata option would not include information for relay hosts.

      Issues: TUC-1915

    • A large error message could be found included within the status block of ls output within cctrl. The error message information has been redirected to the error log.

      Issues: TUC-1931

    • Performing switch operations within a composite service using active witnesses could raise an error and fail.

      Issues: TUC-1946

    • cctrl would be unable to create a composite datasource after dropping it.

      Issues: TUC-1956

    • Backwards compatibility for the recover using command has been incorporated.

      Issues: TUC-1971

  • Cookbook Utility

    • The tungsten-cookbook tests fail and do not print the current status.

      Issues: TUC-1623

    • The tungsten-cookbook uses resolveip instead of standard name resolution tools.

      Issues: TUC-1646

    • The tungsten-cookbook tool sometimes misunderstands the result of composite recovery.

      Issues: TUC-1662

    • Cookbook gets warnings when used with a MySQL 5.6 client.

      Issues: TUC-1673

    • The cookbook does not properly wait for a database server to go offline.

      Issues: TUC-1685

    • tungsten-cookbook does not check the status of the relay server after a composite recovery.

      Issues: TUC-1695

    • tungsten-cookbook does not check all the components of a datasource when testing a server.

      Issues: TUC-1696

    • tungsten-cookbook does not collect the configuration files under cluster-home.

      Issues: TUC-1697

    • The cookbook no longer specifies witness hosts in default configuration files.

      Issues: TUC-1734

    • Tungsten cookbook fails the replicator test.

      Issues: TUC-1827

    • Using a backup that has been copied across servers within the cookbook could overwrite or replace existing backup files, which could make the backup file appear older than it should be, making it unavailable in restore operations.

      Issues: TUC-1936

  • Backup and Restore

    • The mysqldump backup option cannot restore if slow_query_log was on during the backup process.

      Issues: TUC-586

    • Using xtrabackup during restore fails if MySQL is running as user 'anything-but-mysql' and without root access.

      Issues: TUC-1005

    • When using mysqldump restore, the operation failed to disable slow and general logging before applying the restore.

      Issues: TUC-1330

    • Backup fails when using the xtrabackup-full agent.

      Issues: TUC-1612

    • Recovery hangs with composite data service.

      Issues: TUC-1657

    • Performing a restore with xtrabackup fails.

      Issues: TUC-1672

    • The datasource backup operation could fail due to a Ruby error.

      Issues: TUC-1686

    • Restore with xtrabackup fails.

      Issues: TUC-1716

    • Issues when recovering a failed physical dataservice.

      Issues: TUC-1793

    • Backup with xtrabackup fails if datadir is not defined in my.cnf.

      Issues: TUC-1821

    • When using xtrabackup, restore fails.

      Issues: TUC-1846

    • After a restore, datasource is welcomed and put online, but never gets to the online state.

      Issues: TUC-1861

    • A restore that occurs immediately after a recover from dataserver failure always fails.

      Issues: TUC-1870

    • A master datasource backup generates a superficial failure message but succeeds anyway.

      Issues: TUC-1896

    • Restoration of a full backup would fail due to the inclusion of the xtrabackup_incremental_basedir directory.

      Issues: TUC-1919

    • Backup using xtrabackup 1.6.5 would fail.

      Issues: TUC-1920

    • When using the backup files copied from another server, the replicator could mistakenly use the wrong backup files when performing a restore.

      Issues: TUC-1948

  • Core Replicator

    • Master failure causes partial commits on the slave with single channel parallel apply.

      Issues: TUC-1625

    • The slave applier could fail to log an error when the DBMS failed, due to an exception in cleanup.

      Issues: TUC-1626

    • Replication would fail on slave due to null characters created when inserting ___SERVICE___ comments.

      Issues: TUC-1627

    • LOAD (LOCAL) DATA INFILE would fail if the request started with whitespace.

      Issues: TUC-1639

    • A datasource with a replicator in the GOING-ONLINE:RESTORING state shows up with a replicator state of UNKNOWN.

      Issues: TUC-1658

    • An insecure slave could replicate from a secure master.

      Issues: TUC-1677

    • Replicator does not drop client connection to master and reconnect within the same time frame as in previous releases.

      Issues: TUC-1688

  • Filters

    • The primary key filter has been updated so that it can renew its internal connection after a timeout.

      Issues: TUC-1803

  • Tungsten Connector

    • The TSR session was not updated when the database name changed (with sessionId set to DATABASE).

      Issues: TUC-761

    • The router gateway could prevent manager startup if the connector was started before the manager.

      Issues: TUC-850

    • The Tungsten show processlist command would throw NPE errors.

      Issues: TUC-1136

    • Selective read/write splitting (SQL-Based routing) has been updated to ensure that it is backwards compatible with previous read/write splitting configurations.

      Issues: TUC-1489

    • The router now goes into fail-safe mode if it loses connectivity to a manager during a critical command.

      Issues: TUC-1549

    • The SET NAMES command was not forwarded to attached read-only connections.

      Issues: TUC-1569

    • When using haproxy through a connector connection, the initial query would be rejected.

      Issues: TUC-1581

    • When the dataservices.properties file is empty, the connector would hang. The operation has now been updated to exit with an exception if the file cannot be found.

      Issues: TUC-1586

    • In a SOR deployment, the Connector would never return connection requests with RO_RELAXED and affinity set to a relay-node-only site.

      Issues: TUC-1620

    • Affinity was not honored when using direct connections.

      Issues: TUC-1628

    • Connector queries for SHOW SLAVE STATUS return incorrect slave latency of 0 intermittently.

      Issues: TUC-1645

    • The Tungsten Connector does not know its PID following the upgrade to JSW 3.5.17.

      Issues: TUC-1665

    • An attempt to load a driver listener class could cause the connector to hang at startup.

      Issues: TUC-1669

    • Read connections allocated by the connector could become stale and be closed by the MySQL server due to wait_timeout, causing application transparency issues.

      Issues: TUC-1671

    • Broken connections could be returned to the c3p0 pool; further use of these connections would show errors.

      Issues: TUC-1683

    • The router could disconnect from a manager in the middle of a switch command, allowing writes to continue to the offline master.

      Issues: TUC-1692

    • A connector sessionId passed in the database name was not retained.

      Issues: TUC-1704

    • A USE DB statement issued through the connector after the database had previously been dropped would be incorrectly ignored.

      Issues: TUC-1718

    • The connector tungsten flush privileges command causes a temporary outage (denies new connection requests).

      Issues: TUC-1730

    • The database context was not changed to the correct database when qos=DATABASE was in use.

      Issues: TUC-1779

    • The connector now requires a valid manager to operate, even when in maintenance mode.

      Issues: TUC-1781

    • The connector allowed connections to an offline or on-hold composite dataservice.

      Issues: TUC-1787

    • Router notifications were being sent to routers via GCS. This is unnecessary, since a manager only updates the routers that are connected to it.

      Issues: TUC-1790

    • Pass-through mode did not correctly handle multiple results in 1.5.4.

      Issues: TUC-1792

    • SmartScale would fail when creating a database and using it immediately.

      Issues: TUC-1836

    • The connector could hang during installation test.

      Issues: TUC-1847

    • Under certain circumstances, SSL-configuration for the Connector would be unable to start properly.

      Issues: TUC-1869

      For more information, see Section 2.7.4, “Configuring Connector SSL”.

    • The connector now allows specifying where security properties are loaded from.

      Issues: TUC-1872

    • A SET NAMES operation would not survive a switch or failover operation.

      Issues: TUC-1879

    • The connector command within cctrl has been disabled unless the connector and manager are installed on the same host.

      To support the removed functionality, the following changes to the router command have been made:

      • The * wildcard can be used for connectors within the router command within cctrl. For example, router * online will place all available connectors online.

      • The built-in command-line completion provides the names of the connectors in addition to the * (wildcard) character for the router command.

      Issues: TUC-1918
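
      A short sketch of the resulting cctrl usage (host1 is an example connector name; the offline form is an assumption based on the wildcard support described above):

        cctrl> router * online
        cctrl> router host1 offline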

    • Using cursors within stored procedures through the connector would cause a hang in the connector service.

      Issues: TUC-1950

    • The connector would hang when working in a cluster with active witnesses.

      Issues: TUC-1954

    • When specifying the affinity within a connection, the maxAppliedLatency configuration would be ignored.

      Issues: TUC-1960

    • The connector would check for changes to the user.map frequently, causing lag on high-load servers. The configuration has been updated to allow checking only every 10s.

      Issues: TUC-1972

    • Passing the qos option within a database name would not work when SmartScale was enabled.

      Issues: TUC-1982

  • Tungsten Manager

    • The datasource restore command may fail when using xtrabackup if the file ownership for the backup files is wrong.

      Issues: TUC-1226

    • A dataservice could have a different "composite" status depending on how its status was requested.

      Issues: TUC-1614

    • The switch command does not validate its command line correctly.

      Issues: TUC-1618

    • Composite recovery would fail because a replicator that was previously a master tries to re-apply a transaction that it had previously committed.

      Issues: TUC-1634

    • cctrl would let you shun the master datasource.

      Issues: TUC-1637

    • During a failover, the master could be left in read-only mode.

      Issues: TUC-1648

    • On occasion, the manager would fail to restart after hanging.

      Issues: TUC-1649

    • The ping command in cctrl wrongly identifies witness server as unreachable.

      Issues: TUC-1652

    • The failure of the primary data source could go unhandled due to a manager restart.

      Issues: TUC-1659

    • The manager reports composite recovery completion although the operation has failed.

      Issues: TUC-1663

    • A transient error can cause a confused state.

      Issues: TUC-1678

    • Composite recovery could fail, but the manager says it was complete.

      Issues: TUC-1694

    • The internal call to OpenReplicatorManager.status() during the transition from online to offline results in a NullPointerException.

      Issues: TUC-1708

    • Relay does not fail over when the database server is stopped.

      Issues: TUC-1711

    • cctrl would raise an error when running a backup from a master.

      Issues: TUC-1789

    • The Tungsten manager could report false host failures due to a temporary problem with name resolution.

      Issues: TUC-1797

    • cctrl could report a manager as ONLINE even though the datasource would in fact be OFFLINE.

      Issues: TUC-1804

    • The manager would not see a secured replicator.

      Issues: TUC-1806

    • Slave replicators would never come online after a switch when using secure THL.

      Issues: TUC-1807

    • cctrl would complain about a missing security file when security was not enabled.

      Issues: TUC-1808

    • A switch in the relay site would fail and take all nodes offline.

      Issues: TUC-1809

    • A switch in the relay site sets the relay to replicate from itself.

      Issues: TUC-1811

    • In a composite deployment, a switch in the primary site is not propagated to the relay.

      Issues: TUC-1813

    • cctrl exposes security passwords unnecessarily.

      Issues: TUC-1817

    • The master datasource is not available following the failover command.

      Issues: TUC-1841

    • The manager does not support a non-standard replicator RMI port.

      Issues: TUC-1842

    • In a multi-site deployment, automatic failover does not happen in maintenance mode, due to replicator issues.

      Issues: TUC-1845

    • During the recovery of a composite dataservice, the restore of a shunned master could fail because the previous and current roles did not match.

      Issues: TUC-1857

    • A stopped dataserver would not be detected if the cluster was in maintenance mode when it was stopped.

      Issues: TUC-1860

    • The manager would attempt to get the status of a remote replicator from the local service, causing a failure to catch up from a relay.

      Issues: TUC-1864

    • A switch operation could fail in single site deployment.

      Issues: TUC-1867

    • In a configuration with a relay of a composite site, if all active datasources were unavailable, a switch operation would raise invalid exception messages.

      Issues: TUC-1875

    • recover using fails in the simplest case for 2.0.1.

      Issues: TUC-1876

    • The manager would fail-safe even if it was in the quorum set and the primary partition.

      Issues: TUC-1878

    • Single-command recover did not work; it would not find datasources to recover even if they existed.

      Issues: TUC-1881

    • Failover would cause the old master node name to disappear from the cctrl ls output.

      Issues: TUC-1894

    • ClusterManagementHandler could read and write datasources directly from the local disk, which could corrupt the cluster configuration information.

      Issues: TUC-1899

    • Stopping managers would not cause the membership validation rules to kick in, which could lead to an invalid group.

      Issues: TUC-1901

    • The manager rules could fail to fence a composite datasource for which all managers in the service are unreachable.

      Issues: TUC-1902

    • recover using in a master service could convert one of the datasources into a relay instead of a slave.

      Issues: TUC-1907

    • CREATE COMPOSITE DATASOURCE could result in an exception if the master datasource site was used.

      Issues: TUC-1911

    • The manager would throw a false alarm if the trep_commit_seqno table was empty. This was due to the manager being started before the replicator had created the required table.

      Issues: TUC-1917

    • Composite recovery within a cloud deployment could fail.

      Issues: TUC-1922

    • Errors could be raised when using the set master and recover using commands within cctrl.

      Issues: TUC-1930

    • Composite recovery could fail in a site with multiple masters.

      Issues: TUC-1932

    • A failed master within a dataservice would cause the datasource names to disappear.

      Issues: TUC-1933

    • Running switch command after performing recovery could fail within a multi-site deployment.

      Issues: TUC-1934

    • Performing a switch operation when there are active witnesses could cause an error message indicating a fault, when in fact the operation completed successfully.

      Issues: TUC-1935

    • After performing a switch operation, a slave could report to the previous relay rather than the active one.

      Issues: TUC-1939

    • Running operations on active witness datasources would raise NullPointerException errors.

      Issues: TUC-1944, TUC-1945

    • Errors would be reported in the log when deserializing configuration information between the manager and connector.

      Issues: TUC-1963

    • Automatic failover would fail to run if an active witness was the coordinator for the dataservice.

      Issues: TUC-1964

    • Connectors would disappear after restarting the coordinator.

      Issues: TUC-1966

    • The coordinator would attempt to check database server liveness if a manager on a witness host goes away.

      Issues: TUC-1970

    • Composite recovery using a streaming backup results in a site with multiple masters.

      Issues: TUC-1992

    • Installing a composite dataservice would create two master services.

      Issues: TUC-1996

  • Manager API

    • API call for a single server does not report replicator status.

      Issues: TUC-1615

    • API "promote" command does not operate in a composite dataservice.

      Issues: TUC-1617

    • Some indispensable commands were missing from the Manager API.

      Issues: TUC-1654

    • The Manager API would not respond to /manager/status/svc_name without an Accept header.

      Issues: TUC-1690

    • The Manager API lets you shun a master.

      Issues: TUC-1706

    • The call to the 'policy' API would fail in a composite dataservice.

      Issues: TUC-1725

  • Platform Specific Deployments

    • The Windows service registration scripts would not work.

      Issues: TUC-1636

    • FreeBSD: the replicator would hang when going offline, which could cause a switch to hang or abort.

      Issues: TUC-1668

  • Other Issues

    • The shared libraries used by Continuent Tungsten have now been centralized in the cluster-home directory.

      Issues: TUC-1310

    • Some build warnings in Java 1.6 become errors in Java 1.7.

      Issues: TUC-1731

    • The test_connection_routing_and_isolation.rb test_tuc_98 test never selects the correct master.

      Issues: TUC-1780

    • During testing, a test that stops and restarts the replicator fails because a replicator that is actually running subsequently shows up as stopped.

      Issues: TUC-1895

    • The wrapper for the service was not honoring the configured wait period during a restart, which could cause a hang or failure when the service was restarted.

      Issues: TUC-1910, TUC-1913

Continuent Tungsten 2.0.1 includes the following changes made in Tungsten Replicator 2.2.0

Tungsten Replicator 2.2.0 is a bug fix and feature release that contains a number of key improvements to the installation and management of the replicator.

Known Issues

The following issues may affect the operation of Continuent Tungsten and should be taken into account when deploying or updating to this release.

  • Installation and Deployment

    • Installations for Amazon RDS must use tungsten-installer; support is not currently available for tpm.

Bug Fixes

  • Installation and Deployment

    • When performing a Vertica deployment, tpm would fail to create the correct configuration parameters. In addition, error messages and warnings would be generated that did not apply to Vertica installations. tpm has been updated to simplify the Vertica installation process.

      Issues: 688, 781

      For more information, see Installing Vertica Replication.

    • When configuring a single host to support a parallel, multi-channel deployment, tpm would report that this operation was not supported. tpm has now been updated to support single host parallel apply configurations.

      Issues: 737

    • Configuring an installation with a preferred path for MySQL deployments using the --preferred-path option would not set the PATH variable correctly; this would lead to tools from an incorrect directory being used when performing backup or restore operations. tpm has been updated to correctly set the environment during execution.

      Issues: 752
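
      For example (a sketch; the path shown is illustrative):

        # path is an example location for the MySQL tools
        shell> tpm update --preferred-path=/usr/local/mysql/bin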

  • Command-line Tools

    • When using the -sql option to the thl command, additional metadata and options would be displayed. The tool has now been updated to output only the corresponding SQL.

      Issues: 264
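
      For example, to output only the SQL for a single event (a sketch; the sequence number is illustrative):

        # the sequence number 120 is an example
        shell> thl list -seqno 120 -sql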

    • DATETIME values could be displayed incorrectly in the THL when using the thl tool to show log contents.

      Issues: 676

    • An incorrect RMI port could be used within a deployment if a non-standard RMI port was specified during installation, affecting the operation of trepctl. The precedence for selecting the RMI port has been updated to use the -port option first, then the system property, and then the service properties for the selected service and/or trepctl executable.

      Issues: 695
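
      For example, the RMI port can now be given explicitly (a sketch; the port number is illustrative):

        # 10002 is an example non-standard RMI port
        shell> trepctl -port 10002 status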

  • Backup and Restore

    • During installation, tpm would fail to check the version of Percona XtraBackup when working with built-in InnoDB support in MySQL. The check has now been updated, and validation will fail if XtraBackup 2.1 or later is used with MySQL 5.1 and built-in InnoDB support.

      Issues: 671

    • When using xtrabackup during a restore operation, the restore would fail. The problem was due to a difference in the interface for XtraBackup 2.1.6.

      Issues: 778

  • Oracle Replication

    • When performing an Oracle deployment, tpm would apply incorrect parameters and filters and check MySQL specific environment information. The following changes have been made:

      • The colnames filter is no longer added to Oracle master (extractor) deployments.

      • The correct schema value is now defined for the replicator schema.

      The check for mysqldump is still performed on an Oracle master host; use --preferred-path to set a valid location, or disable the MySQLDumpCheck validation check.

      Issues: 685
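
      As a sketch, the check can be disabled during configuration (the service name alpha is an example):

        # 'alpha' is an example service name
        shell> tpm configure alpha --skip-validation-check=MySQLDumpCheck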

  • Core Replicator

    • DECIMAL values could be extracted from the MySQL binary log incorrectly when using statement based logging.

      Issues: 650

    • A null pointer exception could be raised by the master, which would lead to the slave failing to connect to the master correctly. The slave will now retry the connection.

      Issues: 698

    • A slave replicator could fail when synchronizing the THL if the master goes offline. This was due to network interrupts during a failure not being recognised properly.

      Issues: 714

    • In certain circumstances, a replicator could apply transactions that had been generated by itself. This could happen during a failover, leading to events written to the THL, but without the trep_commit_seqno table having been updated. To fix this problem, consistency checks on the THL contents are now performed during startup. In addition, all replicators now write their currently assigned role to a file within the configuration directory of the running replication service, called static-servicename.role.

      When the replicator goes online, the static-servicename.role file is examined. If the role identified in that file was master, and the current role of the replicator is slave, then the THL consistency checks are enabled. These checks cover the following situations:

      • Whether trep_commit_seqno is out of sync with the contents of the THL, provided that the last THL record exists and matches the source-id of the transaction.

      • Whether the current log position is different from the THL position. If the THL position exists and the positions differ, an error is raised and the replicator goes offline. This behavior can be overridden by using the trepctl online -force command.

      Once the checks have been completed, the new role for the replicator is updated in the static-servicename.role file.

      Important

      The static-servicename.role file must be deleted, or the THL files must be deleted, when restoring a backup. This is to ensure that the correct current log position is identified.

      Issues: 735
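
      As a sketch of the procedure described in the note above (the installation path and service name alpha are assumptions, not values from this release):

        # path and service name 'alpha' are examples
        shell> rm /opt/continuent/tungsten/tungsten-replicator/conf/static-alpha.role
        shell> trepctl -service alpha online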

    • An UnsupportedEncodingException error could occur when extracting statement based replication events if the MySQL character set did not match a valid Java character set used by the replicator.

      Issues: 743

    • When using row-based replication, replicating into a table that did not exist on the slave would raise a NullPointerException. The replicator now correctly raises an SQL error indicating that the table does not exist.

      Issues: 747

    • During a master failure under load, some transactions could fail to reach the slave before the master replicator failed.

      Issues: 753

    • Upgrading a replicator and changing the hostname could cause the replicator to skip events in the THL. This was due to the way in which the source-id of events in the slave replicator was checked against the remote THL read from the master. This particularly affected standalone replicators. The fix adds a new property, replicator.repositionOnSourceIdChange. This is a boolean value that specifies whether the replicator should try to reposition to the correct location in the THL when the source ID has been modified.

      Issues: 754
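
      As a sketch, the property can be set through tpm (the service name alpha is an example; using --property assumes a tpm-based installation):

        # 'alpha' is an example service name
        shell> tpm update alpha --property=replicator.repositionOnSourceIdChange=true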

    • Running trepctl reset on a service deployed in a multi-master (all-master) configuration would not correctly remove the schema from the database.

      Issues: 758

    • Replication of temporary tables with the same name but within different sessions would cause a conflict on the slave.

      Issues: 772

  • Filters

    • The PrimaryKeyFilter would not renew connections to the master to determine the primary key information. When replication had been running for a long time, the active connection would be dropped, but never renewed. The filter has been updated to re-connect on failure.

      Issues: 670

      For more information, see Section 10.4.29, “PrimaryKey Filter”.