4.5. Downgrading to an Earlier Release

If you experience problems after upgrading Tungsten Clustering and Continuent Support has suggested downgrading to an earlier version, follow these steps to revert your existing Tungsten Clustering installation.

  1. Redirect all users directly to the MySQL server on the master. This may require changing applications and clients to point directly to the MySQL servers. You cannot use Tungsten Connector to handle this for you, since the entire cluster, including the Tungsten Connector services, will be removed.

  2. Stop Tungsten services on all servers:

    shell> stopall
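    The command above must be run on every node. One way to drive it from a single administrative host is a simple loop over ssh; a minimal sketch, in which the host names and ssh user are illustrative assumptions and DRY_RUN=1 only prints the commands instead of executing them:

    ```shell
    # Sketch: run stopall on every cluster node from one admin host.
    # Host names and the ssh user are illustrative; set DRY_RUN=0 to
    # actually execute the commands over ssh.
    DRY_RUN=1
    HOSTS="host1 host2 host3 host4"
    for host in $HOSTS; do
      cmd="ssh tungsten@$host stopall"
      if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"
      else
        $cmd
      fi
    done
    ```

    This assumes stopall is on the PATH for the tungsten user on each node, as it is when the standard environment script has been sourced in the login shell.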
  3. Update the schema information stored in the database according to the release you are downgrading to:

    Downgrading to Tungsten Clustering 2.0.x

    For Tungsten Clustering 2.0.x, the information stored in the database schema for the service, for example tungsten_alpha, can remain in place unless you are changing the service name.

    Downgrading to Tungsten Clustering 1.5.x

    When downgrading to a release earlier than Tungsten Clustering 2.0, the schema used to hold the replication state must be rebuilt. You must recreate the tungsten schema on all database servers in the cluster. This requires a number of steps:

    First, disable logging of the statements to the binary log; this information does not need to be replicated around the cluster, even after restart:

    mysql> SET SESSION SQL_LOG_BIN=0;

    Now delete the tungsten schema in preparation for it to be recreated. Within Tungsten Clustering 1.5.4, information about the replication state is stored in the tungsten schema; within Tungsten Clustering 2.0.1 the information is stored within a schema matching the service name, for example the service alpha would be stored in the schema tungsten_alpha.

    mysql> DROP SCHEMA IF EXISTS `tungsten`;
    mysql> CREATE SCHEMA `tungsten`;
    mysql> USE tungsten;

    Now create the tables to store the status information:

    mysql> CREATE TABLE `consistency` (
      `db` char(64) NOT NULL DEFAULT '',
      `tbl` char(64) NOT NULL DEFAULT '',
      `id` int(11) NOT NULL DEFAULT '0',
      `row_offset` int(11) NOT NULL,
      `row_limit` int(11) NOT NULL,
      `this_crc` char(40) DEFAULT NULL,
      `this_cnt` int(11) DEFAULT NULL,
      `master_crc` char(40) DEFAULT NULL,
      `master_cnt` int(11) DEFAULT NULL,
      `ts` timestamp NULL DEFAULT NULL,
      `method` char(32) DEFAULT NULL,
      PRIMARY KEY (`db`,`tbl`,`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    CREATE TABLE `heartbeat` (
      `id` bigint(20) NOT NULL DEFAULT '0',
      `seqno` bigint(20) DEFAULT NULL,
      `eventid` varchar(32) DEFAULT NULL,
      `source_tstamp` timestamp NULL DEFAULT NULL,
      `target_tstamp` timestamp NULL DEFAULT NULL,
      `lag_millis` bigint(20) DEFAULT NULL,
      `salt` bigint(20) DEFAULT NULL,
      `name` varchar(128) DEFAULT NULL,
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    CREATE TABLE `history` (
      `seqno` bigint(20) NOT NULL DEFAULT '0',
      `fragno` smallint(6) NOT NULL DEFAULT '0',
      `last_frag` char(1) DEFAULT NULL,
      `source_id` varchar(128) DEFAULT NULL,
      `type` tinyint(4) DEFAULT NULL,
      `epoch_number` bigint(20) DEFAULT NULL,
      `source_tstamp` timestamp NULL DEFAULT NULL,
      `local_enqueue_tstamp` timestamp NULL DEFAULT NULL,
      `processed_tstamp` timestamp NULL DEFAULT NULL,
      `status` tinyint(4) DEFAULT NULL,
      `comments` varchar(128) DEFAULT NULL,
      `eventid` varchar(128) DEFAULT NULL,
      `event` longblob,
      PRIMARY KEY (`seqno`,`fragno`),
      KEY `eventid` (`eventid`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
    
    CREATE TABLE `trep_commit_seqno` (
      `seqno` bigint(20) DEFAULT NULL,
      `fragno` smallint(6) DEFAULT NULL,
      `last_frag` char(1) DEFAULT NULL,
      `source_id` varchar(128) DEFAULT NULL,
      `epoch_number` bigint(20) DEFAULT NULL,
      `eventid` varchar(128) DEFAULT NULL,
      `applied_latency` int(11) DEFAULT NULL,
      `update_timestamp` timestamp NULL DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
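    To apply the rebuild non-interactively on each database server, the statements above can be replayed from a file with the binary-log switch prepended, so that nothing is logged regardless of the session used. A sketch, assuming the DDL above has been saved to tungsten-schema.sql; the file name and connection details are illustrative:

    ```shell
    # Sketch: replay the schema rebuild without writing it to the binary log.
    # tungsten-schema.sql is assumed to contain the DROP/CREATE statements above.
    SCHEMA_FILE=tungsten-schema.sql

    build_sql() {
      # Prepend the session-level binlog switch so the rebuild is not replicated.
      printf 'SET SESSION SQL_LOG_BIN=0;\n'
      cat "$SCHEMA_FILE"
    }

    # To apply on a given host (user, password prompt, and host are illustrative):
    #   build_sql | mysql -u tungsten -p -h host1
    ```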

    Now import the current sequence number from the existing Tungsten Clustering trep_commit_seqno table, replacing TUNGSTEN_SERVICE_SCHEMA with the schema name for your service (for example, tungsten_alpha for the service alpha):

    mysql> INSERT INTO tungsten.trep_commit_seqno 
        (seqno, fragno, last_frag, source_id, epoch_number, 
         eventid, applied_latency, update_timestamp) 
    SELECT seqno, fragno, last_frag, source_id, 
        epoch_number, eventid, applied_latency, update_timestamp
        FROM TUNGSTEN_SERVICE_SCHEMA.trep_commit_seqno;

    Check the sequence number:

    mysql> SELECT * FROM tungsten.trep_commit_seqno;

    If the sequence number does not match on all servers, update the tungsten schema on the master with the earliest information (i.e. the lowest sequence number), substituting the actual values for the placeholders:

    mysql> SET SQL_LOG_BIN=0;
    mysql> UPDATE tungsten.trep_commit_seqno SET seqno=###,epoch_number=###,eventid='SSSSS';
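    Picking the lowest sequence number across servers can be scripted rather than compared by eye. A minimal sketch, with the per-host values hard-coded in place of the real SELECT; the host names and numbers are illustrative:

    ```shell
    # Stand-in for: mysql -h "$1" -BN -e 'SELECT seqno FROM tungsten.trep_commit_seqno'
    # Replace this stub with the real query in production; values are illustrative.
    get_seqno() {
      case "$1" in
        host1) echo 4582 ;;
        host2) echo 4580 ;;
        host3) echo 4582 ;;
      esac
    }

    lowest=""
    for host in host1 host2 host3; do
      s=$(get_seqno "$host")
      if [ -z "$lowest" ] || [ "$s" -lt "$lowest" ]; then
        lowest=$s
      fi
    done
    echo "Use seqno $lowest in the UPDATE on the master"
    ```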
  4. Extract the release of Tungsten Clustering that should be installed instead, and then use tpm fetch to retrieve the current configuration:

    shell> ./tools/tpm fetch --user=tungsten --hosts=host1,host2,host3,host4 \
        --release-directory=/opt/continuent

    Note

    In the event that the tpm fetch operation fails to detect the current configuration, run tpm reverse on one of the machines in the configured service. This will output the current configuration. If necessary, execute tpm reverse on multiple hosts to determine whether the information matches.

    If you execute the returned text from tpm reverse, it will configure the service within the local directory, and the installation can then be updated.

    Ensure that the current master is listed as the master within the configuration.

    Now update Tungsten Clustering to deploy the new release:

    shell> ./tools/tpm update
  5. Start all the services on the master:

    shell> startall

    Confirm that the current master is correct within trepctl and cctrl.

  6. Start the services on the remaining servers:

    shell> startall
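    Once the remaining servers are started, confirming that every replicator reports ONLINE can also be scripted. A sketch with the trepctl call stubbed out; the host names and returned states are illustrative assumptions:

    ```shell
    # Stand-in for: trepctl -host "$1" status | awk '/^state/ {print $3}'
    # Replace this stub with the real trepctl call in production.
    replicator_state() {
      case "$1" in
        host1|host2|host3) echo "ONLINE" ;;
        *)                 echo "UNKNOWN" ;;
      esac
    }

    all_online=yes
    for host in host1 host2 host3; do
      [ "$(replicator_state "$host")" = "ONLINE" ] || all_online=no
    done
    echo "all replicators online: $all_online"
    ```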
  7. If you were using a composite dataservice, you must recreate the composite dataservice configuration manually.

  8. Once all the services are back up and running, it is safe to point users and applications at Tungsten Connector and return to normal operations.