Chapter 5. Operations Guide

Table of Contents

5.1. The Continuent Tungsten Home Directory
5.2. Establishing the Shell Environment
5.3. Checking Dataservice Status
5.3.1. Latency or Relative Latency Display
5.3.2. Getting Detailed Information
5.3.3. Understanding Datasource Roles
5.3.4. Understanding Datasource States
5.3.5. Changing Datasource States
5.3.6. Datasource Statuses
5.3.7. Datasource States and Policy Mode Interactions
5.4. Policy Modes
5.4.1. Setting Policy Modes
5.5. Switching Master Hosts
5.5.1. Automatic Master Switch
5.5.2. Manual Master Switch
5.6. Datasource Recovery Steps
5.6.1. Recover a failed slave
5.6.2. Recover a failed master
5.7. Composite Cluster Switching, Failover and Recovery
5.7.1. Composite Cluster Site Switch
5.7.2. Composite Cluster Site Failover (Forced Switch)
5.7.3. Composite Cluster Site Recovery
5.7.4. Composite Cluster Relay Recovery
5.8. Managing Transaction Failures
5.8.1. Identifying a Transaction Mismatch
5.8.2. Skipping Transactions
5.9. Creating a Backup
5.9.1. Using a Different Backup Tool
5.9.2. Automating Backups
5.9.3. Using a Different Directory Location
5.9.4. Creating an External Backup
5.10. Restoring a Backup
5.10.1. Restoring a Specific Backup
5.10.2. Restoring an External Backup
5.10.3. Restoring from Another Slave
5.10.4. Manually Recovering from Another Slave
5.10.5. Rebuilding a Lost Datasource
5.10.6. Resetting an Entire Dataservice from Filesystem Snapshots
5.11. Migrating and Seeding Data
5.11.1. Migrating from MySQL Native Replication 'In-Place'
5.11.2. Migrating from MySQL Native Replication Using a New Service
5.11.3. Seeding Data through MySQL
5.12. Reset a Continuent Tungsten Dataservice
5.12.1. Reset a Single Site in a Multisite/Multimaster Topology
5.12.2. Reset All Sites in a Multisite/Multimaster Topology
5.13. Replicator Fencing
5.13.1. Fencing a Slave Node Due to a Replication Fault
5.13.2. Fencing Master Replicators
5.14. Performing Database or OS Maintenance
5.14.1. Performing Maintenance on a Single Slave
5.14.2. Performing Maintenance on a Master
5.14.3. Performing Maintenance on an Entire Dataservice
5.14.4. Making Online Schema Changes
5.14.5. Upgrading or Updating your JVM
5.15. Performing Continuent Tungsten Maintenance
5.15.1. Changing Configuration
5.16. Monitoring Continuent Tungsten
5.16.1. Managing Log Files with logrotate
5.16.2. Monitoring Status Using cacti
5.16.3. Monitoring Status Using nagios

Continuent Tungsten™ provides a wide range of tools and functionality for checking and managing the status of a dataservice. Most of the management and monitoring functionality is provided by a small number of command-line utilities, accessible either directly from the command line or through a secondary shell-like interface.

When the dataservice is installed using tpm, if requested, the login script for the staging user (for example .bashrc) will have been updated to execute an environment script from within the installation directory. This script configures the location of the installation and configuration, and adds the script and binary directories to the PATH so that the commands can be executed without having to use their full paths.
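The effect of that login-script change amounts to something like the following sketch. The variable name CONTINUENT_ROOT and the exact directory layout shown here are illustrative assumptions, not the contents of the shipped script:

```shell
# Hypothetical sketch of the settings such an environment script applies.
# CONTINUENT_ROOT and the bin directory path below are assumptions for
# illustration only; the real script is generated by tpm at install time.
export CONTINUENT_ROOT=/opt/continuent

# Add the tool directories to PATH so that commands such as cctrl and tpm
# can be run without specifying their full paths.
export PATH="$CONTINUENT_ROOT/tungsten/tungsten-manager/bin:$PATH"

# Confirm the PATH now includes the installation directory.
echo "$PATH" | grep -q "$CONTINUENT_ROOT" && echo "environment configured"
```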

If the script was not added to the login script automatically, or needs to be loaded into the current session, it can be found within the share directory of the installation directory, for example /opt/continuent/share/. To load it into the current session, use source. See Section 5.2, “Establishing the Shell Environment” for more information.

shell> source /opt/continuent/share/

The main tool for controlling dataservices is cctrl. This provides a shell-like interface for querying and managing the dataservice, including features such as command history and editing. Commands can be executed using cctrl either interactively:

shell> cctrl
connect to 'alpha@host1'
alpha: session established
[LOGICAL:EXPERT] /alpha > ls

Or by piping a command as input to the cctrl shell:

shell> echo 'ls' | cctrl
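For scripts that need to run several commands in sequence, the same standard-input mechanism can be used with a here-document. This assumes, as the echo example above implies, that cctrl reads newline-separated commands from standard input; verify the behavior on your installation:

shell> cctrl << EOF
ls
exit
EOF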