Chapter 3. Deployment: MySQL Topologies

Table of Contents

3.1. Deploying a Master/Slave Cluster
3.1.1. Prepare: Master/Slave Cluster
3.1.2. Install: Master/Slave Cluster
3.1.3. Best Practices: Master/Slave Cluster
3.2. Deploying Multisite/Multimaster Clusters
3.2.1. Prepare: Multisite/Multimaster Clusters
3.2.2. Install: Multisite/Multimaster Clusters
3.2.3. Best Practices: Multisite/Multimaster Clusters
3.2.4. Configuring Startup on Boot
3.2.5. Performing Schema Changes
3.2.6. Resetting a Single Dataservice
3.2.7. Resetting All Dataservices
3.2.8. Adding a New Cluster/Dataservice
3.2.9. Enabling SSL for Replicators Only
3.2.10. Provisioning During Live Operations
3.2.11. Dataserver Maintenance
3.3. Deploying a Composite (SOR) Cluster
3.3.1. Prepare: Composite (SOR) Cluster
3.3.2. Install: Composite (SOR) Cluster
3.3.3. Best Practices: Composite (SOR) Cluster
3.4. Deploying Tungsten Connector Only
3.5. Deploying Additional Datasources, Managers, or Connectors
3.5.1. Adding Datasources to an Existing Deployment
3.5.2. Adding Active Witnesses to an Existing Deployment
3.5.3. Adding Passive Witnesses to an Existing Deployment
3.5.4. Adding Connectors to an Existing Deployment
3.5.5. Adding a Remote Composite Cluster
3.6. Replicating Data Into an Existing Dataservice
3.7. Replicating Data from a Cluster into MySQL
3.7.1. Prepare: Replicating Data from a Cluster into MySQL
3.7.2. Deploy: Replicating Data from a Cluster into MySQL
3.7.3. Best Practices: Replicating Data from a Cluster into MySQL
3.8. Replicating from a Cluster to a Datawarehouse
3.9. Migrating and Seeding Data
3.9.1. Migrating from MySQL Native Replication 'In-Place'
3.9.2. Migrating from MySQL Native Replication Using a New Service
3.9.3. Seeding Data through MySQL
3.9.4. Seeding Data through tungsten_provision_thl

Creating a Continuent Tungsten dataservice combines a number of different components, systems, and pieces of functionality to support a running database service that is capable of handling database failures, complex replication topologies, and management of the client/database connection for both load balancing and failover.

How you choose to deploy depends on your requirements and environment. All deployments are created and managed through the tpm command, which operates in two different modes:

  • tpm staging configuration — the configuration is built up from command-line arguments that specify the deployment type, structure, and any additional parameters. tpm then installs the software on all the required hosts, using ssh to distribute Continuent Tungsten and the configuration, and optionally starts the services on each host automatically. tpm manages the entire deployment, configuration, and upgrade procedure from the staging host (see the first sketch after this list).

  • tpm INI configuration — tpm uses an INI file to configure the service on the local host. The INI file must be created on each host that will be part of the cluster. tpm manages only the services on the local host; in a multi-host deployment, upgrades, updates, and configuration changes must be handled separately on each host (a sample INI file appears after this list).
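
As an illustration of staging mode, the following is a minimal sketch of configuring and installing a three-node cluster from the extracted staging directory. The service name alpha, the hostnames host1, host2, and host3, the credentials, and the install directory are placeholder values for this example, not required settings:

shell> ./tools/tpm configure defaults \
    --user=tungsten \
    --install-directory=/opt/continuent \
    --replication-user=tungsten \
    --replication-password=secret

shell> ./tools/tpm configure alpha \
    --master=host1 \
    --members=host1,host2,host3 \
    --connectors=host1,host2,host3

shell> ./tools/tpm install

tpm connects to each listed host over ssh, copies the software, and writes the configuration; adding --start-and-report to the install step also starts the services and reports their status.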
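
In INI mode, an equivalent configuration is expressed as an INI file on each host, conventionally /etc/tungsten/tungsten.ini. Option names match the command-line arguments without the leading dashes. The following is a minimal sketch using the same placeholder service name, hostnames, and credentials as above:

[defaults]
user=tungsten
install-directory=/opt/continuent
replication-user=tungsten
replication-password=secret

[alpha]
master=host1
members=host1,host2,host3
connectors=host1,host2,host3

With this file in place, running ./tools/tpm install on a host installs and configures the services for that host only; the same file must be created, and kept in sync, on every other host in the cluster.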

The following sections provide guidance and instructions for creating a number of different deployment scenarios using Continuent Tungsten.