Chapter 3. Deployment: MySQL Topologies

Table of Contents

3.1. Deploying a Master/Slave Cluster
3.1.1. Prepare: Master/Slave Cluster
3.1.2. Install: Master/Slave Cluster
3.1.3. Best Practices: Master/Slave Cluster
3.2. Deploying Composite Multimaster Clustering
3.2.1. Prepare: Composite Multimaster Clusters
3.2.2. Install: Composite Multimaster Clusters
3.2.3. Best Practices: Composite Multimaster Clusters
3.2.4. Configuring Startup on Boot
3.2.5. Performing Schema Changes
3.2.6. Resetting a single dataservice
3.2.7. Resetting all dataservices
3.2.8. Provisioning during live operations
3.2.9. Dataserver maintenance
3.2.10. Adding a Cluster to a Composite Multimaster Topology
3.3. Deploying a Composite Cluster
3.3.1. Prepare: Composite Cluster
3.3.2. Install: Composite Cluster
3.3.3. Best Practices: Composite Cluster
3.4. Deploying Tungsten Connector Only
3.5. Deploying Additional Datasources, Managers, or Connectors
3.5.1. Adding Datasources to an Existing Deployment
3.5.2. Adding Active Witnesses to an Existing Deployment
3.5.3. Adding Passive Witnesses to an Existing Deployment
3.5.4. Adding Connectors to an Existing Deployment
3.5.5. Adding a remote Composite Cluster
3.5.6. Converting from a single cluster to a composite cluster
3.6. Replicating Data Into an Existing Dataservice
3.7. Replicating Data Out of a Cluster
3.7.1. Prepare: Replicating Data Out of a Cluster
3.7.2. Deploy: Replicating Data Out of a Cluster
3.8. Replicating from a Cluster to a Datawarehouse
3.8.1. Replicating from a Cluster to a Datawarehouse - Prerequisites
3.8.2. Replicating from a Cluster to a Datawarehouse - Configuring the Cluster Nodes
3.8.3. Replicating from a Cluster to a Datawarehouse - Configuring the Cluster-Slave
3.9. Migrating and Seeding Data
3.9.1. Migrating from MySQL Native Replication 'In-Place'
3.9.2. Migrating from MySQL Native Replication Using a New Service
3.9.3. Seeding Data through MySQL
3.9.4. Seeding Data through tungsten_provision_thl

Creating a Tungsten Clustering (for MySQL) dataservice combines a number of different components, systems, and functions to support a running database dataservice that is capable of handling database failures, complex replication topologies, and management of the client/database connection for both load balancing and failover scenarios.

How you choose to deploy depends on your requirements and environment. All deployments are performed through the tpm command, which operates in two different modes:

  • tpm staging configuration — a tpm configuration is created by supplying command-line arguments that specify the deployment type, structure, and any additional parameters. tpm then installs the software on all the required hosts, using ssh to distribute Tungsten Clustering and the configuration, and can optionally start the services on each host automatically. In this mode, tpm manages the entire deployment, configuration, and upgrade procedure.

  • tpm INI configuration — tpm uses an INI file to configure the service on the local host. The INI file must be created on each host that will be part of the cluster. In this mode, tpm manages only the services on the local host; in a multi-host deployment, upgrades, updates, and configuration changes must be applied separately on each host.
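As an illustration of the INI mode, a minimal configuration file for a simple clustered dataservice might look like the sketch below. The hostnames, passwords, and the service name "alpha" are hypothetical placeholders; the exact set of options appropriate for your environment and release should be taken from the tpm option reference.

```ini
# /etc/tungsten/tungsten.ini
# Created identically on every host that will be part of the cluster.
# Hostnames, credentials, and the service name "alpha" are examples only.

[defaults]
user=tungsten
install-directory=/opt/continuent
replication-user=tungsten
replication-password=secret
start-and-report=true

[alpha]
topology=clustered
master=host1
members=host1,host2,host3
connectors=host1,host2,host3
```

In staging mode, the same options would instead be supplied as command-line arguments to tpm when configuring the service, and tpm would then push the software and configuration to all the listed hosts over ssh.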

The following sections provide guidance and instructions for creating a number of different deployment scenarios using Tungsten Clustering.