If you have an existing cluster and want to replicate its data out to a separate standalone server using Tungsten Replicator, you can create a cluster alias and use a master/slave topology to replicate from the cluster. This allows THL events from the cluster to be applied to a separate server for backup or separate analysis.
During the installation process a cluster-alias and cluster-slave are declared. The cluster-alias describes all of the servers in the cluster and how they may be reached. The cluster-slave defines one or more servers that will replicate from the cluster.
The Tungsten Replicator will be installed on the cluster-slave server. That server will download THL data and apply it to the local server. If the cluster-slave has more than one server, one of them will be declared the relay (or master); the other members of the cluster-slave may then download THL data from that server.
If the relay for the cluster-slave fails, the other nodes will automatically start downloading THL data from a server in the cluster. If a non-relay server fails, it will not have any impact on the other members.
Identify the cluster to replicate from. You will need the master, the slaves, and the THL port (if one was specified). Use tpm reverse from a cluster member to find the correct values.
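For example, assuming the cluster was installed under /opt/continuent, the following command, run on any cluster member, prints the current configuration (including the dataservice name, master, slaves, and THL port) in a form that can be copied into the steps below:

shell> /opt/continuent/tungsten/tools/tpm reverse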
If you are replicating to a non-MySQL server, update the configuration of the cluster to include --enable-heterogeneous-service=true before beginning. The same option must be included when installing the Tungsten Replicator.
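For an INI-based cluster, this is a minimal sketch of the change; the exact location of the cluster's tungsten.ini and your site's update procedure may differ:

shell> vi /etc/tungsten/tungsten.ini
[defaults]
enable-heterogeneous-service=true

Then run tpm update on each cluster member to apply the change.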
Identify all servers that will replicate from the cluster. If there is more than one, a relay server should be identified to replicate from the cluster and provide THL data to other servers.
Prepare each server according to the prerequisites for the DBMS platform it is serving. If you are working with multiple DBMS platforms, treat each platform as a separate cluster-slave during deployment.
Make sure the THL port for the cluster is open between all servers.
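A quick way to verify connectivity, assuming the THL port is 2112 as in the example configuration below and that netcat is available on the servers:

shell> nc -z -v host1 2112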
Install the Tungsten Replicator package, or download the Tungsten Replicator tarball and unpack it:
shell> cd /opt/continuent/software
shell> tar zxf tungsten-replicator-5.4.1-41.tar.gz
Change to the unpacked directory:
shell> cd tungsten-replicator-5.4.1-41
Configure the replicator
Two configuration methods are shown below: the Staging method using tpm configure, and the INI method using /etc/tungsten/tungsten.ini.
shell> ./tools/tpm configure defaults \
    --install-directory=/opt/continuent \
    --profile-script=~/.bash_profile \
    --replication-password=secret \
    --replication-port=13306 \
    --replication-user=tungsten \
    --user=tungsten
shell> ./tools/tpm configure alpha \
    --master=host1 \
    --slaves=host2,host3 \
    --thl-port=2112 \
    --topology=cluster-alias
shell> ./tools/tpm configure beta \
    --relay=host6 \
    --relay-source=alpha \
    --topology=cluster-slave
shell> vi /etc/tungsten/tungsten.ini
[defaults]
install-directory=/opt/continuent
profile-script=~/.bash_profile
replication-password=secret
replication-port=13306
replication-user=tungsten
user=tungsten
[alpha]
master=host1
slaves=host2,host3
thl-port=2112
topology=cluster-alias
[beta]
relay=host6
relay-source=alpha
topology=cluster-slave
Configuration group defaults
The description of each of the options is shown below.
--install-directory=/opt/continuent
install-directory=/opt/continuent
Path to the directory where the active deployment will be installed. The configured directory will contain the software, THL and relay log information unless configured otherwise.
--profile-script=~/.bash_profile
profile-script=~/.bash_profile
Append commands to include env.sh in this profile script.
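For reference, the appended snippet is similar to the following; the exact content is generated by tpm and may differ:

# Begin Tungsten Environment for /opt/continuent
if [ -f "/opt/continuent/share/env.sh" ]; then
    . "/opt/continuent/share/env.sh"
fi
# End Tungsten Environment for /opt/continuent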
--replication-password=secret
replication-password=secret
The password to be used when connecting to the database using the corresponding --replication-user.
--replication-port=13306
replication-port=13306
The network port used to connect to the database server. The default port used depends on the database being configured.
--replication-user=tungsten
replication-user=tungsten
For databases that require authentication, the username to use when connecting to the database using the corresponding connection method (native, JDBC, etc.).
--user=tungsten
user=tungsten
System User
Configuration group alpha
The description of each of the options is shown below.
--master=host1
master=host1
The hostname of the master (extractor) within the current service. If the current host matches this specification, the deployment will by default be configured as a master/extractor.
--slaves=host2,host3
slaves=host2,host3
What are the slaves for this dataservice?
--thl-port=2112
thl-port=2112
Port to use for THL operations.
--topology=cluster-alias
topology=cluster-alias
Replication topology for the dataservice. Valid values are: star, cluster-slave, master-slave, fan-in, clustered, cluster-alias, all-masters, direct.
Configuration group beta
The description of each of the options is shown below.
--relay=host6
relay=host6
The hostname of the master (extractor) within the current service. If the current host matches this specification, the deployment will by default be configured as a master/extractor.
--relay-source=alpha
relay-source=alpha
Dataservice name to use as a relay source.
--topology=cluster-slave
topology=cluster-slave
Replication topology for the dataservice. Valid values are: star, cluster-slave, master-slave, fan-in, clustered, cluster-alias, all-masters, direct.
If you are replicating to a non-MySQL server, include the --enable-heterogeneous-service=true option in the above command.
This dataservice cluster-alias name MUST be the same as the cluster dataservice name that you are replicating from.
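For example, if tpm reverse on the cluster reports the dataservice name as east (a hypothetical name used here for illustration), the alias section, and the relay-source that references it, must be renamed to match:

[east]
master=host1
slaves=host2,host3
thl-port=2112
topology=cluster-alias

[beta]
relay=host6
relay-source=east
topology=cluster-slave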
Do not include start-and-report=true if you are taking over for MySQL native replication. See Section 7.10.1, “Migrating from MySQL Native Replication 'In-Place'” for next steps after completing installation.
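Before installing, you can optionally check the configuration without making any changes; tpm validate runs the same checks that tpm install performs:

shell> ./tools/tpm validate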
Once the configuration has been completed, you can perform the installation to set up the services using this configuration:
shell> ./tools/tpm install
During the installation and startup, tpm will notify you of any problems that need to be fixed before the service can be correctly installed and started. If the service starts correctly, you should see the configuration and current status of the service.
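For example, once the profile script has loaded env.sh, trepctl can be used to confirm that the installed replication services are running; each service should eventually report a state of ONLINE:

shell> trepctl services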
If the installation process fails, check the output of the /tmp/tungsten-configure.log file for more information about the root cause.
The cluster should be installed and ready to use.