If you have an existing dataservice, data can be replicated from a standalone MySQL server into the service. The replication is configured by creating a service that reads from the standalone MySQL server and writes into the cluster through a connector attached to your dataservice. By writing through the connector, changes to the underlying dataservice topology can be handled.
Additionally, a replicator that writes data into an existing dataservice can also be used when migrating from an existing service to a new Tungsten Clustering service. For more information on initially provisioning the data for this type of operation, see Section 5.12.2, “Migrating from MySQL Native Replication Using a New Service”.
In order to configure this deployment, there are two steps:
Create a new replicator on the source server that extracts the data.
Create a new replicator that reads the binary logs directly from the external MySQL service through the connector.
There are also the following requirements:
The host to which you want to replicate must be running Tungsten Replicator 5.3.0 or later.
The hosts running the replicator and the cluster must be able to communicate with each other.
The replicator must be able to connect as the tungsten user to the databases within the cluster.
When writing into the master through the connector, the user must be given the correct privileges to write and update the MySQL server. For this reason, the easiest method is to use the tungsten user, and ensure that this user has been added to the user.map:

tungsten secret alpha
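Because a malformed user.map entry will prevent the connector from authenticating the user, a quick sanity check can help before restarting the connector. This is only a hypothetical sketch: it assumes each entry is a single line of three whitespace-separated fields (username, password, dataservice), matching the example above.

```shell
# Hypothetical sanity check for a user.map entry (assumed format:
# "username password dataservice", one entry per line).
entry="tungsten secret alpha"

# Count whitespace-separated fields; a valid entry has exactly three.
fields=$(printf '%s\n' "$entry" | awk '{print NF}')
if [ "$fields" -eq 3 ]; then
    # Print the dataservice name (third field).
    printf '%s\n' "$entry" | awk '{print $3}'
else
    echo "malformed user.map entry" >&2
fi
```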
Install the Tungsten Replicator package (see Section 2.3.2, “Using the RPM and DEB package files”), or download the compressed tarball and unpack it:

tar zxf tungsten-replicator-
Change to the Tungsten Replicator staging directory:
Configure the replicator on the source host.

First, configure the defaults and a cluster alias that points to the masters and slaves within the current Tungsten Clustering service that you are replicating from:
./tools/tpm configure alpha \
    --master=host1 \
    --install-directory=/opt/continuent \
    --replication-user=tungsten \
    --replication-password=password \
    --enable-batch-service=true
[alpha]
master=host1
install-directory=/opt/continuent
replication-user=tungsten
replication-password=password
enable-batch-service=true
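The values in the INI stanza can be read back with standard tools to confirm what was configured. This is only a sketch: the stanza is embedded in a shell variable here, whereas in a real deployment it would typically live in /etc/tungsten/tungsten.ini.

```shell
# Sketch: read the configured master back out of the INI stanza.
# The stanza is embedded here for illustration; in practice it would
# normally be stored in /etc/tungsten/tungsten.ini.
ini='[alpha]
master=host1
install-directory=/opt/continuent
replication-user=tungsten'

# Extract the value of the master= key.
master=$(printf '%s\n' "$ini" | awk -F'=' '/^master=/ {print $2}')
echo "configured master: $master"
```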
The description of each of the options is shown below.
The hostname of the master (extractor) within the current service. If the current host does not match this specification, then the deployment will by default be configured as a slave.
Path to the directory where the active deployment will be installed. The configured directory will contain the software, THL and relay log information unless configured otherwise.
For databases that require authentication, the username to use when connecting to the database using the corresponding connection method (native, JDBC, etc.).
The password to be used when connecting to the database using the corresponding connection method (native, JDBC, etc.).
--enable-batch-service=true
enable-batch-service=true
This option enables batch mode for a service, which is required for replication services that write to a target database using batch mode in heterogeneous deployments (for example, Hadoop, Amazon Redshift, or Vertica). Setting this option enables the following settings on each host:
On a Master:

mysql-use-bytes-for-string is set to false.

The pkey filter is enabled, with the addPkeyToInserts and addColumnsToDeletes filter options set to true. This ensures that rows have the right primary key information.

On a Slave:

The pkey filter is enabled.
This creates a configuration that specifies that the topology should read directly from the source host, writing directly into the dataservice through the connector. An alternative THL port is provided to ensure that the THL listener is not operating on the same network port as the original.
Now install the service, which will create the replicator, reading directly from the source server:

./tools/tpm install
If the installation process fails, check the output of the /tmp/tungsten-configure.log file for more information about the root cause.
Once the installation has been completed, you must update the position of the replicator so that it points to the correct position within the source database to prevent errors during replication. If the replication is being created as part of a migration process, determine the position of the binary log from the external replicator service used when the backup was taken. For example:
show master status;
*************************** 1. row ***************************
            File: mysql-bin.000026
        Position: 1311
    Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
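The File and Position values from this output combine into the binary log coordinates passed to tungsten_set_position. A minimal sketch of deriving that value follows; the embedded sample text stands in for a live mysql client call, which is an assumption for illustration only.

```shell
# Sketch: derive the --event-id argument for tungsten_set_position
# from the vertical output of SHOW MASTER STATUS. The sample output
# is embedded here; in practice it would come from the mysql client.
status='            File: mysql-bin.000026
        Position: 1311'

# Pull out the File and Position fields.
file=$(printf '%s\n' "$status" | awk -F': ' '/File:/ {print $2}')
pos=$(printf '%s\n' "$status" | awk -F': ' '/Position:/ {print $2}')

# Combine into the file:position form expected by --event-id.
event_id="${file}:${pos}"
echo "$event_id"    # mysql-bin.000026:1311
```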
Use tungsten_set_position to update the replicator position to point to the master log position:
/opt/replicator/scripts/tungsten_set_position \
    --seqno=0 --epoch=0 --service=beta \
    --source-id=host3 --event-id=mysql-bin.000026:1311
Now start the replicator:

/opt/replicator/tungsten/tungsten-replicator/bin/replicator start
Replication status should be checked by explicitly using the servicename and/or RMI port:
/opt/replicator/tungsten/tungsten-replicator/bin/trepctl -service beta status
Processing status command...
NAME                     VALUE
----                     -----
appliedLastEventId     : mysql-bin.000026:0000000000001311;1252
appliedLastSeqno       : 5
appliedLatency         : 0.748
channels               : 1
clusterName            : beta
currentEventId         : mysql-bin.000026:0000000000001311
currentTimeMillis      : 1390410611881
dataServerHost         : host1
extensions             :
host                   : host3
latestEpochNumber      : 1
masterConnectUri       : thl://host3:2112/
masterListenUri        : thl://host1:2113/
maximumStoredSeqNo     : 5
minimumStoredSeqNo     : 0
offlineRequests        : NONE
pendingError           : NONE
pendingErrorCode       : NONE
pendingErrorEventId    : NONE
pendingErrorSeqno      : -1
pendingExceptionMessage: NONE
pipelineSource         : jdbc:mysql:thin://host3:13306/
relativeLatency        : 8408.881
resourcePrecedence     : 99
rmiPort                : 10000
role                   : master
seqnoType              : java.lang.Long
serviceName            : beta
serviceType            : local
simpleServiceName      : beta
siteName               : default
sourceId               : host3
state                  : ONLINE
timeInStateSeconds     : 8408.21
transitioningTo        :
uptimeSeconds          : 8409.88
useSSLConnection       : false
version                : Tungsten Replicator 6.1.1 build 129
Finished status command...
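A simple health check can be scripted around this output by extracting the state field and confirming it is ONLINE. This is a sketch only; the sample text below stands in for a live trepctl invocation:

```shell
# Sketch: check replicator health by parsing trepctl status output.
# The sample below stands in for the real command's output from:
#   /opt/replicator/tungsten/tungsten-replicator/bin/trepctl -service beta status
status='state          : ONLINE
appliedLatency : 0.748'

# Extract the value of the state field.
state=$(printf '%s\n' "$status" | awk -F': ' '/^state / {print $2}')

if [ "$state" = "ONLINE" ]; then
    echo "replicator beta is ONLINE"
else
    echo "replicator beta is in state: $state" >&2
fi
```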