If you are migrating an existing MySQL native replication deployment to use Tungsten Cluster or the standalone Tungsten Replicator, the configuration of the Tungsten Cluster must be updated to match the status of the Replica.
Deploy Tungsten Cluster using the model or system appropriate to your environment, according to Chapter 2, Deployment. Ensure that the Tungsten Cluster is not started automatically by excluding the --start or --start-and-report options from the tpm commands.
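As an illustration, a minimal sketch of a tpm installation that deliberately omits --start and --start-and-report might look like the following; the service name, host names, credentials and install directory are placeholders only, and the full set of options should follow your chosen deployment from Chapter 2, Deployment (exact options differ between releases):
shell> ./tools/tpm install alpha \
    --master=host1 \
    --members=host1,host2,host3 \
    --replication-user=tungsten \
    --replication-password=secret \
    --install-directory=/opt/continuent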
On each Replica
Confirm that native replication is working on all Replica nodes:
shell> echo 'SHOW SLAVE STATUS\G' | tpm mysql | \
egrep ' Master_Host| Last_Error| Slave_SQL_Running'
Master_Host: tr-ssl1
Slave_SQL_Running: Yes
Last_Error:
On the Primary and each Replica
Reset the Tungsten Replicator position on all servers:
shell> replicator start offline
shell> trepctl -service alpha reset -all -y
On the Primary
Log in, start the Tungsten Cluster services, and put the Tungsten Replicator online:
shell> startall
shell> trepctl online
On the Primary
Put the cluster into maintenance mode using cctrl to prevent Tungsten Cluster from automatically reconfiguring services:
cctrl> set policy maintenance
On each Replica
Record the current Replica log position (as reported by the Master_Log_File and Exec_Master_Log_Pos output from SHOW SLAVE STATUS). Ideally, each Replica should be stopped at the same position:
shell> echo 'SHOW SLAVE STATUS\G' | tpm mysql | \
egrep ' Master_Host| Last_Error| Master_Log_File| Exec_Master_Log_Pos'
Master_Host: tr-ssl1
Master_Log_File: mysql-bin.000025
Last_Error: Error executing row event: 'Table 'tungsten_alpha.heartbeat' doesn't exist'
Exec_Master_Log_Pos: 181268
If you have multiple Replicas configured to read from this Primary, record the Replica position individually for each host. Once you have the information for all the hosts, determine the earliest log file and log position across all the Replicas, as this information will be needed when starting replication. If one of the servers does not show an error, it may be replicating from an intermediate server. If so, you can proceed normally and assume this server stopped at the same position as the host it is replicating from.
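As an illustrative way of comparing the recorded positions, the file name and position pairs can be sorted to find the earliest one; the values below are simply the example positions shown above:
shell> printf '%s\n' 'mysql-bin.000025:181268' 'mysql-bin.000025:188249' | \
    sort -t: -k1,1 -k2,2n | head -1
mysql-bin.000025:181268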
On the Primary
Take the replicator offline and clear the THL:
shell> trepctl offline
shell> trepctl -service alpha reset -all -y
On the Primary
Start replication, using the lowest binary log file and log position from the Replica information determined previously.
shell> trepctl online -from-event 000025:181268
Tungsten Replicator will start reading the MySQL binary log from this position, creating the corresponding THL event data.
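To confirm that the Primary is extracting from the expected position, the appliedLastEventId value reported by trepctl status can be compared against the chosen binary log file and position; the exact formatting of the output varies between releases:
shell> trepctl status | grep appliedLastEventId
appliedLastEventId     : mysql-bin.000025:0000000000181268;-1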
On each Replica
Disable native replication to prevent native replication from being accidentally started on the Replica.
On MySQL 5.0 or MySQL 5.1:
shell> echo "STOP SLAVE; CHANGE MASTER TO MASTER_HOST='';" | tpm mysql
On MySQL 5.5 or later:
shell> echo "STOP SLAVE; RESET SLAVE ALL;" | tpm mysql
If the final position of MySQL replication matches the lowest across all Replicas, start the Tungsten Cluster services:
shell> trepctl online
shell> startall
The Replica will start reading from the binary log position configured on the Primary.
If the position on this Replica is different, use trepctl online -from-event to set the online position according to the recorded position when native MySQL was disabled. Then start all remaining services with startall.
shell> trepctl online -from-event 000025:188249
shell> startall
Check that replication is operating correctly by using trepctl status on the Primary and each Replica to confirm the correct position.
Use cctrl to confirm that replication is operating correctly across the dataservice on all hosts.
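For example, the state of each datasource in the dataservice can be reviewed from within cctrl using the ls command:
shell> cctrl
cctrl> ls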
Put the cluster back into automatic mode:
cctrl> set policy automatic
Update your applications to use the installed connector services rather than a direct connection.
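As an illustrative check, assuming a connector has been installed and is listening on its configured host and port, a client connection through the connector might look like the following; the host name, port, and user are placeholders for your own connector configuration:
shell> mysql -h connector1 -P 3306 -u app_user -p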
Remove the master.info file on each Replica to ensure that when a Replica restarts, it does not connect to the Primary MySQL server again.
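A minimal sketch of this step, assuming the default MySQL data directory of /var/lib/mysql (adjust the path to match your datadir; on MySQL 5.5 and later, RESET SLAVE ALL may already have removed the file):
shell> sudo rm -f /var/lib/mysql/master.info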
Once these steps have been completed, Tungsten Cluster should be operating as the replication service for your MySQL servers. Use the information in Chapter 6, Operations Guide to monitor and administer the service.