8.9.1. Migrating from MySQL Native Replication 'In-Place'

If you are migrating an existing MySQL native replication deployment to use Tungsten Replication, the Tungsten Replication configuration must be updated to match the current status of the slaves.

  1. Deploy Tungsten Replication using the appropriate model or system according to Chapter 2, Deployment. Ensure that Tungsten Replication is not started automatically by excluding the --start or --start-and-report options from the tpm commands.
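
    As a minimal sketch only, a tpm command that deliberately omits the --start and --start-and-report options might look like the following; the topology, host names, credentials and installation directory shown here are placeholders, and the actual options should come from your configuration in Chapter 2, Deployment:

    shell> ./tools/tpm install alpha \
    --topology=master-slave \
    --master=host1 \
    --slaves=host2,host3 \
    --replication-user=tungsten \
    --replication-password=secret \
    --install-directory=/opt/continuent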

  2. On each slave

    Confirm that native replication is working on all slave nodes:

    shell> echo 'SHOW SLAVE STATUS\G' | tpm mysql | \
    egrep ' Master_Host| Last_Error| Slave_SQL_Running' 
                      Master_Host: tr-ssl1
                Slave_SQL_Running: Yes
                       Last_Error:

  3. On the master and each slave

    Reset the Tungsten Replicator position on all servers:

    shell> replicator start offline
    shell> trepctl -service alpha reset -all -y

  4. On the master

    Log in, start the Tungsten Replication services, and put the Tungsten Replicator online:

    shell> startall
    shell> trepctl online

  5. On each slave

    Record the current slave log position (as reported by the Master_Log_File and Exec_Master_Log_Pos output from SHOW SLAVE STATUS). Ideally, each slave should be stopped at the same position:

    shell> echo 'SHOW SLAVE STATUS\G' | tpm mysql | \
    egrep ' Master_Host| Last_Error| Master_Log_File| Exec_Master_Log_Pos' 
                      Master_Host: tr-ssl1
                  Master_Log_File: mysql-bin.000025
                       Last_Error: Error executing row event: 'Table 'tungsten_alpha.heartbeat' doesn't exist'
              Exec_Master_Log_Pos: 181268

    If you have multiple slaves configured to read from this master, record the slave position individually for each host. Once you have the information for all of the hosts, determine the earliest log file and log position across all the slaves, as this information is needed when starting Tungsten Replication. If one of the servers does not show an error, it may be replicating from an intermediate server; if so, you can proceed normally and assume that this server stopped at the same position as the host it is replicating from.
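
    When several slaves are involved, one way to gather the log file and position from each of them in a single pass is to run the same query remotely. This loop is only a sketch: the host names host2 and host3 are placeholders, and it assumes passwordless SSH access and that tpm is available in the PATH on each slave.

    shell> for host in host2 host3; do \
    echo "== $host =="; \
    ssh $host "echo 'SHOW SLAVE STATUS\G' | tpm mysql" | \
    egrep ' Master_Log_File| Exec_Master_Log_Pos'; \
    done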

  6. On the master

    Take the replicator offline and clear the THL:

    shell> trepctl offline
    shell> trepctl -service alpha reset -all -y

  7. On the master

    Start replication, using the lowest binary log file and log position from the slave information determined in step 5.

    shell> trepctl online -from-event 000025:181268

    Tungsten Replicator will start reading the MySQL binary log from this position, creating the corresponding THL event data.
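
    The event ID passed to -from-event is the numeric suffix of the recorded Master_Log_File, a colon, and the recorded Exec_Master_Log_Pos. As a small sketch using the values from the example above, the string can be assembled with standard shell parameter expansion:

    shell> FILE=mysql-bin.000025
    shell> POS=181268
    shell> echo "${FILE##*.}:${POS}"
    000025:181268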

  8. On each slave

    1. Disable native replication to prevent it from being accidentally restarted on the slave.

      On MySQL 5.0 or MySQL 5.1:

      shell> echo "STOP SLAVE; CHANGE MASTER TO MASTER_HOST='';" | tpm mysql

      On MySQL 5.5 or later:

      shell> echo "STOP SLAVE; RESET SLAVE ALL;" | tpm mysql
    2. If the final position of MySQL replication matches the lowest across all slaves, start Tungsten Replication services :

      shell> trepctl online
      shell> startall

      The slave will start reading from the binary log position configured on the master.

      If the position on this slave is different, use trepctl online -from-event to set the online position to the position recorded for this slave when native MySQL replication was stopped, then start all remaining services with startall:

      shell> trepctl online -from-event 000025:188249
      shell> startall

  9. Check that replication is operating correctly by using trepctl status on the master and each slave to confirm the correct position.
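
    As a quick comparison, the position-related fields can be filtered out of the trepctl status output; appliedLastEventId, appliedLastSeqno and state are standard fields of that output, and the sequence numbers and event IDs should match between the master and each slave:

    shell> trepctl status | egrep 'appliedLastEventId|appliedLastSeqno|state'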

  10. Remove the master.info file on each slave to ensure that, if the slave restarts, it does not attempt to reconnect to the master MySQL server using native replication.
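
    The master.info file normally resides in the MySQL data directory. A minimal sketch, assuming the default data directory of /var/lib/mysql (confirm the actual location first if your installation differs):

    shell> echo "SHOW VARIABLES LIKE 'datadir'" | tpm mysql
    shell> sudo rm /var/lib/mysql/master.info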

Once these steps have been completed, Tungsten Replication should be operating as the replication service for your MySQL servers. Use the information in Chapter 8, Operations Guide, to monitor and administer the service.