7.11. Using the Parallel Extractor

Version Support: 2.2.1 and later

The parallel extractor functionality was added in Tungsten Replicator 2.2.1, and supports only extraction from Oracle masters.

Product Specific: Supported only for Oracle Extraction

The -provision option is only supported for extraction from Oracle.

The parallel extractor reads information from the source database schema in chunks and then feeds this information into the THL data stream as row-based INSERT operations. When the slave connects, these are applied to the slave database as normal INSERT operations. The parallel extractor is particularly useful in heterogeneous environments such as Oracle to MySQL, where the data does not already exist on the slave.

The basic provisioning process operates in two stages:

  1. Provisioning data is extracted and inserted into the THL. One event is used to contain all of the data from a single table. If the table is too large to be contained in a single event, the data will be distributed over multiple events.

  2. Once provisioning has finished, data is extracted from the CDC as normal and added to the THL using the normal THL extraction thread.
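The two-stage flow above can be sketched as follows. This is an illustrative Python sketch, not Tungsten Replicator code; the event-size limit and event shapes are assumptions made for the example:

```python
# Sketch only: how provisioning events and subsequent CDC change events
# might be merged into a single THL-like event stream.
THL_EVENT_ROW_LIMIT = 3  # rows per event; real chunk sizes are far larger


def provision_events(tables):
    """Stage 1: emit each table's existing rows as row-based INSERT events,
    splitting a large table across multiple events."""
    for name, rows in tables.items():
        for i in range(0, len(rows), THL_EVENT_ROW_LIMIT):
            yield ("INSERT", name, rows[i:i + THL_EVENT_ROW_LIMIT])


def thl_stream(tables, cdc_events):
    """Provisioning events first, then normal CDC extraction (stage 2)."""
    yield from provision_events(tables)
    yield from cdc_events


tables = {"orders": [1, 2, 3, 4, 5], "users": [10, 11]}
cdc = [("UPDATE", "orders", [2])]
events = list(thl_stream(tables, cdc))
# "orders" spans two events (5 rows, limit 3); CDC events follow provisioning.
```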

This allows existing data to be extracted and processed through the replicator path, including filters within the applier. Once the initial data has been extracted, the change data begins to be applied. A diagram of the replication scheme at different stages is provided below:

Figure 7.1. Parallel Extractor: Extraction Sequence


The parallel extractor operates as a multi-threaded process that extracts multiple tables, and multiple ranges from a single table, in parallel. A chunking thread identifies all the tables, along with the keys and chunks that can be extracted from each table. It then coordinates the multiple threads:

  • Multiple chunks from the source tables are extracted in parallel

  • Multiple tables are extracted in parallel
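In outline, this chunk-and-extract scheme might look like the following. This is a hedged conceptual sketch, not the extractor's actual implementation; the chunk size and the table data are invented for the example:

```python
# Sketch only: derive key ranges per table, then extract chunks from
# multiple tables in parallel using a thread pool.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 2  # keys per chunk; illustrative only


def chunk_table(name, keys):
    """Split a table's key space into contiguous (table, low, high) ranges."""
    keys = sorted(keys)
    return [(name, keys[i], keys[min(i + CHUNK_SIZE, len(keys)) - 1])
            for i in range(0, len(keys), CHUNK_SIZE)]


def extract_chunk(chunk):
    name, lo, hi = chunk
    # A real extractor would run something like:
    #   SELECT ... FROM name WHERE pk BETWEEN lo AND hi
    return (name, lo, hi)


tables = {"orders": [1, 2, 3, 4, 5], "users": [10, 11]}
chunks = [c for name, keys in tables.items() for c in chunk_table(name, keys)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(extract_chunk, chunks))
```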

For example, when reading from two different tables in a single schema, the process might look like the figure below:

Figure 7.2. Parallel Extractor: Extraction Operation


Because multiple threads are used to read information from the tables, the process is very quick, although it places additional load on the source database server, since the queries must load all of the data into memory.

To use the parallel extractor to provision data into the slave, the configuration must be performed as part of the installation process when configuring the master replicator for the first time, or when re-initializing the replicator on the master after a trepctl reset operation.

To set up provisioning with the parallel extractor:

  1. Install the master Tungsten Replicator using tpm, but do not enable automatic starting (i.e. do not use the --start or --start-and-report options).

  2. Install the slave replicator as normal.

  3. On the master:

    1. Start the replicator in OFFLINE mode using replicator start offline:

      shell> replicator start offline

    2. Put the replicator into the ONLINE state, using the -provision option:

      shell> trepctl online -provision

      If you have an identifiable reference number, such as the Oracle system change number or MySQL event, then this can be specified on the command line to the trepctl online -provision command:

      shell> trepctl online -provision 40748375

      During the provisioning process, the replicator will show the status GOING-ONLINE:PROVISIONING until all of the data has been read from the existing database.

    The master will now start to read the information currently stored and feed this information through a separate pipeline into the THL.

  4. On the slave, start the replicator, or put the replicator online. Statements from the master containing the provisioning information should be replicated into the slave.
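To watch for the GOING-ONLINE:PROVISIONING state during provisioning, the name/value lines that trepctl status prints can be parsed. The sketch below is illustrative only; the sample text is an assumption about the output format, and in practice it would come from running trepctl status:

```python
# Sketch only: parse 'name : value' status lines into a dict and check
# whether the replicator is still provisioning.
def parse_status(text):
    """Turn 'name : value' lines into a dict."""
    fields = {}
    for line in text.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            fields[name.strip()] = value.strip()
    return fields


# Assumed sample output; real output would come from 'trepctl status'.
sample = """\
appliedLastSeqno       : 125
role                   : master
state                  : GOING-ONLINE:PROVISIONING
"""
status = parse_status(sample)
provisioning = status["state"] == "GOING-ONLINE:PROVISIONING"
```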


If the replicator is placed offline while the parallel extractor is still extracting data, the extraction process will continue to run, inserting data until it has completed.

Once the provisioned data has been inserted, replication will continue from the position where changes started to occur after the replicator was installed.