A pipeline (or service) acts upon data.
Each pipeline consists of a variable number of stages.
Every stage's workflow consists of three (3) actions:
Extract: the source for extraction could be the MySQL server binary logs on a master, or the local THL on disk on a slave
Filter: any configured filters are applied here
Apply: the apply target can be the THL on disk on a master, or the database server on a slave
Stages can be customized with filters, and filters are invoked on a per-stage basis.
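The extract, filter, and apply actions of a single stage can be sketched as follows. This is a minimal illustration of the workflow only; the class and parameter names are hypothetical, not the replicator's actual API.

```python
# Hypothetical sketch of one stage's workflow: extract events from a
# source, run them through the stage's configured filters, then apply
# them to the target. Names are illustrative, not the replicator's API.
class Stage:
    def __init__(self, extract, filters, apply):
        self.extract = extract    # callable returning an iterable of events
        self.filters = filters    # per-stage filter callables
        self.apply = apply        # callable that consumes one event

    def run_once(self):
        for event in self.extract():
            for f in self.filters:
                event = f(event)
                if event is None:      # a filter may drop the event entirely
                    break
            if event is not None:
                self.apply(event)

# Example: a stage whose single filter drops odd-numbered events.
applied = []
stage = Stage(extract=lambda: [1, 2, 3, 4],
              filters=[lambda e: e if e % 2 == 0 else None],
              apply=applied.append)
stage.run_once()
```

Because filters are invoked per stage, the same filter callable could be attached to different stages to act at different points in the pipeline.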
By default, there are two pipeline services defined:
Master replication service, which contains two (2) stages:
binlog-to-q: reads information from the MySQL binary log and stores it in an in-memory queue.
q-to-thl: the contents of the in-memory queue are written out to the THL file on disk.
Slave replication service, which contains three (3) stages:
remote-to-thl: remote THL information is read from a master datasource and written to a local file on disk.
thl-to-q: THL information is read from the file on disk and stored in an in-memory queue.
q-to-dbms: data from the in-memory queue is written to the target database.
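The way the slave's three stages hand data from one to the next can be sketched as below. This is a simplified model under stated assumptions: a plain list stands in for the THL file on disk, the event payloads are made up, and only the data flow between stages is shown.

```python
from queue import Queue

# Simplified model of the slave pipeline's three stages. A plain list
# stands in for the THL file on disk; names and data are illustrative.
remote_thl = ["ev1", "ev2", "ev3"]   # events fetched from the master

# remote-to-thl: remote THL events are written to the local "disk"
local_thl = list(remote_thl)

# thl-to-q: THL events are read from "disk" into an in-memory queue
q = Queue()
for ev in local_thl:
    q.put(ev)

# q-to-dbms: events are drained from the queue and applied to the target
target_db = []
while not q.empty():
    target_db.append(q.get())
```

The master pipeline follows the same pattern with its two stages: binlog-to-q fills the in-memory queue from the binary log, and q-to-thl drains it to the THL file on disk.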