Chapter 9. Replication Filters

Table of Contents

9.1. Enabling/Disabling Filters
9.2. Enabling Additional Filters
9.3. Filter Status
9.4. Filter Reference
9.4.1. ansiquotes.js Filter
9.4.2. BidiRemoteSlave (BidiSlave) Filter
9.4.3. breadcrumbs.js Filter
9.4.4. CaseTransform Filter
9.4.5. ColumnName Filter
9.4.6. ConvertStringFromMySQL Filter
9.4.7. DatabaseTransform (dbtransform) Filter
9.4.8. dbrename.js Filter
9.4.9. dbselector.js Filter
9.4.10. dbupper.js Filter
9.4.11. dropcolumn.js Filter
9.4.12. dropcomments.js Filter
9.4.13. dropmetadata.js Filter
9.4.14. dropstatementdata.js Filter
9.4.15. Dummy Filter
9.4.16. EnumToString Filter
9.4.17. EventMetadata Filter
9.4.18. foreignkeychecks.js Filter
9.4.19. Heartbeat Filter
9.4.20. insertsonly.js Filter
9.4.21. Logging Filter
9.4.22. MySQLSessionSupport (mysqlsessions) Filter
9.4.23. NetworkClient Filter
9.4.24. nocreatedbifnotexists.js Filter
9.4.25. OptimizeUpdates Filter
9.4.26. PrimaryKey Filter
9.4.27. PrintEvent Filter
9.4.28. Rename Filter
9.4.29. Replicate Filter
9.4.30. ReplicateColumns Filter
9.4.31. Row Add Database Name Filter
9.4.32. SetToString Filter
9.4.33. Shard Filter
9.4.34. shardbyseqno.js Filter
9.4.35. shardbytable.js Filter
9.4.36. SkipEventByType Filter
9.4.37. TimeDelay (delay) Filter
9.4.38. tosingledb.js Filter
9.4.39. truncatetext.js Filter
9.4.40. zerodate2null.js Filter
9.5. Standard JSON Filter Configuration
9.5.1. Rule Handling and Processing
9.5.2. Schema, Table, and Column Selection
9.6. JavaScript Filters
9.6.1. Writing JavaScript Filters
9.6.2. Installing Custom JavaScript Filters

Filtering operates by applying one or more filters within the stages configured in the replicator. Stages are the individual steps within a pipeline that take information from a source (such as the MySQL binary log) and write it to an internal queue, the transaction history log (THL), or apply it to a database. Where a filter is applied ultimately affects how the information is stored, used, or presented to the next stage or pipeline in the system.

For example, a filter that removed all the tables from a specific database would have different effects depending on the stage at which it was applied. If the filter were applied on the Extractor before writing the information into the THL, then no Applier could ever access the table data, because the information would never be stored in the THL to be transferred to the Targets. However, if the filter were applied on the Applier, some Appliers could replicate the table and database information while other Appliers chose to ignore it. The filtering process also has an impact on other elements of the system. For example, filtering on the Extractor may reduce network overhead, albeit at the cost of flexibility in the data transferred.
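As a sketch of how this choice is expressed, the same filter can be attached to either side of the pipeline through tpm; the service name alpha and the replicate filter's schema pattern below are illustrative:

```
# Apply the replicate filter on the Extractor: filtered-out tables never
# reach the THL, so no downstream Applier can access them.
shell> tpm update alpha \
    --svc-extractor-filters=replicate \
    --property=replicator.filter.replicate.do=sales

# Alternatively, apply it on an Applier: the THL still contains all the
# data, and each Applier can choose independently what to ignore.
shell> tpm update alpha \
    --svc-applier-filters=replicate \
    --property=replicator.filter.replicate.do=sales
```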

In a standard replicator configuration with MySQL, the following stages are configured in the Extractor, as shown in Figure 9.1, “Filters: Pipeline Stages on Extractors”.

Figure 9.1. Filters: Pipeline Stages on Extractors

  • binlog-to-q Stage

    The binlog-to-q stage reads information from the MySQL binary log and stores the information within an in-memory queue.

  • q-to-thl Stage

    The in-memory queue is written out to the THL file on disk.

Within the Applier, the stages configured by default are shown in Figure 9.2, “Filters: Pipeline Stages on Appliers”.

Figure 9.2. Filters: Pipeline Stages on Appliers


  • remote-to-thl Stage

    Remote THL information is read from an Extractor datasource and written to a local file on disk.

  • thl-to-q Stage

    The THL information is read from the file on disk and stored in an in-memory queue.

  • q-to-dbms Stage

    The data from the in-memory queue is written to the target database.
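Each of the stages listed above has a corresponding filters property in the replicator's static.properties file; the filters named there run, in order, each time that stage processes an event. A minimal sketch, with illustrative filter names:

```
# No filters on the first two Applier stages; two filters on q-to-dbms.
replicator.stage.remote-to-thl.filters=
replicator.stage.thl-to-q.filters=
replicator.stage.q-to-dbms.filters=mysqlsessions,pkey
```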

Filters can be applied during any configured stage, and where a filter is applied alters the content and availability of the information. The staging and filtering mechanism can also be used to apply multiple filters to the data, altering the content both when it is read and when it is applied.

Where more than one filter is configured for a pipeline, each filter is executed in the order it appears in the configuration. For example, within the following fragment:


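As an illustration, assuming the four filters are attached to the q-to-dbms stage (the stage chosen here is illustrative; the property follows the static.properties format):

```
replicator.stage.q-to-dbms.filters=settostring,enumtostring,pkey,colnames
```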
settostring is executed first, followed by enumtostring, pkey and finally colnames.

For certain filter combinations this order can be significant, because some filters rely on information provided by earlier filters. For example, a filter that references columns by name can only do so if the colnames filter has already added the column-name metadata to the row events.