Tungsten Replicator does not automatically disable triggers on Replicas. This can create problems on Replicas because the trigger's effects can be applied twice: once via the changes replicated from the Primary, and again when the trigger fires on the Replica. Typical symptoms are duplicate key errors, though other problems may appear.
There is no simple, one-answer-fits-all solution, because the behaviour of MySQL and triggers differs depending on the binary logging format in use.
- When using ROW-based binary logging, MySQL logs all data changes in the binary log, including any data changes performed as a result of a trigger firing.
- When using MIXED-based binary logging:
  - If the trigger is deemed to be non-deterministic, MySQL follows the ROW-based logging rules and logs all data changes, including any data changes performed as a result of a trigger firing.
  - If the trigger is deemed to be deterministic, MySQL follows the STATEMENT-based logging rules and ONLY logs the statement issued by the client, NOT any data changes made as a result of the trigger firing.
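To determine which of these behaviours applies in your environment, you can check the binary logging format currently in effect, for example:

-- show the binary logging format for the server and the current session
select @@global.binlog_format, @@session.binlog_format;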
The MIXED behaviour outlined above presents challenges for Tungsten Replicator, because MySQL does not flag transactions as being the result of a trigger firing rather than a client application. Therefore, it is not possible for the replicator to make that distinction either.
This means that if you are running with MIXED-based binary logging enabled, there may be times when you want the triggers on the target to fire, and times when you do not. The recommendations are therefore as follows:
Tungsten Clustering Deployments
- Switch to ROW-based binary logging (see the example after this list), and either:
  - Implement the is_Primary() function outlined below, or
  - Use the replicate.ignore filter to ignore data changes to tables altered by Triggers (ONLY suitable if the filtered tables are solely managed by the Trigger).
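If you do need to change the logging format, the following is a minimal sketch of switching a source to ROW-based binary logging. A SET GLOBAL change only affects sessions that connect afterwards, so also set binlog_format=ROW in the server configuration file (my.cnf) so that it persists across restarts.

-- switch new sessions to ROW-based binary logging
set global binlog_format = 'ROW';
-- confirm the change
show global variables like 'binlog_format';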
Tungsten Replicator Deployments
- If the source instance is running in ROW-based binary logging mode:
  - Drop the triggers on the target (see the example below). This is practical in fan-in topologies for reporting, or in other cases where you do not need to fail over to the Replica at a later time. Optionally, also implement the dropddl.js JavaScript filter (available in Tungsten Replicator v6.1.2 onwards) to prevent CREATE/DROP TRIGGER DDL being replicated, or
  - Implement the is_Primary() function outlined below, or
  - Use the replicate.ignore filter to ignore data changes to tables altered by Triggers (ONLY suitable if the filtered tables are solely managed by the Trigger).
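For the first option above, dropping the triggers on the target is plain DDL executed directly on the Replica. A minimal sketch, using the foo_insert trigger from the example later in this section:

-- run on the target/Replica only
drop trigger if exists foo_insert;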
- If the source instance is running in MIXED-based binary logging mode:
  - Use the replicate.ignore filter to ignore data changes to tables altered by Triggers (ONLY suitable if the filtered tables are solely managed by the Trigger; see the configuration sketch after this list), or
  - Switch to ROW-based binary logging and follow the recommendations above.
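The replicate.ignore filter referenced above is normally enabled through tpm. The following is a hedged sketch only: the service name (alpha) and the table list (myschema.bar) are placeholders, and the exact option names should be checked against the filter documentation for your release.

# enable the replicate filter on the extractor and ignore the
# trigger-managed table(s); 'alpha' and 'myschema.bar' are placeholders
./tools/tpm update alpha \
  --repl-svc-extractor-filters=replicate \
  --property=replicator.filter.replicate.ignore=myschema.bar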
The is_Primary() approach is simple to implement. First, create a function like the following, which returns false if we are using the Tungsten user, as would be the case on a Replica.
create function is_Primary()
returns boolean
deterministic
return if(substring_index(user(),'@',1) != 'tungsten',true, false);
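As a quick sanity check, you can call the function from an ordinary client session on the Primary; for any account other than tungsten it should return true (1):

select user(), is_Primary();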
Next, add this check to any triggers that should not run on the Replica, as shown in the next example. This suppresses the trigger's insert into table bar except on the Primary.
delimiter //
create trigger foo_insert after insert on foo
for each row begin
    if is_Primary() then
        insert into bar set id=NEW.id;
    end if;
end;
//
As long as applications do not use the Tungsten account on the Primary, the preceding approach will be sufficient to suppress trigger operation.
Alternatively, if you are implementing is_Primary() within a clustering deployment, you could check the database read_only parameter instead. In a clustered deployment the Replica databases run in read_only mode, so the trigger could be coded to fire only when read_only is OFF.
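A minimal sketch of such a variant, assuming the cluster keeps the standard MySQL read_only system variable enabled on Replicas; the function name and logic mirror the earlier example and are illustrative only:

-- mirrors the earlier function, but decides based on the server's
-- read_only state rather than the connected account
create function is_Primary()
returns boolean
deterministic
return if(@@global.read_only = 0, true, false);

With this variant, the decision no longer depends on which account the application uses, only on whether the local database is writable.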