There are currently three discrete faults that can cause a failover of a master:
Database server failure - failover will occur 20 seconds after the initial detection.
The Tungsten Manager is unable to connect to the database server and gets an I/O error. If the database server cannot respond to a TCP connect request after the configured number of attempts, it is flagged as STOPPED, which initiates the failover.
This means, literally, that the database server process is gone and cannot respond to a TCP connect request. In this case, by default, the manager will try two more times, once every 10 seconds, after the initial I/O error is detected; once that interval has elapsed, it will flag the database server as being in the STOPPED state, which in turn initiates the failover.
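The detection loop just described can be sketched as follows. This is only an illustration of the retry logic, not the Manager's actual implementation; the function name and parameters are assumptions.

```python
import socket
import time

def db_is_stopped(host, port, retries=2, interval=10, timeout=5):
    """Sketch of the check described above: after an initial I/O error,
    retry the TCP connect `retries` more times, once per `interval`
    seconds; only if every attempt fails is the server flagged STOPPED."""
    for attempt in range(retries + 1):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return False  # server accepted the connection; not stopped
        except OSError:
            if attempt < retries:
                time.sleep(interval)
    return True
```

With the defaults (two retries, 10 seconds apart), a server that never answers is flagged STOPPED 20 seconds after the initial error, matching the interval above.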
Host failure - failover will occur 30 seconds after the initial detection.
The host on which the master database server is running is 'gone'. The first indication that the master host is gone could be that the manager on that host no longer appears in the group of managers, one of which runs on each database server host. It could also be that the managers on the other hosts no longer see a 'heartbeat' message from the master manager. In circumstances like these, the remaining managers will, over a 60 second interval, once every 10 seconds, attempt to establish definitively that the master host is indeed either gone or completely unreachable via the network. If this is established, the remaining managers in the group form a quorum and the coordinator of that group initiates the failover.
Replicator failure - failover will occur 70 seconds after the initial detection, if the manager property that enables fencing of the master replicator is set to true.
Depending on how you have the manager configured, a master replicator failure can also initiate a failover. There is a specific manager property that tells a manager to 'fence' a master replicator that goes into either a failed or stopped state. The manager will then try to recover the master replicator to an online state and, if the master replicator has not recovered after an interval of 60 seconds, a failover will be initiated. BY DEFAULT, THIS BEHAVIOR IS TURNED OFF. Most customers prefer to keep a fully functional master running, even if replication fails, rather than have a failover occur.
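The fence-recover-failover sequence described above can be sketched as follows. This is a hypothetical illustration only; the callback helpers, state name, and function name are assumptions, not Tungsten APIs.

```python
import time

def fence_master_replicator(get_state, try_recover, initiate_failover,
                            interval=60, poll=10):
    # Sketch of the behavior described above: after fencing, repeatedly
    # try to recover the master replicator; if it is still not ONLINE
    # once the interval has elapsed, initiate a failover instead.
    deadline = time.monotonic() + interval
    while time.monotonic() < deadline:
        try_recover()
        if get_state() == "ONLINE":
            return "recovered"
        time.sleep(poll)
    initiate_failover()
    return "failover"
```

Note that the failover path is reached only after the full interval, which is why a recoverable replicator blip does not by itself cost you a master.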
The interval of time from the first detection of a fault until a failover occurs is configurable in 10 second increments. The listed default failover intervals are derived from the value of 'threshold' in the manager properties file:

failover interval = (threshold + 1) * 10 seconds
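Worked out, the formula reproduces the default intervals listed above. The threshold values below are inferred from those stated defaults, not read from an actual properties file:

```python
def failover_interval(threshold):
    # Seconds from first fault detection until failover: (threshold + 1) * 10
    return (threshold + 1) * 10

# Thresholds inferred from the documented defaults:
#   database server failure -> 20s, host failure -> 30s, replicator failure -> 70s
for fault, threshold in [("database server failure", 1),
                         ("host failure", 2),
                         ("replicator failure", 6)]:
    print(f"{fault}: {failover_interval(threshold)} seconds")
```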
Additionally, there are multiple ways to influence the behavior of the cluster AFTER a failover has been invoked. Below are some of the key variables:
Behavior when MySQL is not available but the binary logs are - wait for the Replicator to finish extracting the binary logs or not?
The Manager and Replicator act in concert when MySQL dies on the master node. When this happens, the Replicator can no longer update the trep_commit_seqno table, and therefore must either abort extraction or continue extracting without recording the extracted position in the database.
The default of replicator.store.thl.stopOnDBError=false means that the Manager will delay failover until all remaining events have been extracted from the binary logs on the failing master node, as a way to protect against data loss.
Failover will only continue once:
all available events are completely read from the binary logs on the master node
all events have reached the slaves
When replicator.store.thl.stopOnDBError=true, the Replicator will stop extracting once it is unable to update the trep_commit_seqno table in MySQL, and the Manager will perform the failover without waiting, at the risk of possible data loss due to leaving binlog events behind.
For use cases where failover speed is more important than data accuracy, those NOT willing to wait for a long failover can set replicator.store.thl.stopOnDBError=true and still use tungsten_find_orphaned to manually analyze and perform the data recovery. For more information, please see the tungsten_find_orphaned documentation.
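As a properties-file fragment, the trade-off above comes down to a single setting (shown here in isolation; the surrounding file contents and exact location depend on your installation):

```
# Default (false): wait for all remaining binlog events to be extracted
# before failing over. Set to true to fail over immediately, accepting
# possible data loss that can later be analyzed with tungsten_find_orphaned:
replicator.store.thl.stopOnDBError=true
```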
Slave THL apply wait time before failover - how long to wait, in seconds, for a slave to finish applying all stored THL to the database before failing over to it.
During a failover, the manager will wait until the slave that is the candidate for promotion to master has applied all stored THL events before promoting that node to master.
The default value is 0, which means "wait indefinitely until all stored THL events are applied".
Any value other than zero (0) invites data loss, because once the slave is promoted to master, any unapplied events stored in the THL will be ignored, and therefore lost.
Whenever a failover occurs, the slave with the most events stored in its local THL is selected, so that when the events are eventually applied, the data is as close to the original master as possible and the fewest events are missed.
That is usually, but not always, the most up-to-date slave, which is the one with the most events applied.
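The selection rule above can be sketched as follows. This is a hypothetical illustration; the field names and sequence numbers are assumptions:

```python
def pick_promotion_candidate(slaves):
    # Per the rule described above, prefer the slave with the most events
    # *stored* in its local THL, not the one with the most events applied.
    return max(slaves, key=lambda s: s["stored_seqno"])

slaves = [
    {"name": "db2", "stored_seqno": 120, "applied_seqno": 118},  # most applied
    {"name": "db3", "stored_seqno": 125, "applied_seqno": 110},  # most stored
]
print(pick_promotion_candidate(slaves)["name"])  # -> db3
```

Here db3 wins despite being further behind in applied events, because its stored THL lets it catch up to a position closer to the original master.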
Slave latency check - how far behind, in seconds, is each slave? If too far behind, do not use it for failover.
This setting is the "maximum slave latency" - the number of seconds within which a slave must be current with the master in order to qualify as a candidate for failover. The default is 15 minutes (900 seconds).
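A minimal sketch of that qualification check; the function and constant names are assumptions, and only the 900 second default comes from the text:

```python
DEFAULT_MAX_SLAVE_LATENCY = 900  # 15 minutes, per the default above

def qualifies_for_promotion(slave_latency, max_latency=DEFAULT_MAX_SLAVE_LATENCY):
    # A slave is a failover candidate only if it is current with the
    # master to within the configured maximum latency, in seconds.
    return slave_latency <= max_latency

print(qualifies_for_promotion(120))   # -> True
print(qualifies_for_promotion(1200))  # -> False
```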