Before covering the basics of creating different dataservice types, there
are some key terms used throughout the setup and installation process
that identify the different components of the system. These are
summarised in Table 2.1, "Key Terminology".
Table 2.1. Key Terminology
Composite dataservice
    A configured Tungsten Clustering service consisting of multiple
    dataservices, typically at different physical locations.

Dataservice
    A configured Tungsten Clustering service consisting of dataservers,
    datasources and connectors.

Dataserver
    The database on a host. Datasources include MySQL, PostgreSQL or
    Oracle.

Host or Node
    One member of a dataservice and the associated Tungsten components.

Staging host
    The machine from which Tungsten Clustering is installed and
    configured. The machine does not need to be the same as any of the
    existing hosts in the cluster.

Staging directory
    The directory where the installation files are located and the
    installer is executed. Further configuration and updates must be
    performed from this directory.

Connector
    A connector is a routing service that provides management for
    connectivity between application services and the underlying
    dataservers.

Witness host
    A witness host is a host that can be contacted using the ping
    protocol to act as a network check for the other nodes of the
    cluster. Witness hosts should be on the same network and segment as
    the other nodes in the dataservice.
2.1.1. Manager Hosts

The manager plays a key role within any dataservice, communicating between
the replicator, connector and datasources to understand the current
status, and controlling these components to handle failures, maintenance,
and service availability.
The primary role of the manager is to monitor each of the services,
identify problems, and react to those problems in the most effective way
to keep the dataservice active. For example, in the case of a datasource
failure, the datasource is temporarily removed from the cluster, the
connector is updated to route queries to another available datasource, and
the replication is disabled.
These decisions are driven by a rule-based system, which checks current
status values, and performs different operations to achieve the correct
result and return the dataservice to operational status.
In terms of control and management, the manager is capable of performing
backup and restore operations, automatically recovering from failure
(including re-provisioning from backups), and individually controlling
the configuration, service startup and shutdown, and overall operation of
the system.
Within a typical Tungsten Clustering deployment there are multiple managers,
and these keep in constant contact with each other and with the other
services. When a failure occurs, multiple managers are involved in the
decision. For example, if a host is no longer visible to one manager, that
manager does not make the decision to disable the service on its own; only
when a majority of managers identify the same result is the decision made.
For this reason, there should be an odd number of managers (to prevent
deadlock), or the managers can be augmented through the use of witness
hosts.
One manager is automatically installed for each configured datasource;
that is, in a three-node system with a master and two slaves, three
managers will be installed.
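As a point of reference only (this assumes a completed installation with the Tungsten tools in the path), the managers' combined view of the dataservice can be inspected from any node using the cctrl cluster control shell:

# open the cluster control shell on any node (connects to the local manager)
cctrl
# then, at the cctrl prompt, list the dataservice components and their states
ls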
Checks to determine the availability of hosts are performed by using
either the system ping protocol or the Echo TCP/IP
protocol on port 7 to determine whether a host is available.
The configuration of the protocol to be used can be made by adjusting the
manager properties. For more information, see
Section B.2.2.3, “Host Availability Checks”.
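As an illustration of the two mechanisms (hostB is a placeholder hostname, and these commands simply mimic the kind of check the manager performs internally), both can be tried manually from a shell:

# system ping protocol
ping -c 3 hostB
# Echo TCP/IP protocol on port 7 (requires the echo service to be enabled on hostB)
echo "manager-check" | nc -w 2 hostB 7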
2.1.2. Connector (Router) Hosts
Connectors (known as routers within the dataservice) provide a routing
mechanism between client applications and the dataservice. The
Tungsten Connector component automatically routes database operations to the
master or slave, and takes account of the current cluster status as
communicated to it by the Tungsten Manager. This functionality solves three
primary issues that would otherwise need to be handled by the client
application:
Datasource role redirection (i.e. master and slave). This includes
read/write splitting, and the ability to read data from a slave that
is up to date with a corresponding write.
Datasource failure (high-availability), including the ability to
redirect client requests in the event of a failure or failover. This
includes maintenance operations.
Dataservice topology changes, for example when expanding the number of
datasources within a dataservice.
The primary role of the connector is to act as the connection point for
applications that can remain open and active, while simultaneously
supporting connectivity to the datasources. This allows for changes to the
topology and active role of individual datasources without interrupting
the client application. Because the operation is through one or more
static connectors, the application also does not need to be modified or
changed when the number of datasources is expanded or altered.
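For example, assuming a connector has been installed on a host named connector1 and is listening on the default MySQL port (the hostname, port, user and schema below are illustrative assumptions), an application connects to the connector exactly as it would to a normal MySQL server, and the connector routes the session to an appropriate datasource:

mysql -h connector1 -P 3306 -u app_user -p app_db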
Depending on the deployment environment and client application
requirements, the connector can be installed either on the client
application servers, the database servers, or independent hosts. For more
information, see Section 6.2, “Clients and Deployment”.
Connectors can also be installed independently on specific hosts. The list
of enabled connectors is defined by the
--connectors option to
tpm. A Tungsten Clustering dataservice can be installed with
more connector servers than datasources or managers.
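As a sketch of that option only (hostnames are placeholders and the other options required for a full installation are omitted), a dataservice with three datasources could route traffic through five connector hosts:

./tools/tpm install alpha \
    --members=hostA,hostB,hostC \
    --connectors=hostA,hostB,hostC,hostD,hostE \
    ...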
2.1.3. Replicator Hosts

The replicator provides the core replication of information between
datasources and, in composite deployments, between dataservices. The
replicator operates by extracting data from the 'master' datasource (for
example, using the MySQL binary log), and then applying the data to one or
more target datasources.
Different deployments use different replicators and configurations, but in
a typical Tungsten Clustering deployment a master/slave or multimaster
deployment model is used. For Tungsten Clustering deployments there will be one
replicator instance installed on each datasource host.
Within the dataservice, the manager controls each replicator service and
is able to alter the replicator operation and role, for example by
switching between master and slave roles. The replicator also provides
information to the manager about the latency of the replication operation,
and this is used with the connectors to control client connectivity into
the dataservice.
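For reference (assuming a standard installation with the replicator tools in the path), the role and latency information that the replicator reports to the manager can also be inspected by hand on any datasource host:

# shows the replicator state, role (master/slave) and applied latency
trepctl status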
Replication within Tungsten Clustering is provided by Tungsten Replicator™,
which supports a wide range of additional deployment topologies, and
heterogeneous deployments including MongoDB, Vertica, and Oracle.
Replication both to and from a dataservice is supported. For more information
on replicating out of an existing dataservice, see:
Replicators are automatically configured according to the datasources and
topology specified when the dataservice is created.
2.1.4. Witness Hosts

Tungsten Clustering operates through the rules built into the manager, which
make decisions about different configuration and status settings for all the
services within the cluster. In the event of a communication failure
within the system it is vital for the manager, in automatic policy mode,
to be able to perform a switch away from a failed or unavailable master.
Within the network, the managers communicate with each other, and with the
connectors and dataservers, to determine their availability. The
managers compare states and network connectivity. In the event of an
issue, managers 'vote' on whether a failover or switch should occur.
The rules are designed to prevent unnecessary switches and failovers.
Managers vote, and an odd number of managers helps to prevent split-brain
scenarios in which invalid failover decisions could be made.
Two types of witness are supported:
Passive Witness — a passive witness is
checked by the managers using a network ping to determine if the host
is available. The witness host or hosts are used only as a check to
verify whether a failed host or a failed network is the root cause.
Active Witness — an active witness is an
instance of Tungsten Manager running on a host that is otherwise not
part of the dataservice. An active witness has full voting rights
within the managers and can therefore make informed decisions about
the dataservice state in the event of a failure. Active witnesses can
only be a member of one cluster at a time.
All managers are active witnesses, and active witnesses are the
recommended solution for deployments where network availability is less
certain (i.e. cloud environments), and where you have a two-node
dataservice (where one active witness provides the third vote needed to
reach a majority decision).
Tungsten Clustering Quorum Requirements
There should be at least three managers (including any active
witnesses).

There should be an odd number of managers and witnesses, to prevent
deadlock.

If the dataservice contains only two hosts, at least one active
witness must be installed.
Dataservices may contain either passive or
active witnesses, but not both.
These rules apply for all Tungsten Clustering installations and must be adhered
to. Deployment will fail if these conditions are not met.
The rules for witness selection are as follows:
Passive witnesses must be on the same network segment as the managers.
To prevent issues where a network switch or router failure would
cause the managers to falsely identify a network failure, the
managers must be able to connect to each other without having to
route across networks or network segments.
Active witnesses can be located beyond or across network segments,
but all active witnesses must have a clear communication channel to
each other and to the other managers. Difficulties in contacting other
managers and services in the network could cause unwanted failover
or shunning of datasources.
For example, consider the following scenario:

Master dataserver on hostA

Slave dataservers on hostB and hostC

A network fault means that one of the managers can no longer see
hostA, while hostA remains visible to the other two managers, which
can also communicate with each other.
Figure 2.1. Witness: Active Service
The master will not be automatically switched, given that
hostA is still available to two of
the managers in the network.
If a second manager also identifies hostA as unavailable, a majority of
the managers agree and the failure can be acted upon.
Figure 2.2. Witness: Inactive Service
Passive witnesses can be enabled when using tpm by specifying the
witness hosts with the --witnesses option. For example:
./tools/tpm install alpha --witnesses=hostC,hostD \
    ...
To enable active witnesses, the --enable-active-witnesses=true option
must be specified and the hosts that will act as active witnesses must be
added to the list of hosts provided to
--members. This configures all of the specified
witnesses as active witnesses:
./tools/tpm install alpha --enable-active-witnesses=true \
    --witnesses=hostC \
    --members=hostA,hostB,hostC \
    ...