Each host in your cluster must be configured with the tungsten user, have the SSH key added, and then be prepared so that the system and directories are ready for the Tungsten services to be installed and configured.
There are a number of key steps to the configuration process:
Creating a user environment for the Tungsten service
Creating the SSH authorization for the user on each host
Configuring the directories and install locations
Installing necessary software and tools
Configuring sudo access to enable the configured user to perform administration commands
The operations in the following sections must be performed on each host within your cluster. Failure to perform each step may prevent the installation and deployment of the Tungsten cluster.
The tungsten user should be created with a home directory that will be used to hold the Tungsten distribution files (not the installation files), and that will be used to execute and create the different Tungsten services.
For Tungsten to work correctly, the tungsten user must be able to open a large number of files/sockets for communication between the different components and processes. You can check the current limits by using ulimit:

shell> ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 709
virtual memory          (kbytes, -v) unlimited
The system should be configured to allow a minimum of 65535 open files.
You should configure both the tungsten user and the database user with this limit by editing the /etc/security/limits.conf file:

tungsten - nofile 65535
mysql - nofile 65535
In addition, the number of running processes supported should be increased to ensure that there are no restrictions on the running processes or threads:
tungsten - nproc 8096
mysql - nproc 8096
You must log out and log back in again for the ulimit changes to take effect.
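Once logged back in, a quick way to confirm the new limits took effect (a suggested check, not part of the original steps) is:

shell> ulimit -n
65535
shell> ulimit -u
8096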
You may also need to set the limits on the specific service if your operating system uses the systemd service management framework (managed through systemctl). To configure the file limits for the MySQL service:
Copy the MySQL service configuration file template to the configuration directory if it does not already exist:
sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
Edit the file, and add the following line to the [Service] section:

LimitNOFILE=infinity

This configures an unlimited number of open files; you can also specify a number, for example:

LimitNOFILE=65535
Reload the systemctl daemon configuration:
sudo systemctl daemon-reload
Now restart the MySQL service:
service mysql restart
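To confirm that the running service picked up the new limit, one approach (a suggested check, assuming a Linux host where pidof is available) is to inspect the limits of the running mysqld process:

shell> cat /proc/$(pidof mysqld)/limits | grep 'open files'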
The hostname, DNS, IP address and accessibility of this information must be consistent. For the cluster to operate successfully, each host must be identifiable and accessible to each other host, either by name or IP address.
Individual hosts within your cluster must be reachable and must conform to the following:
Do not use the localhost or 127.0.0.1 addresses.
Do not use Zeroconf (.local) addresses. These may not resolve properly or fully on some systems.
The server hostname (as returned by the hostname command) must match the names you use when configuring your service.
The IP address that resolves on the hostname for that host must resolve to that host's IP address (not 127.0.0.1). The default configuration for many Linux installations is for the hostname to resolve to the same as localhost:

127.0.0.1 localhost
127.0.0.1 host1
Each host in the cluster must be able to resolve the address for all the other hosts in the cluster. To prevent errors within the DNS system causing timeouts or bad resolution, all hosts in the cluster, in addition to the witness host, should be added to /etc/hosts. For example:

127.0.0.1 localhost
192.168.1.60 host1
192.168.1.61 host2
192.168.1.62 host3
192.168.1.63 host4
In addition to explicitly adding hostnames to /etc/hosts, the name service switch file, /etc/nsswitch.conf, should be updated to ensure that hosts are searched first before using DNS services. For example:
hosts: files dns
Failure to add explicit hosts and change this resolution order can lead to transient DNS resolving errors triggering timeouts and failsafe switching of hosts within the cluster.
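Because getent honors the order defined in /etc/nsswitch.conf, it can be used as a quick check (a suggested verification, not part of the original steps) that the /etc/hosts entries are being consulted first:

shell> getent hosts host1
192.168.1.60    host1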
The IP address of each host within the cluster must resolve to the same IP address on each node. For example, if host1 resolves to 192.168.1.60 on host1, the same IP address must be returned when looking up host1 on the host host2.
To double check this, you should perform the following tests:
Confirm the hostname:

shell> hostname
The hostname cannot contain underscores.
Confirm the IP address:

shell> hostname --ip-address
Confirm that the hostnames of the other hosts in the cluster resolve correctly to a valid IP address. You should confirm on each host that you can identify and connect to each other host in the planned cluster:

shell> nslookup host2
If the host does not resolve, either ensure that the hosts are added to the DNS service, or explicitly add the information to the /etc/hosts file. If using /etc/hosts, you must ensure that the information is correct and consistent on each host, and double-check using the above method that the IP address resolves correctly for every host in the cluster.
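To automate this check across all nodes, a minimal sketch (assuming the four example hostnames used above) is to run the following loop from each host:

shell> for h in host1 host2 host3 host4; do ping -c 1 $h > /dev/null && echo "$h ok" || echo "$h FAILED"; done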
The following network ports should be open between specific hosts to allow communication between the different components:
| Component | Source | Destination | Port | Purpose |
| Database Service | Database Host | Database Host | 7 | Checking availability |
| 〃 | 〃 | 〃 | 10000-10001 | Replication connection listener port |
| 〃 | 〃 | 〃 | 10002-10003 | Replication connection listener ports |
| Client Application | Client Host | Database Host | 13306 | MySQL port for connectivity |
| Manager Service | Manager Hosts | Manager Hosts | 7 | Communication between managers within multi-site, multi-master clusters |
If a system has a firewall enabled, in addition to enabling communication between hosts as in the table above, the localhost must allow port-to-port traffic on the loopback connection without restrictions. For example, using iptables this can be enabled with the following rule:
iptables -A INPUT -i lo -m state --state NEW -j ACCEPT
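If the firewall also blocks the ports listed in the table above, matching rules must be added. As an illustrative sketch (adjust the ports and interfaces to your deployment), the replication listener ports could be opened with:

shell> iptables -A INPUT -p tcp --dport 10000:10001 -j ACCEPT
shell> iptables -A INPUT -p tcp --dport 10002:10003 -j ACCEPT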
For password-less SSH to work between the different hosts in the cluster, you need to copy both the public and private keys between the hosts in the cluster. This will allow the staging server, and each host, to communicate directly with each other using the designated login.
To achieve this, on each host in the cluster:
Copy the public (.ssh/id_rsa.pub) and private (.ssh/id_rsa) keys from the staging server to the ~tungsten/.ssh directory.
Add the public key to the .ssh/authorized_keys file:
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
Ensure that the file permissions on the .ssh directory are correct:

shell> chmod 700 ~/.ssh
shell> chmod 600 ~/.ssh/*
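If the keys have not yet been copied over, one way to transfer them from the staging server (a sketch assuming the tungsten login and the default key file names) is:

shell> scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub tungsten@host1:.ssh/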
With each host configured, you should try connecting to each host from the staging server to confirm that the SSH information has been correctly configured. You can do this by connecting to the host using ssh:

shell> ssh tungsten@host1
You should have logged into the host at the tungsten home directory, and that directory should be writable by the tungsten user.
On each host within the cluster you must pick, and configure, a number of directories to be used by Tungsten Replicator™, as follows:
The /tmp directory must be accessible and executable, as it is the location where some software will be extracted and executed during installation and setup. The directory must be writable by the tungsten user.
On some systems, the /tmp filesystem is mounted as a separate filesystem and explicitly configured to be non-executable (using the noexec filesystem option). Check the output from the mount command.
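For example, a suggested check (assuming /tmp is a separate mount):

shell> mount | grep ' /tmp '

If noexec appears in the mount options, the filesystem must be remounted without it before installation.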
Tungsten Replicator™ needs to be installed in a specific directory.
The recommended solution is to use
/opt/continuent. This information will be
required when you configure the cluster service.
The directory should be created, and the owner and permissions set for the configured user:
shell> sudo mkdir /opt/continuent
shell> sudo chown tungsten /opt/continuent
shell> sudo chmod 700 /opt/continuent
The home directory of the
tungsten user must be writable
by that user.
Tungsten Replicator™ relies on the following software. Each host must use the same version of each tool.
| Tool | Required Version | Notes |
| Ruby | 1.8.7, 1.9.3, or 2.0.0 to 2.4.0 or higher [a] | JRuby is not supported |
| Ruby OpenSSL Module | - | Check using ruby -ropenssl -e 'p "works"' |
| Ruby io-console module | - | Install using gem install io-console [b] |
| Ruby net-ssh module | - | Install using gem install net-ssh [c] |
| Ruby net-scp module | - | Install using gem install net-scp [d] |
| GNU tar | - | gtar is required for Solaris due to limitations in the native tar command |
| Java Runtime Environment | Java SE 7 (or compatible) | |

[a] Ruby 1.9.1 and 1.9.2 are not supported; these releases remove the execute bit during installation.
[c] For Ruby 1.8.7 the minimum version of net-ssh is 2.5.2, install using gem install net-ssh -v 2.5.2
[d] For Ruby 1.8.7 the minimum version of net-scp is 1.0.4, install using gem install net-scp -v 1.0.4
These tools must be installed, running, and available to all users on each host.
To check the current version for any installed tool, login as the configured user (e.g. tungsten), and execute the command to get the latest version. For example:
Run java -version:
shell> java -version
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
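Similarly, the Ruby version and the OpenSSL module check from the requirements table can be confirmed (a suggested addition to the examples; output will vary with your installation):

shell> ruby -v
shell> ruby -ropenssl -e 'p "works"'
"works"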
Tungsten Replicator is known to work with Java using the following JVMs:
Oracle JVM/JDK 8
On certain environments, a separate tool such as alternatives (RedHat/CentOS) or update-alternatives (Debian/Ubuntu) may need to be used to switch Java versions globally or for individual users. For example, within CentOS:
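For instance, a sketch using the standard alternatives tool (the exact menu shown depends on which Java packages are installed):

shell> sudo alternatives --config java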
It is recommended to switch off all automated software and operating system update procedures. These can automatically install and restart different services which may be identified as failures by Tungsten Replicator. Software and Operating System updates should be handled by following the appropriate Section 8.11, “Performing Database or OS Maintenance” procedures.
It is also recommended to install ntp or a similar time synchronization tool so that each host in the cluster has the same physical time.
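For example, on RedHat/CentOS (an illustrative sketch; package and service names vary by distribution and release):

shell> sudo yum install ntp
shell> sudo systemctl enable --now ntpd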
Tungsten requires that the user you have configured to run the server has sudo credentials so that it can run and install services as root. You can do this by editing the /etc/sudoers file using visudo and adding the following lines:

Defaults:tungsten !authenticate
...
## Allow tungsten to run any command
tungsten ALL=(ALL) ALL
For a secure environment where sudo access is not permitted for all operations, a minimum configuration can be used:

tungsten ALL=(ALL) NOPASSWD: /usr/bin/which, /etc/init.d/mysql
sudo can also be configured to handle only specific directories or files. For example, when using xtrabackup, or additional tools in the Tungsten toolkit, such as tungsten_provision_slave (in [Continuent Tungsten 2.0 Manual]), additional commands must be added to the permitted list:
tungsten ALL=(ALL) NOPASSWD: /sbin/service, /usr/bin/innobackupex, /bin/rm, »
    /bin/mv, /bin/chown, /bin/chmod, /usr/bin/scp, /bin/tar, /usr/bin/which, »
    /etc/init.d/mysql, /usr/bin/test, »
    /apps/tungsten/continuent/tungsten/tungsten-replicator/scripts/xtrabackup.sh, »
    /apps/tungsten/continuent/tungsten/tools/tpm, /usr/bin/innobackupex-1.5.1, »
    /bin/cat, /bin/find
Within Red Hat Linux add the following line:
tungsten ALL=(root) NOPASSWD: ALL
For a secure environment where sudo access is not permitted for all operations, a minimum configuration can be used:
tungsten ALL=(root) NOPASSWD: /usr/bin/which, /etc/init.d/mysql
tungsten ALL=(root) NOPASSWD: /sbin/service, /usr/bin/innobackupex, /bin/rm, »
    /bin/mv, /bin/chown, /bin/chmod, /usr/bin/scp, /bin/tar, /usr/bin/which, »
    /etc/init.d/mysql, /usr/bin/test, »
    /apps/tungsten/continuent/tungsten/tungsten-replicator/scripts/xtrabackup.sh, »
    /apps/tungsten/continuent/tungsten/tools/tpm, /usr/bin/innobackupex-1.5.1, »
    /bin/cat, /bin/find
When SELinux is enabled, systemctl may refuse to start mysqld if the listener port or location on disk have been changed. The solution is to inform SELinux about any changed or additional resources.
Tungsten best practice is to change the default MySQL port from 3306 to 13306 so that requesting clients do not accidentally connect directly to the database without being routed by the Connector.
If using a non-standard port for MySQL and SELinux is enabled, you must also change the port context, for example:
semanage port -a -t mysqld_port_t -p tcp 13306
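To confirm the change was recorded (a suggested verification step), list the port contexts for mysqld; the 13306 entry should now appear in the mysqld_port_t line:

shell> semanage port -l | grep mysqld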
Ensure the file contexts are set correctly for SELinux. For example, to allow MySQL data to be stored in a non-standard location (i.e. /data):

shell> semanage fcontext -a -t etc_runtime_t /data
shell> restorecon -Rv /data/
shell> semanage fcontext -a -t mysqld_db_t "/data(/.*)?"
shell> restorecon -Rv /data/*
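As a final check (suggested, not in the original steps), the applied contexts can be inspected with:

shell> ls -Zd /data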