Each host in your cluster must be configured with the tungsten user, have the SSH key added, and then be configured to ensure the system and directories are ready for the Tungsten services to be installed and configured.
There are a number of key steps to the configuration process:
Creating a user environment for the Tungsten service
Creating the SSH authorization for the user on each host
Configuring the directories and install locations
Installing necessary software and tools
Configuring sudo access to enable the configured user to perform administration commands
The operations in the following sections must be performed on each host within your cluster. Failure to perform each step may prevent the installation and deployment of the Tungsten cluster.
For a full list of supported Operating System environments, see Table 2.2, “Tungsten OS Support”.
The tungsten user should be created with a home directory that will be used to hold the Tungsten distribution files (not the installation files), and will be used to execute and create the different Tungsten services.
For Tungsten to work correctly, the tungsten user must be able to open a large number of files/sockets for communication between the different components and processes. You can check the current limits by using ulimit:
shell> ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
The system should be configured to allow a minimum of 65535 open files.
You should configure both the tungsten user and the database user with this limit by editing the /etc/security/limits.conf file:
tungsten - nofile 65535
mysql - nofile 65535
In addition, the number of running processes supported should be increased to ensure that there are no restrictions on the running processes or threads:
tungsten - nproc 8096
mysql - nproc 8096
You must log out and log back in again for the ulimit changes to take effect.
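For example, once you have logged back in as the tungsten user, a quick check should reflect the values configured above:
shell> ulimit -n
65535
shell> ulimit -u
8096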
You may also need to set the limits on the specific service if your operating system uses the systemd service management framework. To configure the file limits for the specific service:
Copy the MySQL service configuration file template to the configuration directory if it does not already exist:
shell> sudo cp /lib/systemd/system/mysql.service /etc/systemd/system/
Please note that the filename mysql.service will vary based on multiple factors. Do check to be sure you are using the correct file; for example, in some cases the filename would be mysqld.service.
Edit the file copied above, and append to or edit the existing entry to set the value of the LimitNOFILE key to infinity:
LimitNOFILE=infinity
This configures an unlimited number of open files. You can also specify a number, for example:
LimitNOFILE=65535
Reload the systemctl daemon configuration:
shell> sudo systemctl daemon-reload
Now restart the MySQL service:
shell> service mysql restart
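To confirm that the new limit has been applied to the running service, you can query systemd directly (a quick check, assuming the service name is mysql; adjust to mysqld or similar as noted above):
shell> sudo systemctl show mysql --property LimitNOFILE
LimitNOFILE=infinity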
The hostname, DNS, IP address and accessibility of this information must be consistent. For the cluster to operate successfully, each host must be identifiable and accessible to each other host, either by name or IP address.
Individual hosts within your cluster must be reachable and must conform to the following:
Do not use the localhost or 127.0.0.1 addresses.
Do not use Zeroconf (.local) addresses. These may not resolve properly or fully on some systems.
The server hostname (as returned by the hostname command) must match the names you use when configuring your service.
The hostname of each host must resolve to a real IP address for that host (not 127.0.0.1). The default configuration for many Linux installations is for the hostname to resolve to the same address as localhost:
127.0.0.1 localhost
127.0.0.1 host1
Each host in the cluster must be able to resolve the address for all
the other hosts in the cluster. To prevent errors within the DNS
system causing timeouts or bad resolution, all hosts in the cluster,
in addition to the witness host, should be added to
/etc/hosts
:
127.0.0.1 localhost
192.168.1.60 host1
192.168.1.61 host2
192.168.1.62 host3
192.168.1.63 host4
In addition to explicitly adding hostnames to /etc/hosts, the name service switch file, /etc/nsswitch.conf, should be updated to ensure that the hosts file is consulted before DNS services. For example:
hosts: files dns
Failure to add explicit hosts and change this resolution order can lead to transient DNS resolving errors triggering timeouts and failsafe switching of hosts within the cluster.
The IP address of each host within the cluster must resolve to the same IP address on each node. For example, if host1 resolves to 192.168.0.69 on host1, the same IP address must be returned when looking up host1 on the host host2.
To double check this, you should perform the following tests:
Confirm the hostname:
shell> uname -n
The hostname cannot contain underscores.
Confirm the IP address:
shell> hostname --ip-address
Confirm that the hostnames of the other hosts in the cluster resolve correctly to a valid IP address. You should confirm on each host that you can identify and connect to each other host in the planned cluster:
shell> nslookup host1
shell> ping host1
If the host does not resolve, either ensure that the hosts are added to the DNS service, or explicitly add the information to the /etc/hosts file.
If using /etc/hosts then you must ensure that the information is correct and consistent on each host, and double check using the above method that the IP address resolves correctly for every host in the cluster.
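As a quick check across the whole cluster, a short loop such as the following (a sketch using the example hostnames above; adjust to your own host list) confirms that every planned host resolves and responds from the current node:
shell> for h in host1 host2 host3 host4; do getent hosts $h && ping -c 1 $h > /dev/null && echo "$h reachable"; done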
The following network ports should be open between specific hosts to allow communication between the different components:
Component | Source | Destination | Port | Purpose |
---|---|---|---|---|
All Services | All Nodes | All Nodes | ICMP Ping | Checking availability (Default method) |
〃 | 〃 | 〃 | 7 | Checking availability |
Database Service | Database Host | Database Host | 2112 | THL replication |
〃 | 〃 | 〃 | 7800-7805 | Manager Remote Method Invocation (RMI) |
〃 | 〃 | 〃 | 9997 | Manager Remote Method Invocation (RMI) |
〃 | 〃 | 〃 | 10000-10001 | Replication connection listener port |
〃 | 〃 | 〃 | 11999-12000 | Tungsten manager |
Connector Service | Connector Host | Manager Hosts | 11999 | Tungsten manager |
Connector Service | 〃 | 〃 | 13306 | Database connectivity |
Client Application | Client | Connector | 3306 | Database connectivity for client |
For composite clusters, communication between each cluster within the composite configuration can be limited to the following ports:
Component | Port | Purpose |
---|---|---|
Database service | 9997 | Manager Remote Method Invocation (RMI) |
〃 | 2112 | THL replication |
〃 | 11999-12000 | Tungsten Manager |
Client Application | 13306 | MySQL port for Connectivity |
Manager Hosts | ICMP Ping | Checking availability (default method) |
〃 | 7 | Checking availability |
For Composite Active/Active and Multi-Site/Active-Active clusters that communicate through replication, the communication between sites can be limited to the following ports:
Component | Port | Purpose |
---|---|---|
Database Service | 2113-211x | THL replication (See Note Below) |
〃 | 10002-10003 | Replication connection listener ports |
Client Application | 13306 | MySQL port for Connectivity |
Manager Hosts | ICMP Ping | Checking availability (default method) |
〃 | 7 | Checking availability |
Within Composite Active/Active, the THL ports for cross-site replication are automatically configured to start at port 2113, incrementing by 1 for each additional cross-site service. For example, in a 2-cluster configuration, port 2112 would be configured for the local replication and port 2113 for the cross-site replication. In a 3-cluster configuration, port 2112 would be configured for the local replication and ports 2113 and 2114 for each of the cross-site replicators. Firewall rules should therefore be configured to factor in the additional THL ports based on the deployment topology.
Within Multi-Site/Active-Active, the THL ports are configured manually by the user within the tungsten.ini configuration, and these should be accounted for when configuring the firewall rules.
If a system has a firewall enabled, in addition to enabling communication between hosts as in the table above, the localhost must allow port-to-port traffic on the loopback connection without restrictions. For example, using iptables this can be enabled using the following command rule:
shell> iptables -A INPUT -i lo -m state --state NEW -j ACCEPT
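Beyond the loopback rule, the cluster ports from the tables above must also be permitted between the relevant hosts, and it is worth confirming reachability once the rules are in place. A brief sketch, assuming the example 192.168.1.0/24 network used earlier and that the nc (netcat) utility is installed; adapt the ports, source range, and hostnames to your own topology:
shell> sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 2112 -j ACCEPT
shell> sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 11999:12000 -j ACCEPT
shell> nc -zv host2 2112
shell> nc -zv host2 13306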
For password-less SSH to work between the different hosts in the cluster, you need to copy both the public and private keys between the hosts in the cluster. This will allow the staging server, and each host, to communicate directly with each other using the designated login.
To achieve this, on each host in the cluster:
Copy the public key (.ssh/id_rsa.pub) and private key (.ssh/id_rsa) from the staging server to the ~tungsten/.ssh directory.
Add the public key to the .ssh/authorized_keys file.
shell> cat .ssh/id_rsa.pub >> .ssh/authorized_keys
Ensure that the file permissions on the .ssh directory are correct:
shell> chmod 700 ~/.ssh
shell> chmod 600 ~/.ssh/*
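As an alternative to appending the public key by hand, the ssh-copy-id utility (where available) installs the key into .ssh/authorized_keys and sets suitable permissions in one step; for example, run from the staging server:
tungsten:shell> ssh-copy-id tungsten@host1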
With each host configured, you should try connecting to each host from the staging server to confirm that the SSH information has been correctly configured. You can do this by connecting to the host using ssh:
tungsten:shell> ssh tungsten@host
You should be logged into the tungsten user's home directory on the remote host, and that directory should be writable by the tungsten user.
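To confirm all hosts in one pass, a short loop from the staging server (a sketch assuming the example hostnames above) should print each remote hostname without prompting for a password:
tungsten:shell> for h in host1 host2 host3 host4; do ssh tungsten@$h hostname; done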
The manager checks the availability of other hosts, for example to determine whether the host is still up, rather than just an individual service on that host. These checks must be able to be performed by one of the two available methods. Without these checks, it is possible for the availability of hosts to be falsely determined. These checks are performed using one of two protocols:
ping — the preferred, and default, method using the system ping (ICMP) command.
default — despite its name, no longer the default method. This uses the TCP/IP echo protocol on port 7. The port must be available on the source and destination hosts, and must not be blocked by a system or network firewall.
The configuration of which method to use depends on the setting of the --mgr-ping-method option during configuration. If no option is given, tpm will test ping first and then try default. An error will be thrown if neither method works for all members of the dataservice.
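For example, to select the ping method explicitly in an INI-based installation, the option can be placed in the [defaults] section of the configuration file (a sketch; adjust to the layout of your own tungsten.ini):
[defaults]
mgr-ping-method=ping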
On each host within the cluster you must pick, and configure, a number of directories to be used by Tungsten Cluster™, as follows:
/tmp Directory
The /tmp directory must be accessible and executable, as it is the location where some software will be extracted and executed during installation and setup. The directory must be writable by the tungsten user.
On some systems, the /tmp filesystem is mounted as a separate filesystem and explicitly configured to be non-executable (using the noexec filesystem option). Check the output from the mount command.
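For example, the following quick check shows whether /tmp has been mounted with the noexec option; if noexec appears in the option list for /tmp, the restriction will need to be addressed before installation:
shell> mount | grep /tmp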
Installation Directory
Tungsten Cluster™ needs to be installed in a specific directory. The recommended solution is to use /opt/continuent. This information will be required when you configure the cluster service.
The directory should be created, and the owner and permissions set for the configured user:
shell> sudo mkdir /opt/continuent
shell> sudo chown -R tungsten: /opt/continuent
shell> sudo chmod 700 /opt/continuent
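You can then confirm the result; the output should show the directory owned by the tungsten user with drwx------ permissions:
shell> ls -ld /opt/continuent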
Home Directory
The home directory of the tungsten user must be writable by that user.
Tungsten Cluster™ relies on the following software. Each host must use the same version of each tool.
Software | Versions Supported | Notes |
---|---|---|
perl | - | Check using perl -v |
rsync | - | Check using rsync --help |
Ruby | 1.8.7, 1.9.3, or 2.0.0 upwards [a] | JRuby is not supported |
Ruby OpenSSL Module | - | Checking using ruby -ropenssl -e 'p "works"' |
Ruby Gems | - | |
Ruby io-console module | - | Install using gem install io-console [b] |
Ruby net-ssh module | - | Install using gem install net-ssh [c] |
Ruby net-scp module | - | Install using gem install net-scp [d] |
GNU tar | - | gtar is required for Solaris due to limitations in the native tar command |
Java Runtime Environment | Java SE 8 (or compatible), Java SE 11 (or compatible) is supported in 6.1.2 and higher | Java 9 and 10 have been tested and validated but certification and support will only cover Long Term releases. See note below for more detail. |
zip | - | zip is required by tpm diag in 6.1.2 and higher |
readlink (GNU coreutils) | - | On most platforms, this should be available. readlink, supplied by GNU coreutils, is required by tpm diag |
[a] Ruby 1.9.1 and 1.9.2 are not supported; these releases remove the execute bit during installation.
[b]
[c] For Ruby 1.8.7 the minimum version of net-ssh is 2.5.2, install using gem install net-ssh -v 2.5.2. For Ruby 2.5.9 and above, ensure you use v6.1.0 of net-ssh, install using gem install --force net-ssh -v 6.1.0.
[d] For Ruby 1.8.7 the minimum version of net-scp is 1.0.4, install using gem install net-scp -v 1.0.4. For Ruby 2.5.9 and above, ensure you use v4.0.0 of net-scp, install using gem install --force net-scp -v 4.0.0.
These tools must be installed, running, and available to all users on each host.
To check the current version of any installed tool, log in as the configured user (e.g. tungsten) and execute the command that reports its version. For example:
Java
Run java -version:
shell> java -version
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)
See Section 2.2.5, “Java Requirements” for more detail on Java requirements and known issues with certain builds.
On certain environments, a separate tool such as alternatives (RedHat/CentOS) or update-alternatives (Debian/Ubuntu) may need to be used to switch Java versions globally or for individual users. For example, within CentOS:
shell> alternatives --display
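The Ruby components can be checked in a similar way; for example (a brief sketch, output will vary by platform):
shell> ruby -v
shell> ruby -ropenssl -e 'p "works"'
shell> gem list | grep -E 'net-ssh|net-scp|io-console'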
It is recommended to switch off all automated software and operating system update procedures. These can automatically install and restart different services which may be identified as failures by Tungsten Replicator. Software and Operating System updates should be handled by following the appropriate Section 6.15, “Performing Database or OS Maintenance” procedures.
It is also recommended to install ntp or a similar time synchronization tool so that each host in the cluster has the same physical time.
Tungsten requires that the user you have configured to run the server has sudo credentials so that it can run and install services as root.
Within Linux environments you can do this by editing the /etc/sudoers file using visudo and adding the following lines:
## Allow tungsten to run any command
tungsten ALL=(ALL) NOPASSWD: ALL
The above syntax is applicable to most Linux environments; however, double check whether your environment uses a different syntax.
sudo can also be configured to handle only specific directories or files. For example, when using xtrabackup, or additional tools in the Tungsten toolkit, such as tprovision, additional commands must be added to the permitted list:
tungsten ALL=(ALL) NOPASSWD: /sbin/service, /usr/bin/innobackupex, /bin/rm, »
/bin/mv, /bin/chown, /bin/chmod, /usr/bin/scp, /bin/tar, /usr/bin/which, »
/etc/init.d/mysql, /usr/bin/test, /usr/bin/systemctl, »
/opt/continuent/tungsten/tungsten-replicator/scripts/xtrabackup.sh, »
/opt/continuent/tungsten/tools/tpm, /usr/bin/innobackupex-1.5.1, »
/bin/cat, /bin/find, /usr/bin/whoami, /bin/sh, /bin/rmdir, /bin/mkdir, »
/usr/bin/mysql_install_db, /usr/bin/mysqld, /usr/bin/xtrabackup
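After editing, it is worth validating the sudoers syntax and then, as the tungsten user, confirming which commands are permitted; for example:
shell> sudo visudo -c
shell> sudo -l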
To determine the current state of SELinux enforcement, use the getenforce command. For example:
shell> getenforce
Disabled
To disable SELinux, use the setenforce command. For example:
shell> setenforce 0
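Note that setenforce 0 only switches the running system to permissive mode; for the change to persist across reboots, the SELINUX setting in /etc/selinux/config must also be updated, for example:
SELINUX=disabled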
Should your company policy enforce the use of SELinux, then you will need to configure various SELinux contexts to allow Tungsten to operate.
When SELinux is enabled, systemctl may refuse to start mysqld if the listener port or location on disk have been changed. The solution is to inform SELinux about any changed or additional resources.
Tungsten best practice is to change the default MySQL port from 3306 to 13306 so that requesting clients do not accidentally connect directly to the database without being routed by the Connector.
If using a non-standard port for MySQL and SELinux is enabled, you must also change the port context, for example:
shell> semanage port -a -t mysqld_port_t -p tcp 13306
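You can then confirm that the additional port has been registered by listing the current port contexts; the output should include 13306 for mysqld_port_t:
shell> semanage port -l | grep mysqld_port_t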
Ensure the file contexts are set correctly for SELinux. For example, to allow MySQL data to be stored in a non-standard location (e.g. /data):
shell> semanage fcontext -a -t etc_runtime_t /data
shell> restorecon -Rv /data/
shell> semanage fcontext -a -t mysqld_db_t "/data(/.*)?"
shell> restorecon -Rv /data/*
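To confirm that the new context has been applied, the SELinux labels on the directory can be inspected; for example:
shell> ls -Z /data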