|Linux||RedHat/CentOS||Primary platform||RHEL 4, 5, 6, and 7, as well as CentOS 5.x, 6.x, and 7.x versions, are fully supported.|
|Linux||Ubuntu||Primary platform||Ubuntu 9.x-17.x versions are fully supported.|
|Linux||Debian/SUSE/Other||Secondary platform||Other Linux platforms are supported but are not regularly tested. We will fix any bugs reported by customers.|
|Linux||Docker||Unsupported||Use at your own risk. Docker containers are not well suited to Tungsten deployments.|
|Solaris||Secondary platform||Solaris 10 is fully supported. OpenSolaris is not supported at this time.|
|Mac OS X||Secondary platform||Mac OS X Leopard and Snow Leopard are used for development at Continuent but are not certified. We will fix any bugs reported by customers.|
|Windows||Limited support||Tungsten 1.3 and above support Windows platforms for connectivity (Tungsten Connector and SQL Router) but may require manual configuration. Tungsten clusters do not run on Windows.|
|BSD||Limited support||Tungsten 1.3 and above support BSD for connectivity (Tungsten Connector and SQL Router) but may require manual configuration. Tungsten clusters do not run on BSD.|
|MySQL||5.0, 5.1, 5.5, 5.6, 5.7 (the Geometry datatype was not supported in Tungsten 5.3.0 or 6.0.0; full support was added in Tungsten 5.3.1 and 6.0.1)||Primary platform||Statement- and row-based replication is supported. MyISAM and InnoDB table types are fully supported; InnoDB tables are recommended.|
|MySQL||5.7||Primary platform||Compatibility with MySQL 5.7 is provided in Tungsten 5.0 and later, but new MySQL 5.7 datatypes (JSON, virtual columns) are not yet supported.|
|Percona||5.5, 5.6, 5.7||Primary platform|
|MariaDB||5.5, 10.0, 10.1||Primary platform|
|Oracle (CDC)||10g Release 2 (10.2.0.5), 11g||Primary platform||Synchronous CDC is supported on Standard Edition only; synchronous and asynchronous CDC are supported on Enterprise Edition.|
|Drizzle||Secondary Platform||Experimental support for Drizzle is available. Drizzle replication is not tested.|
RAM requirements are dependent on the workload being used and applied, but the following provide some guidance on the basic RAM requirements:
Tungsten Replicator requires 2GB of VM space for the Java execution, including the shared libraries, with approximately 1GB of Java VM heap space. This can be adjusted as required, for example, to handle larger transactions, bigger commit blocks, or large packets.
Performance can be improved within the Tungsten Replicator if there is 2-3GB available in the OS Page Cache. Replicators work best when pages written to replicator log files remain memory-resident for a period of time, so that no file system I/O is required to read that data back within the replicator. This is the biggest potential point of contention between replicators and DBMS servers.
Tungsten Manager requires approximately 500MB of VM space for execution.
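As a rough illustration of the figures above, the per-node memory overhead can be budgeted as follows. The per-service numbers come from this section; treating the page-cache allowance as a tunable input is this sketch's own assumption, not official guidance:

```python
# Rough per-node RAM budget for a Tungsten database node, in MB,
# using the figures from this section. The page-cache allowance is
# the 2-3GB OS Page Cache headroom recommended above.
REPLICATOR_VM_MB = 2048   # Java VM space for Tungsten Replicator
MANAGER_VM_MB = 512       # approximately 500MB for Tungsten Manager
PAGE_CACHE_MB = 3072      # upper end of the 2-3GB page-cache guidance

def tungsten_ram_overhead_mb(page_cache_mb=PAGE_CACHE_MB):
    """RAM needed on top of the MySQL server's own allocation."""
    return REPLICATOR_VM_MB + MANAGER_VM_MB + page_cache_mb

if __name__ == "__main__":
    print(f"Tungsten overhead per node: {tungsten_ram_overhead_mb()} MB")
```

This overhead is in addition to whatever the MySQL server itself is configured to use (buffer pool, connection buffers, and so on).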
Disk space usage is based on the space used by the core application, the staging directory used for installation, and the space used for the THL files:
The staging directory containing the core installation is approximately 150MB. When performing a staging-directory based installation, this space requirement will be used once. When using an INI-file based deployment, this space will be required on each server. For more information on the different methods, see Section 9.1, “Comparing Staging and …”.
Deployment of a live installation also requires approximately 150MB.
The THL files required for installation are based on the size of the binary logs generated by MySQL. THL size is typically twice the size of the binary log. This space will be required on each machine in the cluster. The retention times and rotation of THL data can be controlled; see Section D.1.5, “The thl Directory” for more information, including how to change the retention time and move files.
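Using the rule of thumb above (THL is typically twice the binary log volume), the THL space needed per node can be estimated from the binary log generation rate and the THL retention period. The daily binlog figure in the example is a hypothetical input:

```python
# Estimate THL disk usage per node from binary log volume and retention.
# Rule of thumb from this section: THL is typically twice the binlog size.
THL_TO_BINLOG_RATIO = 2

def thl_disk_estimate_gb(binlog_gb_per_day, retention_days):
    """Approximate THL space required on each machine in the cluster."""
    return binlog_gb_per_day * retention_days * THL_TO_BINLOG_RATIO

# Example: 5GB of binlogs per day with 7 days of THL retention (hypothetical).
print(thl_disk_estimate_gb(5, 7))  # 70 (GB of THL space per node)
```

Remember to add the approximately 150MB each for the staging directory and the live installation on top of this estimate.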
When replicating from Oracle, the size of the THL will depend on the quantity of Change Data Capture (CDC) information generated. This can be managed by altering the intervals used to check for and extract the information.
A dedicated partition for THL or Tungsten Clustering is recommended to ensure that a full disk does not impact your OS or DBMS. Local disk, SAN, iSCSI and AWS EBS are suitable for storing THL. NFS is NOT recommended.
Because the replicator reads and writes information using buffered I/O in a serial fashion, there is no random access or seeking.
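The access pattern described above (append-only buffered writes followed by sequential reads, with no seeking) can be illustrated with a minimal sketch; the file name and record format are hypothetical and not the actual THL layout:

```python
import os
import tempfile

# Illustrate the replicator's I/O pattern: records are appended serially
# with buffered writes, then read back with a sequential scan -- no seeks.
def write_log(path, records):
    with open(path, "a", buffering=1024 * 1024) as log:  # buffered append
        for rec in records:
            log.write(rec + "\n")

def read_log(path):
    with open(path, "r", buffering=1024 * 1024) as log:
        return [line.rstrip("\n") for line in log]  # sequential read

path = os.path.join(tempfile.mkdtemp(), "thl-sketch.log")
write_log(path, ["event-1", "event-2", "event-3"])
print(read_log(path))  # ['event-1', 'event-2', 'event-3']
```

This is why page-cache residency matters: a sequential reader that stays within cached pages never touches the disk at all.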
Tungsten Replicator is known to work with the following Java versions and JVMs:
Oracle JVM/JDK 8
Cloud deployments require a different set of considerations over and above the general requirements. The following is a guide only, and where specific cloud environment requirements are known, they are explicitly included:
|Attribute||Guidance||AWS Example|
|Instance Type||Instance sizes and types are dependent on the workload, but larger instances are recommended for transactional databases.|| |
|Instance Boot Volume||Use block storage, not ephemeral storage.||EBS|
|Instance Deployment||Use standard Linux distributions and bases. For ease of deployment and configuration, use Puppet.||Amazon Linux AMIs|
Development/QA nodes should always match the expected production environment.
Use Virtual Private Cloud (VPC) deployments, as these provide consistent IP address support.
When using Active Witnesses, a micro instance can be used for a single cluster. For composite clusters, an instance size larger than micro must be used.
Multiple EBS-optimized volumes for data, using Provisioned IOPS for the EBS volumes depending on workload:
|Parameter||tpm Option||tpm Value||MySQL Value|
|MySQL|| || || |
|MySQL Binary Logs|| || || |
|Transaction History Logs (THL)|| || || |
Recommended Replication Formats
MIXED is recommended for MySQL master/slave topologies (e.g., either single clusters or …).
ROW is strongly recommended for multi-master setups. Without ROW, data drift is a possible problem when using MIXED or STATEMENT. Even with ROW there are still cases where drift is possible, but the window is far smaller.
ROW is required for …
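The guidance above can be expressed as a small validation helper. The topology names and the acceptance rules below are a sketch of this section's recommendations, not an official Tungsten check:

```python
# Sketch of the replication-format guidance above: MIXED is acceptable for
# master/slave topologies, while multi-master setups should use ROW to
# minimize the data-drift window.
RECOMMENDED = {
    "master-slave": {"MIXED", "ROW"},  # MIXED recommended; ROW also safe
    "multi-master": {"ROW"},           # ROW strongly recommended
}

def format_ok(topology, binlog_format):
    """Return True if binlog_format follows this section's guidance."""
    return binlog_format.upper() in RECOMMENDED.get(topology, set())

print(format_ok("master-slave", "MIXED"))      # True
print(format_ok("multi-master", "STATEMENT"))  # False: risks data drift
```

On the MySQL side, the chosen format corresponds to the server's `binlog_format` setting.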
Continuent has traditionally had a relaxed policy about Linux platform support for customers using our products.
While it is possible to install and run Continuent Tungsten products (i.e. Clustering/Replicator/etc.) inside Docker containers, there are many reasons why this is not a good idea.
As background, every database node in a Tungsten Cluster runs at least three (3) layers or services:
MySQL Server (i.e. MySQL Community or Enterprise, MariaDB or Percona Server)
Tungsten Manager, which handles health-checking, signaling and failover decisions (Java-based)
Tungsten Replicator, which handles the movement of events from the MySQL master server binary logs to the slave database nodes (Java-based)
Optionally, a fourth service, the Tungsten Connector (Java-based), may be installed as well, and often is.
This means that the Docker container would also need to support these three or four layers and all the resources needed to run them.
This is not what containers were designed to do. In a proper containerized architecture, each container would contain one single layer of the operation, so there would be 3-4 containers per “node”. This sort of architecture is best managed by some underlying technology like Swarm, Kubernetes, or Mesos.
More reasons to avoid using Docker containers with Continuent Tungsten solutions:
Our product is designed to run on a full Linux OS. By design, Docker does not have a full init system like systemd, SysV init, or Upstart. This means that any process we run (Replicator, Manager, Connector, etc.) runs as PID 1, and if that process dies, the container dies. There are solutions that let a Docker container run a "full init" system so the container can start multiple processes (ssh, replicator, manager, and so on) at once, but this is almost heavyweight-VM behavior, and Docker was not designed for it.
Requires a mutable container: to use Tungsten Clustering inside a Docker container, the container must be launched as a mutable Linux instance, which is neither the classic nor the proper way to use containers.
Our services are not designed as “serverless”. Serverless containers are totally stateless. Tungsten Clustering does not support this type of operation.
Until we make the necessary changes to our software, using Docker as a cluster node results in a Docker image of at least 1.2GB.
Once Tungsten Clustering has been refactored using a microservices-based architecture, it will be much easier to scale our solution using containers.
A Docker container would need to allow for updates in order for the Tungsten Cluster software to be re-configured as needed. Otherwise, a new Docker container would need to be launched every time a config change was required.
Docker containers have known I/O and resource constraints, and must therefore be deployed carefully to avoid those pitfalls.
We test on CentOS-derived Linux platforms.
Continuent does NOT have Docker containerization on the product roadmap at this time. That being said, we do intend to provide containerization support at some point in the future. Customer demand will contribute to the timing of the effort.