Version End of Life: 26 October 2018
The following changes have been made to Tungsten Clustering and may affect existing scripts and integration tools. Any scripts or environments that make use of these tools should be checked and updated for the new configuration:
When SSL is enabled, the Connector automatically advertises its ports and itself as SSL capable. With some clients, this triggers them to use SSL even when SSL has not been configured, causing connections to fail and not operate correctly.
In addition to ensuring the basic compatibility of these tools, the continuent-tools-hadoop repository has been updated to support the use of the beeline command as well as the hive command.
Issues: CT-153, CT-155
For more information, see The load-reduce-check Tool.
The replicator and the load-reduce-check command (in [Tungsten Replicator 5.1 Manual]) that is part of the continuent-tools-hadoop repository have been updated to support loading and replicating into Hadoop from Oracle. This includes creating suitable DDL templates and support for accessing Oracle via JDBC to load DDL information.
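For reference, Java-based access to Oracle of this kind typically uses the standard Oracle JDBC thin driver URL form; the host, port, and service name below are placeholders, not values taken from these release notes:

    jdbc:oracle:thin:@//dbhost:1521/ORCL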
The current file provides three functions:
readJSONFile, which loads an external JSON file into a variable.
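As a minimal sketch of how such a helper could look and be used inside the Rhino-based JavaScript environment that Tungsten filters run in (the implementation, variable names, and file path below are illustrative assumptions, not the shipped code):

    // Illustrative sketch only, not the shipped implementation.
    // Rhino exposes Java classes, so standard Java file I/O can be used.
    function readJSONFile(path) {
        // Read the whole file into a byte array, then decode as UTF-8
        var bytes = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get(path));
        var text  = new java.lang.String(bytes, "UTF-8");
        // Parse the JSON text into a JavaScript object and return it
        return JSON.parse(String(text));
    }

    // Example usage: load an external mapping file into a variable
    var mapping = readJSONFile("/opt/continuent/share/mapping.json");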
A number of filters have been updated so that the THL metadata for a transaction includes whether a specific filter has been applied to the transaction in question. This is designed to make it easier to determine whether the filter has been applied, particularly in heterogeneous replication, and also to determine whether incoming transactions are suitable to be applied to a target that requires them. Currently the metadata is only added to the transactions; no enforcement is performed.
The following filters add this information:
The format of the metadata is an entry in the THL metadata for the transaction, recording the filter that was applied.
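As a purely illustrative example (the filter.FILTERNAME=true key shown here is an assumption, not the documented format), a THL event that has been processed by a filter might show an additional entry in its metadata list:

    - METADATA = [...;filter.FILTERNAME=true;...]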
The Hadoop loader would previously load CSV files directly into /users/tungsten within HDFS, completely ignoring the setting of the replication user within the replicator. This has been corrected so that data can be loaded under the configured replication user.
By default, the Hadoop loader used a directory structure that matched SERVICENAME/SCHEMANAME/TABLENAME. This caused problems with the default DDL templates and the continuent-tools-hadoop tools, which used only the schema and table name.
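For example, with a hypothetical service alpha, schema sales, and table orders, the loader would previously write under the first layout below, while the DDL templates and tools expected the second:

    /users/tungsten/alpha/sales/orders   (SERVICENAME/SCHEMANAME/TABLENAME)
    /users/tungsten/sales/orders         (SCHEMANAME/TABLENAME)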