Version End of Life: 26 October 2018
In addition to ensuring the basic compatibility of these tools, the load-reduce-check tool has been updated to support the use of the beeline command-line client as well as the hive command-line client.
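As an illustrative aside (the host and port shown are assumptions, not taken from this note), beeline connects to HiveServer2 over JDBC rather than invoking Hive directly:
shell> beeline -u jdbc:hive2://hiveserver:10000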
Issues: CT-153, CT-155
For more information, see The load-reduce-check Tool (in [Tungsten Replicator 5.1 Manual]).
The replicator and the load-reduce-check (in [Tungsten Replicator 5.1 Manual]) command that is part of the continuent-tools-hadoop repository have been updated so that they can support loading and replication into Hadoop from Oracle. This includes creating suitable DDL templates and support for accessing Oracle via JDBC to load DDL information.
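As a sketch only: DDL templates of this kind are normally generated with the ddlscan tool, so an Oracle schema could be read over JDBC along the following lines. The connection details, schema name, and template name here are assumptions for illustration, not values taken from the release note:
shell> ddlscan -user tungsten -pass secret \
    -url jdbc:oracle:thin:@//oracle-host:1521/ORCL \
    -db SALES -template ddl-mysql-hive-0.10.vm -out sales-hive.sql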
standardized set of filter functionality. This is provided, and the supporting utilities are provided in the
The current file provides three functions:
a function which loads an external JSON file into a variable;
a JSON class, including the ability to dump a variable to JSON.
The thl (in [Tungsten Replicator 5.1 Manual]) command has been improved to support the -from (in [Tungsten Replicator 5.1 Manual]) and -to (in [Tungsten Replicator 5.1 Manual]) options for selecting the sequence number range. These act as synonyms for the existing -low (in [Tungsten Replicator 5.1 Manual]) and -high (in [Tungsten Replicator 5.1 Manual]) options and can be used with all commands.
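For example (the sequence numbers are arbitrary), the following two invocations list the same range of events:
shell> thl list -from 100 -to 200
shell> thl list -low 100 -high 200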
A number of filters have been updated so that the THL metadata for a transaction records whether a specific filter has been applied to it. This is designed to make it easier to determine whether the filter has been applied, particularly in heterogeneous replication, and also to determine whether incoming transactions are suitable to be applied to a target that requires them. Currently the metadata is only added to the transactions; no enforcement is performed.
The following filters add this information:
The format of the metadata is
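As a sketch of how the added metadata can be inspected (the sequence number is arbitrary), listing an individual event with thl shows the metadata tags recorded in the event header:
shell> thl list -seqno 45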
One of the checks built into tpm (in [Tungsten Replicator 5.1 Manual]), MySQLUnsupportedDataTypesCheck (in [Tungsten Replicator 5.1 Manual]), was spelt incorrectly, which meant that it was difficult to bypass and ultimately did not always run correctly or get bypassed.
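With the corrected spelling, the check can be bypassed by name in the usual way; for example (the service name alpha is assumed for illustration):
shell> tpm configure alpha --skip-validation-check=MySQLUnsupportedDataTypesCheck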
The Hadoop loader would previously load CSV files directly into a default directory within HDFS, completely ignoring the setting of the replication user within the replicator. This has been corrected so that data is loaded into the location for the configured replication user.
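As a minimal sketch, assuming the replication user is set through the standard tpm option (the service and user names here are illustrative), the loader now honours this value:
shell> tpm configure alpha --replication-user=tungsten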
By default, the Hadoop loader would use a directory structure that matched the
This caused problems with the default DDL templates and the tools, which used only the schema and table name.