To upgrade an existing installation of Tungsten Cluster, download and unpack the new distribution, then use the included tpm command to update the installation. The upgrade process implies a small period of downtime for the cluster while the updated versions of the tools are restarted. However, the process should not present as an outage to your applications, provided the steps for upgrading the connectors are followed carefully. Any downtime is deliberately kept to a minimum, and the cluster should be in the same operational state once the upgrade has finished as it was when the upgrade started.
During the update process, the cluster will be in MAINTENANCE mode. This is intentional, to prevent unwanted failovers during the process; however, it is important to understand that should the Primary fail for genuine reasons NOT associated with the upgrade, failover will also not happen at that time. It is important to ensure clusters are returned to the AUTOMATIC state as soon as all maintenance operations are complete and the cluster is stable.
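The policy mode is changed from within cctrl. For example, assuming the dsone service name used in the examples later in this section:

shell> cctrl
[LOGICAL] /dsone > set policy maintenance

and, once all maintenance operations are complete:

[LOGICAL] /dsone > set policy automatic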
Rolling upgrades of the Tungsten software are NOT advised, to avoid miscommunication between components running older and newer versions of the software, which may prevent switches or failovers from occurring. It is therefore recommended to upgrade all nodes in place. The upgrade process places the cluster into MAINTENANCE mode, which in itself avoids outages while components are restarted, and allows for a successful upgrade.
From version 7.1.0 onwards, the JGroups libraries were upgraded. This means that when upgrading to any release from 7.1.0 onwards FROM any release OLDER than 7.1.0, all nodes must be upgraded before full cluster communication is restored. For that reason, upgrades to 7.1.0 onwards from an OLDER release MUST be performed together, ensuring the cluster runs with a mix of manager versions for as little time as possible. When upgrading nodes, do NOT SHUN the node, otherwise you will not be able to recover the node into the cluster until all nodes are upgraded, which could result in an outage to your applications. Additionally, do NOT perform a switch until all nodes are upgraded. This means you should upgrade the Primary node in situ. Provided the cluster is in MAINTENANCE mode, this will not cause an outage and the cluster can still be upgraded with no visible outage to your applications.
For INI file upgrades, see Section 4.4.2, “Upgrading when using INI-based configuration, or without ssh Access”
Before performing an upgrade, please ensure that you have checked Appendix B, Prerequisites, as software and system requirements may have changed between versions and releases.
To perform an upgrade of an entire cluster from a staging directory installation, where you have ssh access to the other hosts in the cluster:
On your staging server, download the release package.
Unpack the release package:
shell> tar zxf tungsten-clustering-6.1.25-6.tar.gz
Change to the extracted directory:
shell> cd tungsten-clustering-6.1.25-6
The next step depends on your existing deployment:
If you are upgrading a Multi-Site/Active-Active deployment:
If you installed the original service by making use of the $CONTINUENT_PROFILES and $REPLICATOR_PROFILES environment variables, no further action needs to be taken to update the configuration information. Confirm that these variables are set before performing the validation and update.
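A quick way to confirm the variables are set is to echo them in the staging shell; empty output means the variable is unset:

shell> echo $CONTINUENT_PROFILES
shell> echo $REPLICATOR_PROFILES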
If you did not use these environment variables when deploying the solution, you must load the existing configuration from the current hosts in the cluster before continuing by using tpm fetch:
shell> ./tools/tpm fetch --hosts=east1,east2,east3,west1,west2,west3 \
--user=tungsten --directory=/opt/continuent
You must specify ALL the hosts within both clusters in the current deployment when fetching the configuration; use of the autodetect keyword will not collect the correct information.
If you are upgrading any other deployment:
If you are using the $CONTINUENT_PROFILES variable to specify a location for your configuration, make sure that the variable has been set correctly.
If you are not using $CONTINUENT_PROFILES, a copy of the existing configuration must be fetched from the installed Tungsten Cluster installation:
shell> ./tools/tpm fetch --hosts=host1,host2,host3,autodetect \
--user=tungsten --directory=/opt/continuent
You must use the version of tpm from within the staging directory (./tools/tpm) of the new release, not the tpm installed with the current release.
The current configuration information will be retrieved to be used for the upgrade:
shell> ./tools/tpm fetch --hosts=host1,host2,host3 --user=tungsten --directory=/opt/continuent
.......
NOTE >> Configuration loaded from host1,host2,host3
Check that the configuration to be used for the update matches what you expect by using tpm reverse:
shell> ./tools/tpm reverse
# Options for the dsone data service
tools/tpm configure dsone \
--application-password=password \
--application-port=3306 \
--application-user=app_user \
--connectors=host1,host2,host3 \
--datasource-log-directory=/var/log/mysql \
--install-directory=/opt/continuent \
--master=host1 \
--members=host1,host2,host3 \
'--profile-script=~/.bashrc' \
--replication-password=password \
--replication-port=13306 \
--replication-user=tungsten \
--start-and-report=true \
--user=tungsten \
--witnesses=192.168.0.1
Run the upgrade process:
shell> ./tools/tpm update
During the update process, tpm may report errors or warnings that were not previously reported as problems. This is due to new features or functionality in different MySQL releases and Tungsten Cluster updates. These issues should be addressed and the tpm update command re-executed.
The following additional options are available when updating:
--no-connectors
(optional)
By default, an update process will restart all services, including the connector. Adding this option prevents the connectors from being restarted. If this option is used, the connectors must be manually updated to the new version during a quieter period. This can be achieved by running on each host the command:
shell> tpm promote-connector
This will result in a short period of downtime (a couple of seconds) only on the host concerned, while the other connectors in your configuration keep running. During the upgrade, the connector is restarted using the updated software and/or configuration.
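For example, the connectors could be promoted one at a time during a quieter period; host names here are illustrative and should match your deployment:

shell> ssh tungsten@host1 'tpm promote-connector'

Repeat for each remaining connector host (host2, host3, and so on), waiting for each connector to return to service before moving to the next so that at least one connector is always available.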
A successful update will report the cluster status as determined from each host in the cluster:
...........................................................................................................
Getting cluster status on host1
Tungsten Clustering (for MySQL) 6.1.25 build 6
connect to 'dsone@host1'
dsone: session established
[LOGICAL] /dsone > ls
COORDINATOR[host3:AUTOMATIC:ONLINE]
ROUTERS:
+----------------------------------------------------------------------------+
|connector@host1[31613](ONLINE, created=0, active=0) |
|connector@host2[27649](ONLINE, created=0, active=0) |
|connector@host3[21475](ONLINE, created=0, active=0) |
+----------------------------------------------------------------------------+
...
#####################################################################
# Next Steps
#####################################################################
We have added Tungsten environment variables to ~/.bashrc.
Run `source ~/.bashrc` to rebuild your environment.
Once your services start successfully you may begin to use the cluster.
To look at services and perform administration, run the following command
from any database server.
$CONTINUENT_ROOT/tungsten/tungsten-manager/bin/cctrl
Configuration is now complete. For further information, please consult
Tungsten documentation, which is available at docs.continuent.com.
NOTE >> Command successfully completed
The update process should now be complete. The current version can be confirmed by starting cctrl.
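For example, the version is displayed in the banner when cctrl starts (the output shown is based on the example release used in this section):

shell> cctrl
Tungsten Clustering (for MySQL) 6.1.25 build 6
connect to 'dsone@host1'
dsone: session established
[LOGICAL] /dsone >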
To perform an upgrade of an individual node, tpm can be used on the individual host. The same method can be used to upgrade an entire cluster without requiring tpm to have ssh access to the other hosts in the dataservice.
Before performing an upgrade, please ensure that you have checked Appendix B, Prerequisites, as software and system requirements may have changed between versions and releases.
Application traffic to the nodes will be disconnected when the connector restarts. Use the --no-connectors tpm option when you upgrade to prevent the connectors from restarting until later, when you want them to.
To upgrade:
Place the cluster into maintenance mode
Upgrade the Replicas in the dataservice. Be sure to shun and welcome each Replica.
Upgrade the Primary node
Replication traffic to the Replicas will be delayed while the replicator restarts. The delays will increase if there are a large number of stored events in the THL. Old THL may be removed to decrease the delay. Do NOT delete THL that has not been received on all Replica nodes or events will be lost.
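Before removing old THL, confirm that every Replica has already applied it by comparing the latest applied sequence number on each node. A minimal sketch, assuming the default service; the -high value shown is only a placeholder, and the replicator must be offline while purging:

shell> trepctl status | grep appliedLastSeqno
shell> trepctl offline
shell> thl purge -high 3500 -y
shell> trepctl online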
Upgrade the connectors in the dataservice one-by-one
Application traffic to the nodes will be disconnected when the connector restarts.
Place the cluster into automatic mode
For more information on performing maintenance across a cluster, see Section 6.15.3, “Performing Maintenance on an Entire Dataservice”.
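Taken together, the per-node steps above might look like the following for a single Replica; host names, the dsone service name, and paths are illustrative, and the tpm update must be run from the unpacked staging directory of the new release:

shell> cctrl
[LOGICAL] /dsone > set policy maintenance
[LOGICAL] /dsone > datasource host2 shun
[LOGICAL] /dsone > exit
shell> cd tungsten-clustering-6.1.25-6
shell> ./tools/tpm update --replace-release
shell> cctrl
[LOGICAL] /dsone > datasource host2 welcome
[LOGICAL] /dsone > exit

Once every node and connector has been upgraded, return the cluster to the AUTOMATIC policy from cctrl.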
To upgrade a single host using the tpm command:
Download the release package.
Unpack the release package:
shell> tar zxf tungsten-clustering-6.1.25-6.tar.gz
Change to the extracted directory:
shell> cd tungsten-clustering-6.1.25-6
Execute tpm update, specifying the installation directory. This will update only this host:
shell> ./tools/tpm update --replace-release
To update all of the nodes within a cluster, the steps above will need to be performed individually on each host.