8.1.3. cctrl Commands

Table 8.11. cctrl Commands

Option                       Description
admin                        Change to admin mode
cd                           Change to a specific site within a multisite service
cluster                      Issue a command across the entire cluster
cluster topology validate    Check, validate and report on the current cluster topology and health
cluster validate             Validate the cluster quorum configuration
create composite             Create a composite dataservice
datasource                   Issue a command on a single datasource
expert                       Change to expert mode
failover                     Perform a failover operation from a master to a slave
help                         Display the help information
ls                           Show cluster status
members                      List the managers of the dataservice
physical                     Enter physical mode
ping                         Test host availability
quit, exit                   Exit cctrl
recover master using         Recover the master within a datasource using the specified master
replicator                   Issue a command on a specific replicator
router                       Issue a command on a specific router (connector)
service                      Run a service script
set                          Set management options
set master                   Set the master within a datasource
show topology                Show the currently configured cluster topology
switch                       Promote a slave to a master

8.1.3.1. cctrl admin Command

The admin command enables admin mode commands and displays. Admin mode is a specialized mode used to examine and repair cluster metadata. It is not recommended for normal use.

8.1.3.2. cctrl cd Command

The cd command changes the data service being administered. Subsequent commands will only affect the given data service name.

Using cd .. returns to the root element. The given data service name can be either composite or physical. Note that this command can only be used when cctrl is run with the '-multi' flag.

8.1.3.3. cctrl cluster Command

The cluster command operates at the level of the full cluster.

8.1.3.3.1. cctrl cluster check Command

The cluster check command issues an MD5 consistency check on one or more tables in a database on the master data source. The consistency checks then replicate to each slave, whereupon the slave replicator repeats the check.

If the check fails, slaves may go offline or print a log warning depending on how the replicators are configured. The default is to go offline. You can return a replicator to the online state after a failed check by issuing a replicator online command.

The table name can also be a wildcard (*) in which case all tables will be checked. Users may optionally specify a range of rows to check using the -limit option, which takes a starting row option followed by a number of rows to check. Rows are selected in primary key order.

The following example checks all tables in database accounting.

[LOGICAL] /alpha > cluster check accounting.*

The following command checks only the first 10 rows in a single table.

[LOGICAL] /alpha > cluster check accounting.invoices -limit 1,10

Warning

Consistency checks can be very lengthy operations for large tables and will lock them while they run. On the master this can block applications. On slaves it blocks replication.
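Conceptually, the check compares an MD5 digest computed over the same row range on the master and on each slave. The sketch below is illustrative only (the row data is invented for the example); it is not the replicator's actual implementation:

```shell
# Illustrative sketch: a consistency check boils down to comparing an
# MD5 digest of the same row range on the master and on a slave.
# The row data below is invented for the example.
master_rows='1,alice
2,bob
3,carol'
slave_rows='1,alice
2,bob
3,carol'

master_md5=$(printf '%s\n' "$master_rows" | md5sum | cut -d' ' -f1)
slave_md5=$(printf '%s\n' "$slave_rows" | md5sum | cut -d' ' -f1)

if [ "$master_md5" = "$slave_md5" ]; then
  echo "CONSISTENT"
else
  echo "INCONSISTENT: slave replicator may go offline"
fi
```

With identical row data on both sides, the digests match and the check reports CONSISTENT.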

8.1.3.3.2. cctrl cluster flush Command

The cluster flush command sends a heartbeat event through the local cluster and returns a flush sequence number that is guaranteed to be equal to or greater than the sequence number of the flush event. Slaves that reach the flush sequence number are guaranteed to have applied the flush event.

This command is commonly used for operations like switch that need to synchronize the position of one or more masters or slaves.
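The guarantee above means an operator script can poll a slave's applied sequence number until it reaches the value returned by cluster flush. This is a sketch, not cctrl itself; the sequence numbers here are simulated:

```shell
# Sketch: wait for a slave to reach the flush sequence number returned
# by 'cluster flush'. Values are simulated stand-ins; a real script
# would query the replicator for the applied seqno and sleep between
# polls instead of incrementing a counter.
flush_seqno=42
applied=39                      # stand-in for the slave's applied seqno

while [ "$applied" -lt "$flush_seqno" ]; do
  applied=$((applied + 1))      # real script: poll replicator, sleep 1
done
echo "slave reached flush seqno $flush_seqno"
```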

8.1.3.3.3. cctrl cluster heartbeat Command

The cluster heartbeat command sends a heartbeat event through the local cluster to demonstrate that all replicators are working. You should see the sequence numbers on all data sources advance by at least 1 if it is successful.

8.1.3.3.4. cctrl cluster offline Command

The cluster offline command brings all data services that are not offline into the offline state. It has no effect on services that are already offline.

8.1.3.3.5. cctrl cluster online Command

The cluster online command brings all data services that are not online into the online state. It has no effect on services that are already online.

8.1.3.3.6. cctrl cluster validate Command

The cluster validate command validates the configuration of the cluster with respect to the quorum used for decision making. The number of active managers, active witnesses and passive witnesses within the cluster is validated to ensure that there are enough active hosts to make a decision in the event of a failover or other failure event.

When executed, the validation routine checks all the available hosts, including witness hosts, and determines whether there are enough hosts, and whether their membership of the cluster is valid. In the event of deficiency, corrective action will be recommended.
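The simple-majority arithmetic in the validation output can be modeled as follows. This is a simplified sketch; the manager's real decision logic covers more cases, including the "1 member plus all witnesses" rule shown in the output below:

```shell
# Simplified model of the quorum arithmetic reported by
# 'cluster validate' (illustrative only).
members=3        # managers in the quorum set
reachable=3      # members this host can reach

majority=$(( members / 2 + 1 ))
if [ "$reachable" -ge "$majority" ]; then
  verdict="PRIMARY PARTITION OF $reachable MEMBERS OUT OF THE REQUIRED MAJORITY OF $majority"
else
  verdict="NON-PRIMARY PARTITION"
fi
echo "$verdict"
```

With 3 members, the majority size is 2, matching the SIMPLE MAJORITY SIZE line in the sample output below.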

By default, the command checks all hosts within the configured cluster:

[LOGICAL] /alpha > cluster validate
HOST host1/192.168.2.20: ALIVE
HOST host2/192.168.2.21: ALIVE
HOST host3/192.168.2.22: ALIVE
CHECKING FOR QUORUM: MUST BE AT LEAST 2 MEMBERS, OR 1 MEMBERS PLUS ALL WITNESSES
QUORUM SET MEMBERS ARE: host2, host1, host3
SIMPLE MAJORITY SIZE: 2
VALIDATED MEMBERS ARE: host2, host1, host3
REACHABLE MEMBERS ARE: host2, host1, host3
WITNESS HOSTS ARE: 
REACHABLE WITNESSES ARE: 
MEMBERSHIP IS VALID
GC VIEW OF CURRENT MEMBERS IS: host1, host2, host3
VALIDATED CURRENT MEMBERS ARE: host2, host1, host3
CONCLUSION: I AM IN A PRIMARY PARTITION OF 3 MEMBERS OUT OF THE REQUIRED MAJORITY OF 2
VALIDATION STATUS=VALID CLUSTER
ACTION=NONE

Additionally, a list of hosts to exclude from the check can be provided to verify the cluster capability when certain hosts have already failed or been shunned from the dataservice during maintenance.

To exclude hosts, add excluding and a comma-separated list of hosts to the command. For example:

[LOGICAL] /alpha > cluster validate excluding host3,host2
EXCLUDED host3 FROM VIEW
EXCLUDED host2 FROM VIEW
HOST host1/192.168.2.20: ALIVE
CHECKING FOR QUORUM: MUST BE AT LEAST 2 MEMBERS, OR 1 MEMBERS PLUS ALL WITNESSES
QUORUM SET MEMBERS ARE: host2, host1, host3
SIMPLE MAJORITY SIZE: 2
VALIDATED MEMBERS ARE: host1
REACHABLE MEMBERS ARE: host1
WITNESS HOSTS ARE: 
REACHABLE WITNESSES ARE: 
MEMBERSHIP IS VALID
GC VIEW OF CURRENT MEMBERS IS: host1
VALIDATED CURRENT MEMBERS ARE: host1
CONCLUSION: I AM IN A NON-PRIMARY PARTITION OF 1 MEMBERS OUT OF A REQUIRED MAJORITY SIZE OF 2
AND THERE ARE 0 REACHABLE WITNESSES OUT OF 0
VALIDATION STATUS=NON-PRIMARY PARTITION
ACTION=RESTART SAFE

Cluster validation provides validation information and recommendations only; any recommended corrective action must be carried out manually.

8.1.3.3.7. cctrl cluster topology validate Command

The cluster topology validate command checks and validates a cluster topology and, in the process, reports any issues that it finds. The purpose of this command is to provide a fast way to see, immediately, whether there are any issues with any components of a cluster.

Here's an example of the command when run in the context of a composite multi-master cluster:

[LOGICAL] /usa > cluster topology validate
Validating physical cluster 'east' for composite datasource 'east@usa'
Validating datasource 'db4@east'
Validating datasource 'db5@east'
Validating datasource 'db6@east'
Validating physical subservice 'east_from_west' of service 'east'
Validating datasource 'db4@east_from_west'
Validating datasource 'db5@east_from_west'
Validating datasource 'db6@east_from_west'
Physical cluster 'east_from_west' is VALID
Physical cluster 'east' is VALID
Validating physical cluster 'west' for composite datasource 'west@usa'
Validating datasource 'db1@west'
Validating datasource 'db2@west'
Validating datasource 'db3@west'
Validating physical subservice 'west_from_east' of service 'west'
Validating datasource 'db1@west_from_east'
Validating datasource 'db2@west_from_east'
Validating datasource 'db3@west_from_east'
Physical cluster 'west_from_east' is VALID
Physical cluster 'west' is VALID
Composite cluster 'usa' is VALID

Here's an example of the same command run against a composite multi-master cluster that has configuration issues:

[LOGICAL] /usa > cluster topology validate
Validating physical cluster 'east' for composite datasource 'east@usa'
Validating datasource 'db4@east'
Validating datasource 'db5@east'
Validating datasource 'db6@east'
Validating physical subservice 'east_from_west' of service 'east'
Validating datasource 'db4@east_from_west'
Validating datasource 'db5@east_from_west'
Validating datasource 'db6@east_from_west'
Physical cluster 'east_from_west' is VALID
Physical cluster 'east' is VALID
Validating physical cluster 'west' for composite datasource 'west@usa'
Validating datasource 'db1@west'
Validating datasource 'db2@west'
INVALID: db2@west: Primary datasource is 'db1' but replicator points at 'db3'
Validating datasource 'db3@west'
Validating physical subservice 'west_from_east' of service 'west'
Validating datasource 'db1@west_from_east'
Validating datasource 'db2@west_from_east'
Validating datasource 'db3@west_from_east'
INVALID: db3@west_from_east: Primary datasource is 'db1' but replicator points at 'db2'
Physical cluster 'west_from_east' is INVALID
Physical cluster 'west' is INVALID
Composite cluster 'usa' is INVALID

In the above case you can see that there are two issues, each shown with the word INVALID at the start of the line: in the cluster west, datasource db2's replicator points at db3 but should point at db1; in the cluster west_from_east, db3's replicator points at db2 but should point at db1.
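The per-datasource test behind this validation can be sketched as follows: every slave's replicator must point at the cluster primary. The datasource:upstream pairs below mirror the broken west example; this is a hypothetical illustration, not the manager's code:

```shell
# Hypothetical sketch of the per-datasource check performed by
# 'cluster topology validate': every slave replicator must point at
# the cluster primary. Pairs mirror the broken 'west' example.
primary=db1
pairs="db1:- db2:db3 db3:db1"

status=VALID
for p in $pairs; do
  ds=${p%%:*}
  target=${p##*:}
  [ "$ds" = "$primary" ] && continue      # the primary has no upstream
  if [ "$target" != "$primary" ]; then
    echo "INVALID: $ds: Primary datasource is '$primary' but replicator points at '$target'"
    status=INVALID
  fi
done
echo "Physical cluster 'west' is $status"
```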

8.1.3.4. cctrl create composite Command

The create composite command creates a new composite data source or data service with the given name. Composite data services can only be created in the root directory '/', while composite data sources need to be created from a composite data service location. Composite data source names should be the same as the physical data services; the composite data service should be named after its composite data sources.

The following example creates a composite data service named 'sj_nyc'.

cctrl> create composite dataservice sj_nyc

The following example changes to the composite data service sj_nyc, then creates a composite data source named 'sj' in this composite data service.

cctrl> cd sj_nyc
cctrl> create composite datasource sj

8.1.3.5. cctrl datasource Command

The datasource command affects a single data source.

datasource <host> {backup | fail | offline | online | recover | restore | shun | welcome}

Table 8.12. cctrl datasource Commands

Option       Description
backup       Backup a datasource
fail         Fail a datasource
host         Hostname of the datasource
offline      Put a datasource into the offline state
online       Put a datasource into the online state
recover      Recover a datasource into operational state as a slave
restore      Restore a datasource from a previous backup
shun         Shun a datasource
welcome      Welcome a shunned datasource back to the cluster

8.1.3.5.1. cctrl datasource backup Command

The datasource backup command invokes a backup on the data source on the named host using the default backup agent and storage agent. Backups taken in this way can be reloaded using the datasource restore command. The following command options are supported:

  • backupAgent — The name of a backup agent.

  • storageAgent — The name of a storage agent.

  • timeout — Number of seconds to wait before the backup command times out.

On success the backup URL will be written to the console.

The following example performs a backup on host saturn using the default backup agent.

cctrl> datasource saturn backup

The following example performs a backup on host mercury using the xtrabackup agent, which is named explicitly.

cctrl> datasource mercury backup xtrabackup
8.1.3.5.2. cctrl datasource fail Command

The datasource fail command allows you to place a host into the failed state.

To place a node back into an online state, you must issue the datasource recover command.

If the cluster is in automatic policy mode, the cluster will attempt to recover the host automatically if the replicator and database are online.

In order to maintain the failed state, switch the cluster to maintenance and/or stop the underlying database and replicator.

The following example changes the state of the node venus to failed:

cctrl> datasource venus fail
8.1.3.5.3. cctrl datasource offline Command

The datasource offline command allows you to place a host into the offline state. It has no effect if the datasource is already in the offline state.

To place a node back into an online state, you must issue the datasource online command.

If the cluster is in AUTOMATIC policy mode, the cluster will return the host to online automatically.

In order to maintain the offline state, switch the cluster to MAINTENANCE and/or stop the underlying database and replicator.

The following example places the node mercury into the offline state:

cctrl> datasource mercury offline
8.1.3.5.4. cctrl datasource online Command

The datasource online command allows you to place a host into the online state. It has no effect if the datasource is already in the online state.

To place the node back into the offline state, you must issue the datasource offline command.

If the node is in a SHUNNED or FAIL state, this command will fail with an error. Instead, you should use the datasource recover command

The following example places the node mercury into the online state:

cctrl> datasource mercury online
8.1.3.5.5. cctrl datasource recover Command

The datasource recover command reconfigures a shunned data source and returns it to the cluster as a slave. This command can be used with failed master as well as slave data sources.

For slave data sources, the recover command attempts to restart the DBMS server followed by replication. If successful, the data source joins the cluster as an online slave.

For master data sources, the recover command first reconfigures the master as a slave. It then performs the same recovery process as for a failed slave.

If datasource recover is unsuccessful, the next step is typically to restore the data source from a backup. This should enable it to rejoin the cluster as a normal slave.

The following example recovers host mercury following a failure. The command is identical for master and slave data sources.

cctrl> datasource mercury recover
8.1.3.5.6. cctrl datasource restore Command

The datasource restore command reloads a backup generated with the datasource backup command.

The following command options are supported:

  • uri — The URI of a specific backup to restore

  • timeout — Number of seconds to wait before the command times out.

To restore a data source you must first put the data source and its associated replicator offline.

The following example restores host saturn from the latest backup. The preceding commands place the datasource and replicator offline. The commands after the restore return the datasource to the cluster and put it online.

cctrl> datasource saturn shun
cctrl> datasource saturn offline
cctrl> replicator saturn offline
cctrl> datasource saturn restore
cctrl> datasource saturn welcome
cctrl> cluster online

The following example restores host mercury from an existing backup, which is explicitly named. The datasource and replicator must be offline.

cctrl> datasource mercury restore storage://file-system/store-0000000004.properties
8.1.3.5.7. cctrl datasource shun Command

The datasource shun command removes the data source from the cluster and makes it unavailable to applications. It will remain in the shunned state without further changes until you issue a datasource welcome or datasource recover command.

The datasource shun command is most commonly used to perform maintenance on a data source. It allows you to reboot servers and replicators without triggering automated policy mode rules.

The following example shuns the data source on host venus.

cctrl> datasource venus shun
8.1.3.5.8. cctrl datasource welcome Command

When a datasource has been shunned, the datasource can be welcomed back to the dataservice by using the welcome command. The welcome command attempts to enable the datasource in the ONLINE state using the current roles and configuration. If the datasource was operating as a slave before it was shunned, the welcome command will enable the datasource as a slave.

For example, the host host3 is a slave and currently online:

+----------------------------------------------------------------------------+
|host3(slave:ONLINE, progress=157454, latency=1.000)                      |
|STATUS [OK] [2013/05/14 05:12:52 PM BST]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=host2, state=ONLINE)                     |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+
[LOGICAL:EXPERT] /alpha > datasource host3 shun
DataSource 'host3' set to SHUNNED

To switch the datasource back to the online state, the welcome command is used:

[LOGICAL:EXPERT] /alpha > datasource host3 welcome
DataSource 'host3' is now OFFLINE

The welcome command puts the datasource into the OFFLINE state. If the dataservice policy mode is AUTOMATIC, the node will be placed into ONLINE mode due to automatic recovery. When in MAINTENANCE or MANUAL mode, the node must be manually set online.

The welcome command may not always work if there has been a failure or topology change between the moment the datasource was shunned and welcomed back. Using the recover command may be a better alternative to welcome when bringing a datasource back online. The recover command ensures that the replicator, connector and operation of the datasource are correct within the current cluster configuration. See Section 8.1.3.14, “cctrl recover Command”.

8.1.3.6. cctrl expert Command

The expert command enables expert mode in cctrl. This suppresses prompts for commands that can cause damage to data. It is provided as a convenience for fast administration of the system.

Warning

This mode should be used with care, and only be used by experienced operators who fully understand the implications of the subsequent commands issued.

Misuse of this feature may cause irreparable damage to a cluster.

8.1.3.7. cctrl failover Command

The failover command performs a failover to promote an existing slave to master after the current master has failed. The master data source must be in a failed state to use failover. If the master data source is not failed, you should instead use switch.

If there is no argument the failover command selects the most caught up slave and promotes it as the master. You can also specify a particular host, in which case failover will ensure that the chosen slave is fully up-to-date and promote it.

Failover ensures that the slave has applied all transactions present in its log, then promotes the slave to master. It does not attempt to retrieve transactions from the old master, as this is by definition already failed. After promoting the chosen slave to master, failover reconfigures other slaves to point to it and ensures all data sources are online.
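The "most caught up slave" selection can be sketched as picking the slave with the highest applied sequence number. Hostnames and sequence numbers below are invented for illustration; the real failover also verifies datasource state before promoting:

```shell
# Sketch: choose the most caught-up slave by highest applied seqno.
# Values are invented; a real selection also checks datasource state.
slaves="host2:4021 host3:4025 host4:3998"

best=""
best_seqno=-1
for s in $slaves; do
  host=${s%%:*}                 # slave hostname
  seqno=${s##*:}                # its applied sequence number
  if [ "$seqno" -gt "$best_seqno" ]; then
    best=$host
    best_seqno=$seqno
  fi
done
echo "promote $best (seqno $best_seqno)"
```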

To recover a failed master you should use the datasource recover command.

Failover to any up-to-date slave in the cluster. If no slave is available the operation fails:

cctrl> failover

Failover from a broken master to a specific node:

cctrl> failover to mercury

8.1.3.8. cctrl help Command

The help command provides help text from within the cctrl operation.

With no other arguments, help provides a list of the available commands:

[LOGICAL] /alpha > help

--------
Overview
--------
Description: Overview of Tungsten cctrl Commands

Commands
--------
admin                          - Enter admin mode
cd <name>                      - Change to the specified SOR cluster element
cluster <command>              - Issue a command on the entire cluster
create composite <type> <name> - Create SOR cluster components
datasource <host> <cmd>        - Issue a command on a datasource
expert                         - Enter expert mode
failover                       - Failover from failed master to slave
help                           - Show help
ls [options]                   - Show generic cluster status
members                        - List all of the managers in the cluster
ping                           - Test host availability
physical                       - Enter physical mode
quit or exit                   - Leave cctrl
replicator <host> <cmd>        - Issue a command on a replicator
service                        - Run a service script
set                            - Set management options
switch                         - Promote a slave to master

To get more information about particular commands type help followed by a 
command.  Examples: 'help datasource' or 'help create composite'.

To get specific information about an individual command or operation, provide the command name to the help command. For example, to get information about the ping command, type help ping at the cctrl prompt.

8.1.3.9. cctrl ls Command

The ls command displays the current structure and status of the cluster.

ls [-l] [host] [[resources] | [services] | [sessions]]

The ls command operates in a number of different modes, according to the options provided on the command-line, as follows:

  • No options

    Generates a list of the current routers and datasources, and their current status and services.

  • -l

    Outputs extended information about the current status and configuration. The -l option can be used in both the standard (no option) and host specific output formats to provide more detailed information.

  • host

    You can also specify an individual component within the cluster on which to obtain information. For example, to get the information only for a single host, issue

    cctrl> ls host1
  • resources

    The resources option generates a list of the configured resources and their current status.

  • services

    The services option generates a list of the configured services known to the manager.

  • sessions

    The sessions option outputs statistics for the cluster. Statistics will only be presented when SMARTSCALE is enabled for the connectors.

Without any further options, the output of ls looks similar to the following:

[LOGICAL] /alpha > ls

COORDINATOR[host1:AUTOMATIC:ONLINE]

ROUTERS:
+----------------------------------------------------------------------------+
|connector@host1[1179](ONLINE, created=0, active=0)                          |
|connector@host2[1532](ONLINE, created=0, active=0)                          |
|connector@host3[17665](ONLINE, created=0, active=0)                         |
+----------------------------------------------------------------------------+

DATASOURCES:
+----------------------------------------------------------------------------+
|host1(master:ONLINE, progress=60, THL latency=0.498)                        |
|STATUS [OK] [2013/03/22 02:25:00 PM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=master, state=ONLINE)                                     |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|host2(slave:ONLINE, progress=31, latency=0.000)                             |
|STATUS [OK] [2013/03/22 02:25:00 PM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=host1, state=ONLINE)                        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|host3(slave:ONLINE, progress=35, latency=9.455)                             |
|STATUS [OK] [2013/03/21 06:47:53 PM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=host1, state=ONLINE)                        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

8.1.3.10. cctrl members Command

The members command outputs a list of the currently identified managers within the dataservice.

For example:

[LOGICAL] /alpha > members
alpha/host1(ONLINE)/192.168.1.60:7800
alpha/host2(ONLINE)/192.168.1.61:7800
alpha/host3(ONLINE)/192.168.1.62:7800

The command outputs each identified manager service within the current dataservice.

The format of the output information is:

DATASERVICE/HOST(STATUS)/IPADDR:PORT

Where:

  • DATASERVICE

    The name of the dataservice.

  • HOST

    The name of the host on which the manager resides.

  • STATUS

    The current status of the manager.

  • IPADDR

    The IP address of the manager.

  • PORT

    The primary TCP/IP port used for contacting the manager service.
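A member line can be split into these fields with POSIX parameter expansion; the sketch below uses the line format from the example output above:

```shell
# Sketch: split one 'members' line of the form
# DATASERVICE/HOST(STATUS)/IPADDR:PORT into its fields.
line="alpha/host1(ONLINE)/192.168.1.60:7800"

dataservice=${line%%/*}                    # alpha
rest=${line#*/}                            # host1(ONLINE)/192.168.1.60:7800
host=${rest%%\(*}                          # host1
status=${rest#*\(}; status=${status%%\)*}  # ONLINE
addr=${line##*/}                           # 192.168.1.60:7800
echo "$dataservice $host $status $addr"
```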

The members command can be used as an indicator of the overall status of the dataservice. The information shown for each manager within a single dataservice should be identical. If different information is shown, or an incomplete number of managers compared to the number of configured managers is provided, then it may indicate a communication or partition problem within the dataservice.

8.1.3.11. cctrl physical Command

The physical command enables physical mode commands and displays. This is a specialized mode used to examine interfaces of resources managed by the cluster. It is not recommended for normal administrative use.

8.1.3.12. cctrl ping Command

The ping command checks to see whether a host is alive. If the host name is omitted, it tests all hosts in the cluster including witness hosts.

Ping uses the host ping timeout and methods specified in the manager.properties file. By default, output is minimal.

The following shows an example of the output:

[LOGICAL] /nyc > ping
NETWORK CONNECTIVITY: PING TIMEOUT=2
NETWORK CONNECTIVITY: CHECKING MY OWN ('db2') CONNECTIVITY
NETWORK CONNECTIVITY: CHECKING CLUSTER MEMBER 'db1'
NETWORK CONNECTIVITY: CHECKING CLUSTER MEMBER 'db3'

8.1.3.13. cctrl quit Command

Exits cctrl and returns the user to the shell.

8.1.3.14. cctrl recover Command

The recover command attempts to recover and bring online all nodes and services that are not in an ONLINE state.

Any previously failed master nodes will be reconfigured as slaves, and all associated replicator services will be reconciled to connect to the correct master.

If recovery is unsuccessful, the next step is typically to restore any failed data source from a backup.

8.1.3.15. cctrl recover master using Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.16. cctrl recover relay using Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.17. cctrl recover using Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.18. cctrl replicator Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.19. cctrl rm Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.20. cctrl router Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.21. cctrl service Command

The service command executes a command on the operating system according to standard Linux/Unix service script conventions. The service command may apply to a single host or may be executed on all hosts using the * operator. This latter form is also known as a broadcast command. You can enter service commands from any manager.

Commonly defined services include the following. User-defined services may also be invoked using the service command provided they are listed in the service configuration files for the cluster.

  • connector: Tungsten Connector service

  • mysql: MySQL service

  • replicator: Tungsten Replicator service

The standard service commands are:

  • restart: Stop and then start the service

  • start: Start the service if it is not already running

  • status: Show the current process status

  • stop: Stop the service if it is running

  • tail: Show the end of the process log (useful for diagnostics)

The following example restarts all mysqld processes in the cluster. This should be done in maintenance mode to avoid triggering a failover.

cctrl> service */mysql restart

Stop the replicator process on host mercury.

cctrl> service mercury/replicator stop

Show the end of the log belonging to the connector process on host jupiter.

cctrl> service jupiter/connector tail

Warning

[Re-]starting master DBMS servers can cause failover when operating in automatic policy mode. Always set policy mode to maintenance before restarting a master DBMS server.

8.1.3.22. cctrl set Command

The set command sets a management option. The following options are available.

  • set policy - Set policy for cluster automation

  • set output - Set logging level in cctrl

8.1.3.23. cctrl set force Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.24. cctrl show topology Command

The show topology command shows the topology for the currently selected cluster, or cluster composite.

For example, below is sample output for a Composite Primary/DR (active/passive) cluster:

[LOGICAL] /east > show topology
clustered_master_slave

For example, below is sample output for a Composite Multi-Master (active/active) cluster:

[LOGICAL] /usa > show topology
clustered_multi_master

When selecting a cluster within the composite:

[LOGICAL] /usa > use east
[LOGICAL] /east >  show topology
clustered_primary

The following values are output according to the cluster selected:

  • Primary service returns clustered-primary

  • Sub-service returns clustered-sub-service

  • Composite master-slave returns composite-master-slave

  • Composite multi-master returns composite-multi-master

8.1.3.25. cctrl set master Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

8.1.3.26. cctrl switch Command

The switch command performs a planned failover to promote an existing slave to master and reconfigure the current master as a slave.

The most common reason for a switch operation is to perform maintenance on the master.

If there is no argument the switch command selects the most caught up slave and promotes it as the master. You can also specify a particular host, in which case switch will ensure that the chosen slave is fully up-to-date and promote it.

Switch is a complex operation.

  • First, it ensures that all transactions to the master through SQL router or connector processes complete before initiating the switch.

  • It submits a flush transaction through the replicator to ensure that the chosen slave is fully caught up with the master.

  • It then reconfigures the master and slave to reverse their roles.

  • Finally, it puts the master and slave back online.

In the event that the switch does not complete, cctrl attempts to revert to the old master. If a switch fails, you should check the cluster using 'ls' to ensure that everything rolled back correctly.

Examples:

Switch to any up-to-date slave in the cluster. If no slave is available the operation fails.

cctrl> switch

Switch the master to host mercury.

cctrl> switch to mercury

The switch command can also be used to switch between Composite Master and Composite Slave clusters in a Composite DR Topology.

In this scenario, the switch will promote the relay node in the remote cluster to be the master, and revert the master in the local cluster to be the relay.

To initiate the switch in a composite cluster, issue the command from the composite cluster. For example, if you have cluster services east and west in a composite cluster called global, and east is the current composite master:

cctrl> use global
cctrl> switch