9.1.3. cctrl Commands

Table 9.12. cctrl Commands

Option                     Description
admin                      Change to admin mode
cd                         Change to a specific site within a multisite service
cluster                    Issue a command across the entire cluster
cluster topology validate  Check, validate and report on the current cluster topology and health
cluster validate           Validate the cluster quorum configuration
create composite           Create a composite dataservice
datasource                 Issue a command on a single datasource
expert                     Change to expert mode
failover                   Perform a failover operation from a primary to a replica
help                       Display the help information
ls                         Show cluster status
members                    List the managers of the dataservice
physical                   Enter physical mode
ping                       Test host availability
quit, exit                 Exit cctrl
recover master using       Recover the Primary within a datasource using the specified Primary
replicator                 Issue a command on a specific replicator
router                     Issue a command on a specific router (connector)
service                    Run a service script
set                        Set management options
set master                 Set the Primary within a datasource
show topology              Show the currently configured cluster topology
switch                     Promote a Replica to a Primary

9.1.3.1. cctrl admin Command

The admin command enables admin mode commands and displays. Admin mode is a specialized mode used to examine and repair cluster metadata. It is not recommended for normal use.

9.1.3.2. cctrl cd Command

The cd command changes the data service being administered. Subsequent commands will only affect the named data service.

Using cd .. returns you to the root element. The given data service name can be either composite or physical. Note that this command can only be used when cctrl is run with the '-multi' flag.
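
For example, assuming cctrl was started with the -multi flag and the deployment contains a composite data service named usa (the names and prompts shown here are illustrative):

shell> cctrl -multi
[LOGICAL] / > cd usa
[LOGICAL] /usa > cd ..
[LOGICAL] / >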

9.1.3.3. cctrl cluster Command

The cluster command operates at the level of the full cluster.

9.1.3.3.1. cctrl cluster check Command

The cluster check command issues an MD5 consistency check on one or more tables in a database on the Primary data source. The consistency checks then replicate to each Replica, whereupon the Applier replicator repeats the check.

If the check fails, Replicas may go offline or print a log warning depending on how the replicators are configured. The default is to go offline. You can return a replicator to the online state after a failed check by issuing a replicator online command.
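
For example, to bring a Replica's replicator back online after a failed check (host2 here is a hypothetical host name):

[LOGICAL] /alpha > replicator host2 online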

The table name can also be a wildcard (*) in which case all tables will be checked. Users may optionally specify a range of rows to check using the -limit option, which takes a starting row option followed by a number of rows to check. Rows are selected in primary key order.

The following example checks all tables in database accounting.

[LOGICAL] /alpha > cluster check accounting.*

The following command checks only the first 10 rows in a single table.

[LOGICAL] /alpha > cluster check accounting.invoices -limit 1,10

Warning

Consistency checks can be very lengthy operations for large tables and will lock them while they run. On the Primary this can block applications. On Replicas it blocks replication.

9.1.3.3.2. cctrl cluster flush Command

The cluster flush command sends a heartbeat event through the local cluster and returns a flush sequence number that is guaranteed to be equal to or greater than the sequence number of the flush event. Replicas that reach the flush sequence number are guaranteed to have applied the flush event.

This command is commonly used for operations like switch that need to synchronize the position of one or more Primaries or Replicas.
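
For example, to obtain the current flush sequence number for the local cluster:

[LOGICAL] /alpha > cluster flush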

9.1.3.3.3. cctrl cluster heartbeat Command

The cluster heartbeat command sends a heartbeat event through the local cluster to demonstrate that all replicators are working. You should see the sequence numbers on all data sources advance by at least 1 if it is successful.
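
For example, send a heartbeat and then use ls to confirm that the progress values on each data source have advanced:

[LOGICAL] /alpha > cluster heartbeat
[LOGICAL] /alpha > ls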

9.1.3.3.4. cctrl cluster offline Command

The cluster offline command brings all data services that are not offline into the offline state. It has no effect on services that are already offline.

9.1.3.3.5. cctrl cluster online Command

The cluster online command brings all data services that are not online into the online state. It has no effect on services that are already online.
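
For example, to take all data services offline before maintenance and bring them back afterwards:

cctrl> cluster offline
cctrl> cluster online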

9.1.3.3.6. cctrl cluster validate Command

The cluster validate command validates the configuration of the cluster with respect to the quorum used for decision making. The number of active managers and active witnesses within the cluster is validated to ensure that there are enough active hosts to make a decision in the event of a failover or other failure event.

When executed, the validation routine checks all the available hosts, including witness hosts, and determines whether there are enough hosts, and whether their membership of the cluster is valid. In the event of deficiency, corrective action will be recommended.

By default, the command checks all hosts within the configured cluster:

[LOGICAL] /alpha > cluster validate
HOST host1/192.168.2.20: ALIVE
HOST host2/192.168.2.21: ALIVE
HOST host3/192.168.2.22: ALIVE
CHECKING FOR QUORUM: MUST BE AT LEAST 2 MEMBERS, OR 1 MEMBERS PLUS ALL WITNESSES
QUORUM SET MEMBERS ARE: host2, host1, host3
SIMPLE MAJORITY SIZE: 2
VALIDATED MEMBERS ARE: host2, host1, host3
REACHABLE MEMBERS ARE: host2, host1, host3
WITNESS HOSTS ARE: 
REACHABLE WITNESSES ARE: 
MEMBERSHIP IS VALID
GC VIEW OF CURRENT MEMBERS IS: host1, host2, host3
VALIDATED CURRENT MEMBERS ARE: host2, host1, host3
CONCLUSION: I AM IN A PRIMARY PARTITION OF 3 MEMBERS OUT OF THE REQUIRED MAJORITY OF 2
VALIDATION STATUS=VALID CLUSTER
ACTION=NONE

Additionally, a list of hosts to exclude from the check can be provided to verify the cluster capability when certain hosts have already failed or been shunned from the dataservice during maintenance.

To exclude hosts, add excluding and a comma-separated list of hosts to the command. For example:

[LOGICAL] /alpha > cluster validate excluding host3,host2
EXCLUDED host3 FROM VIEW
EXCLUDED host2 FROM VIEW
HOST host1/192.168.2.20: ALIVE
CHECKING FOR QUORUM: MUST BE AT LEAST 2 MEMBERS, OR 1 MEMBERS PLUS ALL WITNESSES
QUORUM SET MEMBERS ARE: host2, host1, host3
SIMPLE MAJORITY SIZE: 2
VALIDATED MEMBERS ARE: host1
REACHABLE MEMBERS ARE: host1
WITNESS HOSTS ARE: 
REACHABLE WITNESSES ARE: 
MEMBERSHIP IS VALID
GC VIEW OF CURRENT MEMBERS IS: host1
VALIDATED CURRENT MEMBERS ARE: host1
CONCLUSION: I AM IN A NON-PRIMARY PARTITION OF 1 MEMBERS OUT OF A REQUIRED MAJORITY SIZE OF 2
AND THERE ARE 0 REACHABLE WITNESSES OUT OF 0
VALIDATION STATUS=NON-PRIMARY PARTITION
ACTION=RESTART SAFE

Cluster validation provides validation information only; any corrective action that it recommends must be carried out manually.

9.1.3.3.7. cctrl cluster topology validate Command

The cluster topology validate command checks and validates a cluster topology, reporting any issues that it finds. The purpose of this command is to provide a fast way to see, immediately, whether there are any issues with any component of a cluster.

Here's an example of the command when run in the context of a Composite Active/Active cluster:

[LOGICAL] /usa > cluster topology validate
Validating physical cluster 'east' for composite datasource 'east@usa'
Validating datasource 'db4@east'
Validating datasource 'db5@east'
Validating datasource 'db6@east'
Validating physical subservice 'east_from_west' of service 'east'
Validating datasource 'db4@east_from_west'
Validating datasource 'db5@east_from_west'
Validating datasource 'db6@east_from_west'
Physical cluster 'east_from_west' is VALID
Physical cluster 'east' is VALID
Validating physical cluster 'west' for composite datasource 'west@usa'
Validating datasource 'db1@west'
Validating datasource 'db2@west'
Validating datasource 'db3@west'
Validating physical subservice 'west_from_east' of service 'west'
Validating datasource 'db1@west_from_east'
Validating datasource 'db2@west_from_east'
Validating datasource 'db3@west_from_east'
Physical cluster 'west_from_east' is VALID
Physical cluster 'west' is VALID
Composite cluster 'usa' is VALID

Here's an example of the command when run in the context of a Composite Active/Active cluster that does not validate due to underlying issues:

[LOGICAL] /usa > cluster topology validate
Validating physical cluster 'east' for composite datasource 'east@usa'
Validating datasource 'db4@east'
Validating datasource 'db5@east'
Validating datasource 'db6@east'
Validating physical subservice 'east_from_west' of service 'east'
Validating datasource 'db4@east_from_west'
Validating datasource 'db5@east_from_west'
Validating datasource 'db6@east_from_west'
Physical cluster 'east_from_west' is VALID
Physical cluster 'east' is VALID
Validating physical cluster 'west' for composite datasource 'west@usa'
Validating datasource 'db1@west'
Validating datasource 'db2@west'
INVALID: db2@west: Primary datasource is 'db1' but replicator points at 'db3'
Validating datasource 'db3@west'
Validating physical subservice 'west_from_east' of service 'west'
Validating datasource 'db1@west_from_east'
Validating datasource 'db2@west_from_east'
Validating datasource 'db3@west_from_east'
INVALID: db3@west_from_east: Primary datasource is 'db1' but replicator points at 'db2'
Physical cluster 'west_from_east' is INVALID
Physical cluster 'west' is INVALID
Composite cluster 'usa' is INVALID

In the above case you can see that there are two issues, shown with the word INVALID at the start of the line. In the cluster west, datasource db2's replicator points at db3 but should point at db1. In the cluster west_from_east, db3's replicator points at db2 but should point at db1.

9.1.3.4. cctrl create composite Command

The create composite command creates a new composite data source or data service with the given name. Composite data services can only be created in the root directory '/', while composite data sources need to be created from a composite data service location. Composite data source names should match the physical data services they represent, and a composite data service should be named after its composite data sources.

The following example creates a composite data service named 'sj_nyc':

cctrl> create composite dataservice sj_nyc

The following example changes to the composite data service sj_nyc, then creates a composite data source named 'sj' within it:

cctrl> cd sj_nyc
cctrl> create composite datasource sj

9.1.3.5. cctrl datasource Command

The datasource command affects a single data source.

Table 9.13. cctrl datasource Commands

Option       Description
backup       Backup a datasource
connections  Display the current number of connections running to the given node through connectors
drain        Prevent new connections from being made to the given data source, while ongoing connections remain untouched. If a timeout (in seconds) is given, ongoing connections will be severed only after the timeout expires
fail         Fail a datasource
host         Hostname of the datasource
offline      Put a datasource into the offline state
online       Put a datasource into the online state
recover      Recover a datasource into operational state as a Replica
restore      Restore a datasource from a previous backup
shun         Shun a datasource
welcome      Welcome a shunned datasource back to the cluster

9.1.3.5.1. cctrl datasource backup Command

The datasource backup command invokes a backup on the data source on the named host using the default backup agent and storage agent. Backups taken in this way can be reloaded using the datasource restore command. The following command options are supported:

  • backupAgent — The name of a backup agent.

  • storageAgent — The name of a storage agent.

  • timeout — Number of seconds to wait before the backup command times out.

On success the backup URL will be written to the console.

The following example performs a backup on host saturn using the default backup agent.

cctrl> datasource saturn backup

The following example performs a backup on host mercury using the xtrabackup agent, which is named explicitly.

cctrl> datasource mercury backup xtrabackup

9.1.3.5.2. cctrl datasource connections Command

Note

This feature is only available from release 7.0.2 onwards

The datasource connections command displays the current number of connections running to the given node through connectors.

The optional -l flag adds a list of connectors and their number of connections to this node.

Example:

[LOGICAL] /nyc > datasource db2 connections -l 
15 
connector@db3[16305] (10) 
connector@db2[20304] (5)

9.1.3.5.3. cctrl datasource drain Command

Note

This feature is only available from release 7.0.2 onwards

The datasource drain command will prevent new connections to the specified data source, while ongoing connections remain untouched.

If a timeout (in seconds) is given, ongoing connections will be severed after the timeout expires.

This command returns immediately, whether or not a timeout is given. The number of remaining connections can be displayed with the datasource connections command.
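
For example, a sketch that drains db2 and severs any remaining connections after 30 seconds (the trailing timeout argument shown here is illustrative):

[LOGICAL] /nyc > datasource db2 drain 30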

This command will typically be used for seamless node maintenance, including composite data source (whole site) maintenance.

Under the hood, this command puts the data source into the "SHUNNED" state, with lastShunReason set to "DRAIN-CONNECTIONS". Once maintenance is finished and you are ready to re-allow connections, the datasource welcome command will bring the data source (and its underlying physical nodes, if applicable) back to an "online" operational state.

Example:

[LOGICAL] /nyc > datasource db2 drain 
DataSource 'db2@nyc' set to SHUNNED 

[LOGICAL] /nyc > ls 
COORDINATOR[db1:AUTOMATIC:ONLINE]  
ROUTERS: 
+---------------------------------------------------------------------------------+ 
|connector@db1[15937](ONLINE, created=58, active=10)                              | 
|connector@db2[20304](ONLINE, created=29, active=5)                               | 
|connector@db3[16305](ONLINE, created=20, active=10)                              | 
|connector@db4[15065](ONLINE, created=0, active=0)                                | 
|connector@db5[15172](ONLINE, created=0, active=0)                                | 
|connector@db6[15261](ONLINE, created=0, active=0)                                | 
+---------------------------------------------------------------------------------+  

DATASOURCES: 
+---------------------------------------------------------------------------------+ 
|db1(master:ONLINE, progress=189240, THL latency=0.492)                           | 
|STATUS [OK] [2022/11/17 01:21:10 PM UTC]                                         | 
+---------------------------------------------------------------------------------+ 
| MANAGER(state=ONLINE)                                                           | 
| REPLICATOR(role=master, state=ONLINE)                                           | 
| DATASERVER(state=ONLINE)                                                        | 
| CONNECTIONS(created=59, active=10)                                              | 
+---------------------------------------------------------------------------------+ 

+---------------------------------------------------------------------------------+ 
|db2(slave:SHUNNED(DRAIN-CONNECTIONS), progress=102621, latency=172.826)          | 
|STATUS [SHUNNED] [2022/11/17 02:12:59 PM UTC]                                    | 
+---------------------------------------------------------------------------------+ 
| MANAGER(state=ONLINE)                                                           | 
| REPLICATOR(role=slave, master=db1, state=ONLINE)                                | 
| DATASERVER(state=ONLINE)                                                        | 
| CONNECTIONS(created=6, active=0)                                                | 
+---------------------------------------------------------------------------------+
 
+---------------------------------------------------------------------------------+ 
|db3(slave:ONLINE, progress=155889, latency=67.533)                               | 
|STATUS [OK] [2022/11/17 01:21:39 PM UTC]                                         | 
+---------------------------------------------------------------------------------+ 
| MANAGER(state=ONLINE)                                                           | 
| REPLICATOR(role=slave, master=db1, state=ONLINE)                                | 
| DATASERVER(state=ONLINE)                                                        | 
| CONNECTIONS(created=42, active=10)                                              | 
+---------------------------------------------------------------------------------+
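
Once maintenance is complete, re-open the node to connections with the datasource welcome command, as described above:

[LOGICAL] /nyc > datasource db2 welcome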

9.1.3.5.4. cctrl datasource fail Command

The datasource fail command allows you to place a host into the failed state.

To place a node back into an online state, you must issue the datasource recover command.

If the cluster is in automatic policy mode, the cluster will attempt to recover the host automatically if the replicator and database are online.

In order to maintain the failed state, switch the cluster to maintenance and/or stop the underlying database and replicator.

The following example changes the state of the node venus to failed:

cctrl> datasource venus fail

9.1.3.5.5. cctrl datasource offline Command

The datasource offline command allows you to place a host into an offline state. It has no effect if the datasource is already in an offline state.

To place a node back into an online state, you must issue the datasource online command.

If the cluster is in AUTOMATIC policy mode, the cluster will return the host to online automatically.

In order to maintain the offline state, switch the cluster to MAINTENANCE and/or stop the underlying database and replicator.

The following example changes the state of the node mercury to offline:

cctrl> datasource mercury offline

9.1.3.5.6. cctrl datasource online Command

The datasource online command allows you to place a host into an online state. It has no effect if the datasource is already in an online state.

To place a node back into an offline state, you must issue the datasource offline command.

If the node is in a SHUNNED or FAILED state, this command will fail with an error. Instead, you should use the datasource recover command.

The following example changes the state of the node mercury to online:

cctrl> datasource mercury online

9.1.3.5.7. cctrl datasource recover Command

The datasource recover command reconfigures a shunned data source and returns it to the cluster as a Replica. This command can be used with failed Primary as well as Replica data sources.

For Replica data sources, the recover command attempts to restart the DBMS server followed by replication. If successful, the data source joins the cluster as an online Replica.

For Primary data sources, the recover command first reconfigures the Primary as a Replica. It then performs the same recovery process as for a failed Replica.

If datasource recover is unsuccessful, the next step is typically to restore the data source from a backup. This should enable it to rejoin the cluster as a normal Replica.

The following example recovers host mercury following a failure. The command is identical for Primary and Replica data sources.

cctrl> datasource mercury recover

9.1.3.5.8. cctrl datasource restore Command

The datasource restore command reloads a backup generated with the datasource backup command.

The following command options are supported:

  • uri — The URI of a specific backup to restore

  • timeout — Number of seconds to wait before the command times out.

To restore a data source you must first put the data source and its associated replicator offline.

The following example restores host saturn from the latest backup. The preceding commands place the datasource and replicator offline. The commands after the restore return the datasource to the cluster and put it online.

cctrl> datasource saturn shun
cctrl> datasource saturn offline
cctrl> replicator saturn offline
cctrl> datasource saturn restore
cctrl> datasource saturn welcome
cctrl> cluster online

The following example restores host mercury from an existing backup, which is explicitly named. The datasource and replicator must be offline.

cctrl> datasource mercury restore storage://file-system/store-0000000004.properties

9.1.3.5.9. cctrl datasource shun Command

The datasource shun command removes the data source from the cluster and makes it unavailable to applications. It will remain in the shunned state without further changes until you issue a datasource welcome or datasource recover command.

The datasource shun command is most commonly used to perform maintenance on a data source. It allows you to reboot servers and replicators without triggering automated policy mode rules.

The following example shuns the data source on host venus.

cctrl> datasource venus shun

9.1.3.5.10. cctrl datasource welcome Command

When a datasource has been shunned, the datasource can be welcomed back to the dataservice by using the welcome command. The welcome command attempts to enable the datasource in the ONLINE state using the current roles and configuration. If the datasource was operating as a Replica before it was shunned, the welcome command will enable the datasource as a Replica.

For example, the host host3 is a Replica and currently online:

+----------------------------------------------------------------------------+
|host3(slave:ONLINE, progress=157454, latency=1.000)                         |
|STATUS [OK] [2013/05/14 05:12:52 PM BST]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=host2, state=ONLINE)                        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+
[LOGICAL:EXPERT] /alpha > datasource host3 shun
DataSource 'host3' set to SHUNNED

To switch the datasource back to the online state, the welcome command is used:

[LOGICAL:EXPERT] /alpha > datasource host3 welcome
DataSource 'host3' is now OFFLINE

The welcome command puts the datasource into the OFFLINE state. If the dataservice policy mode is AUTOMATIC, the node will be placed into ONLINE mode due to automatic recovery. When in MAINTENANCE or MANUAL mode, the node must be manually set online.
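
For example, when the dataservice is in MAINTENANCE mode, bring the node online manually after the welcome:

[LOGICAL] /alpha > datasource host3 online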

The welcome command may not always work if there has been a failure or topology change between the moment it was shunned and welcomed back. Using the recover command may be a better alternative to using welcome when bringing a datasource back online. The recover command ensures that the replicator, connector and operation of the datasource are correct within the current cluster configuration. See Section 9.1.3.14, “cctrl recover Command”.

9.1.3.6. cctrl expert Command

The expert command enables expert mode in cctrl. This suppresses confirmation prompts for commands that can cause damage to data. It is provided as a convenience for fast administration of the system.

Warning

This mode should be used with care, and only be used by experienced operators who fully understand the implications of the subsequent commands issued.

Misuse of this feature may cause irreparable damage to a cluster.
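
For example, the prompt changes to reflect the mode once expert has been entered:

[LOGICAL] /alpha > expert
[LOGICAL:EXPERT] /alpha >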

9.1.3.7. cctrl failover Command

The failover command performs a failover to promote an existing Replica to Primary after the current Primary has failed. The Primary data source must be in a failed state to use failover. If the Primary data source is not failed, you should instead use switch.

If there is no argument the failover command selects the most caught up Replica and promotes it as the Primary. You can also specify a particular host, in which case failover will ensure that the chosen Replica is fully up-to-date and promote it.

Failover ensures that the Replica has applied all transactions present in its log, then promotes the Replica to Primary. It does not attempt to retrieve transactions from the old Primary, as this is by definition already failed. After promoting the chosen Replica to Primary, failover reconfigures other Replicas to point to it and ensures all data sources are online.

To recover a failed Primary you should use the datasource recover command.

Failover to any up-to-date Replica in the cluster. If no Replica is available the operation fails:

cctrl> failover

Failover from a broken Primary to a specific node:

cctrl> failover to mercury

9.1.3.8. cctrl help Command

The help command provides help text from within the cctrl operation.

With no other arguments, help provides a list of the available commands:

[LOGICAL] /alpha > help

--------
Overview
--------
Description: Overview of Tungsten cctrl Commands

Commands
--------
admin                          - Enter admin mode
cd <name>                      - Change to the specified SOR cluster element
cluster <command>              - Issue a command on the entire cluster
create composite <type> <name> - Create SOR cluster components
datasource <host> <cmd>        - Issue a command on a datasource
expert                         - Enter expert mode
failover                       - Failover from failed primary to replica
help                           - Show help
ls [options]                   - Show generic cluster status
members                        - List all of the managers in the cluster
ping                           - Test host availability
physical                       - Enter physical mode
quit or exit                   - Leave cctrl
replicator <host> <cmd>        - Issue a command on a replicator
service                        - Run a service script
set                            - Set management options
switch                         - Promote a replica to primary

To get more information about particular commands type help followed by a 
command.  Examples: 'help datasource' or 'help create composite'.

To get specific information about an individual command or operation, provide the command name to the help command. For example, to get information about the ping command, type help ping at the cctrl prompt.

9.1.3.9. cctrl ls Command

The ls command displays the current structure and status of the cluster.

ls [-l] [host] [[resources] | [services] | [sessions]]

The ls command operates in a number of different modes, according to the options provided on the command-line, as follows:

  • No options

    Generates a list of the current routers and datasources, along with their current status and services.

  • -l

    Outputs extended information about the current status and configuration. The -l option can be used in both the standard (no option) and host specific output formats to provide more detailed information.

  • host

    You can also specify an individual component within the cluster on which to obtain information. For example, to get the information only for a single host, issue

    cctrl> ls host1
  • resources

    The resources option generates a list of the configured resources and their current status.

  • services

    The services option generates a list of the configured services known to the manager.

  • sessions

    The sessions option outputs statistics for the cluster. Statistics will only be presented when SmartScale is enabled for the connectors.

Without any further options, the output of ls looks similar to the following:

[LOGICAL] /alpha > ls

COORDINATOR[host1:AUTOMATIC:ONLINE]

ROUTERS:
+----------------------------------------------------------------------------+
|connector@host1[1179](ONLINE, created=0, active=0)                          |
|connector@host2[1532](ONLINE, created=0, active=0)                          |
|connector@host3[17665](ONLINE, created=0, active=0)                         |
+----------------------------------------------------------------------------+

DATASOURCES:
+----------------------------------------------------------------------------+
|host1(master:ONLINE, progress=60, THL latency=0.498)                        |
|STATUS [OK] [2013/03/22 02:25:00 PM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=master, state=ONLINE)                                     |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|host2(slave:ONLINE, progress=31, latency=0.000)                             |
|STATUS [OK] [2013/03/22 02:25:00 PM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=host1, state=ONLINE)                        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

+----------------------------------------------------------------------------+
|host3(slave:ONLINE, progress=35, latency=9.455)                             |
|STATUS [OK] [2013/03/21 06:47:53 PM GMT]                                    |
+----------------------------------------------------------------------------+
|  MANAGER(state=ONLINE)                                                     |
|  REPLICATOR(role=slave, master=host1, state=ONLINE)                        |
|  DATASERVER(state=ONLINE)                                                  |
|  CONNECTIONS(created=0, active=0)                                          |
+----------------------------------------------------------------------------+

The purpose of the Alert STATUS field is to provide standard, datasource-state-specific values for ease of parsing and backwards-compatibility with older versions of the cctrl command.

The STATUS field is effectively the same information as the DataSource State that appears on the first line after the colon (:), just presented slightly differently.

Here are the possible values for STATUS, showing the DataSource State first, and the matching Alert STATUS second:

Datasource State   Alert STATUS
ONLINE             OK
OFFLINE            WARN (for non-composite datasources)
OFFLINE            DIMINISHED (for composite passive replica)
OFFLINE            CRITICAL (for composite active primary)
FAILED             CRITICAL
SHUNNED            SHUNNED

Any other DataSource State sets the STATUS to UNKNOWN.

9.1.3.10. cctrl members Command

The members command outputs a list of the currently identified managers within the dataservice.

For example:

[LOGICAL] /alpha > members
alpha/host1(ONLINE)/192.168.1.60:7800
alpha/host2(ONLINE)/192.168.1.61:7800
alpha/host3(ONLINE)/192.168.1.62:7800

The command outputs each identified manager service within the current dataservice.

The format of the output information is:

DATASERVICE/HOST(STATUS)/IPADDR:PORT

Where:

  • DATASERVICE

    The name of the dataservice.

  • HOST

    The name of the host on which the manager resides.

  • STATUS

    The current status of the manager.

  • IPADDR

    The IP address of the manager.

  • PORT

    The primary TCP/IP port used for contacting the manager service.

The members command can be used as an indicator of the overall status of the dataservice. The information shown for each manager within a single dataservice should be identical. If different information is shown, or fewer managers are listed than have been configured, it may indicate a communication or partition problem within the dataservice.

9.1.3.11. cctrl physical Command

The physical command enables physical mode commands and displays. This is a specialized mode used to examine interfaces of resources managed by the cluster. It is not recommended for normal administrative use.

9.1.3.12. cctrl ping Command

The ping command checks to see whether a host is alive. If the host name is omitted, it tests all hosts in the cluster including witness hosts.

Ping uses the host ping timeout and methods specified in the manager.properties file. By default output is parsimonious.

The following shows an example of the output:

[LOGICAL] /nyc > ping
NETWORK CONNECTIVITY: PING TIMEOUT=2
NETWORK CONNECTIVITY: CHECKING MY OWN ('db2') CONNECTIVITY
NETWORK CONNECTIVITY: CHECKING CLUSTER MEMBER 'db1'
NETWORK CONNECTIVITY: CHECKING CLUSTER MEMBER 'db3'
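
To test a single host, pass its name as an argument (db3 here is one of the hosts from the example above):

[LOGICAL] /nyc > ping db3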

9.1.3.13. cctrl quit Command

Exits cctrl and returns the user to the shell. For example:

[LOGICAL] /alpha > quit

9.1.3.14. cctrl recover Command

The recover command will attempt to recover and bring online all nodes and services that are not in an ONLINE state.

Any previously failed Primary nodes will be reconfigured as Replicas, and all associated replicator services will be reconciled to connect to the correct Primary.

If recovery is unsuccessful, the next step is typically to restore any failed data source from a backup.
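
For example, to attempt recovery of all failed resources in the current data service:

[LOGICAL] /alpha > recover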

9.1.3.15. cctrl recover master using Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

9.1.3.16. cctrl recover relay using Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

9.1.3.17. cctrl recover using Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

9.1.3.18. cctrl replicator Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

9.1.3.19. cctrl rm Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

9.1.3.20. cctrl router Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

9.1.3.21. cctrl service Command

The service command executes a command on the operating system according to standard Linux/Unix service script conventions. The service command may apply to a single host or may be executed on all hosts using the * operator. This latter form is also known as a broadcast command. You can enter service commands from any manager.

Commonly defined services include the following. User-defined services may also be invoked using the service command provided they are listed in the service configuration files for the cluster.

  • connector: Tungsten Connector service

  • mysql: MySQL service

  • replicator: Tungsten Replicator service

The standard service commands are:

  • restart: Stop and then start the service

  • start: Start the service if it is not already running

  • status: Show the current process status

  • stop: Stop the service if it is running

  • tail: Show the end of the process log (useful for diagnostics)

The following example restarts all mysqld processes in the cluster. This should be done in maintenance mode to avoid triggering a failover.

cctrl> service */mysql restart

The following example stops the replicator process on host mercury.

cctrl> service mercury/replicator stop

The following example shows the end of the log belonging to the connector process on host jupiter.

cctrl> service jupiter/connector tail

Warning

[Re-]starting Primary DBMS servers can cause failover when operating in automatic policy mode. Always set policy mode to maintenance before restarting a Primary DBMS server.

9.1.3.22. cctrl set Command

The set command sets a management option. The following options are available; a short usage example follows the list.

  • set policy - Set policy for cluster automation

  • set output - Set logging level in cctrl

  • set force [true|false]

    Warning

    Setting force should NOT be used unless advised by Support. Using this feature without care can break the cluster and possibly cause data corruption.
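
For example, a typical sketch that switches the cluster into maintenance policy mode before planned work on the Primary and returns it to automatic afterwards:

cctrl> set policy maintenance
cctrl> set policy automatic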

9.1.3.23. cctrl show topology Command

The show topology command shows the topology for the currently selected cluster, or cluster composite.

For example, below is sample output for a Composite Active/Passive cluster:

[LOGICAL] /east > show topology
clustered_master_slave

For example, below is sample output for a Composite Active/Active cluster:

[LOGICAL] /usa > show topology
clustered_multi_master

When selecting a cluster within the composite:

[LOGICAL] /usa > use east
[LOGICAL] /east > show topology
clustered_primary

The following values are output according to the cluster selected:

  • Active service returns clustered-primary

  • Passive service returns clustered-sub-service

  • Composite Active/Passive returns composite-master-slave

  • Composite Active/Active returns composite-multi-master

9.1.3.24. cctrl set master Command

Content Being Written

This section of the documentation is currently being produced and may be incomplete and/or subject to change.

9.1.3.25. cctrl switch Command

The switch command performs a planned failover to promote an existing Replica to Primary and reconfigure the current Primary as a Replica.

The most common reason for a switch operation is to perform maintenance on the Primary.

If there is no argument the switch command selects the most caught up Replica and promotes it as the Primary. You can also specify a particular host, in which case switch will ensure that the chosen Replica is fully up-to-date and promote it.

Switch is a complex operation.

  • First, it ensures that all transactions to the Primary through SQL router or connector processes complete before initiating the switch.

  • It submits a flush transaction through the replicator to ensure that the chosen Replica is fully caught up with the Primary.

  • It then reconfigures the Primary and Replica to reverse their roles.

  • Finally, it puts the Primary and Replica back online.

In the event that switch does not complete, cctrl attempts to revert to the old Primary. If a switch fails, you should check the cluster using 'ls' to ensure that everything rolled back correctly.

Examples:

Switch to any up-to-date Replica in the cluster. If no Replica is available the operation fails.

cctrl> switch

Switch the Primary to host mercury.

cctrl> switch to mercury

The switch command can also be used to switch between the Active and Passive clusters in a Composite Active/Passive topology.

In this scenario, the switch will promote the RELAY node in the remote cluster to be the Primary, and revert the Primary in the local cluster to be the Relay.

To initiate the switch in a composite cluster, issue the command from the composite cluster. For example, if you have cluster services east and west in a composite cluster called global, and east is the current Active site:

cctrl> use global
cctrl> switch