B.2. Staging Host Configuration

The staging host forms the base of operations for creating your cluster. Its primary role is to hold the Tungsten Cluster™ software and to transfer, install, and start the Tungsten Cluster™ service on each of the nodes within the cluster. The staging host can be a separate machine, or a machine that will be part of the cluster.

The recommended way to use Tungsten Cluster™ is to configure SSH on each machine within the cluster and allow the tpm tool to connect and perform the necessary installation and setup operations to create your cluster environment, as shown in Figure B.1, “Tungsten Deployment”.

Figure B.1. Tungsten Deployment

The staging host is responsible for pushing the software to, and configuring, each machine. For this to work correctly, configure SSH on the staging server and on each host within the cluster with a common SSH key. This allows the staging server and each host within the cluster to communicate with one another.

You can use an existing login as the base for your staging operations. For the purposes of this guide, we will create a unique user, tungsten, from which the staging process will be executed.

  1. Create a new user that will be used to manage and install Tungsten Cluster™. For MySQL installations, the recommended choice is a dedicated user named tungsten. You will need to create this user on each host in the cluster. You can create the new user using adduser:

    shell> sudo adduser tungsten

    You can add the user to the mysql group using usermod:

    shell> sudo usermod -G mysql -a tungsten
  2. Log in as the tungsten user:

    shell> su - tungsten
  3. Create an SSH key file, but do not configure a password:

    tungsten:shell> ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/tungsten/.ssh/id_rsa): 
    Created directory '/home/tungsten/.ssh'.
    Enter passphrase (empty for no passphrase): 
    Enter same passphrase again: 
    Your identification has been saved in /home/tungsten/.ssh/id_rsa.
    Your public key has been saved in /home/tungsten/.ssh/id_rsa.pub.
    The key fingerprint is:
    e3:fa:e9:7a:9d:d9:3d:81:36:63:85:cb:a6:f8:41:3b tungsten@staging
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |             .   |
    |            . .  |
    |        S .. +   |
    |       . o .X .  |
    |        .oEO + . |
    |       .o.=o. o  |
    |      o=+..    . |
    +-----------------+

    This creates both a public and a private keyfile; the public keyfile will be shared with the hosts in the cluster to allow the hosts to connect to each other. One common way of distributing the public key is sketched at the end of this section.

  4. Within the staging server, profiles for the different cluster configurations are stored within a single directory. You can simplify the management of these different services by configuring a specific directory where these configurations will be stored. To set the directory, specify it in the $CONTINUENT_PROFILES environment variable (and, for replicator configurations, in $REPLICATOR_PROFILES), adding these variables to your shell startup script (.bashrc, for example) within your staging server.

    shell> mkdir -p /opt/continuent/software/conf
    shell> mkdir -p /opt/continuent/software/replicator.conf
    shell> export CONTINUENT_PROFILES=/opt/continuent/software/conf
    shell> export REPLICATOR_PROFILES=/opt/continuent/software/replicator.conf
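
    To make these settings persistent, append the exports to the tungsten user's shell startup script. The following is a minimal sketch that assumes a Bash shell and the directories created above; adjust the paths to match your environment:

    shell> echo 'export CONTINUENT_PROFILES=/opt/continuent/software/conf' >> ~/.bashrc
    shell> echo 'export REPLICATOR_PROFILES=/opt/continuent/software/replicator.conf' >> ~/.bashrc
    shell> source ~/.bashrc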

The staging server is now set up with an SSH keypair for the tungsten login, and we are ready to start setting up each host within the cluster.
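
Before installing software on the other hosts, the public key created above needs to be present on each of them so that the tungsten user can connect without a password. The following is a minimal sketch of one common way to distribute and verify the key using ssh-copy-id; the hostnames host1, host2 and host3 are examples only, and the tungsten user is assumed to already exist on each host:

    tungsten:shell> for host in host1 host2 host3; do ssh-copy-id tungsten@$host; done
    tungsten:shell> ssh tungsten@host1 echo "SSH connection OK"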