MySQL Cluster Manager 8.0.43 User Manual
Extract the MySQL Cluster Manager 8.0.43 program and other files from the distribution archive. You must install a copy of MySQL Cluster Manager on each computer that you intend to use as a MySQL NDB Cluster host. In other words, you need to install MySQL Cluster Manager on each host that will be a member of the MySQL Cluster Manager management site. For each host, you should use the MySQL Cluster Manager build that matches that computer's operating system and processor architecture.
On Linux systems, you can unpack the archive using the following command, which uses mcm-8.0.43-cluster-8.0.43-linux-glibc2.17-x86-64bit.tar.gz as an example (the actual filename will vary according to the MySQL Cluster Manager build that you intend to deploy):
$> tar -zxvf mcm-8.0.43-cluster-8.0.43-linux-glibc2.17-x86-64bit.tar.gz
This command unpacks the archive into a directory having the same name as the archive, less the .tar.gz extension. The top-level directories under the unpacked directory are cluster and mcm-8.0.43.
        
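For example, listing the unpacked directory should show output similar to this:

$> ls mcm-8.0.43-cluster-8.0.43-linux-glibc2.17-x86-64bit
cluster  mcm-8.0.43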
Because the Solaris version of tar cannot handle long filenames correctly, the MySQL Cluster Manager program files may be corrupted if you use it to unpack the MySQL Cluster Manager archive. To get around this issue on Solaris operating systems, you should use GNU tar (gtar) rather than the default tar supplied with Solaris. On Solaris, gtar is often already installed in the /usr/sfw/bin directory, although the gtar executable may not be included in your path. If gtar is not present on your system, please consult the Oracle Solaris Documentation for information on how to obtain and install it.
          
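For example, assuming gtar is found in the location given above, you can unpack the archive on Solaris like this:

$> /usr/sfw/bin/gtar -zxvf mcm-8.0.43-cluster-8.0.43-linux-glibc2.17-x86-64bit.tar.gz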
In general, the location where you place the unpacked MySQL Cluster Manager directory and the name of this directory can be arbitrary. However, we recommend that you use a standard location for optional software, such as /opt on Linux systems, and that you name the directory using the 8.0.43 version number (this facilitates subsequent upgrades). On a typical Linux system you can accomplish this task like this:
        
$> cd mcm-8.0.43-cluster-8.0.43-linux-glibc2.17-x86-64bit
$> mv mcm-8.0.43 /opt/mcm-8.0.43
For ease of use, we recommend that you put the MySQL Cluster Manager files in the same directory on each host where you intend to run it.
Contents of the MySQL Cluster Manager Unix Distribution Archive. If you change to the directory where you placed the extracted MySQL Cluster Manager archive and list the contents, you should see something similar to what is shown here:
$> cd /opt/mcm-8.0.43
$> ls
bin  docs  etc  lib  licenses  share  var
These directories are described in the following table:
Table 3.1 Contents of the MySQL Cluster Manager Unix distribution archive, by directory
| Directory | Contents |
|---|---|
| bin | MySQL Cluster Manager agent and client executables |
| docs | Contains the sample configuration file sample_mcmd.conf, the LICENSE file, and the README.txt file |
| etc/init.d | Contains the init scripts |
| lib and subdirectories | Libraries needed to run the MySQL Cluster Manager agent |
| licenses/glib- | An archive containing source code (including licensing and documentation) for the GLib library |
| var | XML files containing information needed by MySQL Cluster Manager about processes, attributes, and command syntax |
Normally, the only directories of those shown in the preceding table that you need be concerned with are the bin, docs, and etc directories.
        
For MySQL Cluster Manager 8.0.43 distributions that include MySQL NDB Cluster, the complete MySQL NDB Cluster 8.0.43 binary distribution is included in the cluster directory. Within this directory, the layout of the MySQL NDB Cluster distribution is the same as that of the standalone MySQL NDB Cluster binary distribution. For example, MySQL NDB Cluster binary programs such as ndb_mgmd, ndbd, ndbmtd, and ndb_mgm can be found in cluster/bin. For more information, see MySQL Installation Layout for Generic Unix/Linux Binary Package, and Installing an NDB Cluster Binary Release on Linux.
        
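For example, you can verify that these binaries are present with a listing similar to the one shown here (the listing is abridged; the directory contains a number of other programs as well):

$> ls cluster/bin
...
ndb_mgm
ndb_mgmd
ndbd
ndbmtd
...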
If you wish to use the included MySQL NDB Cluster software, it is recommended that you move the cluster directory and all its contents to a location outside the MySQL Cluster Manager installation directory, such as /opt/ndb-version. For example, on a Linux system, you can move the MySQL NDB Cluster NDB 8.0.43 software that is bundled with MySQL Cluster Manager 8.0.43 to a suitable location by first navigating to the directory unpacked from the distribution archive and then using a shell command similar to what is shown here:
$> mv cluster /opt/ndb-8.0.43
The mcmd --bootstrap option uses the MySQL NDB Cluster binaries in the cluster folder that is under the same directory as the MySQL Cluster Manager installation directory, and bootstrapping fails if the binaries cannot be found there. To work around this issue, create a symbolic link to the correct directory in the directory above the installation directory, like this:
          
$> ln -s /opt/ndb-8.0.43 cluster
After doing this, you can use the mcm client commands add package and upgrade cluster to upgrade any desired cluster or clusters to the new MySQL NDB Cluster software version.
        
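As an illustration, such an upgrade in the mcm client might look similar to this; the package name ndb8043 and the cluster name mycluster are hypothetical examples, and the basedir matches the relocation example above:

mcm> add package --basedir=/opt/ndb-8.0.43 ndb8043;
mcm> upgrade cluster --package=ndb8043 mycluster;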
The MySQL Cluster Manager agent by default writes its log file as mcmd.log in the same directory where the installation directory is found. When the agent runs for the first time, it creates a directory where the agent stores its own configuration data; by default, that is mcm_data in the parent directory of the MySQL Cluster Manager installation directory. The configuration data, log files, and data node file systems for a given MySQL NDB Cluster under MySQL Cluster Manager control, and named cluster_name, can be found in clusters/cluster_name under this data directory (sometimes also known as the MySQL Cluster Manager data repository).
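For example, with the installation directory /opt/mcm-8.0.43 used earlier and a hypothetical cluster named mycluster, the default locations would be:

/opt/mcmd.log                       (agent log file)
/opt/mcm_data                       (agent data repository)
/opt/mcm_data/clusters/mycluster    (configuration data, log files, and data node file systems for mycluster)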
The location of the MySQL Cluster Manager agent configuration file, log file, and data directory can be controlled with mcmd startup options or by making changes in the agent configuration file. To simplify upgrades of MySQL Cluster Manager, we recommend that you change the data repository to a directory outside the MySQL Cluster Manager installation directory, such as /var/opt/mcm. See Section 3.4, “MySQL Cluster Manager Configuration File”, and Section 4.2, “Starting and Stopping the MySQL Cluster Manager Agent”, for more information.
        
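As a minimal sketch, assuming the manager-directory setting described in Section 3.4, relocating the data repository in the agent configuration file might look like this:

[mcmd]
manager-directory = /var/opt/mcm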
MySQL Cluster Manager init script. On Linux and other Unix-like systems, you can set up the MySQL Cluster Manager agent to run as a daemon, using the init script that is supplied with the MySQL Cluster Manager distribution.
MySQL Cluster Manager 8.0 uses a new init script; do not use the init script from MySQL Cluster Manager 1.4 with MySQL Cluster Manager 8.0.
To do this, follow the steps listed here:
Copy the file /etc/init.d/mcmd under the MySQL Cluster Manager installation directory to your system's /etc/init.d/ directory (or equivalent). On a typical Linux system, you can do this using the following command in the system shell, where mcmdir is the MySQL Cluster Manager installation directory:
            
$> cd mcmdir/etc/init.d
$> cp mcmd /etc/init.d/mcmd
Make sure that this file has appropriate permissions and is executable by the user account that runs MySQL Cluster Manager. On a typical Linux system, this can be done by executing commands in your system shell similar to those shown here:
$> chown mcmuser /etc/init.d/mcmd
$> chmod 755 /etc/init.d/mcmd
Be sure to refer to your operating system documentation for exact information concerning the commands needed to perform these operations, as they may vary between platforms.
Open the file /etc/init.d/mcmd in a text editor. Here, we show a portion of this file, in which we have highlighted the two lines that need to be updated:
            
MCMD_SERVICE="mcmd"
MCMD_PSERVICE="MySQL Cluster Manager"
MCMD_ROOTDIR=@@MCMD_ROOTDIR@@
MCMD_BIN="$MCMD_ROOTDIR/bin/mcmd"
MCMD_CONFIG="$MCMD_ROOTDIR/etc/mcmd.conf"

# Run service as non-root user
MCMD_USER=@@MCMD_USER@@
SU="su --login $MCMD_USER --command"
In the first of the highlighted lines, replace the placeholder @@MCMD_ROOTDIR@@ with the complete path to the MySQL Cluster Manager installation directory. In the second of these lines, replace the placeholder @@MCMD_USER@@ with the name of the system user that runs the MySQL Cluster Manager agent (note that this must not be the system root account). Save the edited file.
            
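For example, with the installation directory /opt/mcm-8.0.43 and the user mcmuser shown earlier, the two edited lines would read:

MCMD_ROOTDIR=/opt/mcm-8.0.43
MCMD_USER=mcmuser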
The MySQL Cluster Manager agent should now be started automatically whenever the system is restarted.
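On some distributions, the init script must also be registered with the init system before it runs at boot; this step varies by platform, so consult your operating system documentation. Typical commands are shown here as examples only:

$> chkconfig --add mcmd          # RPM-based systems
$> update-rc.d mcmd defaults     # Debian-based systems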
When the agent is configured as a daemon, cluster processes are started automatically when the agent is restarted, as long as the cluster was running when the agent shut down; however, StopOnError must be disabled (set to 0) for all data nodes in order for that to work. If the cluster was stopped when the agent shut down, it is necessary to have in place a script that waits for the agent to complete its startup and recovery phases, and then, when the agent is ready, starts the cluster using a command such as mcmdir/bin/mcm -e 'start cluster cluster_name;'.
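A minimal sketch of such a script is shown here; it simply retries the start command until the agent has finished its startup and recovery phases and accepts it. The installation path and the cluster name mycluster are example values to be replaced with your own:

#!/bin/sh
# Retry until the agent is ready and the cluster can be started;
# adjust the installation path and cluster name as needed.
until /opt/mcm-8.0.43/bin/mcm -e 'start cluster mycluster;'; do
    sleep 10
done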
Install MySQL Cluster Manager as a service using systemd. On Linux and other Unix-like systems that support systemd, you can set up the MySQL Cluster Manager agent to run as a service by following these steps:
Create the system user mcm to run the mcm service:
            
sudo useradd --no-create-home -s /bin/false mcm
Set the necessary file and folder permissions (replace mcmdir with the path for your MySQL Cluster Manager installation directory):
            
sudo chown -R mcm:mcm mcmdir
sudo chmod 600 mcmdir/mcmd.conf
Create the systemd configuration file /etc/systemd/system/mcm.service for the mcm service:
            
[Unit]
Description=MySQL Cluster Manager
Documentation=https://dev.mysql.com/doc/mysql-cluster-manager/en/
After=network-online.target
[Service]
User=mcm
Group=mcm
Restart=always
Type=simple
ExecStart=mcmdir/mcm-8.0.43/bin/mcmd --config=mcmdir/mcmd.conf
[Install]
WantedBy=multi-user.target

Reload systemd configuration files for your system, to make the service addition take effect:
sudo systemctl daemon-reload
Start, enable, and check the status of the service with the following commands:
sudo systemctl start mcm
sudo systemctl enable mcm
sudo systemctl status mcm
If the service is not started correctly, look in the messages file:
sudo tail -150f /var/log/messages
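On systems where such messages are kept in the systemd journal rather than in /var/log/messages, you can inspect the service log with journalctl instead:

sudo journalctl -u mcm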
When the agent is configured as a service, cluster processes are started automatically when the agent is restarted, as long as the cluster was running when the agent shut down; however, StopOnError must be disabled (set to 0) for all data nodes in order for this to happen. If the cluster was stopped when the agent shut down, it is necessary to have in place a script that waits for the agent to complete its startup and recovery phases, and then, when the agent is ready, starts the cluster using a command such as mcmdir/bin/mcm -e 'start cluster cluster_name;'.