MySQL Cluster Manager 8.4.5 User Manual
The next step in the import process is to prepare the wild cluster for migration. This requires, among other things, removing cluster processes from control by any system service management facility, and making sure all management nodes are running with configuration caching disabled. More detailed information about performing these tasks is provided in the remainder of this section.
Before proceeding with any migration, it is strongly recommended that you take a backup using the ndb_mgm client's START BACKUP command.
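For example, the backup can be taken with a single ndb_mgm invocation. A minimal sketch; the connection string is an assumption you should replace with the address of one of your own wild cluster's management nodes (the example management node used later in this section is 198.51.100.102):

```shell
# Hypothetical connection string; substitute a management node of your
# own wild cluster (198.51.100.102:1186 is only an example).
NDB_CONNECTSTRING="198.51.100.102:1186"

# WAIT COMPLETED makes the client block until the backup has finished,
# so a zero exit status means the backup succeeded.
# Remove the leading 'echo' to actually run the command.
echo ndb_mgm --ndb-connectstring="$NDB_CONNECTSTRING" -e "START BACKUP WAIT COMPLETED"
```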
Any cluster processes that are under the control of a system boot process management facility, such as /etc/init.d on Linux systems or the Services Manager on Windows platforms, should be removed from that facility's control. Consult your operating system's documentation for information about how to do this. Be sure not to stop any running cluster processes in the course of doing so.
Create a MySQL user account on each of the wild cluster's SQL nodes for MySQL Cluster Manager to execute the import config and import cluster commands in the steps to follow. The account name and password MySQL Cluster Manager uses to access MySQL nodes are specified by the mcmd agent options mcmd-user and mcmd-password (the default values are mcmd and super, respectively); use those credentials when creating the account on the wild cluster's SQL nodes, and grant the user all privileges on the server, including the privilege to grant privileges.
For example, log in to each of the wild cluster's SQL nodes with the mysql client as root and execute the SQL statements shown here:
CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;
Keep in mind that this must be done on all the SQL nodes, unless distributed privileges are enabled on the wild cluster.
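When the wild cluster has several SQL nodes, the account creation can be scripted. A minimal dry-run sketch; the SQL node host addresses are assumptions to be replaced with your own, and it assumes root access over ssh:

```shell
# Hypothetical SQL node hosts -- replace with the wild cluster's own.
SQL_NODES="198.51.100.103 198.51.100.104"

# The statements from above, matching the default mcmd credentials.
SQL="CREATE USER 'mcmd'@'localhost' IDENTIFIED BY 'super';
GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'localhost' WITH GRANT OPTION;"

for host in $SQL_NODES; do
    # Remove the leading 'echo' to execute the statements on each node.
    echo ssh "root@$host" mysql -e "$SQL"
done
```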
Make sure every node of the wild cluster has been started with its node ID specified with the --ndb-nodeid option at the command line, not just in the cluster configuration file. This is required for each process to be correctly identified by mcmd during the import. You can check whether this requirement is fulfilled with the ps -ef | grep command, which shows the options each process has been started with:
$> ps -ef | grep ndb_mgmd
ari 8118 1 0 20:51 ? 00:00:04 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini
--configdir=/home/ari/bin/cluster/wild-cluster --initial --ndb-nodeid=50
(For clarity's sake, in the output of the ps -ef | grep command in this and the upcoming sections, we skip the line of output for the grep process itself.)
If the requirement is not fulfilled, restart the process with the --ndb-nodeid option; the restart can also be performed in step (e) or (f) below for any nodes you are restarting in those steps.
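The check on each process's command line can also be scripted. A small sketch of the idea; the helper function name is an invention for illustration, and it matches only the --ndb-nodeid=N form of the option:

```shell
# Return success if a process command line contains an explicit
# --ndb-nodeid option (only the --ndb-nodeid=N form is matched here).
has_nodeid() {
    case "$1" in
        *--ndb-nodeid=*) return 0 ;;
        *) return 1 ;;
    esac
}

# Example command line, taken from the ps output shown above:
cmdline="/home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --configdir=/home/ari/bin/cluster/wild-cluster --initial --ndb-nodeid=50"

if has_nodeid "$cmdline"; then
    echo "node ID given on command line"
else
    echo "restart this process with --ndb-nodeid"
fi
```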
Make sure that the configuration cache is disabled for each management node. The configuration cache is enabled by default, so unless the management node has been started with the --config-cache=false option, you will need to stop it and restart it with that option, along with the other options it was previously started with.
On Linux, we can once again use ps to obtain the information we need to accomplish this step. In a shell on host 198.51.100.102, on which the management node resides:
$> ps -ef | grep ndb_mgmd
ari 8118 1 0 20:51 ? 00:00:04 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini
--configdir=/home/ari/bin/cluster/wild-cluster --initial --ndb-nodeid=50
The process ID is 8118. The configuration cache is turned on by default, and a configuration directory has been specified using the --configdir option.
First, terminate the management node using kill as shown here, with the process ID obtained from ps previously:
$> kill -15 8118
Verify that the management node process was stopped—it should no longer appear in the output of another ps command.
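kill -15 sends SIGTERM, which lets the process shut down cleanly rather than being killed outright. The stop-and-verify sequence can be illustrated with a stand-in sleep process, since ndb_mgmd itself cannot be run here:

```shell
# Start a stand-in long-running process in the background.
sleep 60 &
pid=$!

# Terminate it gracefully with SIGTERM, as done for ndb_mgmd above.
kill -15 "$pid"

# Reap the process; wait returns once it has exited.
wait "$pid" 2>/dev/null

# kill -0 probes for the process's existence without sending a signal.
if kill -0 "$pid" 2>/dev/null; then
    echo "still running"
else
    echo "stopped"
fi
```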
Now, restart the management node with the configuration cache disabled, along with the options it was previously started with. Also, as already stated in step (d) above, make sure that the --ndb-nodeid option is specified at the restart:
$> /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --config-cache=false --ndb-nodeid=50
MySQL Cluster Management Server mysql-8.4.5
2020-11-08 21:29:43 [MgmtSrvr] INFO -- Skipping check of config directory since config cache is disabled.
Do not use 0 or OFF as the value of the --config-cache option when restarting ndb_mgmd in this step. Using either of these values instead of false at this time causes the migration of the management node process to fail at a later point in the import process.
Verify that the process is running as expected, using ps:
$> ps -ef | grep ndb_mgmd
ari 10221 1 0 19:38 ? 00:00:09 /home/ari/bin/cluster/bin/ndb_mgmd --config-file=/home/ari/bin/cluster/wild-cluster/config.ini --config-cache=false --ndb-nodeid=50
The management node is now ready for migration.
While our example cluster has only a single management node, it is possible for a MySQL NDB Cluster to have more than one. In such cases, you must make sure the configuration cache is disabled for each management node, following the procedure described in this step.
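With several management nodes, the stop-and-restart cycle above is repeated for each one. A dry-run sketch over hypothetical host and node-ID pairs; the addresses, node IDs, and configuration file path are placeholders, and each node must keep the options it was originally started with:

```shell
# Hypothetical host:nodeid pairs -- replace with your own management
# nodes' addresses and node IDs.
for entry in "198.51.100.102:50" "198.51.100.105:51"; do
    host=${entry%:*}
    nodeid=${entry#*:}
    # Remove the leading 'echo' to run the restart (for example over
    # ssh); /path/to/config.ini stands in for each node's actual
    # configuration file, and any other original options must be kept.
    echo ssh "$host" ndb_mgmd --config-file=/path/to/config.ini \
        --config-cache=false --ndb-nodeid="$nodeid"
done
```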