MySQL Cluster Manager 9.3.0 User Manual
The next task is to create a “target” cluster. Once this is done, we modify the target cluster's configuration until it matches that of the wild cluster that we want to import. At a later point in the example, we also show how to test the configuration in a dry run before attempting to perform the actual configuration import.
To create and then configure the target cluster, follow these steps:
Install MySQL Cluster Manager and start mcmd on all hosts, using the same system user that started the wild cluster processes. Once you have done this, you can start the mcm client (see Section 4.3, “Starting the MySQL Cluster Manager Client”) on any one of these hosts to perform the next few steps.
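For example, on each host you might start the agent from your MySQL Cluster Manager installation's bin directory as shown here; the log file name and log level are illustrative only and are not part of this example:
$> mcmd --log-file=mcmd.log --log-level=message &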
Create a MySQL Cluster Manager site encompassing all three of the wild cluster's hosts, using the create site command, as shown here:
mcm> create site --hosts=198.51.100.102,198.51.100.103,198.51.100.104 newsite;
+---------------------------+
| Command result |
+---------------------------+
| Site created successfully |
+---------------------------+
1 row in set (0.15 sec)
We have named this site newsite. You should be able to see it listed in the output of the list sites command, similar to what is shown here:
mcm> list sites;
+---------+------+-------+----------------------------------------------+
| Site | Port | Local | Hosts |
+---------+------+-------+----------------------------------------------+
| newsite | 1862 | Local | 198.51.100.102,198.51.100.103,198.51.100.104 |
+---------+------+-------+----------------------------------------------+
1 row in set (0.01 sec)
Add a MySQL Cluster Manager package referencing the MySQL NDB Cluster binaries using the add package command; use the command's --basedir option to point to the correct location of the MySQL NDB Cluster executables. The command shown here creates such a package, named newpackage:
mcm> add package --basedir=/home/ari/bin/cluster newpackage;
+----------------------------+
| Command result |
+----------------------------+
| Package added successfully |
+----------------------------+
1 row in set (0.70 sec)
You do not need to include the bin directory containing the MySQL NDB Cluster executables in the --basedir path. If the executables are located in /home/ari/bin/cluster/bin, it is sufficient to specify /home/ari/bin/cluster; MySQL Cluster Manager automatically checks for the binaries in a bin directory within the directory specified by --basedir.
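If you wish, you can confirm that the package has been registered for the site by checking the output of the list packages command (output not reproduced here):
mcm> list packages newsite;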
Create the target cluster, including at least some of the same processes and hosts used by the standalone cluster. Do not include any processes or hosts that are not part of this cluster. To prevent potentially disruptive process or cluster operations from accidentally interfering with the import process, it is strongly recommended that you create the cluster for import using the --import option of the create cluster command.
You must also take care to preserve the correct node ID (as listed in the config.ini file shown previously) for each node.
The following command creates the cluster newcluster for import and includes the management and data nodes, but not the SQL or “free” API node (which we add in the next step):
mcm> create cluster --import --package=newpackage \
--processhosts=ndb_mgmd:50@198.51.100.102,ndbd:2@198.51.100.103,ndbd:3@198.51.100.104 \
newcluster;
+------------------------------+
| Command result |
+------------------------------+
| Cluster created successfully |
+------------------------------+
1 row in set (0.96 sec)
You can verify that the cluster was created correctly by checking the output of show status with the --process (-r) option, like this:
mcm> show status -r newcluster;
+--------+----------+----------------+--------+-----------+------------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+----------------+--------+-----------+------------+
| 50 | ndb_mgmd | 198.51.100.102 | import | | newpackage |
| 2 | ndbd | 198.51.100.103 | import | n/a | newpackage |
| 3 | ndbd | 198.51.100.104 | import | n/a | newpackage |
+--------+----------+----------------+--------+-----------+------------+
3 rows in set (0.05 sec)
If necessary, add any remaining processes and hosts from the wild cluster not included in the previous step using one or more add process commands. We have not yet accounted for two of the nodes from the wild cluster: the SQL node with node ID 51, on host 198.51.100.102, and the API node with node ID 52, which is not bound to any specific host. You can use the following command to add both of these processes to newcluster:
mcm> add process --processhosts=mysqld:51@198.51.100.102,ndbapi:52@* newcluster;
+----------------------------+
| Command result |
+----------------------------+
| Process added successfully |
+----------------------------+
1 row in set (0.41 sec)
Checking the output from show status -r once again, we see that the mysqld and ndbapi processes were added as expected:
mcm> show status -r newcluster;
+--------+----------+----------------+--------+-----------+------------+
| NodeId | Process | Host | Status | Nodegroup | Package |
+--------+----------+----------------+--------+-----------+------------+
| 50 | ndb_mgmd | 198.51.100.102 | import | | newpackage |
| 2 | ndbd | 198.51.100.103 | import | n/a | newpackage |
| 3 | ndbd | 198.51.100.104 | import | n/a | newpackage |
| 51 | mysqld | 198.51.100.102 | import | | newpackage |
| 52 | ndbapi | * | import | | |
+--------+----------+----------------+--------+-----------+------------+
5 rows in set (0.06 sec)
You can also see that, because newcluster was created using the create cluster command's --import option, the status of all processes in this cluster (including those we just added) is import. This means we cannot yet start newcluster or any of its processes. The import status, and its effects on newcluster and its cluster processes, persist until we have completed importing the wild cluster into newcluster.
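For example, an attempt to start the cluster at this point, such as the command shown here, is rejected by MySQL Cluster Manager while the cluster remains in the import state (the exact error message is not reproduced in this example):
mcm> start cluster newcluster;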
The target cluster newcluster now has the same processes, with the same node IDs, and on the same hosts as the original standalone cluster. We are ready to proceed to the next step.
Duplicate the wild cluster's configuration attributes in the target cluster using the import config command. First, test the effects of the command by running it with the --dryrun option; this step works only if you have created the mcmd user on the wild cluster's mysqld nodes.
Before executing the command, it is also necessary to set any non-default ports for the ndb_mgmd and mysqld processes using the set command in the mcm client.
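For instance, if the wild cluster's mysqld with node ID 51 were listening on a non-default port such as 3307 (an illustrative value only, not taken from this example), you would set that port first:
mcm> set port:mysqld:51=3307 newcluster;
Once any such ports have been set, run the dry run as shown here: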
mcm> import config --dryrun newcluster;
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Command result |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Import checks passed. Please check /home/ari/bin/mcm_data/clusters/newcluster/tmp/import_config.49d541a9_294_0.mcm on host localhost.localdomain for settings that will be applied. |
+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (6.87 sec)
As indicated by the output from import config --dryrun, you can see the configuration attributes and values that would be copied to newcluster by the command without the --dryrun option in the file /path-to-mcm-data-repository/clusters/clustername/tmp/import_config.message_id.mcm. If you open this file in a text editor, you will see a series of set commands that would accomplish this task, similar to what is shown here:
# The following will be applied to the current cluster config:
set NoOfReplicas:ndbd=2 newcluster;
set DataDir:ndb_mgmd:50=/home/ari/bin/cluster/wild-cluster/50/data newcluster;
set DataDir:ndbd:2=/home/ari/bin/cluster/wild-cluster/2/data newcluster;
set DataDir:ndbd:3=/home/ari/bin/cluster/wild-cluster/3/data newcluster;
set basedir:mysqld:51=/home/ari/bin/cluster/ newcluster;
set datadir:mysqld:51=/home/ari/bin/cluster/wild-cluster/51/data/ newcluster;
set sql_mode:mysqld:51="NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES" newcluster;
set ndb_connectstring:mysqld:51=198.51.100.102 newcluster;
Options used at the command line, rather than in a configuration file, to start a node of the standalone cluster are not imported into the target cluster by the import config command; moreover, they cause one of the following to happen when the import config --dryrun command is run:
For some options, MySQL Cluster Manager issues a warning that “Option <param> may be removed on next restart of process <type> <nodeid>,” meaning that those options will not be imported into the target cluster, and thus will not be applied when those nodes are restarted after the import. Here are the lists of such options for each node type:
For ndb_mgmd nodes: --configdir, --initial, --log-name, --reload, --verbose
For ndbd and ndbmtd nodes: --connect-retries, --connect-delay, --daemon=false, --nodaemon, --verbose, --core-file
For mysqld nodes: --ndbcluster, the --ndbinfo-* options, --verbose, --datadir, --defaults-group-suffix, --core-file
For some other options, while their values will also not be imported into the target cluster, no warnings will be issued for them. Here are lists of such options for each node type:
For ndb_mgmd nodes: --config-cache, --daemon, --ndb-nodeid, --nodaemon=false, --config-file, --skip-config-cache
For ndbd and ndbmtd nodes: --daemon, --foreground, --initial, --ndb-connectstring, --connect-string, --ndb-mgmd-host, --ndb-nodeid, --nodaemon=false
For mysqld nodes: --ndb-connectstring, --ndb-mgmd-host, --ndb-nodeid, --defaults-file, --no-defaults, --basedir
For options that belong to neither of the two groups described above, having started the standalone cluster's nodes with them at the command line causes the import config --dryrun command to fail with an error complaining that the options are unsupported.
When you run into the first or third case described above, you have to do one of the following:
If the options are required for the target cluster and they can be set using the set command (see Command-line-only attributes), set them for the target cluster using the set command, following the general pattern sketched after this list, and then retry the import config --dryrun command.
If the options are not needed for the target cluster, or they cannot be set using the set command, restart the wild cluster's node without those options, and then retry the import config --dryrun command.
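A minimal sketch of the first remedy follows; here attribute, process, nodeid, and value are placeholders for the actual command-line-only attribute, process type, node ID, and value involved, and are not taken from this example:
mcm> set attribute:process:nodeid=value newcluster;
mcm> import config --dryrun newcluster;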
After the successful dry run, you are now ready to import the wild cluster's configuration into newcluster, with the command shown here:
mcm> import config newcluster;
+------------------------------------------------------------------------------------------------------------------+
| Command result |
+------------------------------------------------------------------------------------------------------------------+
| Configuration imported successfully. Please manually verify plugin options, abstraction level and default values |
+------------------------------------------------------------------------------------------------------------------+
As an alternative to importing all the settings using the import config command, you can edit the /path-to-mcm-data-repository/clusters/clustername/tmp/import_config.message_id.mcm file generated by the dry run as you wish, and then import the settings by executing the file in the mcm client, like this:
mcm> source /path-to-mcm-data-repository/clusters/clustername/tmp/import_config.message_id.mcm
You should check the resulting configuration of newcluster carefully against the configuration of the wild cluster. If you find any inconsistencies, you must correct these in newcluster using the appropriate set commands.
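For example, to compare and, if necessary, correct a data node attribute such as DataMemory, you could use commands like the following; the value shown is illustrative only and should be taken from the wild cluster's actual configuration:
mcm> get DataMemory:ndbd newcluster;
mcm> set DataMemory:ndbd=500M newcluster;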