Follow this procedure to deploy a sandbox or production InnoDB ClusterSet deployment. A sandbox deployment is where all the MySQL server instances and other software run on a single machine. For a production deployment, the server instances and other software are on separate machines.
The procedure assumes you already have the following components, as listed in Section 9.1, “InnoDB ClusterSet Requirements”:
An existing InnoDB Cluster that meets the requirements stated in Section 9.1, “InnoDB ClusterSet Requirements”. This is the primary cluster that the InnoDB ClusterSet deployment supports.
MySQL Shell, connected to the existing InnoDB Cluster. MySQL Shell's AdminAPI commands are used in the deployment procedure.
MySQL Router, to bootstrap against InnoDB ClusterSet. MySQL Router instances that you had already bootstrapped against the existing InnoDB Cluster can be reused in an InnoDB ClusterSet deployment, but you need to bootstrap them again to implement the InnoDB ClusterSet configuration.
A number of standalone MySQL Server instances (which are not part of an InnoDB Cluster or InnoDB ReplicaSet) to make into one or more replica clusters. They must meet the requirements stated in Section 9.1, “InnoDB ClusterSet Requirements”. A minimum of three member servers in each replica cluster is recommended for tolerance of failures.
The user account that you use during the InnoDB ClusterSet
deployment procedure is the InnoDB Cluster server configuration
account from the primary cluster. This is the account that was
created on the primary cluster's member servers using a
dba.configureInstance()
command with the
clusterAdmin
option. Each member server has
only one server configuration account. The same user account name
and password must be used on every member server in the cluster,
and you need to create it on all the servers in the
InnoDB ClusterSet deployment. It is possible to use the
root
account as the InnoDB Cluster server
configuration account, but this is not recommended, because it
means the root
account on every member server
in the cluster must have the same password. For more information,
see Section 9.3, “User Accounts for InnoDB ClusterSet”.
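For example, a minimal sketch of creating such a server configuration account on one of the primary cluster's member servers, using dba.configureInstance() with the clusterAdmin option (the host and port shown match the sandbox instances used in the examples below; substitute your own values):
mysql-js> dba.configureInstance('root@127.0.0.1:3310', {clusterAdmin: 'icadmin'})
Repeat this for each member server of the primary cluster so that the same account exists everywhere.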
To set up the InnoDB ClusterSet deployment, follow this procedure:
Connect to any member server in the existing InnoDB Cluster with MySQL Shell, using the InnoDB Cluster server configuration account to make the connection. For example:
mysql-js> \connect icadmin@127.0.0.1:3310
Creating a session to 'icadmin@127.0.0.1:3310'
Please provide the password for 'icadmin@127.0.0.1:3310': **************
Save password for 'icadmin@127.0.0.1:3310'? [Y]es/[N]o/Ne[v]er (default No):
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 59
Server version: 8.0.27-commercial MySQL Enterprise Server - Commercial
No default schema selected; type \use <schema> to set one.
<ClassicSession:icadmin@127.0.0.1:3310>
In this example:
icadmin@127.0.0.1:3310 is the URI-like connection string for any member server instance that is online in the InnoDB Cluster.
The URI-like connection string is comprised of the following elements:
icadmin is the user name for the InnoDB Cluster server configuration account.
127.0.0.1:3310 is the host and port for the member server instance, as displayed by the cluster.status() command.
Issue a dba.getCluster()
command to get the
Cluster
object that represents the
InnoDB Cluster, assigning it to a variable so that you can
work with it. For example:
mysql-js> cluster1 = dba.getCluster()
<Cluster:clusterone>
In this example, clusterone is the name of the existing InnoDB Cluster, as shown in the clusterName field returned by the cluster.status() command, and the returned Cluster object is assigned to the variable cluster1.
It is important to do this when you are connected to the server instance using the InnoDB Cluster server configuration account. The returned object defaults to using the account it was fetched with for operations where permissions are required. Some operations during the InnoDB ClusterSet deployment process require permissions, and the default user account stored in the object is used for this, so that the process does not need to store any other user accounts.
Issue a cluster.createClusterSet() command, using the Cluster object, to create the InnoDB ClusterSet with the existing InnoDB Cluster as the primary cluster. For example:
mysql-js> myclusterset = cluster1.createClusterSet('testclusterset')
A new ClusterSet will be created based on the Cluster 'clusterone'.
* Validating Cluster 'clusterone' for ClusterSet compliance.
* Creating InnoDB ClusterSet 'testclusterset' on 'clusterone'...
* Updating metadata...
ClusterSet successfully created. Use ClusterSet.createReplicaCluster() to add Replica Clusters to it.
<ClusterSet:testclusterset>
In this example, clusterone
is the name of
the existing InnoDB Cluster, cluster1
is
the variable to which the returned Cluster
object was assigned, testclusterset
is the
name for the InnoDB ClusterSet that you are creating, and
myclusterset
is the variable to which the
returned ClusterSet
object is assigned.
The domainName
parameter is
required and specifies the name of the InnoDB ClusterSet
deployment that you are creating
(testclusterset
in the example).
The domainName
must be non-empty
and no greater than 63 characters long. It can only start
with an alphanumeric character or with _ (underscore), and
can only contain alphanumeric, _ (underscore), .
(period), or - (hyphen) characters.
Use the dryRun
option if you want to
carry out validations and log the changes without actually
executing them. For example:
mysql-js> myclusterset = cluster1.createClusterSet('testclusterset', {dryRun: true})
* Validating Cluster 'clusterone' for ClusterSet compliance.
NOTE: dryRun option was specified. Validations will be executed, but no changes will be applied.
* Creating InnoDB ClusterSet 'clusterset' on 'clusterone'...
* Updating metadata...
dryRun finished.
Use the clusterSetReplicationSslMode
option if you want to require or disable encryption
(TLS/SSL) for the replication channels in the
InnoDB ClusterSet deployment. The default setting,
AUTO
, enables encryption if the server
instance supports it, and disables it if it does not.
REQUIRED
enables encryption for all the
replication channels, and DISABLED
disables encryption for all the replication channels. For
example:
mysql-js> myclusterset = cluster1.createClusterSet("testclusterset", {clusterSetReplicationSslMode: 'REQUIRED'})
clusterSetReplicationSslMode
supports
VERIFY_CA
and
VERIFY_IDENTITY
. For example:
mysql-js> myclusterset = cluster.createClusterSet("testclusterset", {"clusterSetReplicationSslMode":"VERIFY_IDENTITY"});
When you issue the cluster.createClusterSet() command, MySQL Shell checks that the target InnoDB Cluster complies with the requirements to become the primary cluster in an InnoDB ClusterSet deployment, and returns an error if it does not. If the target InnoDB Cluster meets the requirements, MySQL Shell carries out the following setup tasks:
Updates the metadata schema to include InnoDB ClusterSet metadata.
Sets the
skip_replica_start
system
variable to ON
on all the member
servers so that replication threads are not automatically
started.
Adds the target InnoDB Cluster to the InnoDB ClusterSet in the metadata and marks it as the primary cluster.
Returns the ClusterSet
object that
represents the InnoDB ClusterSet.
Verify that the InnoDB ClusterSet deployment that you have created is healthy by issuing a clusterSet.status() command, using the returned ClusterSet object. For example:
mysql-js> myclusterset.status()
{
"clusters": {
"clusterone": {
"clusterRole": "PRIMARY",
"globalStatus": "OK",
"primary": "127.0.0.1:3310"
}
},
"domainName": "testclusterset",
"globalPrimaryInstance": "127.0.0.1:3310",
"primaryCluster": "clusterone",
"status": "HEALTHY",
"statusText": "All Clusters available."
}
You can also use a cluster.status() command to view the cluster itself. Alternatively, you can select the extended output for clusterSet.status() to see the detailed status for the clusters in the InnoDB ClusterSet topology. For example:
mysql-js> myclusterset.status({extended: 1})
{
"clusters": {
"clusterone": {
"clusterRole": "PRIMARY",
"globalStatus": "OK",
"primary": "127.0.0.1:3310",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:3310": {
"address": "127.0.0.1:3310",
"memberRole": "PRIMARY",
"mode": "R/W",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:3320": {
"address": "127.0.0.1:3320",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:3330": {
"address": "127.0.0.1:3330",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
}
},
"transactionSet": "953a51d5-2690-11ec-ba07-00059a3c7a00:1,c51c1b15-269e-11ec-b9ba-00059a3c7a00:1-86,c51c29ad-269e-11ec-b9ba-00059a3c7a00:1-8"
}
},
"domainName": "testclusterset",
"globalPrimaryInstance": "127.0.0.1:3310",
"metadataServer": "127.0.0.1:3310",
"primaryCluster": "clusterone",
"status": "HEALTHY",
"statusText": "All Clusters available."
}
See Section 9.7, “InnoDB ClusterSet Status and Topology” for more information and a description of the output from the clusterSet.status() command.
If you want to get the ClusterSet object representing the InnoDB ClusterSet for a connected server instance at any time, for example after restarting MySQL Shell, use a dba.getClusterSet() or cluster.getClusterSet() command. For example:
mysql-js> myclusterset = dba.getClusterSet()
<ClusterSet:testclusterset>
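If you still have a Cluster object, you can fetch the ClusterSet object from it instead; a minimal sketch, assuming the cluster1 variable assigned earlier in this procedure:
mysql-js> myclusterset = cluster1.getClusterSet()
<ClusterSet:testclusterset>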
Assigning a returned Cluster
or
ClusterSet
object to a variable enables you
to execute further operations against the cluster or
ClusterSet using the object's methods. The returned object
uses a new session, independent from MySQL Shell's global
session. This ensures that if you change the MySQL Shell
global session, the Cluster
or
ClusterSet
object maintains its session to
the server instance. Note that when you use the object, the
server instance from which you got it must still be online in
the InnoDB ClusterSet. If that server instance goes offline,
the object no longer works and you will need to get it again
from a server that is still online.
Create the InnoDB Cluster server configuration account on
each of the standalone server instances that will be part of
the replica cluster, by issuing a
dba.configureInstance()
command with the
clusterAdmin
option. The account to create
is the InnoDB Cluster server configuration account from the
primary cluster, which you used to create the ClusterSet.
Don't specify any of the InnoDB Cluster administrator accounts (created with cluster.setupAdminAccount()). These will be automatically transferred from the primary cluster to the replica clusters during the provisioning process.
You do not need to connect to the standalone server instances
beforehand, as the connection string is included in the
command. In the connection string, use an account with full
MySQL administrator permissions, including permissions to
create accounts (WITH GRANT OPTION
). In
this example, the root
account is used:
mysql-js> dba.configureInstance('root@127.0.0.1:4410', {clusterAdmin: 'icadmin'})
Please provide the password for 'root@127.0.0.1:4410': ***************
Save password for 'root@127.0.0.1:4410'? [Y]es/[N]o/Ne[v]er (default No):
Configuring local MySQL instance listening at port 4410 for use in an InnoDB cluster...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within
the same host.
This instance reports its own address as 127.0.0.1:4410
Password for new account: **************
Confirm password: **************
applierWorkerThreads will be set to the default value of 4.
The instance '127.0.0.1:4410' is valid to be used in an InnoDB cluster.
Cluster admin user 'icadmin' created.
The instance '127.0.0.1:4410' is already ready to be used in an InnoDB cluster.
Successfully enabled parallel appliers.
In this example, root@127.0.0.1:4410 is the URI-like connection string for the standalone server, and icadmin is the user name for the InnoDB Cluster server configuration account that will be created on the instance. For better security, specify the password for the InnoDB Cluster server configuration account at the interactive prompt as shown in the example, or you can provide it using the clusterAdminPassword option. The
dba.configureInstance()
command grants the
account the required permissions automatically, although you
may set up the account manually if you prefer, granting it the
permissions listed in
Configuring InnoDB Cluster Administrator Accounts Manually. For
more details of the dba.configureInstance()
command and its options, see
Section 8.4.2, “Configuring Production Instances for InnoDB Cluster Usage”.
When you issue dba.configureInstance()
,
MySQL Shell verifies that the server instance meets the
requirements for use with InnoDB Cluster. The requirements
for InnoDB ClusterSet will be checked when you issue the
commands to create the replica cluster and add instances to
it.
Connect to any active instance in the primary cluster that is
already in the InnoDB ClusterSet deployment, using the
InnoDB Cluster server configuration account. Ensure you
still have the ClusterSet
object that was
returned when you created the InnoDB ClusterSet, or fetch it
again using dba.getClusterSet() or cluster.getClusterSet().
Again, it is important to do this when you are connected to the server instance using the InnoDB Cluster server configuration account. The default user account stored in the object is used for some operations during the InnoDB ClusterSet deployment process, regardless of the account that you specify on the connection.
Issue a clusterSet.createReplicaCluster() command using the ClusterSet object to create the replica cluster, naming one of the standalone server instances. This server instance will be the replica cluster's primary. The command returns a Cluster object for the replica cluster, and you can assign this to a variable if you want. For example:
mysql-js> cluster2 = myclusterset.createReplicaCluster("127.0.0.1:4410", "clustertwo", {recoveryProgress: 1, timeout: 10})
Setting up replica 'clustertwo' of cluster 'clusterone' at instance '127.0.0.1:4410'.
A new InnoDB cluster will be created on instance '127.0.0.1:4410'.
Validating instance configuration at 127.0.0.1:4410...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within
the same host.
This instance reports its own address as 127.0.0.1:4410
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:44101'. Use the
localAddress option to override.
* Checking transaction state of the instance...
NOTE: The target instance '127.0.0.1:4410' has not been pre-provisioned (GTID set is empty). The
Shell is unable to decide whether replication can completely recover its state.
The safest and most convenient way to provision a new instance is through automatic clone
provisioning, which will completely overwrite the state of '127.0.0.1:4410' with a physical
snapshot from an existing clusterset member. To use this method by default, set the
'recoveryMethod' option to 'clone'.
WARNING: It should be safe to rely on replication to incrementally recover the state of the new
Replica Cluster if you are sure all updates ever executed in the ClusterSet were done with GTIDs
enabled, there are no purged transactions and the instance used to create the new Replica Cluster
contains the same GTID set as the ClusterSet or a subset of it. To use this method by default,
set the 'recoveryMethod' option to 'incremental'.
Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Waiting for clone process of the new member to complete. Press ^C to abort the operation.
* Waiting for clone to finish...
NOTE: 127.0.0.1:4410 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
NOTE: 127.0.0.1:4410 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:4410 has restarted, waiting for clone to finish...
** Stage FILE COPY: Completed
** Stage PAGE COPY: Completed
** Stage REDO COPY: Completed
** Stage FILE SYNC: Completed
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)
Creating InnoDB cluster 'clustertwo' on '127.0.0.1:4410'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
* Configuring ClusterSet managed replication channel...
** Changing replication source of 127.0.0.1:4410 to 127.0.0.1:3310
* Waiting for instance to synchronize with PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
* Updating topology
Replica Cluster 'clustertwo' successfully created on ClusterSet 'testclusterset'.
<Cluster:clustertwo>
For the clusterSet.createReplicaCluster() command:
The instance parameter is required and specifies the host and port number of the standalone server's MySQL Server instance. This is the server instance that is going to be the primary of the replica cluster. In the example command above, this is 127.0.0.1:4410.
The clusterName parameter is required and specifies an identifier for the replica cluster. In the example command above, clustertwo is used. The name must be unique in the InnoDB ClusterSet, and it must follow the InnoDB Cluster naming requirements. Only alphanumeric characters, hyphens (-), underscores (_), and periods (.) can be used, and the name must not start with a number. The maximum length is 63 characters. The cluster name is case sensitive.
Use the dryRun
option if you want to
carry out validations and log the changes without actually
executing them.
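For example, a minimal sketch of a dry run of creating the replica cluster, using the instance and cluster name from this procedure:
mysql-js> myclusterset.createReplicaCluster("127.0.0.1:4410", "clustertwo", {dryRun: true})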
Use the recoveryMethod
option if you
want to select a provisioning method. If you do not
specify this as an option, the default setting
AUTO
is used. In that case, the
function compares the GTID set on the server instance to
the GTID set on the primary cluster, and attempts to
determine the most appropriate provisioning method. If
this cannot be determined, the function prompts you to
select a provisioning method, or cancels the operation if
you are not in interactive mode.
The provisioning process, which is called distributed
recovery, can use cloning, where the state of the server
instance is completely overwritten by a physical snapshot
taken from an existing member server in the cluster. To
select this in advance, specify the
CLONE
setting. The alternative is
incremental state transfer from an existing member
server's binary log, in this case a member of the primary
cluster. Here, the server instance receives and applies
transactions from the primary cluster that it does not
already have. To select this in advance, specify the
INCREMENTAL
setting.
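For example, a minimal sketch that selects clone-based provisioning in advance, using the instance and cluster name from this procedure:
mysql-js> cluster2 = myclusterset.createReplicaCluster("127.0.0.1:4410", "clustertwo", {recoveryMethod: 'clone'})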
Use the cloneDonor
option if you want
to select a specific server to provide the snapshot that
overwrites the current server, if distributed recovery is
carried out by cloning. The operation chooses a secondary
member of the primary cluster by default, or the primary
if no secondary is available. The selected server instance
must be a member of the primary cluster in the
InnoDB ClusterSet. Specify a host and port number. IPv6
addresses are not supported for this option.
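For example, a minimal sketch that selects clone-based provisioning and nominates one of the primary cluster's secondaries (127.0.0.1:3320 in this procedure's examples) as the donor:
mysql-js> cluster2 = myclusterset.createReplicaCluster("127.0.0.1:4410", "clustertwo", {recoveryMethod: 'clone', cloneDonor: '127.0.0.1:3320'})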
Use the recoveryProgress
option to
specify the verbosity level (0, 1, or 2) for the
distributed recovery process. Setting 0 shows no progress
information, 1 shows detailed static progress information,
and 2 shows detailed dynamic progress information using
progress bars. 2 is the default if standard output is a
terminal, otherwise 1 is the default.
Use the timeout
option if you want to
set a timeout to wait for the server instance to
synchronize with the primary cluster after it has been
provisioned and the ClusterSet replication channel has
been established. By default there is no timeout.
Use the manualStartOnBoot
option to
specify whether Group Replication starts automatically and
rejoins the cluster when the MySQL server starts up, or
whether it must be started manually. The default,
false
, means Group Replication starts
automatically.
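For example, a minimal sketch that requires Group Replication to be started manually after a server restart:
mysql-js> cluster2 = myclusterset.createReplicaCluster("127.0.0.1:4410", "clustertwo", {manualStartOnBoot: true})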
Use the communicationStack
option to
define how the members communicate with one another, using
XCOM
or MYSQL
protocols. See
Section 8.5.9, “Configuring the Group Replication Communication Stack”.
If you are using MySQL 8.0.27 or higher, the default and recommended protocol is MYSQL.
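For example, a minimal sketch that selects the MYSQL communication stack explicitly (already the default on MySQL 8.0.27 and higher):
mysql-js> cluster2 = myclusterset.createReplicaCluster("127.0.0.1:4410", "clustertwo", {communicationStack: 'MYSQL'})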
The options memberSslMode
,
ipAllowlist
,
localAddress
,
exitStateAction
,
memberWeight
,
consistency
,
expelTimeout
, and
autoRejoinTries
are available if you
want to configure the setup of Group Replication for the
replica InnoDB Cluster. These options work in the same
way as they do for an InnoDB Cluster that is not part of
a ClusterSet. For details of the options, see
Section 8.5, “Configuring InnoDB Cluster”.
(Note:
ipAllowlist
and
localAddress
are available only for the
XCOM
communication stack.)
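For example, a minimal sketch that passes some of these Group Replication options when creating the replica cluster; the values shown are illustrative only:
mysql-js> cluster2 = myclusterset.createReplicaCluster("127.0.0.1:4410", "clustertwo", {memberWeight: 80, exitStateAction: 'READ_ONLY', autoRejoinTries: 3})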
It is possible to use the options
localAddress
and
groupName
to set a Group Replication
local address and group identifier. However, this is not
recommended, as incorrect values can cause errors in Group
Replication. Only use these options if you already
experienced an issue with the values selected by the
InnoDB ClusterSet setup process for these items.
When creating an InnoDB ClusterSet, if you have security
requirements that all accounts created automatically by
AdminAPI have strict authentication requirements, you can
set a value for the
replicationAllowedHost
configuration
option of the ClusterSet. The
replicationAllowedHost
MySQL Shell
option allows you to set internally managed replication
accounts for a ClusterSet to a strict subnet based filter
instead of the default wildcard value of %. The
replicationAllowedHost
option takes a
string value. For example, to create a clusterset called
my_clusterset_domain
and set the
replicationAllowedHost
option to
192.0.2.0/24
, issue:
mysql-js> <Cluster>.createClusterSet('my_clusterset_domain', {replicationAllowedHost:'192.0.2.0/24'})
If you change replicationAllowedHost
on
a ClusterSet, the account used for the replication channel
between clusters is changed to allow connections only from
the value you specify for
replicationAllowedHost
. The host must
be accessible in both the primary and replica clusters. If
not, there is no replication between clusters.
A ClusterSet can be modified after creation to set a
replicationAllowedHost
, by issuing:
mysql-js> <Clusterset>.setOption('replicationAllowedHost','192.0.2.0/24')
When you issue the clusterSet.createReplicaCluster() command, MySQL Shell checks that the target server instance complies with the requirements to become the primary server in a replica InnoDB Cluster in an InnoDB ClusterSet deployment, and returns an error if it does not. If the instance meets the requirements, MySQL Shell carries out the following setup tasks:
Creates the ClusterSet replication channel
clusterset_replication
, and creates a
replication user with a random password. This is an
asynchronous replication channel between the target
instance and the primary server of the primary cluster,
which is managed by InnoDB ClusterSet. Encryption is
configured for the channel according to the
clusterSetReplicationSslMode
option for
the InnoDB ClusterSet. MySQL Shell verifies that the
replication setup is working, and returns an error if it
is not.
Provisions the MySQL Server instance with the dataset from the primary InnoDB Cluster and synchronizes the GTID set, using the selected recovery method. Note that if there is a large amount of data in the ClusterSet's member servers, distributed recovery could take several hours.
Adds the InnoDB Cluster administrator accounts and the MySQL Router administrator accounts on the server instance. If the instance is provisioned by state transfer from the binary log, the provisioning process includes the transactions that create the accounts, or else the accounts are transferred during cloning. Either way, these accounts become available on the server instance. See Section 9.3, “User Accounts for InnoDB ClusterSet” for more information.
Configures and starts Group Replication for the replica cluster. The InnoDB ClusterSet replica cluster creation process overwrites any existing persisted Group Replication configuration options for which you specify new settings on the clusterSet.createReplicaCluster() command. It also always overwrites the following configuration options, even if you do not specify them on the command: group_replication_group_name, group_replication_group_seeds, group_replication_local_address, group_replication_view_change_uuid (versions 8.0.27 to 8.2.0 only), and group_replication_enforce_update_everywhere_checks.
However, any other Group Replication configuration options
that you changed on the server instance prior to using it
in the replica cluster are left as they were. See the
important note about this in
Section 9.1, “InnoDB ClusterSet Requirements”.
Sets the
skip_replica_start
system
variable to ON
so that replication
threads are not automatically started on the server, and
sets the super_read_only
system variable so that clients cannot write transactions
to the server.
Disables the Group Replication member action
mysql_disable_super_read_only_if_primary
so that super_read_only
remains set on the primary of the cluster after a view
change.
Enables the Group Replication member action mysql_start_failover_channels_if_primary so that asynchronous connection failover for replicas is enabled for the ClusterSet replication channel. With this function enabled, if the primary that is replicating goes offline or into an error state, the new primary starts replication on the same channel when it is elected.
Transfers the ClusterSet metadata to the server instance, creates the replica cluster in the InnoDB ClusterSet, and adds the target server instance to it as the primary.
Returns the Cluster
object for the
replica cluster.
Using the Cluster object that was returned for the replica cluster by clusterSet.createReplicaCluster(), issue a cluster.addInstance() command naming another of the standalone server instances. This server instance will be a secondary in the replica cluster. For example:
mysql-js> cluster2.addInstance('icadmin@127.0.0.1:4420')
NOTE: The target instance '127.0.0.1:4420' has not been pre-provisioned (GTID set is empty). The
Shell is unable to decide whether clone based recovery is safe to use.
The safest and most convenient way to provision a new instance is through automatic clone
provisioning, which will completely overwrite the state of '127.0.0.1:4420' with a physical
snapshot from an existing cluster member. To use this method by default, set the
'recoveryMethod' option to 'clone'.
Please select a recovery method [C]lone/[A]bort (default Clone): c
Validating instance configuration at localhost:4420...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within
the same host.
This instance reports its own address as 127.0.0.1:4420
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:44201'. Use the
localAddress option to override.
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
* Waiting for the Cluster to synchronize with the PRIMARY Cluster...
** Transactions replicated ############################################################ 100%
* Configuring ClusterSet managed replication channel...
** Changing replication source of 127.0.0.1:4420 to 127.0.0.1:3310
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and
let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: 127.0.0.1:4420 is being cloned from 127.0.0.1:4410
** Stage DROP DATA: Completed
** Clone Transfer
FILE COPY ############################################################ 100% Completed
PAGE COPY ############################################################ 100% Completed
REDO COPY ############################################################ 100% Completed
NOTE: 127.0.0.1:4420 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:4420 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)
State recovery already finished for '127.0.0.1:4420'
The instance '127.0.0.1:4420' was successfully added to the cluster.
For more details on the cluster.addInstance() command, see Section 8.4.4, “Adding Instances to an InnoDB Cluster”.
If you need to get the Cluster
object for
the replica cluster again, connect to any active instance in
the replica cluster using the InnoDB Cluster server
configuration account and issue
dba.getCluster()
. This account is used for
some of the operations in the setup process. If the setup
process finds that the account is not present on the
standalone server instance, an error is returned, and you will
need to issue dba.configureInstance()
to
create the account.
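For example, a minimal sketch of reconnecting to the replica cluster's primary with the server configuration account and fetching its Cluster object again:
mysql-js> \connect icadmin@127.0.0.1:4410
mysql-js> cluster2 = dba.getCluster()
<Cluster:clustertwo>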
When the cluster.addInstance() command is successful, the server instance is added to the replica cluster and provisioned with the data for the InnoDB ClusterSet. The donor for a cloning operation will be from the replica cluster, not the primary cluster.
Repeat the cluster.addInstance() operation to add all of the standalone server instances to the replica cluster. A minimum of three instances is recommended for tolerance to failures. You can have up to nine member servers in a replica cluster, which is a limit built into the underlying Group Replication technology.
Verify that the completed replica cluster and the InnoDB ClusterSet deployment are healthy. You can do this using a cluster.status() command to view the replica cluster, and a clusterSet.status() command to view the InnoDB ClusterSet deployment. Alternatively, you can select the extended output for clusterSet.status() to see the detailed status for all the clusters. For example:
mysql-js> myclusterset.status({extended: 1})
{
"clusters": {
"clusterone": {
"clusterRole": "PRIMARY",
"globalStatus": "OK",
"primary": "127.0.0.1:3310",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:3310": {
"address": "127.0.0.1:3310",
"memberRole": "PRIMARY",
"mode": "R/W",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:3320": {
"address": "127.0.0.1:3320",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:3330": {
"address": "127.0.0.1:3330",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
}
},
"transactionSet": "953a51d5-2690-11ec-ba07-00059a3c7a00:1,c51c1b15-269e-11ec-b9ba-00059a3c7a00:1-131,c51c29ad-269e-11ec-b9ba-00059a3c7a00:1-8"
},
"clustertwo": {
"clusterRole": "REPLICA",
"clusterSetReplication": {
"applierStatus": "APPLIED_ALL",
"applierThreadState": "Waiting for an event from Coordinator",
"applierWorkerThreads": 4,
"receiver": "127.0.0.1:4410",
"receiverStatus": "ON",
"receiverThreadState": "Waiting for source to send event",
"source": "127.0.0.1:3310"
},
"clusterSetReplicationStatus": "OK",
"globalStatus": "OK",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"127.0.0.1:4410": {
"address": "127.0.0.1:4410",
"memberRole": "PRIMARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:4420": {
"address": "127.0.0.1:4420",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
},
"127.0.0.1:4430": {
"address": "127.0.0.1:4430",
"memberRole": "SECONDARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.27"
}
},
"transactionSet": "0f6ff279-2764-11ec-ba06-00059a3c7a00:1-5,953a51d5-2690-11ec-ba07-00059a3c7a00:1,c51c1b15-269e-11ec-b9ba-00059a3c7a00:1-131,c51c29ad-269e-11ec-b9ba-00059a3c7a00:1-8",
"transactionSetConsistencyStatus": "OK",
"transactionSetErrantGtidSet": "",
"transactionSetMissingGtidSet": ""
}
},
"domainName": "testclusterset",
"globalPrimaryInstance": "127.0.0.1:3310",
"metadataServer": "127.0.0.1:3310",
"primaryCluster": "clusterone",
"status": "HEALTHY",
"statusText": "All Clusters available."
}
See Section 9.7, “InnoDB ClusterSet Status and Topology” for more information on the output of the clusterSet.status() command.
Add further replica clusters as required, by repeating the above steps with a different set of standalone instances. There is no defined limit on the number of replica clusters that you can have in an InnoDB ClusterSet deployment. The process is the same in each case, as summarized here:
Create the InnoDB Cluster server configuration account
on each of the standalone server instances by issuing a
dba.configureInstance()
command with
the clusterAdmin
option.
Fetch the ClusterSet object using dba.getClusterSet() or cluster.getClusterSet(), when you are connected to a member of the InnoDB ClusterSet using the InnoDB Cluster server configuration account. You can get the object from any member server in the primary cluster or in one of the replica clusters that you created already.
Issue a clusterSet.createReplicaCluster() command using the ClusterSet object to create the replica cluster, naming one of the standalone server instances.
Using the Cluster object that was returned for the replica cluster by clusterSet.createReplicaCluster(), issue a cluster.addInstance() command naming another of the standalone server instances.
Repeat the cluster.addInstance() operation to add all of the standalone server instances to the replica cluster.
Verify that the completed replica cluster and the InnoDB ClusterSet deployment are healthy, for example by using a clusterSet.status() command with extended output.
Bootstrap MySQL Router instances against the InnoDB ClusterSet to manage application traffic, and configure them as appropriate. By default, MySQL Router directs all read and write requests to whichever cluster is currently the primary cluster in an InnoDB ClusterSet deployment, but you can configure a MySQL Router instance to route traffic only to a specific cluster. For instructions, see Section 9.6, “Integrating MySQL Router With InnoDB ClusterSet”.