MySQL Shell 9.3
To upgrade the server instances in an InnoDB ClusterSet, complete the following steps:
Check the versions of the installed binaries:
mysqlrouter --version: Checks the version of MySQL Router installed.
mysqlsh --version: Checks the version of MySQL Shell installed.
mysqld --version: Checks the version of MySQL Server installed.
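The three checks above can be scripted as a quick sketch. The version-string parsing below assumes the typical "Ver X.Y.Z" format that these binaries print; verify it against your installation before relying on it.

```shell
#!/bin/sh
# Extract the numeric version (for example, 8.0.28) from the typical
# "Ver X.Y.Z" output of mysqld/mysqlsh/mysqlrouter --version.
# The output format is an assumption; check your installed binaries.
extract_version() {
  printf '%s\n' "$1" | sed -n 's/.*Ver \([0-9][0-9.]*\).*/\1/p'
}

for bin in mysqlrouter mysqlsh mysqld; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: $(extract_version "$("$bin" --version)")"
  fi
done
```

Binaries that are not installed on the current host are silently skipped, so the same sketch can run on hosts that carry only MySQL Router or only MySQL Server.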
To upgrade MySQL Router, complete the following steps:
Stop MySQL Router.
On a Unix system, if you used the optional
--directory
bootstrap option, a self-contained
installation is created with all generated directories and
files at the location you selected when you
bootstrapped
the router. These files include stop.sh.
Navigate to this directory and issue this command:
./stop.sh
On Microsoft Windows, if you used the optional
--directory
bootstrap option, a self-contained
installation is created with all generated directories and
files at the location you selected when you
bootstrapped
the router. These files include stop.ps1.
Navigate to this directory and issue this command:
.\stop.ps1
Or, on a Linux system using systemd, stop the MySQL Router service by issuing:
systemctl stop mysqlrouter.service
Otherwise, kill the process ID (PID) of the associated mysqlrouter process.
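When killing the process manually, a --directory bootstrap typically records the router's PID in a mysqlrouter.pid file under the bootstrap directory. A minimal sketch, assuming that PID-file name and location (adjust both for your deployment):

```shell
#!/bin/sh
# Stop a mysqlrouter process via its PID file. The file name and its
# location under the bootstrap directory are assumptions; adjust as needed.
stop_by_pidfile() {
  pidfile="$1"
  [ -f "$pidfile" ] || return 1
  pid=$(cat "$pidfile")
  kill "$pid" 2>/dev/null   # send SIGTERM to the router process
  rm -f "$pidfile"
}
```

Alternatively, pkill -f mysqlrouter terminates any matching process, but the PID-file approach is more precise when several routers run on one host.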
Obtain and install the latest version of MySQL Router.
Start MySQL Router.
On a Unix system, if you used the optional
--directory
bootstrap option, a self-contained
installation is created with all generated directories and
files at the location you selected. These files include
start.sh. Navigate to the directory and issue this command:
./start.sh
If the path to the new router has changed, you must update
the start.sh
shell script to reflect the path.
#!/bin/bash
basedir=/tmp/myrouter
ROUTER_PID=$basedir/mysqlrouter.pid
/usr/bin/mysqlrouter -c $basedir/mysqlrouter.conf &
disown %-
If you upgrade MySQL Router manually, as opposed to using package management, you can update the basedir= value.
Bootstrapping the router again also regenerates the
start.sh
shell script.
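For illustration, a re-bootstrap of a --directory deployment might look like the following. This sketch only prints the command (a dry run) so you can inspect it first; the account, host, port, and directory are placeholders for this example, and --force tells mysqlrouter to reconfigure over the existing bootstrap directory.

```shell
#!/bin/sh
# Dry run: build and print the re-bootstrap command instead of running it.
# The account, host, port, and directory below are placeholders.
ROUTER_DIR=/tmp/myrouter
CLUSTER_URI="clusteradmin@primary-host:3306"

CMD="mysqlrouter --bootstrap $CLUSTER_URI --directory $ROUTER_DIR --force"
echo "$CMD"   # inspect, then run it manually once you are satisfied
```

Running the printed command regenerates start.sh and stop.sh in the chosen directory.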
Or, on a Linux system using systemd, start the MySQL Router service by issuing:
systemctl start mysqlrouter.service
On Microsoft Windows, if you used the optional
--directory
bootstrap option, a self-contained
installation is created with all generated directories and
files at the location you selected. These files include
start.ps1. Navigate to the directory and issue this command:
.\start.ps1
When you start MySQL Router using the new binaries, the upgraded version of the router is in use, which you can confirm by issuing:
mysqlrouter --version
Upgrade MySQL Shell by installing the new binaries, and stopping and starting MySQL Shell:
Obtain and install the latest version of MySQL Shell.
Quit MySQL Shell by issuing:
\q
Restart MySQL Shell from the command line, by issuing:
mysqlsh
Upgrade the InnoDB ClusterSet Metadata:
To upgrade a ClusterSet, connect MySQL Shell's global session to your ClusterSet and use the dba.upgradeMetadata() operation to upgrade the ClusterSet's metadata to the latest version.
If a ClusterSet's metadata schema needs to be upgraded, then the upgrade process itself should be performed in the ClusterSet's primary Cluster.
The Metadata Upgrade may do nothing if the InnoDB ClusterSet already uses the latest version.
Upgrade MySQL Server by upgrading all the replica clusters' instances first and then upgrading the primary cluster's instances.
When upgrading each cluster, primary or replica clusters, upgrade all secondary instances before upgrading the primary instance.
Upgrading MySQL Server is optional. Server upgrades can have a greater impact than upgrading MySQL Shell and MySQL Router. Also, you should always keep MySQL Shell and MySQL Router at the latest version, even if the server is not; this is true for InnoDB Clusters and ReplicaSets.
For details on upgrading with Group Replication, see Upgrading a Group Replication Member.
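The rolling order described above (replica clusters before the primary cluster, and within each cluster all secondaries before the primary) can be sketched as a small helper. The instance addresses used in the test are illustrative placeholders:

```shell
#!/bin/sh
# Print instances in safe upgrade order: secondaries first, primary last.
# First argument: the cluster's primary instance; remaining arguments:
# the secondary instances.
upgrade_order() {
  primary="$1"; shift
  for secondary in "$@"; do
    echo "$secondary"
  done
  echo "$primary"
}
```

Apply the helper once per replica cluster first, and to the primary cluster last, to mirror the ClusterSet-wide ordering.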
Stop MySQL Server by issuing one of the following commands:
If MySQL Server is using systemd issue:
systemctl stop mysqld
If MySQL Server is using init.d issue:
/etc/init.d/mysql stop
If MySQL Server is using service issue:
service mysql stop
If you deployed MySQL Server on Microsoft Windows issue:
mysqladmin -u root -p shutdown
Obtain and install the latest version of MySQL Server.
Start MySQL Server by issuing one of the following commands:
If MySQL Server is using systemd issue:
systemctl start mysqld
If MySQL Server is using init.d issue:
/etc/init.d/mysql start
If MySQL Server is using service issue:
service mysql start
If you deployed MySQL Server on Microsoft Windows issue:
mysqld
When all the secondary instances are upgraded, upgrade the primary instance to complete the upgrade process.
After MySQL Router, MySQL Shell, and MySQL Server are upgraded:
Check the status of the ClusterSet by issuing <ClusterSet>.status(). For more information about <ClusterSet>.status(), see Section 9.7, “InnoDB ClusterSet Status and Topology”.
Resolve any clusterErrors and statusText issues returned by the <ClusterSet>.status() operation.
Check each Cluster in the ClusterSet by issuing <Cluster>.status() and resolve any issues. For more information about <Cluster>.status(), see Checking a Cluster's Status with Cluster.status().
Check the details of all the registered MySQL Router instances by issuing <ClusterSet>.listRouters(). For more information, see Integrating MySQL Router With InnoDB ClusterSet.
These commands allow you to check whether the upgrade has been successful or whether you need to complete any additional steps. The additional steps depend on how many versions you are skipping and on which version you are upgrading from and to.
Begin your post-upgrade check by checking the status of the InnoDB ClusterSet. This check uses the <ClusterSet>.status({extended:1}) operation.
In this example, we issue
<ClusterSet>.status({extended:1})
:
mysql-js> <ClusterSet>.status({extended:1})
{
"clusters": {
"cluster1": {
"clusterRole": "PRIMARY",
"globalStatus": "OK",
"primary": "127.0.0.1:3310",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"127.0.0.1:3310": {
"address": "127.0.0.1:3310",
"memberRole": "PRIMARY",
"mode": "R/W",
"status": "ONLINE",
"version": "8.0.28"
}
},
"transactionSet": "c0361cad-9093-11ec-94ce-0a0027000010:1-90,c0362acf-9093-11ec-94ce-0a0027000010:1"
},
"replicacluster1": {
"clusterErrors": [
"ERROR: Cluster members are reachable but they're all OFFLINE.",
"WARNING: Replication from the Primary Cluster not in expected state"
],
"clusterRole": "REPLICA",
"clusterSetReplication": {
"applierStatus": "OFF",
"applierThreadState": "",
"applierWorkerThreads": 4,
"receiver": "127.0.0.1:3320",
"receiverStatus": "OFF",
"receiverThreadState": "",
"source": "127.0.0.1:3310"
},
"clusterSetReplicationStatus": "STOPPED",
"globalStatus": "NOT_OK",
"status": "OFFLINE",
"statusText": "All members of the group are OFFLINE",
"topology": {
"127.0.0.1:3320": {
"address": "127.0.0.1:3320",
"instanceErrors": [
"NOTE: group_replication is stopped."
],
"memberRole": "SECONDARY",
"memberState": "OFFLINE",
"mode": "R/O",
"status": "(MISSING)",
"version": "8.0.28"
}
},
"transactionSet": "1ec95a0b-9094-11ec-9bc5-0a0027000010:1,c0361cad-9093-11ec-94ce
-0a0027000010:1-90,c0362acf-9093-11ec-94ce-0a0027000010:1",
"transactionSetConsistencyStatus": "OK",
"transactionSetErrantGtidSet": "",
"transactionSetMissingGtidSet": ""
},
"replicacluster2": {
"clusterRole": "REPLICA",
"clusterSetReplication": {
"applierStatus": "APPLIED_ALL",
"applierThreadState": "Waiting for an event from Coordinator",
"applierWorkerThreads": 4,
"receiver": "127.0.0.1:3330",
"receiverStatus": "ON",
"receiverThreadState": "Waiting for source to send event",
"source": "127.0.0.1:3310"
},
"clusterSetReplicationStatus": "OK",
"globalStatus": "OK",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"127.0.0.1:3330": {
"address": "127.0.0.1:3330",
"memberRole": "PRIMARY",
"mode": "R/O",
"replicationLagFromImmediateSource": "",
"replicationLagFromOriginalSource": "",
"status": "ONLINE",
"version": "8.0.28"
}
},
"transactionSet": "329dc243-9094-11ec-b9dd-0a0027000010:1,c0361cad-9093-11ec
-94ce-0a0027000010:1-90,c0362acf-9093-11ec-94ce-0a0027000010:1",
"transactionSetConsistencyStatus": "OK",
"transactionSetErrantGtidSet": "",
"transactionSetMissingGtidSet": ""
}
},
"domainName": "clusterset1",
"globalPrimaryInstance": "127.0.0.1:3310",
"metadataServer": "127.0.0.1:3310",
"primaryCluster": "cluster1",
"status": "AVAILABLE",
"statusText": "Primary Cluster available, there are issues with a Replica cluster."
}
For more information about the
<ClusterSet>.status()
operation, see
ClusterSet.status().
Resolve any errors returned by the
<ClusterSet>.status({extended:1})
operation.
In this example, an error returned in clusterErrors informs us that the <ClusterSet>.status({extended:1}) operation was unable to connect to any online members, and the statusText shows that the primary cluster is available but there are issues with a replica cluster in the InnoDB ClusterSet.
...
"replicacluster1": {
    "clusterErrors": [
        "ERROR: Could not connect to any ONLINE members but there are unreachable instances that could still be ONLINE."
    ],
    ...
    "statusText": "Primary Cluster available, there are issues with a Replica cluster."
}
In this example, we need to check the status of the InnoDB
Cluster replicacluster1
and ensure it is
brought back online.
Once we have resolved the issues returned by the
<ClusterSet>.status({extended:1})
operation, we check the status of each InnoDB Cluster in the
ClusterSet.
Check the status of each InnoDB Cluster by issuing <Cluster>.status().
In the following example,
<Cluster>.status({extended: true})
, used
to provide more detailed information about the status of the
InnoDB Cluster, returns two issues:
mysqlsh> cluster.status({extended: true});
{
    "clusterName": "MyCluster",
    "defaultReplicaSet": {
        "GRProtocolVersion": "8.0.16",
        "groupName": "459ec434-8926-11ec-b8c3-02001707f44a",
        "groupViewChangeUuid": "AUTOMATIC",
        "groupViewId": "16443558036060755:13",
        "name": "default",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "example-el7-1644251369:33311": {
                "address": "example-el7-1644251369:33311",
                "applierWorkerThreads": 4,
                "fenceSysVars": [],
                "instanceErrors": [
                    "NOTE: instance server_id is not registered in the metadata. Use cluster.rescan() to update the metadata.",
                    "NOTE: The required parallel-appliers settings are not enabled on the instance. Use dba.configureInstance() to fix it."
                ],
                "memberId": "247131ab-8926-11ec-850b-02001707f44a",
                "memberRole": "PRIMARY",
                "memberState": "ONLINE",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "example-el7-1644251369:33314": {
                "address": "example-el7-1644251369:33314",
                "applierWorkerThreads": 4,
                "fenceSysVars": [],
                "instanceErrors": [
                    "NOTE: instance server_id is not registered in the metadata. Use cluster.rescan() to update the metadata.",
                    "NOTE: The required parallel-appliers settings are not enabled on the instance. Use dba.configureInstance() to fix it."
                ],
                "memberId": "303dcfa7-8926-11ec-a6e5-02001707f44a",
                "memberRole": "PRIMARY",
                "memberState": "ONLINE",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            },
            "example-el7-1644251369:33317": {
                "address": "example-el7-1644251369:33317",
                "applierWorkerThreads": 4,
                "fenceSysVars": [],
                "instanceErrors": [
                    "NOTE: instance server_id is not registered in the metadata. Use cluster.rescan() to update the metadata.",
                    "NOTE: The required parallel-appliers settings are not enabled on the instance. Use dba.configureInstance() to fix it."
                ],
                "memberId": "3bb2592e-8926-11ec-8b6f-02001707f44a",
                "memberRole": "PRIMARY",
                "memberState": "ONLINE",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.28"
            }
        },
        "topologyMode": "Multi-Primary"
    },
    "groupInformationSourceMember": "example-el7-1644251369:33311",
    "metadataVersion": "2.1.0"
}
<Cluster>.status({extended: true}) displays more detailed information about the cluster. In this example, we use the Boolean value true, which is equivalent to <Cluster>.status({'extended':1}). For more information, see Checking a Cluster's Status with Cluster.status().
The instanceErrors
suggest that in this
upgrade, we should issue
<Cluster>.rescan()
and
dba.configureInstance()
on each member in the
InnoDB Cluster:
...
"NOTE: instance server_id is not registered in the metadata. Use cluster.rescan() to update the metadata.",
"NOTE: The required parallel-appliers settings are not enabled on the instance. Use dba.configureInstance() to fix it."
...
The <Cluster>.rescan()
operation enables
you to rescan the InnoDB Cluster for new and obsolete Group
Replication instances, as well as changes in the used
topology mode. For more information, see
Rescanning a Cluster.
mysqlsh> cluster1.rescan();
Rescanning the cluster...

Result of the rescanning operation for the 'MyCluster1' cluster:
{
    "name": "MyCluster1",
    "newTopologyMode": null,
    "newlyDiscoveredInstances": [],
    "unavailableInstances": [],
    "updatedInstances": []
}
You can only run <Cluster>.rescan()
on
the individual Clusters of the ClusterSet, not the
ClusterSet as a whole.
The dba.configureInstance() function checks all of the settings required to enable the instance for InnoDB Cluster usage. For more information, see Configuring Production Instances for InnoDB Cluster Usage.
In this example, we issue dba.configureInstance() on each member in the Cluster, to ensure the required parallel-appliers settings are enabled on the instance:
mysqlsh> dba.configureInstance('cladmin:cladminpw@localhost:33311')
The instance 'example-el7-1644251369:33311' belongs to an InnoDB Cluster.
Configuring local MySQL instance listening at port 33311 for use in an InnoDB cluster...

This instance reports its own address as example-el7-1644251369:33311
Clients and other cluster members will communicate with it through this address by
default. If this is not correct, the report_host MySQL system variable should be changed.

applierWorkerThreads will be set to the default value of 4.

NOTE: Some configuration options need to be fixed:
+----------------------------------------+---------------+----------------+----------------------------+
| Variable                               | Current Value | Required Value | Note                       |
+----------------------------------------+---------------+----------------+----------------------------+
| binlog_transaction_dependency_tracking | COMMIT_ORDER  | WRITESET       | Update the server variable |
+----------------------------------------+---------------+----------------+----------------------------+
Configuring instance...
The instance 'example-el7-1644251369:33311' was configured to be used in an InnoDB cluster.
Once you have resolved the issues returned by the <ClusterSet>.status({extended:1}) and <Cluster>.status({extended:1}) operations, you must run <ClusterSet>.listRouters().
<ClusterSet>.listRouters() returns details of all the registered MySQL Router instances. The details provide information about each registered MySQL Router instance, such as its name in the metadata, the hostname, ports, and so on. For more information, see Integrating MySQL Router With InnoDB ClusterSet.
For example, on our example ClusterSet we issue:
mysqlsh> <ClusterSet>.listRouters();
WARNING: The following Routers were bootstrapped before the ClusterSet was created: [EXAMPLE::R1].
Please re-bootstrap the Routers to ensure the optimal configurations are set.
{
    "domainName": "MyClusterSet",
    "routers": {
        "EXAMPLE::R1": {
            "hostname": "EXAMPLE",
            "lastCheckIn": "2022-02-23 07:14:50",
            "roPort": 6447,
            "roXPort": 6449,
            "routerErrors": [
                "WARNING: Router needs to be re-bootstraped."
            ],
            "rwPort": 6446,
            "rwXPort": 6448,
            "targetCluster": null,
            "version": "8.0.28"
        }
    }
}
The returned information shows:
The name of the MySQL Router instance.
Last check-in timestamp, which is generated by a periodic ping from the MySQL Router stored in the metadata.
Hostname where the MySQL Router instance is running.
Read-Only and Read-Write ports which the MySQL Router publishes for classic MySQL protocol connections.
Read-Only and Read-Write ports which the MySQL Router publishes for X Protocol connections.
The name of the target cluster. In this example, MySQL Router directs traffic from client applications to the cluster in the InnoDB ClusterSet deployment that is currently the primary cluster.
Version of this MySQL Router instance.
In this example, routerErrors information is also returned. The routerErrors entry informs us that MySQL Router needs to be re-bootstrapped. The reason for this error is that the Router was bootstrapped against a standalone Cluster; after you create a ClusterSet based on that Cluster, MySQL Router must be bootstrapped again to inform the Router that it is working on a ClusterSet.
Resolve these warnings to complete your post-upgrade checks. If you do not receive any warnings, your post-upgrade checks are complete.