MySQL Shell 8.0.20 Release Notes
MySQL Shell now enables you to create and configure the MySQL
user accounts required by InnoDB Cluster, InnoDB ReplicaSet,
and MySQL Router using AdminAPI operations. Previously, accounts
required by InnoDB Cluster and InnoDB ReplicaSet had to be
configured using the clusterAdmin
option, and
accounts required by MySQL Router had to be configured manually
using SQL. The following AdminAPI operations are now
available:

- Use Cluster.setupAdminAccount(user, [options]) and
  ReplicaSet.setupAdminAccount(user, [options]) to configure a
  MySQL user account with the necessary privileges to administer
  an InnoDB Cluster or InnoDB ReplicaSet.

- Use Cluster.setupRouterAccount(user, [options]) and
  ReplicaSet.setupRouterAccount(user, [options]) to create a
  MySQL user account, or upgrade an existing account, so that it
  can be used by MySQL Router to operate on an InnoDB Cluster or
  InnoDB ReplicaSet. This is now the recommended method of adding
  MySQL Router accounts for use with InnoDB Cluster and
  InnoDB ReplicaSet.

(WL #13536)
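A minimal sketch of the new account-setup calls, assuming an existing MySQL Shell session to a cluster member; the account names, host pattern, and passwords are illustrative:

```js
// Retrieve the Cluster object from the current global session.
var cluster = dba.getCluster();

// Configure an account with the privileges needed to administer
// the cluster; the password option avoids an interactive prompt.
cluster.setupAdminAccount("icadmin@%", {password: "icadmin-secret"});

// Create or upgrade an account for MySQL Router to use.
cluster.setupRouterAccount("router@%", {password: "router-secret"});
```

The equivalent ReplicaSet methods are called the same way on a ReplicaSet object obtained with dba.getReplicaSet().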
AdminAPI now uses a locking mechanism to prevent different
operations from making changes to an InnoDB ReplicaSet
simultaneously. Previously, different instances of MySQL Shell
could connect to an InnoDB ReplicaSet at the same time and
execute AdminAPI operations simultaneously, which could lead to
inconsistent instance states and errors, for example if
ReplicaSet.setPrimaryInstance() and ReplicaSet.addInstance()
were executed in parallel.
Now, the InnoDB ReplicaSet operations have the following locking:

- dba.upgradeMetadata() and dba.createReplicaSet() are globally
  exclusive operations. This means that if MySQL Shell executes
  these operations on an InnoDB ReplicaSet, no other operations
  can be executed against the InnoDB ReplicaSet or any of its
  instances.

- ReplicaSet.setPrimaryInstance() and
  ReplicaSet.forcePrimaryInstance() are operations that change
  the primary. This means that if MySQL Shell executes these
  operations against an InnoDB ReplicaSet, no other operations
  which change the primary, or instance change operations, can
  be executed until the first operation completes.

- ReplicaSet.addInstance(), ReplicaSet.rejoinInstance(), and
  ReplicaSet.removeInstance() are operations that change an
  instance. This means that if MySQL Shell executes these
  operations on an instance, the instance is locked for any
  further instance change operations. However, this lock is only
  at the instance level, and multiple instances in an
  InnoDB ReplicaSet can each execute one of this type of
  operation simultaneously. In other words, at most one instance
  change operation can be executed at a time per instance in the
  InnoDB ReplicaSet.

- dba.getReplicaSet() and ReplicaSet.status() are
  InnoDB ReplicaSet read operations and do not require any
  locking.

(WL #13540)
References: See also: Bug #30349849.
Use the --replicaset
option to
configure MySQL Shell to work with an InnoDB ReplicaSet at
startup. You must specify a connection to a replica set
instance for this option to work correctly. If a replica set is
found, this option populates the rs
global
object, which can then be used to work with the
InnoDB ReplicaSet. As part of this addition, the
--redirect-primary
and
--redirect-secondary
options
have been updated to also work with InnoDB ReplicaSet.
When running MySQL Shell, use the
shell.connectToPrimary([connectionData, password]) operation to
check whether the target instance belongs to an InnoDB Cluster
or InnoDB ReplicaSet. If so, MySQL Shell opens a new session to
the primary, sets the global session to the established session,
and returns it. If no connectionData is provided, the current
global session is used.
(WL #13236)
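A sketch of the call, assuming connection data is passed as a URI string and the password separately; the host and credentials are illustrative:

```js
// Opens a session to the primary of the cluster or replica set
// that host-1 belongs to, sets it as the global session, and
// returns it.
var session = shell.connectToPrimary("user@host-1:3306", "secret");

// Called with no arguments, the current global session is used.
// var session = shell.connectToPrimary();
print(session.getUri());
```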
During distributed recovery that uses MySQL Clone, the instance
restarts after the data files are cloned, but if the instance
has to apply a large backlog of transactions to finish the
recovery process, the restart could take longer than the default
1 minute timeout. Now, when there is a large backlog, use the
dba.restartWaitTimeout option to configure a longer timeout and
ensure the apply process has time to process the transactions.
(Bug #30866632)
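dba.restartWaitTimeout is a MySQL Shell option, so one way to raise it before a clone-based operation is as follows (the value, in seconds, is chosen only for illustration):

```js
// Allow up to 10 minutes for the cloned instance to restart and
// apply its transaction backlog before the operation times out.
shell.options["dba.restartWaitTimeout"] = 600;
```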
The dba.deleteSandboxInstance()
operation did
not provide an error if you attempted to delete a sandbox which
did not exist. Now, in such a situation the
dba.deleteSandboxInstance()
operation throws
a RuntimeError.
(Bug #30863587)
The Cluster.forceQuorumUsingPartitionOf() operation was not
stopping Group Replication on any reachable instances that were
not part of the visible membership of the target instance, which
could lead to undefined behavior if any of those instances were
automatically rejoining the cluster. The fix stops
Group Replication on any reachable instances that are not
included in the new forced quorum membership.
(Bug #30739252)
It was possible for AdminAPI to select an invalidated instance
as the latest primary, despite it having a lower
view_id
. This was because the process of
getting the primary of the InnoDB ReplicaSet was incorrectly
reconnecting to an invalidated member if it was the last
instance in the InnoDB ReplicaSet.
(Bug #30735124)
When a cluster that was created with a MySQL Shell version
lower than 8.0.19 was offline (for example after a server
upgrade of the instances), if you then used MySQL Shell 8.0.19
to connect to the cluster,
dba.upgradeMetadata()
and
dba.rebootClusterFromCompleteOutage()
blocked
each other. You could not run
dba.upgradeMetadata()
because it requires
dba.rebootClusterFromCompleteOutage()
to be
run first to bring the cluster back online. And you could not
run dba.rebootClusterFromCompleteOutage()
because dba.upgradeMetadata()
had not been
run. To avoid this problem, upgrade to MySQL Shell
8.0.20, where the preconditions for
dba.rebootClusterFromCompleteOutage()
and
dba.forceQuorumUsingPartitionOf()
have been
updated to ensure they are compatible with clusters created
using earlier versions. In other words, they are available even
if the metadata was created using an older MySQL Shell version.
(Bug #30661129)
Using MySQL Clone as the distributed recovery method to add an
instance to an InnoDB ReplicaSet resulted in a segmentation
fault if the target instance did not support RESTART. Now, the
ReplicaSet.addInstance() operation aborts in such a situation
and reverts changes after the connection timeout limit is
reached. This is because the add operation needs to connect to
the target instance to finish the operation. In such a
situation, if it is not possible to upgrade the instance to a
version of MySQL which supports RESTART, you have to restart the
server manually, and then issue ReplicaSet.addInstance() again
to retry. The retry can then use incremental recovery, which
does not trigger the clone and the subsequent restart.
(Bug #30657911)
Cluster.addInstance() normally fails if there are errant GTIDs
in the added instance, but if MySQL Clone is being used for
distributed recovery, that check is bypassed because the cloning
process fixes the problem. However, if all members are using
IPv6, MySQL Clone cannot be used, so incremental recovery is
used instead. In such a situation, the instance was being added
to the cluster without errors, but the errant transaction
persisted. Now, if all of the cluster's online instances are
using IPv6 addresses and the operation tries to use MySQL Clone
for distributed recovery, an error is thrown.
(Bug #30645697)
When an InnoDB Cluster or InnoDB ReplicaSet is using the
MySQL Clone plugin, AdminAPI ensures the
performance_schema.clone_status
table is
cleared out when the clone process starts. However, in some rare
and very specific scenarios a race condition could happen and
the clone operation was considered to be running before actually
clearing out the table. In this situation, the MySQL Shell
clone monitoring could result in an unexpected halt.
As part of this fix, a potential infinite loop in the clone monitoring phase that could happen very rarely when the cloning process was extremely fast has also been fixed. (Bug #30645665)
Group Replication system variable queries were being executed
early, without considering whether the Group Replication plugin
was installed yet. Now, the reboot operation has been fixed so
that if system variable queries fail with
ER_UNKNOWN_SYSTEM_VARIABLE
then
the Group Replication plugin is installed automatically.
(Bug #30531848)
The Cluster.removeInstance(instance) operation was not correctly
handling the following cases:

- If the instance had report_host set to a different value from
  the instance_name in the metadata, specifying the instance
  using its IP address failed.

- If the instance was unreachable, it was not possible to remove
  it if the given address did not match the address in the
  metadata.

- When an instance was OFFLINE but reachable (for example
  because Group Replication stopped but the server was still
  running), Cluster.removeInstance(instance) failed. Now, in
  such a situation, if you are sure it is safe to remove the
  instance, use the force=true option, which means that
  synchronization is no longer attempted as part of the remove
  operation.

- If the instance was OFFLINE but reachable, removing the
  instance through an address that did not match what was in the
  metadata made the operation appear to succeed, but the
  instance was not actually removed from the metadata.

(Bug #30501628, Bug #30625424)
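For the OFFLINE-but-reachable case, a sketch of the forced removal; the instance address is illustrative:

```js
var cluster = dba.getCluster();

// force: true skips transaction synchronization, so only use it
// when you are sure nothing needs to be recovered from the
// instance being removed.
cluster.removeInstance("host-3:3306", {force: true});
```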
After operations such as removing an instance or dissolving a
cluster, the group_replication_recovery
and
group_replication_applier
replication
channels were not being removed.
(Bug #29922719, Bug #30878446)
The default location of the MySQL option file, for example
/etc/my.cnf
, stopped being detected by the
dba.configureInstance()
operation on some
platforms (Debian and so on). This was a regression. The fix
ensures that the predefined paths to option files match the
defaults, such as /etc/my.cnf
and
/etc/mysql/my.cnf
.
(Bug #96490, Bug #30171324)
A new method shell.openSession
is provided in
the shell
global object to let you create and
return a session object, rather than set it as the global
session for MySQL Shell.
(WL #13328)
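A sketch of the difference, with illustrative connection data:

```js
// shell.connect() would replace the global session;
// shell.openSession() returns an independent session object
// instead, leaving the global session untouched.
var extra = shell.openSession("user@reporting-host:33060");
var res = extra.sql("SELECT 1").execute();
extra.close();
```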
You can now request compression for MySQL Shell connections
that use X Protocol, as well as those that use
classic MySQL protocol. For X Protocol connections, the default is
that compression is requested, and uncompressed connections are
allowed if the negotiations for a compressed connection do not
succeed. For classic MySQL protocol connections, the default is that
compression is disabled. After the connection has been made, the
MySQL Shell \status
command shows whether or
not compression is in use for a session.
New compression controls in MySQL Shell let you specify in the connection parameters whether compression is required, preferred, or disabled, select compression algorithms for the connection, and specify a numeric compression level for the algorithms. (WL #13328)
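For example, compression can be requested in the connection data itself; the host and the chosen setting are illustrative:

```js
// "required" makes the connection fail if compression cannot be
// negotiated; "preferred" and "disabled" are the other settings.
var s = shell.openSession("user@host-1:33060?compression=required");
```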
When you create an extension object for MySQL Shell, the
options
key is no longer required when you
specify a parameter of the data type "dictionary". If you do
define options for a dictionary, MySQL Shell validates the
options specified by the end user and raises an error if an
option is passed to the function that is not in this list. If
you create a dictionary with no list of options, any options
that the end user specifies for the dictionary are passed
directly through to the function by MySQL Shell with no
validation.
(Bug #30986260)
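A sketch of an extension object member that takes a dictionary parameter; every name here (the global object, member, and option names) is illustrative:

```js
var obj = shell.createExtensionObject();

shell.addExtensionObjectMember(obj, "greet",
  function (options) {
    var name = (options && "name" in options) ? options["name"] : "world";
    println("Hello, " + name + "!");
  },
  {
    brief: "Prints a greeting.",
    parameters: [
      {
        name: "options",
        type: "dictionary",
        required: false,
        // Because this options list is defined, MySQL Shell
        // rejects any dictionary key other than "name". Omitting
        // the list would pass all keys through unvalidated.
        options: [
          {name: "name", type: "string", brief: "Who to greet."}
        ]
      }
    ]
  });

shell.registerGlobal("demo", obj, {brief: "Example extension object."});
```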
A bug in MySQL Shell 8.0.19, affecting classic MySQL protocol connections only, meant that access was denied if a user had stored the connection's password with MySQL Shell and then changed it. The password store now removes invalid passwords and presents the user with a password prompt as expected. (Bug #30912984, Bug #98503)
When MySQL Shell's \source
command was used
in interactive mode to execute code from a script file,
multi-line SQL statements in the script file could cause
MySQL Shell to enter a loop of repeatedly executing the script.
The issue has now been fixed.
(Bug #30906751, Bug #98625)
If a stored procedure was called in MySQL Shell but its result was not used, any subsequent SQL statement returned a result set error, and exiting MySQL Shell at that point resulted in an incorrect shutdown. MySQL Shell cleared the first result set retrieved by a stored procedure in order to run a subsequent SQL statement, but did not check for any additional result sets that had been retrieved, which were left behind and caused the error. This check is now carried out and the additional result sets are discarded before another statement is executed. (Bug #30825330)
Due to a regression in MySQL Shell 8.0.19, the upgrade checker
utility checkForServerUpgrade()
did not
accept any runtime options if connection data was not provided
as the first argument. The issue has been fixed and the
utility's argument checking has been enhanced.
(Bug #30689606)
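Both calling styles now work; a sketch with illustrative connection data and option values:

```js
// Options only: connection data is taken from the global session.
util.checkForServerUpgrade({outputFormat: "JSON"});

// Connection data first, then options.
util.checkForServerUpgrade("user@host-1:3306", {targetVersion: "8.0.20"});
```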
MySQL Shell, which now bundles Python 3.7.4, could not be built from source with Python 3.8. The incompatibilities have now been corrected so Python 3.8 may be used. (Bug #30640012)
MySQL Shell's upgrade checker utility
checkForServerUpgrade()
did not flag removed
system variables that were specified using hyphens rather than
underscores. The utility also now continues with its sequence of
checks if a permissions check cannot be performed at the
required time.
(Bug #30615030, Bug #97855)
MySQL Shell's \status
command showed that a
connection was compressed if the connection had been created
while starting MySQL Shell, but not if it was created after
starting MySQL Shell. Compression is now shown in both cases.
(Bug #29006903)