MySQL Shell's instance dump utility util.dumpInstance() and schema dump utility util.dumpSchemas() support the export of all schemas or a selected schema from an on-premise MySQL instance into an Oracle Cloud Infrastructure Object Storage bucket or a set of local files. The table dump utility util.dumpTables() supports the same operations for a selection of tables or views from a schema. The exported items can then be imported into a HeatWave Service DB System or a MySQL Server instance using the util.loadDump() utility (see Section 12.6, “Dump Loading Utility”). To get the best functionality, always use the most recent version available of MySQL Shell's dump and dump loading utilities.
MySQL Shell's instance dump utility, schema dump utility, and table dump utility provide Oracle Cloud Infrastructure Object Storage streaming, HeatWave Service compatibility checks and modifications, parallel dumping with multiple threads, and file compression, which are not provided by mysqldump. Progress information is displayed during the dump. You can carry out a dry run with your chosen set of dump options to show information about what actions would be performed, what items would be dumped, and (for the instance dump utility and schema dump utility) what HeatWave Service compatibility issues would need to be fixed, when you run the utility for real with those options.
When choosing a destination for the dump files, note that for import into a HeatWave Service DB System, the MySQL Shell instance where you run the dump loading utility must be installed on an Oracle Cloud Infrastructure Compute instance that has access to the HeatWave Service DB System. If you dump the instance, schema, or tables to an Object Storage bucket, you can access the Object Storage bucket from the Compute instance. If you create the dump files on your local system, you need to transfer them to the Oracle Cloud Infrastructure Compute instance using the copy utility of your choice, depending on the operating system you chose for your Compute instance.
The dumps created by MySQL Shell's instance dump utility, schema dump utility, and table dump utility comprise DDL files specifying the schema structure and tab-separated .tsv files containing the data. You can also choose to produce the DDL files only or the data files only, if you want to set up the exported schema as a separate exercise from populating it with the exported data. You can choose whether or not to lock the instance for backup during the dump for data consistency. By default, the dump utilities chunk table data into multiple data files and compress the files.
You can use options for the utilities to include or exclude specified schemas and tables, users and their roles and grants, events, routines, and triggers. If you specify conflicting include and exclude options or name an object that is not included in the dump, an error is reported and the dump stops so you can correct the options. If you need to dump the majority of the schemas in a MySQL instance, as an alternative strategy, you can use the instance dump utility rather than the schema dump utility and specify the excludeSchemas option to list those schemas that are not to be dumped. Similarly, if you need to dump the majority of the tables in a schema, you can use the schema dump utility with the excludeTables option rather than the table dump utility.
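As an illustrative sketch of this exclusion strategy (the schema and table names here are hypothetical), the two approaches might look like this in MySQL Shell's JavaScript mode:
shell-js> util.dumpInstance("instdump", { "excludeSchemas": ["test_archive", "scratch"] })
shell-js> util.dumpSchemas(["hr"], "hrdump", { "excludeTables": ["hr.audit_log"] })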
The data for the mysql.apply_status, mysql.general_log, mysql.schema, and mysql.slow_log tables is always excluded from a dump created by MySQL Shell's schema dump utility, although their DDL statements are included. The information_schema, mysql, ndbinfo, performance_schema, and sys schemas are always excluded from an instance dump.
By default, the time zone is standardized to UTC in all the timestamp data in the dump output, which facilitates moving data between servers with different time zones and handling data that has multiple time zones. You can use the tzUtc: false option to keep the original timestamps if preferred.
The MySQL Shell dump loading utility util.loadDump() supports loading exported instances and schemas from an Object Storage bucket using a pre-authenticated request (PAR). For information about loading dumps using a PAR, see Section 12.6, “Dump Loading Utility”.
MySQL Shell's instance dump utility, schema dump utility, and table dump utility are partition aware (see Partitioning, in the MySQL Manual). When a table being dumped is partitioned, each partition is treated as an independent table; if the table has subpartitions, each subpartition is treated as an independent table. This also means that, when chunking is enabled, each partition or subpartition of a partitioned or subpartitioned table is chunked independently. The base names of dump files created for partitioned tables use the format schema@table@partition, where schema and table are, respectively, the names of the parent schema and table, and partition is the URL-encoded name of the partition or subpartition.
To manage additions of features that are not supported by earlier versions of the MySQL Shell utilities, util.dumpInstance(), util.dumpSchemas(), util.dumpTables(), and util.loadDump() write a list of features used in creating the dump to the dump metadata file; for each such feature, an element is added to the list. When the dump loading utility reads the metadata file and finds an unsupported feature listed, it reports an error; the error message includes a version of MySQL Shell that supports the feature.
The instance dump utility, schema dump utility, and table dump utility only support General Availability (GA) releases of MySQL Server versions.
MySQL 5.7 or later is required for the destination MySQL instance where the dump will be loaded.
For the source MySQL instance, dumping from MySQL 5.7 or later is fully supported in all MySQL Shell releases where the utilities are available.
Object names in the instance or schema must be in the latin1 or utf8 character set.
Data consistency is guaranteed only for tables that use the InnoDB storage engine.
The minimum required set of privileges that the user account used to run the utility must have on all the schemas involved is as follows: EVENT, RELOAD, SELECT, SHOW VIEW, and TRIGGER.
If the consistent option is set to true, which is the default, the LOCK TABLES privilege on all dumped tables can substitute for the RELOAD privilege if the latter is not available.
If the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the dump. If this check fails, an instance dump is stopped, but a schema dump or a table dump continues and returns an error message to alert the user that the consistency check failed.
If the consistent option is set to false, the BACKUP_ADMIN and RELOAD privileges are not required.
If the dump is from a MySQL 5.6 instance and includes user accounts (which is possible only with the instance dump utility), the SUPER privilege is also required.
The user account used to run the utility needs the REPLICATION CLIENT privilege in order for the utility to be able to include the binary log file name and position in the dump metadata. If the user ID does not have that privilege, the dump continues but does not include the binary log information. The binary log information can be used after loading the dumped data into the replica server to set up replication with a non-GTID source server, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement.
The upload method used to transfer files to an Oracle Cloud Infrastructure Object Storage bucket has a file size limit of 1.2 TiB.
The utilities convert columns with data types that are not safe to be stored in text form (such as BLOB) to Base64. The size of these columns therefore must not exceed approximately 0.74 times the value of the max_allowed_packet system variable (in bytes) that is configured on the target MySQL instance.
For the table dump utility, exported views and triggers must not use qualified names to reference other views or tables.
The table dump utility does not dump routines, so any routines referenced by the dumped objects (for example, by a view that uses a function) must already exist when the dump is loaded.
For import into a HeatWave Service DB System, set the ocimds option to true to ensure compatibility with HeatWave Service.
When migrating to HeatWave Service, it is recommended to always use the latest available version of MySQL Shell.
For compatibility with HeatWave Service, all tables must use the InnoDB storage engine. The ocimds option checks for any exceptions found in the dump, and the compatibility option alters the dump files to replace other storage engines with InnoDB.
For the instance dump utility and schema dump utility, for compatibility with HeatWave Service, all tables in the instance or schema must be located in the MySQL data directory and must use the default schema encryption. The ocimds option alters the dump files to apply these requirements.
HeatWave Service uses partial_revokes=ON, which means database-level user grants on schemas which contain wildcards, such as _ or %, are reported as errors. You can also use the compatibility options ignore_wildcard_grants and strip_invalid_grants. See Options for HeatWave Service and Oracle Cloud Infrastructure for more information.
A number of other security related restrictions and requirements apply to items such as tablespaces and privileges for compatibility with HeatWave Service. The ocimds option checks for any exceptions found during the dump, and the compatibility option automatically alters the dump files to resolve some of the compatibility issues. You might need (or prefer) to make some changes manually. For more details, see the description for the compatibility option.
For HeatWave Service High Availability, which uses Group Replication, primary keys are required on every table. The ocimds option checks and reports an error for any tables in the dump that are missing primary keys. The compatibility option can be set to ignore missing primary keys if you do not need them, or to notify MySQL Shell's dump loading utility to add primary keys in invisible columns where they are not present. For details, see the description for the compatibility option. If possible, instead of managing this in the utility, consider creating primary keys in the tables on the source server before dumping them again.
If any of the dump utilities are run against MySQL 5.7 with "ocimds": true, util.checkForServerUpgrade is run automatically, unless these checks are disabled by skipUpgradeChecks. Pre-upgrade checks are run depending on the type of objects included in the dump.
The instance dump utility, schema dump utility, and table dump utility use the MySQL Shell global session to obtain the connection details of the target MySQL server from which the export is carried out. You must open the global session (which can have an X Protocol connection or a classic MySQL protocol connection) before running one of the utilities. The utilities open their own sessions for each thread, copying options such as connection compression and SSL options from the global session, and do not make any further use of the global session.
In the MySQL Shell API, the instance dump utility, schema dump utility, and table dump utility are functions of the util global object, and have the following signatures:
util.dumpInstance(outputUrl[, options])
util.dumpSchemas(schemas, outputUrl[, options])
util.dumpTables(schema, tables, outputUrl[, options])
options is a dictionary of options that can be omitted if it is empty. The available options for the instance dump utility, schema dump utility, and table dump utility are listed in the remaining sections in this topic.
For the schema dump utility, schemas specifies a list of one or more schemas to be dumped from the MySQL instance.
For the table dump utility, schema specifies the schema that contains the items to be dumped, and tables is an array of strings specifying the tables or views to be dumped. The table dump includes the information required to set up the specified schema in the target MySQL instance, although it can be loaded into an alternative target schema by using the dump loading utility's schema option.
The table dump utility can be used to select individual tables from a schema, for example if you want to transfer tables between schemas. In this example in MySQL Shell's JavaScript mode, the tables employees and salaries from the hr schema are exported to the local directory emp, which the utility creates in the current working directory:
shell-js> util.dumpTables("hr", [ "employees", "salaries" ], "emp")
To dump all of the views and tables from the specified schema, use the all option and set the tables parameter to an empty array, as in this example:
shell-js> util.dumpTables("hr", [], "emp", { "all": true })
If you are dumping to the local filesystem, outputUrl is a string specifying the path to a local directory where the dump files are to be placed. You can specify an absolute path or a path relative to the current working directory. You can prefix a local directory path with the file:// scheme. In this example, the connected MySQL instance is dumped to a local directory, with some modifications made in the dump files for compatibility with HeatWave Service. The user first carries out a dry run to inspect the schemas and view the compatibility issues, then runs the dump with the appropriate compatibility options applied to remove the issues:
shell-js> util.dumpInstance("C:/Users/hanna/worlddump", {dryRun: true, ocimds: true}) Checking for compatibility with HeatWave Service 8.0.33 ... Compatibility issues with HeatWave Service 8.0.33 were found. Please use the 'compatibility' option to apply compatibility adaptations to the dumped DDL. Util.dumpInstance: Compatibility issues were found (RuntimeError) shell-js> util.dumpInstance("C:/Users/hanna/worlddump", { > ocimds: true, compatibility: ["strip_definers", "strip_restricted_grants"]})
The target directory must be empty before the export takes place. If the directory does not yet exist in its parent directory, the utility creates it. For an export to a local directory, the directories created during the dump are created with the access permissions rwxr-x---, and the files are created with the access permissions rw-r----- (on operating systems where these are supported). The owner of the files and directories is the user account that is running MySQL Shell.
If you are dumping to an Oracle Cloud Infrastructure Object Storage bucket, outputUrl is a path that will be used to prefix the dump files in the bucket, to simulate a directory structure. Use the osBucketName option to provide the name of the Object Storage bucket, and the osNamespace option to identify the namespace for the bucket. In this example, the user dumps the world schema from the connected MySQL instance to an Object Storage bucket, with the same compatibility modifications as in the previous example:
shell-js> util.dumpSchemas(["world"], "worlddump", { > "osBucketName": "hanna-bucket", "osNamespace": "idx28w1ckztq", > "ocimds": "true", "compatibility": ["strip_definers", "strip_restricted_grants"]})
In the Object Storage bucket, the dump files all appear with the prefix worlddump, for example:
worlddump/@.done.json
worlddump/@.json
worlddump/@.post.sql
worlddump/@.sql
worlddump/world.json
worlddump/world.sql
worlddump/world@city.json
worlddump/world@city.sql
worlddump/world@city@@0.tsv.zst
worlddump/world@city@@0.tsv.zst.idx
...
The namespace for an Object Storage bucket is displayed in the Bucket Information tab of the bucket details page in the Oracle Cloud Infrastructure console, or can be obtained using the Oracle Cloud Infrastructure command line interface. A connection is established to the Object Storage bucket using the default profile in the default Oracle Cloud Infrastructure CLI configuration file, or alternative details that you specify using the ociConfigFile and ociProfile options. For instructions to set up a CLI configuration file, see SDK and CLI Configuration File.
dryRun: [ true | false ]
Display information about what would be dumped with the specified set of options, and about the results of HeatWave Service compatibility checks (if the ocimds option is specified), but do not proceed with the dump. Setting this option enables you to list out all of the compatibility issues before starting the dump. The default is false.
showProgress: [ true | false ]
Display (true) or hide (false) progress information for the dump. The default is true if stdout is a terminal (tty), such as when MySQL Shell is in interactive mode, and false otherwise. The progress information includes the estimated total number of rows to be dumped, the number of rows dumped so far, the percentage complete, and the throughput in rows and bytes per second.
threads: int
The number of parallel threads to use to dump chunks of data from the MySQL instance. Each thread has its own connection to the MySQL instance. The default is 4.
maxRate: "string"
The maximum number of bytes per second per thread for data read throughput during the dump. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used (for example, setting 100M limits throughput to 100 megabytes per second per thread). Setting 0 (which is the default value), or setting the option to an empty string, means no limit is set.
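For example (the output directory name instdump is arbitrary), a dump could combine the threads and maxRate options to use eight threads while capping each thread's read throughput at 100 megabytes per second:
util.dumpInstance("instdump", { threads: 8, maxRate: "100M" })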
defaultCharacterSet: "string"
The character set to be used during the session connections that are opened by MySQL Shell to the server for the dump. The default is utf8mb4. The session value of the system variables character_set_client, character_set_connection, and character_set_results are set to this value for each connection. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance.
consistent: [ true | false ]
Enable (true) or disable (false) consistent data dumps by locking the instance for backup during the dump. The default is true.
When true is set, the utility sets a global read lock using the FLUSH TABLES WITH READ LOCK statement (if the user ID used to run the utility has the RELOAD privilege), or a series of table locks using LOCK TABLES statements (if the user ID does not have the RELOAD privilege but does have LOCK TABLES). The transaction for each thread is started using the statements SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ and START TRANSACTION WITH CONSISTENT SNAPSHOT. When all threads have started their transactions, the instance is locked for backup (as described in LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE Statements) and the global read lock is released.
If the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the dump. If this check fails, an instance dump is stopped, but a schema dump or a table dump continues and returns an error message to alert the user that the consistency check failed.
skipConsistencyChecks: [ true | false ]
Enable (true) or disable (false) the extra consistency check performed when consistent: true. The default is false. This option is ignored if consistent: false.
skipUpgradeChecks: [true | false]
The default is false. Set this option to true to disable the upgrade checks which are normally run by default when ocimds: true. Compatibility issues related to MySQL version upgrades are then not checked. Use this option only when executing the Upgrade Checker separately.
tzUtc: [ true | false ]
Include a statement at the start of the dump to set the time zone to UTC. All timestamp data in the dump output is converted to this time zone. The default is true, so timestamp data is converted by default. Setting the time zone to UTC facilitates moving data between servers with different time zones, or handling a set of data that has multiple time zones. Set this option to false to keep the original timestamps if preferred.
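For example (the schema and directory names are hypothetical), to keep the original timestamps in the dump output:
util.dumpSchemas(["hr"], "hrdump", { tzUtc: false })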
compression: "string;level=n"
The compression type and level of compression to use when creating the dump files. The following compression options are available:
none: No compression is applied.
gzip: Uses the gzip compression library. Compression level can be set from 0 to 9. Default compression level is 1. For example: "compression": "gzip;level=4"
zstd: Default. Uses the zstd compression library. Compression level can be set from 1 to 22. Default compression level is 1. For example: "compression": "zstd;level=15"
checksum: [ true | false ]
If enabled, a metadata file, @.checksums.json, is generated with the dump. This file contains the checksum data for the dump, enabling data verification when loading the dump. See Options for Load Control.
The following conditions apply if checksum: true:
If ddlOnly: false and chunking: true, a checksum is generated for each dumped table and partition chunk.
If ddlOnly: false and chunking: false, a checksum is generated for each dumped table and table partition.
If ddlOnly: true, a checksum is generated for each dumped table and table partition.
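For example (the directory name is arbitrary), the following sketch produces a dump that includes the @.checksums.json metadata file, which the dump loading utility can later use for verification:
util.dumpInstance("instdump", { checksum: true })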
chunking: [ true | false ]
Enable (true) or disable (false) chunking for table data, which splits the data for each table into multiple files. The default is true, so chunking is enabled by default. Use bytesPerChunk to specify the chunk size. If you set the chunking option to false, chunking does not take place and the utility creates one data file for each table.
If a table has no primary key or unique index, chunking is done based on the number of rows in the table, the average row length, and the bytesPerChunk value.
bytesPerChunk: "string"
Sets the approximate number of bytes to be written to each data file when chunking is enabled. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used. The default is 64 MB. Specifying this option sets chunking to true implicitly. The utility aims to chunk the data for each table into files each containing this amount of data before compression is applied. The chunk size is an average and is calculated based on table statistics and explain plan estimates.
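A minimal sketch (the schema and directory names are hypothetical) that requests larger chunks of approximately 128 megabytes each:
util.dumpSchemas(["world"], "worlddump", { bytesPerChunk: "128M" })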
dialect: [default|csv|csv-unix|tsv]
Specify a set of field- and line-handling options for the format of the exported data file. You can use the selected dialect as a base for further customization, by also specifying one or more of the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options to change the settings.
The default dialect produces a data file matching what would be created using a SELECT...INTO OUTFILE statement with the default settings for that statement. .txt is an appropriate file extension to assign to these output files. Other dialects are available to export CSV files for either DOS or UNIX systems (.csv), and TSV files (.tsv).
The settings applied for each dialect are as follows:
Table 12.3 Dialect settings for table export utility
| dialect | linesTerminatedBy | fieldsTerminatedBy | fieldsEnclosedBy | fieldsOptionallyEnclosed | fieldsEscapedBy |
|---|---|---|---|---|---|
| default | [LF] | [TAB] | [empty] | false | \ |
| csv | [CR][LF] | , | '"' | true | \ |
| csv-unix | [LF] | , | '"' | false | \ |
| tsv | [CR][LF] | [TAB] | '"' | true | \ |
The carriage return and line feed values for the dialects are operating system independent.
If you use the linesTerminatedBy, fieldsTerminatedBy, fieldsEnclosedBy, fieldsOptionallyEnclosed, and fieldsEscapedBy options, depending on the escaping conventions of your command interpreter, the backslash character (\) might need to be doubled if you use it in the option values.
Like the MySQL server with the SELECT...INTO OUTFILE statement, MySQL Shell does not validate the field- and line-handling options that you specify. Inaccurate selections for these options can cause data to be exported partially or incorrectly. Always verify your settings before starting the export, and verify the results afterwards.
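As an illustrative sketch (the schema, table, and directory names are hypothetical), a table dump in the csv-unix dialect with a customized field separator might be requested like this:
util.dumpTables("hr", ["employees"], "emp", { dialect: "csv-unix", fieldsTerminatedBy: ";" })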
linesTerminatedBy: "characters"
One or more characters (or an empty string) with which the utility terminates each of the lines in the exported data file. The default is as for the specified dialect, or a linefeed character (\n) if the dialect option is omitted. This option is equivalent to the LINES TERMINATED BY option for the SELECT...INTO OUTFILE statement. Note that the utility does not provide an equivalent for the LINES STARTING BY option for the SELECT...INTO OUTFILE statement, which is set to the empty string.
fieldsTerminatedBy: "characters"
One or more characters (or an empty string) with which the utility terminates each of the fields in the exported data file. The default is as for the specified dialect, or a tab character (\t) if the dialect option is omitted. This option is equivalent to the FIELDS TERMINATED BY option for the SELECT...INTO OUTFILE statement.
fieldsEnclosedBy: "character"
A single character (or an empty string) with which the utility encloses each of the fields in the exported data file. The default is as for the specified dialect, or the empty string if the dialect option is omitted. This option is equivalent to the FIELDS ENCLOSED BY option for the SELECT...INTO OUTFILE statement.
fieldsOptionallyEnclosed: [ true | false ]
Whether the character given for fieldsEnclosedBy is to enclose all of the fields in the exported data file (false), or to enclose a field only if it has a string data type such as CHAR, BINARY, TEXT, or ENUM (true). The default is as for the specified dialect, or false if the dialect option is omitted. This option makes the fieldsEnclosedBy option equivalent to the FIELDS OPTIONALLY ENCLOSED BY option for the SELECT...INTO OUTFILE statement.
fieldsEscapedBy: "character"
The character that is to begin escape sequences in the exported data file. The default is as for the specified dialect, or a backslash (\) if the dialect option is omitted. This option is equivalent to the FIELDS ESCAPED BY option for the SELECT...INTO OUTFILE statement. If you set this option to the empty string, no characters are escaped, which is not recommended because special characters used by SELECT...INTO OUTFILE must be escaped.
where:
A key-value pair comprising a valid table identifier, of the form schemaName.tableName, and a valid SQL condition expression used to filter the data being exported.
The SQL is validated only when it is executed. If you are exporting many tables, any SQL-syntax-related issues will only be seen late in the process. As such, it is recommended you test your SQL condition before using it in a long-running export process.
In the following example, where exports only those rows of the tables sakila.actor and sakila.actor_info in which the value of actor_id is greater than 150, to a local folder named out:
util.dumpTables("sakila", ["actor","actor_info"], "out", {"where" : {"sakila.actor": "actor_id > 150", "sakila.actor_info": "actor_id > 150"}})
partitions: {schemaName.tableName: ["string","string",..]}
A list of valid partition names which limits the export to the specified partitions.
For example, to export only the partitions named p1 and p2 from the table schema.table: partitions: {schema.table: ["p1", "p2"]}.
The following example exports the partitions p1 and p2 from table1 and the partition p2 from table2:
util.dumpTables("schema", ["table","table2"], "out", {"partitions" : { "schema.table1": ["p1", "p2"],"schema.table2": ["p2"]}})
ddlOnly: [ true | false ]
Setting this option to true includes only the DDL files for the dumped items in the dump, and does not dump the data. The default is false.
dataOnly: [ true | false ]
Setting this option to true includes only the data files for the dumped items in the dump, and does not include DDL files. The default is false.
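For illustration (the schema and directory names are hypothetical), the DDL and the data could be dumped in two separate passes:
util.dumpSchemas(["hr"], "hr_ddl", { ddlOnly: true })
util.dumpSchemas(["hr"], "hr_data", { dataOnly: true })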
users: [ true | false ]
(Instance dump utility only) Include (true) or exclude (false) users and their roles and grants in the dump. The default is true, so users are included by default. The schema dump utility and table dump utility do not include users, roles, and grants in a dump.
You can use the excludeUsers or includeUsers option to specify individual user accounts to be excluded or included in the dump files. These options can also be used with MySQL Shell's dump loading utility util.loadDump() to exclude or include individual user accounts at the point of import, depending on the requirements of the target MySQL instance.
If dumping users from a MySQL 5.6 instance, the user performing the dump must have the SUPER privilege.
excludeUsers: array of strings
(Instance dump utility only) Exclude the named user accounts from the dump files. You can use it to exclude user accounts that are not accepted for import to a HeatWave Service DB System, or that already exist or are not wanted on the target MySQL instance. Specify each user account string in the format "'user_name'@'host_name'" for an account that is defined with a user name and host name, or "'user_name'" for an account that is defined with a user name only. If you do not supply a host name, all accounts with that user name are excluded.
includeUsers: array of strings
(Instance dump utility only) Include only the named user accounts in the dump files. Specify each user account string as for the excludeUsers option. This option is an alternative to excludeUsers if only a few user accounts are required in the dump. You can also specify both options to include some accounts and exclude others.
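A minimal sketch (the account names and directory are hypothetical) that excludes two accounts from an instance dump:
util.dumpInstance("instdump", { excludeUsers: ["'admin1'@'localhost'", "'report_user'"] })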
excludeSchemas: array of strings
(Instance dump utility only) Exclude the named schemas from the dump. Note that the information_schema, mysql, ndbinfo, performance_schema, and sys schemas are always excluded from an instance dump.
includeSchemas: array of strings
(Instance dump utility only) Include only the named schemas in the dump. You cannot include the information_schema, mysql, ndbinfo, performance_schema, or sys schemas by naming them on this option. If you want to dump one or more of these schemas, you can do this using the schema dump utility util.dumpSchemas().
excludeTables: array of strings
(Instance dump utility and schema dump utility only) Exclude the named tables from the dump. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed. Tables named by the excludeTables option do not have DDL files or data files in the dump. Note that the data for the mysql.apply_status, mysql.general_log, mysql.schema, and mysql.slow_log tables is always excluded from a schema dump, although their DDL statements are included, and you cannot include that data by naming the table in another option or utility.
Schema and table names containing multi-byte characters must be surrounded with backticks.
includeTables: array of strings
(Instance dump utility and schema dump utility only) Include only the named tables in the dump. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed.
Schema and table names containing multi-byte characters must be surrounded with backticks.
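For example (the schema and table names are hypothetical), to dump only two tables from a schema:
util.dumpSchemas(["hr"], "hrdump", { includeTables: ["hr.employees", "hr.salaries"] })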
events: [ true | false ]
(Instance dump utility and schema dump utility only) Include (true) or exclude (false) events for each schema in the dump. The default is true.
excludeEvents: array of strings
(Instance dump utility and schema dump utility only) Exclude the named events from the dump. Names of events must be qualified with a valid schema name, and quoted with the backtick character if needed.
includeEvents: array of strings
(Instance dump utility and schema dump utility only) Include only the named events in the dump. Event names must be qualified with a valid schema name, and quoted with the backtick character if needed.
routines: [ true | false ]
(Instance dump utility and schema dump utility only) Include (true) or exclude (false) functions and stored procedures for each schema in the dump. The default is true. Note that user-defined functions are not included, even when routines is set to true.
excludeRoutines: array of strings
(Instance dump utility and schema dump utility only) Exclude the named functions and stored procedures from the dump. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
includeRoutines: array of strings
(Instance dump utility and schema dump utility only) Include only the named functions and stored procedures in the dump. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
all: [ true | false ]
(Table dump utility only) Setting this option to true includes all views and tables from the specified schema in the dump. The default is false. When you use this option, set the tables parameter to an empty array, for example:
shell-js> util.dumpTables("hr", [], "emp", { "all": true })
triggers: [ true | false ]
(All dump utilities) Include (true) or exclude (false) triggers for each table in the dump. The default is true.
excludeTriggers: array of strings
(All dump utilities) Exclude the named triggers from the dump. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can exclude all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
includeTriggers: array of strings
(All dump utilities) Include only the named triggers in the dump. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can include all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
libraries: [ true | false ]
(Instance dump utility and schema dump utility only) Include (true, default) or exclude (false) libraries from the dump. If set to true, dumps all libraries in the source instance. If the libraries option is not provided, it is assumed to be true and all libraries in the source instance are included in the dump.
If the target version is set to a version which does not support libraries, and the dump contains libraries, a warning is displayed.
See Using JavaScript Libraries for information on libraries.
includeLibraries: array of strings
(Instance dump utility and schema dump utility only) List of library objects to be included in the dump, in the format schema.library.
For example:
"includeLibraries": [ "`sakila`.`library1`", "`sakila`.`library2`" ]
excludeLibraries: array of strings
(Instance dump utility and schema dump utility only) List of library objects to be excluded from the dump, in the format schema.library.
For example:
"excludeLibraries": [ "`sakila`.`library1`", "`sakila`.`library2`" ]
osBucketName: "string"
The name of the Oracle Cloud Infrastructure Object Storage bucket to which the dump is to be written. By default, the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file located at ~/.oci/config is used to establish a connection to the bucket. You can substitute an alternative profile to be used for the connection with the ociConfigFile and ociProfile options. For instructions to set up a CLI configuration file, see SDK and CLI Configuration File.
osNamespace: "string"
The Oracle Cloud Infrastructure namespace where the Object Storage bucket named by osBucketName is located. The namespace for an Object Storage bucket is displayed in the Bucket Information tab of the bucket details page in the Oracle Cloud Infrastructure console, or can be obtained using the Oracle Cloud Infrastructure command line interface.
ociConfigFile: "string"
An Oracle Cloud Infrastructure CLI configuration file that contains the profile to use for the connection, instead of the one in the default location ~/.oci/config.
ociProfile: "string"
The profile name of the Oracle Cloud Infrastructure profile to use for the connection, instead of the [DEFAULT] profile in the Oracle Cloud Infrastructure CLI configuration file used for the connection.
ociAuth: "string"
The authentication method to use when connecting to Oracle Cloud Infrastructure. This option requires that osBucketName is configured with a valid value. The following options are available:
api_key: OCI connections use the OCI configuration file. See Section 4.7.1, “Oracle Cloud Infrastructure Object Storage”. If osBucketName is defined with a valid value, but ociAuth is not defined, api_key is the default value used.
instance_principal: OCI connections use instance principal authentication. See Instance Principal Authentication. This option cannot be used if ociConfigFile or ociProfile are defined.
resource_principal: OCI connections use resource principal authentication. See Resource Principal Authentication. This option cannot be used if ociConfigFile or ociProfile are defined.
security_token: OCI connections use a temporary, generated session token. See Session Token-Based Authentication.
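As an illustrative sketch (the bucket name, namespace, and directory prefix are placeholders), a dump from an Oracle Cloud Infrastructure Compute instance using instance principal authentication might look like this:
util.dumpInstance("instdump", { osBucketName: "my-bucket", osNamespace: "mynamespace", ociAuth: "instance_principal" })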
ocimds: [ true | false ]
Setting this option to true enables checks and modifications for compatibility with HeatWave Service. The default is false.
When migrating to HeatWave Service, it is recommended to always use the latest available version of MySQL Shell.
When this option is set to true, DATA DIRECTORY, INDEX DIRECTORY, and ENCRYPTION options in CREATE TABLE statements are commented out in the DDL files, to ensure that all tables are located in the MySQL data directory and use the default schema encryption. Checks are carried out for any storage engines in CREATE TABLE statements other than InnoDB, for grants of unsuitable privileges to users or roles, and for other compatibility issues. If any non-conforming SQL statement is found, an exception is raised and the dump is halted. Use the dryRun option to list out all of the issues with the items in the dump before the dumping process is started. Use the compatibility option to automatically fix the issues in the dump output.
This option is set to false by default and is only enabled if set to true explicitly.
If any of the dump utilities are run against MySQL 5.7 with "ocimds": true, util.checkForServerUpgrade is run automatically. Pre-upgrade checks are run depending on the type of objects included in the dump.
targetVersion: n.n.n
Define the version of the target MySQL instance, in n.n.n format, such as 8.1.0. If the value is not set, the MySQL Shell version is used. The compatibility checks are adjusted depending on the value of targetVersion.
compatibility: array of strings
Apply the specified requirements for compatibility with HeatWave Service for all tables in the dump output, altering the dump files as necessary. The following modifications can be specified as an array of strings:
force_innodb
Change CREATE TABLE statements to use the InnoDB storage engine for any tables that do not already use it.
skip_invalid_accounts
Remove user accounts created with external authentication plugins that are not supported in HeatWave Service. This option also removes user accounts that do not have passwords set, except where an account with no password is identified as a role, in which case it is dumped using the CREATE ROLE statement.
strip_definers
This option is not required if the destination HeatWave Service instance is version 8.2.0 or higher.
SET_USER_ID, deprecated in MySQL 8.2.0 and removed in 8.4.0, is replaced by SET_ANY_DEFINER and ALLOW_NONEXISTENT_DEFINER. This change impacts the way MySQL Shell handles dumps for use in HeatWave Service (ocimds: true) because the administrator user has the SET_ANY_DEFINER privilege and is able to execute statements with the DEFINER clause. This was not possible in previous versions.
Remove the DEFINER clause from views, routines, events, and triggers, so these objects are created with the default definer (the user invoking the schema), and change the SQL SECURITY clause for views and routines to specify INVOKER instead of DEFINER. HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. If your security model requires that views and routines have more privileges than the account querying or calling them, you must manually modify the schema before loading it.
strip_restricted_grants
Remove specific privileges that are restricted by HeatWave Service from GRANT statements, so users and their roles cannot be given these privileges (which would cause user creation to fail). This option also removes REVOKE statements for system schemas (mysql and sys) if the administrative user account on an Oracle Cloud Infrastructure Compute instance does not itself have the relevant privileges, so cannot remove them.
strip_tablespaces
Remove the TABLESPACE clause from CREATE TABLE statements, so all tables are created in their default tablespaces. HeatWave Service has some restrictions on tablespaces.
ignore_missing_pks
Make the instance, schema, or table dump utility ignore any missing primary keys when the dump is carried out, so that the ocimds option can still be used without the dump stopping due to this check. Dumps created with this modification cannot be loaded into a HeatWave Service High Availability instance, because primary keys are required for HeatWave Service High Availability, which uses Group Replication. To add the missing primary keys instead, use the create_invisible_pks modification, or consider creating primary keys in the tables on the source server.
ignore_wildcard_grants
If enabled, ignores errors from grants on schemas with wildcards, which are interpreted differently in systems where the partial_revokes system variable is enabled.
strip_invalid_grants
If enabled, strips grant statements which would fail when users are loaded, such as grants referring to a specific routine which does not exist.
unescape_wildcard_grants
If enabled, strips escape characters in grants on schemas, replacing escaped \_ and \% wildcards in schema names with _ and % wildcard characters. When the partial_revokes system variable is enabled, the \ character is treated as a literal, which could lead to unexpected results. It is strongly recommended to check each such grant before enabling this option.
create_invisible_pks
Add a flag in the dump metadata to notify MySQL Shell’s dump loading utility to add primary keys in invisible columns, for each table that does not contain a primary key. This modification enables a dump where some tables lack primary keys to be loaded into a HeatWave Service High Availability instance. Primary keys are required for HeatWave Service High Availability, which uses Group Replication.
The dump data is unchanged by this modification, as the tables do not contain the invisible columns until they have been processed by the dump loading utility. The invisible columns (which are named "my_row_id") have no impact on applications that use the uploaded tables.
Adding primary keys in this way does not yet enable inbound replication of the modified tables to a High Availability instance, as that feature currently requires the primary keys to exist in both the source server and the replica server. If possible, instead of using this modification, consider creating primary keys in the tables on the source server, before dumping them again. You can do this with no impact to applications by using invisible columns to hold the primary keys. This is a best practice for performance and usability, and helps the dumped database to work seamlessly with HeatWave Service.
MySQL Shell's dump loading utility can only be used to load dumps created with the create_invisible_pks option on a target MySQL instance version 8.0.24 or later, due to a limitation on hidden columns in MySQL 8.0.23.
force_non_standard_fks
In MySQL 8.4.0, restrict_fk_on_non_standard_key was added, prohibiting creation of non-standard foreign keys when enabled; that is, keys that reference non-unique keys or partial fields of composite keys. HeatWave Service DB Systems have this variable enabled by default, which causes dumps with such tables to fail to load. This option disables checks for non-standard foreign keys, and configures the dump loader to set the session value of the restrict_fk_on_non_standard_key variable to OFF. Creation of foreign keys with non-standard keys may cause replication to fail.
MySQL Shell supports dumping MySQL data to S3-compatible buckets, such as Amazon Web Services (AWS) S3.
MySQL Shell supports AWS S3 configuration in command line options, environment variables, and configuration files. Command line options override environment variables, configuration files, and default options.
For information on configuration requirements, see Section 4.7, “Cloud Service Configuration”.
s3BucketName: "string"
The name of the S3 bucket to which the dump is to be written. By default, the default profile of the config and credentials files located at ~/.aws/ are used to establish a connection to the S3 bucket. You can substitute alternative configurations and credentials for the connection with the s3ConfigFile and s3CredentialsFile options. For instructions on installing and configuring the AWS CLI, see Getting started with the AWS CLI.
s3CredentialsFile: "string"
A credentials file that contains the user's credentials to use for the connection, instead of the one in the default location, ~/.aws/credentials. Typically, the credentials file contains the aws_access_key_id and aws_secret_access_key to use for the connection.
s3ConfigFile: "string"
A configuration file that contains the profile to use for the connection, instead of the one in the default location, such as ~/.aws/config. Typically, the config file contains the region and output type to use for the connection.
s3Profile: "string"
The profile name of the s3 CLI profile to use for the connection, instead of the default profile.
s3Region: "string"
The name of the region to use for the connection.
s3EndpointOverride: "string"
The URL of the endpoint to use instead of the default. When connecting to the Oracle Cloud Infrastructure S3 compatibility API, the endpoint takes the following format: https://namespace.compat.objectstorage.region.oraclecloud.com. Replace namespace with the Object Storage namespace and region with your region identifier. For example, the region identifier for the US East (Ashburn) region is us-ashburn-1.
For a namespace named axaxnpcrorw5 in the US East (Ashburn) region: https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com.
The following example shows the dump of a MySQL instance to a folder, test, in an S3 bucket, Bucket001, with some compatibility options:
util.dumpInstance("test",{s3bucketName: "Bucket001", threads: 4, compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})
The following example shows the dump of a MySQL instance to a prefix, test, in an object storage bucket, Bucket001, using a configuration profile, oci, the s3EndpointOverride to direct the connection to the OCI endpoint of the required tenancy and region, and some compatibility options:
util.dumpInstance("test",{s3BucketName: "Bucket001", s3EndpointOverride: "https://axaxnpcrorw5.compat.objectstorage.us-ashburn-1.oraclecloud.com", s3Profile: "oci", threads: 4, compatibility: ["strip_restricted_grants", "strip_definers", "ignore_missing_pks"]})
MySQL Shell supports dumping to Microsoft Azure Blob Storage.
MySQL Shell supports Microsoft Azure Blob Storage configuration in command line options, environment variables, and configuration files. Command line options override environment variables and configuration files.
For information on configuration requirements and the order of precedence of the configuration types, see Section 4.7, “Cloud Service Configuration”.
azureContainerName: "string"
Mandatory. The name of the Azure container to which the dump is to be written. The container must exist.
azureConfigFile: "string"
Optional. A configuration file that contains the storage connection parameters, instead of the one in the default location, such as ~/.azure/config. If this is not defined, the default configuration file is used. azureContainerName must be defined, and not be empty.
azureStorageAccount: "string"
Optional. The name of the Azure storage account to use for the operation.
azureStorageSasToken: "string"
Optional. Azure Shared Access Signature (SAS) token to be used for the authentication of the operation, instead of a key.
In the following example, the configuration uses a configuration string for the connection parameters, which means the dump command only requires the azureContainerName.
Example config file:
[cloud]
name = AzureCloud
[storage]
connection_string=alphanumericConnectionString
Example dumpInstance command, which exports the contents of the instance to a folder named prefix1, in a container named mysqlshellazure:
util.dumpInstance("prefix1", {azureContainerName: "mysqlshellazure", threads: 4})
The outputUrl can also be a bucket or prefix Pre-Authenticated Request (PAR). This enables you to dump your data directly to an OCI Object Storage bucket.
The PAR must be defined with the following permissions enabled:
Permit object reads and writes
Enable Object Listing
If a PAR is defined as outputUrl, the following options are not supported and will result in an error if used:
osBucketName
s3BucketName
azureContainerName
Only bucket and prefix PARs are supported as outputUrl. It is not possible to use an object PAR.
If the PAR is not supported, or does not have the correct permissions defined, an OCI error is returned.
If the target bucket is not empty, the operation fails and an error is returned. If objects exist with the defined prefix, the operation fails and an error is returned.
The following example dumps the instance to the defined bucket PAR:
util.dumpInstance("https://objectstorage.region.oraclecloud.com/p/secret/n/myTenancy/b/myBucket/o/")
The following example dumps the schema sakila to the defined bucket PAR:
util.dumpSchemas(["sakila"], "https://objectstorage.region.oraclecloud.com/p/secret/n/myTenancy/b/myBucket/o/")
The following example dumps the table sakila.actor to the defined bucket PAR:
util.dumpTables("sakila", ["actor"], "https://objectstorage.region.oraclecloud.com/p/secret/n/myTenancy/b/myBucket/o/")
When you define a prefix PAR, the generated PAR URL does not contain the defined prefix. You must add it to the URL manually.
The following example dumps the instance to the prefix MyPrefix, in the defined bucket PAR:
util.dumpInstance("https://objectstorage.region.oraclecloud.com/p/secret/n/myTenancy/b/myBucket/o/MyPrefix/")
The following example dumps the schema sakila to the prefix MyPrefix, in the defined bucket PAR:
util.dumpSchemas(["sakila"], "https://objectstorage.region.oraclecloud.com/p/secret/n/myTenancy/b/myBucket/o/MyPrefix/")
The following example dumps the table sakila.actor to the prefix MyPrefix, in the defined bucket PAR:
util.dumpTables("sakila", ["actor"], "https://objectstorage.region.oraclecloud.com/p/secret/n/myTenancy/b/myBucket/o/MyPrefix/")
Error numbers in the range 52000-52999 are specific to MySQL Shell's instance dump utility util.dumpInstance(), schema dump utility util.dumpSchemas(), and table dump utility util.dumpTables(). The following errors might be returned:
Error number: 52000; Symbol: SHERR_DUMP_LOCK_TABLES_MISSING_PRIVILEGES
Message: User %s is missing the following privilege(s) for %s: %s.
Error number: 52001; Symbol: SHERR_DUMP_GLOBAL_READ_LOCK_FAILED
Message: Unable to acquire global read lock
Error number: 52002; Symbol: SHERR_DUMP_LOCK_TABLES_FAILED
Message: Unable to lock tables: %s.
Error number: 52003; Symbol: SHERR_DUMP_CONSISTENCY_CHECK_FAILED
Message: Consistency check has failed.
Error number: 52004; Symbol: SHERR_DUMP_COMPATIBILITY_ISSUES_FOUND
Message: Compatibility issues were found
Error number: 52005; Symbol: SHERR_DUMP_COMPATIBILITY_OPTIONS_FAILED
Message: Could not apply some of the compatibility options
Error number: 52006; Symbol: SHERR_DUMP_WORKER_THREAD_FATAL_ERROR
Message: Fatal error during dump
Error number: 52007; Symbol: SHERR_DUMP_MISSING_GLOBAL_PRIVILEGES
Message: User %s is missing the following global privilege(s): %s.
Error number: 52008; Symbol: SHERR_DUMP_MISSING_SCHEMA_PRIVILEGES
Message: User %s is missing the following privilege(s) for schema %s: %s.
Error number: 52009; Symbol: SHERR_DUMP_MISSING_TABLE_PRIVILEGES
Message: User %s is missing the following privilege(s) for table %s: %s.
Error number: 52010; Symbol: SHERR_DUMP_NO_SCHEMAS_SELECTED
Message: Filters for schemas result in an empty set.
Error number: 52011; Symbol: SHERR_DUMP_MANIFEST_PAR_CREATION_FAILED
Message: Failed creating PAR for object '%s': %s
Error number: 52012; Symbol: SHERR_DUMP_DW_WRITE_FAILED
Message: Failed to write %s into file %s
Error number: 52013; Symbol: SHERR_DUMP_IC_FAILED_TO_FETCH_VERSION
Message: Failed to fetch version of the server.
Error number: 52014; Symbol: SHERR_DUMP_SD_CHARSET_NOT_FOUND
Message: Unable to find charset: %s
Error number: 52015; Symbol: SHERR_DUMP_SD_WRITE_FAILED
Message: Got errno %d on write
Error number: 52016; Symbol: SHERR_DUMP_SD_QUERY_FAILED
Message: Could not execute '%s': %s
Error number: 52017; Symbol: SHERR_DUMP_SD_COLLATION_DATABASE_ERROR
Message: Error processing select @@collation_database; results
Error number: 52018; Symbol: SHERR_DUMP_SD_CHARACTER_SET_RESULTS_ERROR
Message: Unable to set character_set_results to: %s
Error number: 52019; Symbol: SHERR_DUMP_SD_CANNOT_CREATE_DELIMITER
Message: Can't create delimiter for event: %s
Error number: 52020; Symbol: SHERR_DUMP_SD_INSUFFICIENT_PRIVILEGE
Message: %s has insufficient privileges to %s!
Error number: 52021; Symbol: SHERR_DUMP_SD_MISSING_TABLE
Message: %s not present in information_schema
Error number: 52022; Symbol: SHERR_DUMP_SD_SHOW_CREATE_TABLE_FAILED
Message: Failed running: show create table %s with error: %s
Error number: 52023; Symbol: SHERR_DUMP_SD_SHOW_CREATE_TABLE_EMPTY
Message: Empty create table for table: %s
Error number: 52024; Symbol: SHERR_DUMP_SD_SHOW_FIELDS_FAILED
Message: SHOW FIELDS FROM failed on view: %s
Error number: 52025; Symbol: SHERR_DUMP_SD_SHOW_KEYS_FAILED
Message: Can't get keys for table %s: %s
Error number: 52026; Symbol: SHERR_DUMP_SD_SHOW_CREATE_VIEW_FAILED
Message: Failed: SHOW CREATE TABLE %s
Error number: 52027; Symbol: SHERR_DUMP_SD_SHOW_CREATE_VIEW_EMPTY
Message: No information about view: %s
Error number: 52028; Symbol: SHERR_DUMP_SD_SCHEMA_DDL_ERROR
Message: Error while dumping DDL for schema '%s': %s
Error number: 52029; Symbol: SHERR_DUMP_SD_TABLE_DDL_ERROR
Message: Error while dumping DDL for table '%s'.'%s': %s
Error number: 52030; Symbol: SHERR_DUMP_SD_VIEW_TEMPORARY_DDL_ERROR
Message: Error while dumping temporary DDL for view '%s'.'%s': %s
Error number: 52031; Symbol: SHERR_DUMP_SD_VIEW_DDL_ERROR
Message: Error while dumping DDL for view '%s'.'%s': %s
Error number: 52032; Symbol: SHERR_DUMP_SD_TRIGGER_COUNT_ERROR
Message: Unable to check trigger count for table: '%s'.'%s'
Error number: 52033; Symbol: SHERR_DUMP_SD_TRIGGER_DDL_ERROR
Message: Error while dumping triggers for table '%s'.'%s': %s
Error number: 52034; Symbol: SHERR_DUMP_SD_EVENT_DDL_ERROR
Message: Error while dumping events for schema '%s': %s
Error number: 52035; Symbol: SHERR_DUMP_SD_ROUTINE_DDL_ERROR
Message: Error while dumping routines for schema '%s': %s
Error number: 52036; Symbol: SHERR_DUMP_ACCOUNT_WITH_APOSTROPHE
Message: Account %s contains the ' character, which is not supported
Error number: 52037; Symbol: SHERR_DUMP_USERS_MARIA_DB_NOT_SUPPORTED
Message: Dumping user accounts is currently not supported in MariaDB. Set the 'users' option to false to continue.
Error number: 52038; Symbol: SHERR_DUMP_INVALID_GRANT_STATEMENT
Message: Dump contains an invalid grant statement. Use the 'strip_invalid_grants' compatibility option to fix this.
Error number: 52039; Symbol: SHERR_DUMP_IC_INVALID_VIEWS
Message: Dump contains one or more invalid views. Fix them manually, or use the 'excludeTables' option to exclude them.
Error numbers in the range 54000-54999 are for connection and network errors experienced by MySQL Shell's dump loading utility util.loadDump(), or by MySQL Shell's instance dump utility util.dumpInstance(), schema dump utility util.dumpSchemas(), and table dump utility util.dumpTables(). In most cases, the error code matches the HTTP error involved; for example, error 54404 occurs when the target of a URL is not found (HTTP 404 Not Found). The following errors might be returned: