MySQL Shell 9.3
This section describes the MySQL Shell copy utilities:
The copy utilities enable you to copy DDL and data between MySQL instances, without the need for intermediate storage. The data is streamed from source to destination.
Approximately 32 MB of memory is pre-allocated to store metadata files, which are discarded as they are read and the copy is processed.
It is possible to copy from a source to a HeatWave Service DB System. If you define a DB System as the target, the utility detects this and enables HeatWave Service compatibility checks by default. See Section 12.5, “Instance Dump Utility, Schema Dump Utility, and Table Dump Utility” for more information on these checks.
The copy utilities combine dump and load utilities into a single operation, for ease of use. The majority of the options available to the load and dump utilities are also available to the copy utilities and are documented in the following sections.
The copy utilities use LOAD DATA LOCAL INFILE statements to upload data, so the local_infile system variable must be set to ON on the target server. You can do this by issuing the following statement on the target instance before running the copy utility:
SET GLOBAL local_infile = 1;
To avoid a known potential security issue with LOAD DATA LOCAL, when the MySQL server replies to the utility's LOAD DATA requests with file transfer requests, the utility only sends the predetermined data chunks, and ignores any specific requests attempted by the server. For more information, see Security Considerations for LOAD DATA LOCAL.
The copy utilities only support General Availability (GA) releases of MySQL Server versions.
MySQL 5.7 or later is required for the destination MySQL instance where the copy will be loaded.
Object names in the instance or schema must be in the latin1 or utf8 character set.
Data consistency is guaranteed only for tables that use the InnoDB storage engine.
The minimum required set of privileges that the user account used to run the utility must have on all the schemas involved is as follows: EVENT, RELOAD, SELECT, SHOW VIEW, and TRIGGER.
If the consistent option is set to true, which is the default, the LOCK TABLES privilege on all copied tables can substitute for the RELOAD privilege if the latter is not available.
If the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the copy. If this check fails, an instance copy is stopped, but a schema copy or a table copy continues and returns an error message to alert the user that the consistency check failed.
If the consistent option is set to false, the BACKUP_ADMIN and RELOAD privileges are not required.
The user account used to run the utility needs the REPLICATION CLIENT privilege in order for the utility to be able to include the binary log file name and position in the metadata. If the user account does not have that privilege, the copy continues but does not include the binary log information. The binary log information can be used after loading the copied data into the replica server to set up replication with a non-GTID source server, using the ASSIGN_GTIDS_TO_ANONYMOUS_TRANSACTIONS option of the CHANGE REPLICATION SOURCE TO statement.
The utilities convert columns with data types that are not safe to be stored in text form (such as BLOB) to Base64. The size of these columns therefore must not exceed approximately 0.74 times the value of the max_allowed_packet system variable (in bytes) that is configured on the target MySQL instance.
For compatibility with HeatWave Service, all tables must use the InnoDB storage engine. If you define a DB System as the target, the utility detects this, enables HeatWave Service compatibility checks by default, and checks for any exceptions found in the source; the compatibility option alters the copy to replace other storage engines with InnoDB.
For the instance and schema copy utilities, for compatibility with HeatWave Service, all tables in the instance or schema must be located in the MySQL data directory and must use the default schema encryption.
HeatWave Service uses partial_revokes=ON, which means database-level user grants on schemas which contain wildcards, such as _ or %, are reported as errors. You can also use the compatibility options ignore_wildcard_grants and strip_invalid_grants. See Options for HeatWave Service and Oracle Cloud Infrastructure for more information.
A number of other security related restrictions and requirements apply to items such as tablespaces and privileges for compatibility with HeatWave Service. The compatibility option automatically alters the copy to resolve some of the compatibility issues. You might need (or prefer) to make some changes manually. For more details, see the description for the compatibility option.
For HeatWave Service High Availability, which uses Group Replication, primary keys are required on every table. MySQL Shell checks and reports an error for any tables in the copy that are missing primary keys. The compatibility option can be set to ignore missing primary keys if you do not need them, or to add primary keys in invisible columns where they are not present. For details, see the description for the compatibility option. If possible, instead of managing this in the utility, consider creating primary keys in the tables on the source server before copying them.
If the source is MySQL 5.7, and the target is a DB System, util.checkForServerUpgrade is run automatically. Pre-upgrade checks are run depending on the type of objects included in the copy.
Progress resumption is not supported by the copy utilities.
The copy instance, copy schema, and copy table utilities use the MySQL Shell global session to obtain the connection details of the MySQL server from which the copy is carried out. You must open the global session (which can have an X Protocol connection or a classic MySQL protocol connection) before running one of the utilities. The utilities open their own sessions for each thread, copying options such as connection compression and SSL options from the global session, and do not make any further use of the global session.
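For example, a minimal sketch of a complete workflow, assuming hypothetical account and host names:
shell-js> \connect copy_user@source-host:3306
shell-js> util.copyInstance('copy_user@target-host:3306')
The \connect command opens the global session to the source; the utility connects to the destination given in its connectionData argument.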
util.copyInstance(connectionData[, options]): Enables copying of an entire instance to another server.
connectionData: Defines the connection details for the destination server you want to copy to. This can be one of the following:
A simple user@host string.
A connection URI, such as mysql://user@host:port?option=value,option=value.
A connection dictionary, such as { "scheme": "mysql", "user": "u", "host": "h", "port": 1234, "option": "value" }.
util.copySchemas(schemaList, connectionData[, options]): Enables copying of one or more schemas to another server.
schemaList: Defines the list of schemas to copy from the current server to the destination server.
util.copyTables(schemaName, tablesList, connectionData[, options]): Enables copying of one or more tables from a schema to another server.
schemaName: Defines the name of the schema from which to copy tables.
tablesList: Defines the names of the tables from the named schema to copy to the destination server.
dryRun: [ true | false ]
Displays information about the copy with the specified set of options, and about the results of HeatWave Service compatibility checks, but does not proceed with the copy. Setting this option enables you to list out all of the compatibility issues before starting the copy. The default is false.
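For example, a minimal sketch of a trial run against a hypothetical destination:
// dryRun: true lists compatibility issues without copying anything
util.copyInstance('admin@target-host:3306', {dryRun: true})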
showProgress: [ true | false ]
Display (true) or hide (false) progress information for the copy. The default is true if stdout is a terminal (tty), such as when MySQL Shell is in interactive mode, and false otherwise.
The progress information includes the estimated total
number of rows to be copied, the number of rows copied so
far, the percentage complete, and the throughput in rows
and bytes per second.
threads: int
The number of parallel threads to use to copy chunks of data from the MySQL instance. Each thread has its own connection to the MySQL instance. The default is 4.
The copy utilities require twice the specified number of threads: for each thread that reads the data, another thread writes it. If threads is set to N, 2N threads are used.
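For example, the following sketch (connection details hypothetical) requests 8 threads, so the utility uses 16 in total:
// threads: 8 means 8 reader threads plus 8 writer threads
util.copySchemas(['sakila'], 'admin@target-host:3306', {threads: 8})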
maxRate: "string"
The maximum number of bytes per second per thread for data read throughput during the copy. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used (for example, setting 100M limits throughput to 100 megabytes per second per thread). Setting 0 (which is the default value), or setting the option to an empty string, means no limit is set.
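For example, a sketch that limits each thread to 50 megabytes per second (connection details hypothetical):
// maxRate applies per thread, so total read throughput is capped at threads x 50 MB/s
util.copyInstance('admin@target-host:3306', {maxRate: "50M"})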
defaultCharacterSet: "string"
The character set to be used during the session connections that are opened by MySQL Shell to the target server. The default is utf8mb4. The session values of the system variables character_set_client, character_set_connection, and character_set_results are set to this value for each connection. The character set must be permitted by the character_set_client system variable and supported by the MySQL instance.
checksum: [ true | false ]
If enabled, on dump, a metadata file, @.checksums.json, is generated with the copy. This file contains the checksum data for the copy, enabling data verification.
The following conditions apply if checksum: true during the copy process:
If ddlOnly: false and chunking: true, a checksum is generated for each copied table and partition chunk.
If ddlOnly: false and chunking: false, a checksum is generated for each copied table and table partition.
If ddlOnly: true, a checksum is generated for each copied table and table partition.
If enabled, the utility checks the generated checksum data after the corresponding data is loaded. The verification is limited to data which was dumped, ignoring generated data such as invisible primary keys.
Errors are returned if a checksum does not match or if a table is missing and cannot be verified.
If checksum: true but no data was loaded, either due to loadData: false or no data being dumped, the utility verifies the dump's checksum information against the current contents of the affected tables.
If a table does not exist, an error is displayed for each missing table.
If checksum: true and dryRun: true, the checksum is not verified. A message is displayed stating that no verification took place.
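For example, a sketch that copies one table and verifies it against the generated checksum data (connection details hypothetical):
// checksum: true generates @.checksums.json and verifies the loaded data against it
util.copyTables('sakila', ['actor'], 'admin@target-host:3306', {checksum: true})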
consistent: [ true | false ]
Enable (true
) or disable
(false
) consistent data copies by
locking the instance for backup during the copy. The
default is true
.
When true is set, the utility sets a global read lock using the FLUSH TABLES WITH READ LOCK statement (if the user account used to run the utility has the RELOAD privilege), or a series of table locks using LOCK TABLES statements (if the user account does not have the RELOAD privilege but does have LOCK TABLES). The transaction for each thread is started using the statements SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ and START TRANSACTION WITH CONSISTENT SNAPSHOT. When all threads have started their transactions, the instance is locked for backup (as described in LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE Statements) and the global read lock is released.
If the user account does not have the BACKUP_ADMIN privilege and LOCK INSTANCE FOR BACKUP cannot be executed, the utilities make an extra consistency check during the copy. If this check fails, an instance copy is stopped, but a schema or table copy continues and returns an error message to alert the user that the consistency check failed.
skipConsistencyChecks: [ true | false ]
Enable (true) or disable (false) the extra consistency check performed when consistent: true. The default is false. This option is ignored if consistent: false.
schema: "string"
The target schema into which the contents of the copied schema must be loaded.
If the schema does not exist, it is created, and the copied schema is loaded to that new schema. If the new schema name differs from the schema name in the copy, the copy is loaded to the new schema, but no changes are made to the loaded data. That is, any reference to the old schema name remains in the data. All stored procedures, views, and so on, refer to the original schema, not the new one.
This load option is supported for single schema copies, or for filtering options which result in a single schema. That is, if you are using copyInstance to copy data to a new instance, you can copy all the data to a single schema if the source contains only one schema, or the defined filters result in a single schema being copied to the destination.
skipBinlog: [ true | false ]
Skips binary logging on the target MySQL instance for the sessions used by the utility during the course of the copy, by issuing a SET sql_log_bin=0 statement. The default is false, so binary logging is active by default. For HeatWave Service DB Systems, this option is not used, and the import stops with an error if you attempt to set it to true.
For other MySQL instances, always set skipBinlog to true if you are applying the gtid_executed GTID set from the source MySQL instance on the target MySQL instance, either using the updateGtidSet option or manually. When GTIDs are in use on the target MySQL instance (gtid_mode=ON), setting this option to true prevents new GTIDs from being generated and assigned as the import is being carried out, so that the original GTID set from the source server can be used. The user account must have the required permissions to set the sql_log_bin system variable.
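For example, a sketch for a non-DB System target where the source GTID set is being applied (connection details hypothetical):
// skipBinlog: true stops the load from generating new GTIDs on the target
util.copyInstance('admin@target-host:3306', {skipBinlog: true, updateGtidSet: "append"})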
ignoreVersion: [ true | false ]
Copy even if the major version number of the source from which the data was copied is non-consecutive to the major version number of the destination, such as 5.6 to 8.1. The default is false, meaning that an error is issued and the copy stops if the major version number is different. When this option is set to true, a warning is issued and the copy proceeds. Note that the copy will only be successful if the copied schemas have no compatibility issues with the new major version. ignoreVersion is not required for copying between consecutive major versions, such as 5.7 to 8.1.
Before attempting a copy using the ignoreVersion option, use MySQL Shell's upgrade checker utility checkForServerUpgrade() to check the source instance and fix any compatibility issues it identifies.
dropExistingObjects: [ true | false ]
Copy the instance even if it contains user accounts or DDL objects that already exist in the target instance. The default value is false; any existing object results in an error. Setting it to true drops existing user accounts and objects before creating them. Schemas are not dropped.
It is not possible to enable dropExistingObjects if ignoreExistingObjects or dataOnly is enabled.
ignoreExistingObjects: [ true | false ]
Copy even if the copy contains objects that already exist in the target instance. The default is false, meaning that an error is issued and the copy stops when a duplicate object is found. When this option is set to true, duplicate objects are reported but no error is generated and the copy proceeds. This option should be used with caution, because the utility does not check whether the contents of the object in the target MySQL instance and in the dump files are different, so it is possible for the resulting copy to contain incorrect or invalid data. An alternative strategy is to use the excludeTables option to exclude tables that you have already copied where you have verified the object in the dump files is identical with the imported object in the target MySQL instance. The safest choice is to remove duplicate objects from the target MySQL instance before restarting the copy.
handleGrantErrors: [ abort | drop_account | ignore ]
The action taken in the event of errors related to GRANT or REVOKE statements:
abort: (default) stops the copy process and displays an error.
drop_account: deletes the account and continues the copy process.
ignore: ignores the error and continues the copy process.
maxBytesPerTransaction: number
The maximum number of bytes that can be copied from a data chunk in a single LOAD DATA statement. If a data file exceeds the maxBytesPerTransaction value, multiple LOAD DATA statements load data from the file in chunks less than or equal to the maxBytesPerTransaction value.
The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used. The minimum value is 4096 bytes. If a lesser value is specified, an exception is thrown. If the maxBytesPerTransaction option is unset, the bytesPerChunk value is used instead.
If a data file contains a row that is larger than the maxBytesPerTransaction setting, the row's data is requested in a single LOAD DATA statement. A warning is emitted for the first row encountered that exceeds the maxBytesPerTransaction setting.
An intended use for this option is to load data in smaller chunks when a data file is too large for the target server's limits, such as the limits defined by the server's group_replication_transaction_size_limit or max_binlog_cache_size settings. For example, if you receive the error "MySQL Error 1197 (HY000): Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage" when loading data, set maxBytesPerTransaction to a value less than or equal to the server instance's max_binlog_cache_size setting.
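For example, a sketch that caps each LOAD DATA statement at 128 megabytes (connection details hypothetical):
// keep each transaction within the target's max_binlog_cache_size limit
util.copySchemas(['sakila'], 'admin@target-host:3306', {maxBytesPerTransaction: "128M"})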
sessionInitSql: list of strings
A list of SQL statements to run at the start of each client session used for copying data into the target MySQL instance. You can use this option to change session variables. For example, the following statements skip binary logging on the target MySQL instance for the sessions used by the utility during the course of the import, and increase the number of threads available for index creation:
sessionInitSql: ["SET SESSION sql_log_bin=0;", "SET SESSION innodb_ddl_threads=8;"]
If an error occurs while running the SQL statements, the copy stops and returns an error message.
tzUtc: [ true | false ]
Include a statement at the start of the copy to set the time zone to UTC. All timestamp data in the output is converted to this time zone. The default is true. Setting the time zone to UTC facilitates moving data between servers with different time zones, or handling a set of data that has multiple time zones. Set this option to false to keep the original timestamps if preferred.
chunking: [ true | false ]
Enable (true) or disable (false) chunking for table data, which splits the data for each table into multiple files. The default is true. Use bytesPerChunk to specify the chunk size. If you set the chunking option to false, chunking does not take place and the utility creates one data file for each table.
If a table has no primary key or unique index, chunking is done based on the number of rows in the table, the average row length, and the bytesPerChunk value.
bytesPerChunk: "string"
Sets the approximate number of bytes to be written to each data file when chunking is enabled. The unit suffixes k for kilobytes, M for megabytes, and G for gigabytes can be used. The default is 64 MB (64M), and the minimum is 128 KB (128k). Specifying this option sets chunking to true implicitly.
loadIndexes: [ true | false ]
Create (true) or do not create (false) secondary indexes for tables. The default is true. When this option is set to false, secondary indexes are not created during the import, and you must create them afterwards. This can be useful if you are loading the DDL files and data files separately, and if you want to make changes to the table structure after loading the DDL files. Afterwards, you can create the secondary indexes by running the dump loading utility again with loadIndexes set to true and deferTableIndexes set to all.
MySQL Shell utilizes MySQL Server's parallel index creation. All indexes in a table are added simultaneously.
See Configuring Parallel Threads for Online DDL Operations for restrictions and configuration.
deferTableIndexes: [ off | fulltext | all ]
Defer the creation of secondary indexes until after the table data is loaded. This can reduce loading times. off means all indexes are created during the table load. The default setting fulltext defers full-text indexes only. all defers all secondary indexes; during the table load, only primary indexes and indexes defined on columns containing auto-increment values are created.
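For example, a sketch that defers all secondary indexes until after the data is loaded (connection details hypothetical):
// secondary indexes are built after the table data arrives, which can speed up the load
util.copySchemas(['sakila'], 'admin@target-host:3306', {deferTableIndexes: "all"})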
analyzeTables: [ off | on | histogram ]
Execute ANALYZE TABLE for tables when they have been loaded. on analyzes all tables, and histogram analyzes only tables that have histogram information stored in the dump. The default is off. You can run the dump loading utility with this option to analyze the tables even if the data has already been loaded.
updateGtidSet: [ off | append | replace ]
Apply the gtid_executed GTID set from the source MySQL instance, as recorded in the dump metadata, to the gtid_purged GTID set on the target MySQL instance. The gtid_purged GTID set holds the GTIDs of all transactions that have been applied on the server, but do not exist on any binary log file on the server. The default is off, meaning that the GTID set is not applied.
Do not use this option when Group Replication is running on the target MySQL instance.
For MySQL instances that are not HeatWave Service DB System instances, when you set append or replace to update the GTID set, also set the skipBinlog option to true. This ensures the GTIDs on the source server match the GTIDs on the target server. For HeatWave Service DB System instances, this option is not used.
For a target MySQL instance from MySQL 8.0, you can set the option to append, which appends the gtid_executed GTID set from the source MySQL instance to the gtid_purged GTID set on the target MySQL instance. The gtid_executed GTID set to be applied, which is shown in the gtidExecuted field in the @.json dump file, must not intersect with the gtid_executed set already on the target MySQL instance. For example, you can use this option when importing a schema from a different source MySQL instance to a target MySQL instance that already has schemas from other source servers.
You can also use replace for a target MySQL instance from MySQL 8.0, to replace the gtid_purged GTID set on the target MySQL instance with the gtid_executed GTID set from the source MySQL instance. To do this, the gtid_executed GTID set from the source MySQL instance must be a superset of the gtid_purged GTID set on the target MySQL instance, and must not intersect with the set of transactions in the target's gtid_executed GTID set that are not in its gtid_purged GTID set.
For a target MySQL instance at MySQL 5.7, set the option to replace, which replaces the gtid_purged GTID set on the target MySQL instance with the gtid_executed GTID set from the source MySQL instance. In MySQL 5.7, to do this the gtid_executed and gtid_purged GTID sets on the target MySQL instance must be empty, so the instance must be unused with no previously imported GTID sets.
To apply the GTID set, after the import, use MySQL Shell's \sql command (or enter SQL mode) to issue the following statement on the connected MySQL instance, copying the gtid_executed GTID set from the gtidExecuted field in the @.json dump file in the dump metadata:
shell-js> \sql SET @@GLOBAL.gtid_purged= "+gtidExecuted_set";
This statement, which works from MySQL 8.0, adds the source MySQL Server instance's gtid_executed GTID set to the target MySQL instance's gtid_purged GTID set. For MySQL 5.7, the plus sign (+) must be omitted, and the gtid_executed and gtid_purged GTID sets on the target MySQL instance must be empty. For more details, see the description of the gtid_purged system variable in the release of the target MySQL instance.
compatibility: array of strings
Apply the specified requirements for compatibility with HeatWave Service for all tables in the copy, altering the dump files as necessary. The following modifications can be specified as an array of strings:
force_innodb
Change CREATE TABLE statements to use the InnoDB storage engine for any tables that do not already use it.
skip_invalid_accounts
Remove user accounts created with external authentication plugins that are not supported in HeatWave Service. This option also removes user accounts that do not have passwords set, except where an account with no password is identified as a role, in which case it is copied using the CREATE ROLE statement.
strip_definers
Remove the DEFINER clause from views, routines, events, and triggers, so these objects are created with the default definer (the user invoking the schema), and change the SQL SECURITY clause for views and routines to specify INVOKER instead of DEFINER. HeatWave Service requires special privileges to create these objects with a definer other than the user loading the schema. If your security model requires that views and routines have more privileges than the account querying or calling them, you must manually modify the schema before copying it.
strip_restricted_grants
Remove specific privileges that are restricted by HeatWave Service from GRANT statements, so users and their roles cannot be given these privileges (which would cause user creation to fail). This option also removes REVOKE statements for system schemas (mysql and sys) if the administrative user account on an Oracle Cloud Infrastructure Compute instance does not itself have the relevant privileges, so cannot remove them.
strip_tablespaces
Remove the TABLESPACE clause from CREATE TABLE statements, so all tables are created in their default tablespaces. HeatWave Service has some restrictions on tablespaces.
ignore_missing_pks
Make the instance, schema, or table copy utility ignore any missing primary keys when the dump is carried out. Dumps created with this modification cannot be loaded into a HeatWave Service High Availability instance, because primary keys are required for HeatWave Service High Availability, which uses Group Replication. To add missing primary keys automatically, use the create_invisible_pks modification, or consider creating primary keys in the tables on the source server.
ignore_wildcard_grants
If enabled, ignores errors from grants on schemas with wildcards, which are interpreted differently in systems where the partial_revokes system variable is enabled.
strip_invalid_grants
If enabled, strips grant statements which would fail when users are copied, such as grants referring to a specific routine which does not exist.
create_invisible_pks
Adds primary keys in invisible columns for each table that does not contain a primary key. This modification enables a copy where some tables lack primary keys to be loaded into a HeatWave Service High Availability instance. Primary keys are required for HeatWave Service High Availability, which uses Group Replication.
The data is unchanged by this modification, as the tables do not contain the invisible columns until they have been processed by the copy utility. The invisible columns (which are named my_row_id) have no impact on applications that use the uploaded tables.
where: {"schemaName.tableName": "string"}
A key-value pair comprising a valid table identifier, of the form schemaName.tableName, and a valid SQL condition expression used to filter the data being copied.
The SQL condition is validated only when it is executed. If you are copying many tables, any SQL-syntax-related issues will only be seen late in the process. As such, it is recommended you test your SQL condition before using it in a long-running copy process.
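For example, a sketch that copies only a subset of rows (connection details and condition are illustrative):
// only rows of sakila.actor matching the condition are copied
util.copyTables('sakila', ['actor'], 'admin@target-host:3306', {where: {"sakila.actor": "actor_id > 100"}})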
partitions: {"schemaName.tableName": ["string", "string", ...]}
A key-value pair comprising a valid table identifier, of the form schemaName.tableName, and a list of valid partitions.
For example, to copy only the partitions named p1 and p2 from the table schema.table:
partitions: {'schema.table': ["p1", "p2"]}
ddlOnly: [ true | false ]
Setting this option to true includes only the DDL files for the items in the copy, and does not copy the data. The default is false.
dataOnly: [ true | false ]
Setting this option to true includes only the data files for the items in the copy, and does not include DDL files. The default is false.
users: [ true | false ]
(Instance copy utility only) Include (true) or exclude (false) users and their roles and grants in the copy. The default is true. The schema and table copy utilities do not include users, roles, or grants in a copy.
You can use the excludeUsers or includeUsers option to specify individual user accounts to be excluded from or included in the copy.
If copying users from a MySQL 5.6 instance, the user performing the copy must have the SUPER privilege.
excludeUsers: array of strings
(Instance copy utility only) Exclude the named user accounts from the copy. Use to exclude user accounts that are not accepted for import to a HeatWave Service DB System, or that already exist or are not wanted on the target MySQL instance. Specify each user account string in the format "'user_name'@'host_name'" for an account that is defined with a user name and host name, or "'user_name'" for an account that is defined with a user name only. If you do not supply a host name, all accounts with that user name are excluded.
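For example, a sketch excluding one account defined with a host name and one defined with a user name only (account names hypothetical):
// 'legacy_app' has no host name, so all accounts with that user name are excluded
util.copyInstance('admin@target-host:3306', {excludeUsers: ["'root'@'localhost'", "'legacy_app'"]})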
includeUsers: array of strings
(Instance copy utility only) Include only the named user accounts in the copy. Specify each user account string as for the excludeUsers option. Use as an alternative to excludeUsers if only a few user accounts are required in the copy. You can also specify both options to include some accounts and exclude others.
excludeSchemas: array of strings
(Instance copy utility only) Exclude the named schemas from the copy. Note that the information_schema, mysql, ndbinfo, performance_schema, and sys schemas are always excluded from an instance copy.
includeSchemas: array of strings
(Instance copy utility only) Include only the named schemas in the copy. You cannot include the information_schema, mysql, ndbinfo, performance_schema, or sys schemas by naming them on this option. If you want to copy one or more of these schemas, you can do this using the schema copy utility util.copySchemas().
excludeTables: array of strings
(Instance and schema copy utilities only) Exclude the named tables (DDL and data) from the copy. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed. Note that the data for the mysql.apply_status, mysql.general_log, mysql.schema, and mysql.slow_log tables is always excluded from a schema copy, although their DDL statements are included, and you cannot include that data by naming the table in another option or utility.
includeTables: array of strings
(Instance and schema copy utilities only) Include only the named tables in the copy. Table names must be qualified with a valid schema name, and quoted with the backtick character if needed.
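For example, a sketch copying only two tables from the sakila schema (connection details hypothetical):
// only sakila.actor and sakila.film are copied; all other tables are skipped
util.copySchemas(['sakila'], 'admin@target-host:3306', {includeTables: ["sakila.actor", "sakila.film"]})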
events: [ true | false ]
(Instance and schema copy utilities only) Include (true) or exclude (false) events for each schema in the copy. The default is true.
excludeEvents: array of strings
(Instance and schema copy utilities only) Exclude the named events from the copy. Names of events must be qualified with a valid schema name, and quoted with the backtick character if needed.
includeEvents: array of strings
(Instance and schema copy utilities only) Include only the named events in the copy. Event names must be qualified with a valid schema name, and quoted with the backtick character if needed.
routines: [ true | false ]
(Instance and schema copy utilities only) Include (true) or exclude (false) functions and stored procedures for each schema in the copy. The default is true. Note that user-defined functions are not included, even when routines is set to true.
excludeRoutines: array of strings
(Instance and schema copy utilities only) Exclude the named functions and stored procedures from the copy. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
includeRoutines: array of strings
(Instance and schema copy utilities only) Include only the named functions and stored procedures in the copy. Names of routines must be qualified with a valid schema name, and quoted with the backtick character if needed.
all: [ true | false ]
(Table copy utility only) Setting this option to true includes all views and tables from the specified schema in the copy. The default is false. When you use this option, set the tables parameter to an empty array.
triggers: [ true | false ]
(All copy utilities) Include (true) or exclude (false) triggers for each table in the copy. The default is true.
excludeTriggers: array of strings
(All copy utilities) Exclude the named triggers from the copy. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can exclude all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
includeTriggers: array of strings
(All copy utilities) Include only the named triggers in the copy. Names of triggers must be qualified with a valid schema name and table name (schema.table.trigger), and quoted with the backtick character if needed. You can include all triggers for a specific table by specifying a schema name and table name with this option (schema.table).
libraries: [ true | false ]
(Instance and schema copy utilities only) Include (true, the default) or exclude (false) libraries in the copy. If the libraries option is not provided, it is assumed to be true and all libraries are copied.
If the target version does not support libraries, a warning is displayed.
See Using JavaScript Libraries for information on libraries.
includeLibraries: array of strings
(Instance and schema copy utilities only) List of library objects to be included in the copy, in the format schema.library. For example:
"includeLibraries": [ "`sakila`.`library1`", "`sakila`.`library2`" ]
excludeLibraries: array of strings
(Instance and schema copy utilities only) List of library objects to be excluded from the copy, in the format schema.library. For example:
"excludeLibraries": [ "`sakila`.`library1`", "`sakila`.`library2`" ]
The following examples show how to use the copy utilities:
Copying an instance from local to a HeatWave Service High Availability DB System:
JS> util.copyInstance('mysql://User001@DBSystemIPAddress',{threads: 6, deferTableIndexes: "all", compatibility: ["strip_restricted_grants", "strip_definers", "create_invisible_pks"]})
This example copies an instance to a DB System, with the user User001 and a series of compatibility options which make the instance compatible with a DB System. create_invisible_pks is included because a High Availability DB System uses Group Replication, which requires that each table have a primary key. This option adds an invisible primary key to each table.
Copying a schema to the target instance and renaming the schema:
util.copySchemas(['sakila'], 'user@localhost:4101',{schema: "mySakilaSchema"})
This example copies the contents of a schema from the source to a schema with a different name on the destination, localhost:4101.
Copying a table from a schema to another schema on the destination:
util.copyTables('sakila', ['actor'], 'root@localhost:4101',{schema: "mySakilaSchema"})
This example copies the actor table from the sakila schema to the mySakilaSchema schema on the destination, localhost:4101.