These performance tips supplement the general guidelines for fast inserts in Section 10.2.5.1, “Optimizing INSERT Statements”.
When importing data into InnoDB, turn off autocommit mode, because it performs a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:
SET autocommit=0;
... SQL import statements ...
COMMIT;
The mysqldump option --opt creates dump files that are fast to import into an InnoDB table, even without wrapping them with the SET autocommit and COMMIT statements.
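For example, a dump created and reloaded this way, with placeholder database and file names (--opt is enabled by default in current mysqldump versions, so naming it explicitly is optional):

mysqldump --opt mydb > mydb_dump.sql
mysql mydb < mydb_dump.sql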
If you have UNIQUE constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:
SET unique_checks=0;
... SQL import statements ...
SET unique_checks=1;
For big tables, this saves a lot of disk I/O because InnoDB can use its change buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.
If you have FOREIGN KEY constraints in your tables, you can speed up table imports by turning off the foreign key checks for the duration of the import session:
SET foreign_key_checks=0;
... SQL import statements ...
SET foreign_key_checks=1;
For big tables, this can save a lot of disk I/O.
Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:
INSERT INTO yourtable VALUES (1,2), (5,5), ...;
This tip is valid for inserts into any table, not just InnoDB tables.
When doing bulk inserts into tables with auto-increment columns, set innodb_autoinc_lock_mode to 2 (interleaved) instead of 1 (consecutive). See Section 17.6.1.6, “AUTO_INCREMENT Handling in InnoDB” for details.
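innodb_autoinc_lock_mode is not a dynamic variable, so it must be set at server startup. A minimal option-file sketch:

[mysqld]
innodb_autoinc_lock_mode=2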
When performing bulk inserts, it is faster to insert rows in PRIMARY KEY order. InnoDB tables use a clustered index, which makes it relatively fast to use data in the order of the PRIMARY KEY. Performing bulk inserts in PRIMARY KEY order is particularly important for tables that do not fit entirely within the buffer pool.
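For example, supplying rows already sorted on the primary key (the table and values here are hypothetical) appends new rows at the end of the clustered index rather than inserting into random pages:

INSERT INTO t (id, val) VALUES
  (1, 'a'),
  (2, 'b'),
  (3, 'c');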
For optimal performance when loading data into an InnoDB FULLTEXT index, follow this set of steps:
1. Define a column FTS_DOC_ID at table creation time, of type BIGINT UNSIGNED NOT NULL, with a unique index named FTS_DOC_ID_INDEX. For example:
CREATE TABLE t1 (
    FTS_DOC_ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    title VARCHAR(255) NOT NULL DEFAULT '',
    text MEDIUMTEXT NOT NULL,
    PRIMARY KEY (`FTS_DOC_ID`)
) ENGINE=InnoDB;

CREATE UNIQUE INDEX FTS_DOC_ID_INDEX ON t1(FTS_DOC_ID);
2. Load the data into the table.
3. Create the FULLTEXT index after the data is loaded, as sketched below.
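Continuing the t1 example above, step 3 might look as follows; the index name ft_index is illustrative:

CREATE FULLTEXT INDEX ft_index ON t1(text);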
When adding the FTS_DOC_ID column at table creation time, ensure that the FTS_DOC_ID column is updated when the FULLTEXT indexed column is updated, as the FTS_DOC_ID must increase monotonically with each INSERT or UPDATE. If you choose not to add the FTS_DOC_ID at table creation time and have InnoDB manage DOC IDs for you, InnoDB adds the FTS_DOC_ID as a hidden column with the next CREATE FULLTEXT INDEX call. This approach, however, requires a table rebuild, which can impact performance.
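A brief sketch of this requirement using the t1 table above (the specific values are illustrative): every insert, and every update of the indexed column, must supply a FTS_DOC_ID larger than any value used so far.

INSERT INTO t1 (FTS_DOC_ID, title, text) VALUES
  (1, 'first', 'first document body'),
  (2, 'second', 'second document body');

-- When revising indexed text, also assign a new, larger FTS_DOC_ID
UPDATE t1 SET text = 'revised body', FTS_DOC_ID = 3 WHERE FTS_DOC_ID = 1;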
If loading data into a new MySQL instance, consider disabling redo logging using ALTER INSTANCE {ENABLE|DISABLE} INNODB REDO_LOG syntax. Disabling redo logging helps speed up data loading by avoiding redo log writes. For more information, see Disabling Redo Logging.
This feature is intended only for loading data into a new MySQL instance. Do not disable redo logging on a production system. It is permitted to shut down and restart the server while redo logging is disabled, but an unexpected server stoppage while redo logging is disabled can cause data loss and instance corruption.
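A minimal sketch of the sequence on a new, non-production instance (the session must hold the INNODB_REDO_LOG_ENABLE privilege):

ALTER INSTANCE DISABLE INNODB REDO_LOG;
... SQL import statements ...
ALTER INSTANCE ENABLE INNODB REDO_LOG;

The Innodb_redo_log_enabled status variable reports whether redo logging is currently enabled.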
Use MySQL Shell to import data. MySQL Shell's parallel table import utility util.importTable() provides rapid data import to a MySQL relational table for large data files. MySQL Shell's dump loading utility util.loadDump() also offers parallel load capabilities. See MySQL Shell Utilities.
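A minimal sketch in MySQL Shell's JavaScript mode, assuming a hypothetical CSV file /tmp/data.csv and target table mydb.t1 (the schema, table, dialect, and threads option names are util.importTable() options):

util.importTable("/tmp/data.csv", {
  schema: "mydb",
  table: "t1",
  dialect: "csv-unix",
  threads: 8
});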