MySQL Enterprise Backup is a commercially licensed backup utility for MySQL Server, available with MySQL Enterprise Edition. This section explains how to back up and subsequently restore a Group Replication member using MySQL Enterprise Backup. The same technique can be used to quickly add a new member to a group.
Backing up a Group Replication member is similar to backing up a stand-alone MySQL instance. The following instructions assume that you are already familiar with how to use MySQL Enterprise Backup to perform a backup; if that is not the case, please review Backing Up a Database Server. Also note the requirements described in Grant MySQL Privileges to Backup Administrator and Using MySQL Enterprise Backup with Group Replication.
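The backup user needs a number of administrative privileges. The statements below are only an illustrative sketch (the account name mysqlbackup and the privileges shown are assumptions, not the complete requirements); see Grant MySQL Privileges to Backup Administrator for the authoritative list:
mysql> CREATE USER 'mysqlbackup'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT RELOAD, PROCESS, BACKUP_ADMIN, REPLICATION CLIENT ON *.* TO 'mysqlbackup'@'localhost';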
Consider the following group with three members, s1, s2, and s3, running on hosts with the same names:
mysql> SELECT member_host, member_port, member_state FROM performance_schema.replication_group_members;
+-------------+-------------+--------------+
| member_host | member_port | member_state |
+-------------+-------------+--------------+
| s1          | 3306        | ONLINE       |
| s2          | 3306        | ONLINE       |
| s3          | 3306        | ONLINE       |
+-------------+-------------+--------------+
Using MySQL Enterprise Backup, create a backup of s2 by issuing, for example, the following command on its host:
s2> mysqlbackup --defaults-file=/etc/my.cnf --backup-image=/backups/my.mbi_`date +%d%m_%H%M` \
--backup-dir=/backups/backup_`date +%d%m_%H%M` --user=root -p \
--host=127.0.0.1 backup-to-image
Restoring a Failed Member.
Assume one of the members (s3 in the following example) is irreconcilably corrupted. The most recent backup of group member s2 can be used to restore s3. Here are the steps for performing the restore:
Copy the backup of s2 onto the host for s3. The exact way to copy the backup depends on the operating system and tools available to you. In this example, we assume the hosts are both Linux servers and use SCP to copy the files between them:
s2/backups> scp my.mbi_2206_1429 s3:/backups
Restore the backup. Connect to the target host (the host for s3 in this case), and restore the backup using MySQL Enterprise Backup. Here are the steps:
Stop the corrupted server, if it is still running. For example, on Linux distributions that use systemd:
s3> systemctl stop mysqld
Preserve the two configuration files in the corrupted server's data directory, auto.cnf and mysqld-auto.cnf (if it exists), by copying them to a safe location outside of the data directory. This is for preserving the server's UUID and its persisted system variables (Section 7.1.9.3, “Persisted System Variables”), if used, which are needed in the steps below.
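For example, on Linux, the files can be preserved with commands like the following; the destination directory /backups/s3-config is only illustrative, and the data directory is assumed to be /var/lib/mysql as in the other commands in this section:
s3> mkdir -p /backups/s3-config
s3> cp /var/lib/mysql/auto.cnf /backups/s3-config/
s3> cp /var/lib/mysql/mysqld-auto.cnf /backups/s3-config/
Skip the second cp command if mysqld-auto.cnf does not exist on the corrupted server.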
Delete all contents in the data directory of s3. For example:
s3> rm -rf /var/lib/mysql/*
If the system variables innodb_data_home_dir, innodb_log_group_home_dir, and innodb_undo_directory point to any directories other than the data directory, those directories should also be made empty; otherwise, the restore operation fails.
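For example, you can check whether any of these variables are set in the server's option file, and then empty the directories they point to; the undo directory path shown below is purely illustrative:
s3> grep -E 'innodb_(data_home_dir|log_group_home_dir|undo_directory)' /etc/my.cnf
s3> rm -rf /var/lib/mysql-undo/*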
Restore the backup of s2 onto the host for s3:
s3> mysqlbackup --defaults-file=/etc/my.cnf \
--datadir=/var/lib/mysql \
--backup-image=/backups/my.mbi_2206_1429 \
--backup-dir=/tmp/restore_`date +%d%m_%H%M` copy-back-and-apply-log
The command above assumes that the binary logs and relay logs on s2 and s3 have the same base name and are at the same location on the two servers. If these conditions are not met, you should use the --log-bin and --relay-log options to restore the binary log and relay log to their original file paths on s3. For example, if you know that on s3 the binary log's base name is s3-bin and the relay log's base name is s3-relay-bin, your restore command should look like:
mysqlbackup --defaults-file=/etc/my.cnf \
--datadir=/var/lib/mysql \
--backup-image=/backups/my.mbi_2206_1429 \
--log-bin=s3-bin --relay-log=s3-relay-bin \
--backup-dir=/tmp/restore_`date +%d%m_%H%M` copy-back-and-apply-log
Being able to restore the binary log and relay log to the right file paths makes the restore process easier; if that is impossible for some reason, see Rebuild the Failed Member to Rejoin as a New Member.
Restore the auto.cnf file for s3. To rejoin the replication group, the restored member must have the same server_uuid it used to join the group before. Supply the old server UUID by copying the auto.cnf file preserved in step 2 above into the data directory of the restored member, as shown in the sketch after this step.
If you cannot supply the failed member's original server_uuid to the restored member by restoring its old auto.cnf file, you must let the restored member join the group as a new member; see the instructions in Rebuild the Failed Member to Rejoin as a New Member below on how to do that.
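For example, assuming the file was preserved under /backups/s3-config as in the earlier sketch, that the data directory is /var/lib/mysql, and that the server runs as the mysql operating system user:
s3> cp /backups/s3-config/auto.cnf /var/lib/mysql/
s3> chown mysql:mysql /var/lib/mysql/auto.cnf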
Restore the mysqld-auto.cnf file for s3 (only required if s3 used persisted system variables). The persisted system variable settings (Section 7.1.9.3, “Persisted System Variables”) that were used to configure the failed member must be provided to the restored member. These settings are to be found in the mysqld-auto.cnf file of the failed server, which you should have preserved in step 2 above. Restore the file to the data directory of the restored server. See Restoring Persisted System Variables below on what to do if you do not have a copy of the file.
Start the restored server. For example, on Linux distributions that use systemd:
systemctl start mysqld
If the server you are restoring is a primary member, perform the steps described in Restoring a Primary Member before starting the restored server.
Restart Group Replication. Connect to the restarted s3 using, for example, a mysql client, and issue the following command:
mysql> START GROUP_REPLICATION;
Before the restored instance can become an online member of the group, it needs to apply any transactions that have happened to the group after the backup was taken; this is achieved using Group Replication's distributed recovery mechanism, and the process starts after the START GROUP_REPLICATION statement has been issued. To check the member status of the restored instance, issue:
mysql> SELECT member_host, member_port, member_state FROM performance_schema.replication_group_members;
+-------------+-------------+--------------+
| member_host | member_port | member_state |
+-------------+-------------+--------------+
| s1          | 3306        | ONLINE       |
| s2          | 3306        | ONLINE       |
| s3          | 3306        | RECOVERING   |
+-------------+-------------+--------------+
This shows that s3 is applying transactions to catch up with the group. Once it has caught up with the rest of the group, its member_state changes to ONLINE:
mysql> SELECT member_host, member_port, member_state FROM performance_schema.replication_group_members;
+-------------+-------------+--------------+
| member_host | member_port | member_state |
+-------------+-------------+--------------+
| s1          | 3306        | ONLINE       |
| s2          | 3306        | ONLINE       |
| s3          | 3306        | ONLINE       |
+-------------+-------------+--------------+
If the server you are restoring is a primary member, once it has gained synchrony with the group and become ONLINE, perform the steps described at the end of Restoring a Primary Member to revert the configuration changes you had made to the server before you started it.
The member has now been fully restored from the backup and functions as a regular member of the group.
Rebuild the Failed Member to Rejoin as a New Member.
Sometimes, the steps outlined above in Restoring a Failed Member cannot be carried out because, for example, the binary log or relay log is corrupted, or it is just missing from the backup. In such a situation, use the backup to rebuild the member, and then add it to the group as a new member. In the steps below, we assume the rebuilt member is named s3, like the failed member, and that it runs on the same host as the failed member:
Copy the backup of s2 onto the host for s3. The exact way to copy the backup depends on the operating system and tools available to you. In this example, we assume the hosts are both Linux servers and use SCP to copy the files between them:
s2/backups> scp my.mbi_2206_1429 s3:/backups
Restore the backup. Connect to the target host (the host for s3 in this case), and restore the backup using MySQL Enterprise Backup. Here are the steps:
Stop the corrupted server, if it is still running. For example, on Linux distributions that use systemd:
s3> systemctl stop mysqld
Preserve the configuration file mysqld-auto.cnf, if it is found in the corrupted server's data directory, by copying it to a safe location outside of the data directory. This is for preserving the server's persisted system variables (Section 7.1.9.3, “Persisted System Variables”), which are needed later.
Delete all contents in the data directory of s3. For example:
s3> rm -rf /var/lib/mysql/*
If the system variables innodb_data_home_dir, innodb_log_group_home_dir, and innodb_undo_directory point to any directories other than the data directory, those directories should also be made empty; otherwise, the restore operation fails.
Restore the backup of s2 onto the host of s3. With this approach, we are rebuilding s3 as a new member, for which we do not need and do not want to use the old binary and relay logs in the backup; therefore, if these logs have been included in your backup, exclude them using the --skip-binlog and --skip-relaylog options:
s3> mysqlbackup --defaults-file=/etc/my.cnf \
--datadir=/var/lib/mysql \
--backup-image=/backups/my.mbi_2206_1429 \
--backup-dir=/tmp/restore_`date +%d%m_%H%M` \
--skip-binlog --skip-relaylog \
copy-back-and-apply-log
If you have healthy binary logs and relay logs in the backup that you can transfer onto the target host without issue, it is recommended that you follow the easier procedure described in Restoring a Failed Member above.
Restore the mysqld-auto.cnf file for s3 (only required if s3 used persisted system variables). The persisted system variable settings (Section 7.1.9.3, “Persisted System Variables”) that were used to configure the failed member must be provided to the restored server. These settings are to be found in the mysqld-auto.cnf file of the failed server, which you should have preserved in step 2 above. Restore the file to the data directory of the restored server. See Restoring Persisted System Variables below on what to do if you do not have a copy of the file.
Do NOT restore the corrupted server's auto.cnf file to the data directory of the new member; when the rebuilt s3 joins the group as a new member, it is going to be assigned a new server UUID.
Start the restored server. For example, on Linux distributions that use systemd:
systemctl start mysqld
If the server you are restoring is a primary member, perform the steps described in Restoring a Primary Member before starting the restored server.
Reconfigure the restored member to join Group Replication. Connect to the restored server with a mysql client and reset the source and replica information with the following statements:
mysql> RESET BINARY LOGS AND GTIDS;
mysql> RESET REPLICA ALL;
For the restored server to be able to recover automatically using Group Replication's built-in mechanism for distributed recovery, configure the server's gtid_executed variable. To do this, use the backup_gtid_executed.sql file included in the backup of s2, which is usually restored under the restored member's data directory. Disable binary logging, use the backup_gtid_executed.sql file to configure gtid_executed, and then re-enable binary logging by issuing the following statements with your mysql client:
mysql> SET SQL_LOG_BIN=OFF;
mysql> SOURCE datadir/backup_gtid_executed.sql
mysql> SET SQL_LOG_BIN=ON;
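Optionally, verify that gtid_executed now reflects the GTID set from the backup (the exact value depends on your group):
mysql> SELECT @@GLOBAL.gtid_executed;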
Then, configure the Group Replication user credentials on the member using the SQL statements shown here:
mysql> CHANGE REPLICATION SOURCE TO SOURCE_USER='rpl_user', SOURCE_PASSWORD='password'
    ->   FOR CHANNEL 'group_replication_recovery';
Restart Group Replication. Issue the following command to the restored server with your mysql client:
mysql> START GROUP_REPLICATION;
Before the restored instance can become an online member of the group, it needs to apply any transactions that have happened to the group after the backup was taken; this is achieved using Group Replication's distributed recovery mechanism, and the process starts after the START GROUP_REPLICATION statement has been issued. To check the member status of the restored instance, issue:
mysql> SELECT member_host, member_port, member_state FROM performance_schema.replication_group_members;
+-------------+-------------+--------------+
| member_host | member_port | member_state |
+-------------+-------------+--------------+
| s3          | 3306        | RECOVERING   |
| s2          | 3306        | ONLINE       |
| s1          | 3306        | ONLINE       |
+-------------+-------------+--------------+
This shows that s3 is applying transactions to catch up with the group. Once it has caught up with the rest of the group, its member_state changes to ONLINE:
mysql> SELECT member_host, member_port, member_state FROM performance_schema.replication_group_members;
+-------------+-------------+--------------+
| member_host | member_port | member_state |
+-------------+-------------+--------------+
| s3          | 3306        | ONLINE       |
| s2          | 3306        | ONLINE       |
| s1          | 3306        | ONLINE       |
+-------------+-------------+--------------+
If the server you are restoring is a primary member, once it has gained synchrony with the group and become ONLINE, perform the steps described at the end of Restoring a Primary Member to revert the configuration changes you had made to the server before you started it.
The member has now been restored to the group as a new member.
Restoring Persisted System Variables.
mysqlbackup does not provide support for backing up or preserving persisted system variables (Section 7.1.9.3, “Persisted System Variables”); the file mysqld-auto.cnf is not included in a backup. To start the restored member with its persisted variable settings, you need to do one of the following:
Preserve a copy of the mysqld-auto.cnf file from the corrupted server, and copy it to the restored server's data directory.
Copy the mysqld-auto.cnf file from another member of the group into the restored server's data directory, if that member has the same persisted system variable settings as the corrupted member.
After the restored server is started and before you restart Group Replication, set all the system variables manually to their persisted values through a mysql client, as sketched below.
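As a minimal sketch of this last option, you can use SET PERSIST, which both applies the value and writes it to a new mysqld-auto.cnf file (a plain SET GLOBAL would apply it for the current server run only). The variable and value shown are purely illustrative; repeat the statement for each setting the failed member had persisted:
mysql> SET PERSIST group_replication_member_weight = 70;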
Restoring a Primary Member. If the restored member is a primary in the group, care must be taken to prevent writes to the restored database during the Group Replication distributed recovery process. Depending on how the group is accessed by clients, there is a possibility of DML statements being executed on the restored member once it becomes accessible on the network, prior to the member finishing its catch-up on the activities it has missed while off the group. To avoid this, before starting the restored server, configure the following system variables in the server option file:
group_replication_start_on_boot=OFF
super_read_only=ON
event_scheduler=OFF
These settings ensure that the member becomes read-only at startup, and that the event scheduler is turned off while the member catches up with the group during the distributed recovery process. Adequate error handling must also be provided on the clients, since they are unable to perform DML operations on the member being restored during this period.
Once the restoration process is fully completed and the restored member is synchronized with the rest of the group, you can revert these changes. First, restart the event scheduler using the statement shown here:
mysql> SET GLOBAL event_scheduler=ON;
After this, you should set the following system variables in the member's option file, so that they have the necessary values for the next time that the member is started:
group_replication_start_on_boot=ON
super_read_only=OFF
event_scheduler=ON