5.20.2.2 Recovering a Management Domain and Its User Domains (Releases 12.2.1.1.0 and Later)
You can recover a management domain from a snapshot-based backup when severe disaster conditions damage the management domain, or when the server hardware is replaced to such an extent that it amounts to new hardware.
To use this recovery method, it is assumed that you have previously completed the steps in Backing up the Management Domain dom0 Using Snapshot-Based Backup.
- Prepare an NFS server to host the backup archive mybackup.tar.bz2.

  The NFS server must be accessible by IP address. For example, on an NFS server with the IP address nfs_ip, where the directory /export is exported from NFS mounts, put the mybackup.tar.bz2 file in the /export directory.

- Restart the recovery target system using the diagnostics.iso file.

  See Booting a Server using the Diagnostic ISO File in Oracle Exadata System Software User's Guide.

- Log in to the diagnostics shell as the root user.

  When prompted, enter the diagnostics shell. For example:

    Choose from following by typing letter in '()':
    (e)nter interactive diagnostics shell. Must use credentials
        from Oracle support to login (reboot or power cycle to
        exit the shell),
    (r)estore system from NFS backup archive,

  Type e to enter the diagnostics shell and log in as the root user.

  If prompted, log in to the system as the root user. If you are prompted for the root user password and do not have it, then contact Oracle Support Services.

- If required, use /opt/MegaRAID/storcli/storcli64 (or /opt/MegaRAID/MegaCli/MegaCli64 for releases earlier than Oracle Exadata System Software 19c) to configure the disk controller to set up the disks.

- Remove the logical volumes, the volume group, and the physical volume, in case they still exist after the disaster.
  # lvm vgremove VGExaDb --force
  # lvm pvremove /dev/sda2 --force
- Remove the existing partitions and clean up the drive.
  # parted
  GNU Parted 2.1
  Using /dev/sda
  Welcome to GNU Parted! Type 'help' to view a list of commands.
  (parted) rm 1
  [12064.253824] sda: sda2
  (parted) rm 2
  [12070.579094] sda:
  (parted) q

  # dd if=/dev/zero of=/dev/sda bs=64M count=2
- Create the two partitions on /dev/sda.

  - Get the end sector for the disk /dev/sda from a running dom0 and store it in a variable:

    # end_sector_logical=$(parted -s /dev/sda unit s print|perl -ne '/^Disk\s+\S+:\s+(\d+)s/ and print $1')
    # end_sector=$( expr $end_sector_logical - 34 )

    The values for the start and end sectors in the commands below were taken from an existing management domain. Because these values can change over time, it is recommended that you check them on an existing management domain using the following command:

    # parted -s /dev/sda unit s print

  - Create the boot partition, /dev/sda1.

    # parted -s /dev/sda mklabel gpt mkpart primary 64s 1048639s set 1 boot on

  - Create the partition that will hold the LVMs, /dev/sda2.

    # parted -s /dev/sda mkpart primary 1048640s 3509759966s set 2 lvm on
- Use the /sbin/lvm command to re-create the logical volumes and mkfs to create the file systems.

  - Create the physical volume and the volume group.

    # lvm pvcreate /dev/sda2
    # lvm vgcreate VGExaDb /dev/sda2

  - Create the logical volume for the file system that will contain the / (root) directory, and label it.

    # lvm lvcreate -n LVDbSys3 -L30G VGExaDb
    # mkfs -t ext4 -b 4096 /dev/VGExaDb/LVDbSys3
    # e2label /dev/VGExaDb/LVDbSys3 DBSYSOVS

  - Create the logical volume for the swap directory, and label it.

    # lvm lvcreate -n LVDbSwap1 -L24G VGExaDb
    # mkswap -L SWAP /dev/VGExaDb/LVDbSwap1

  - Create the logical volume for the backup partition, and build a file system on top of it.

    # lvm lvcreate -n LVDbSys2 -L30G VGExaDb
    # mkfs -t ext4 -b 4096 /dev/VGExaDb/LVDbSys2

  - Create the logical volume for the reserved partition.

    # lvm lvcreate -n LVDoNotRemoveOrUse -L1G VGExaDb

    Note: Do not create any file system on this logical volume.

  - Create the logical volume for the guest storage repository.

    # lvm lvcreate -l 100%FREE -n LVDbExaVMImages VGExaDb

  - Create a file system on the /dev/sda1 partition, and label it.

    In the mkfs.ext3 command below, the -I 128 option is needed to set the inode size to 128.

    # mkfs.ext3 -I 128 /dev/sda1
    # tune2fs -c 0 -i 0 /dev/sda1
    # e2label /dev/sda1 BOOT
- Create mount points for all the partitions, and mount the respective partitions.

  For example, if /mnt is used as the top-level directory, the mounted list of partitions might look like:

  - /dev/VGExaDb/LVDbSys3 on /mnt
  - /dev/sda1 on /mnt/boot

  The following example mounts the root file system, and creates two mount points:

  # mount /dev/VGExaDb/LVDbSys3 /mnt -t ext4
  # mkdir /mnt/boot
  # mount /dev/sda1 /mnt/boot -t ext3
- Bring up the network on eth0 and assign the host's IP address and netmask to it.

  # ifconfig eth0 ip_address_for_eth0 netmask netmask_for_eth0 up
  # route add -net 0.0.0.0 netmask 0.0.0.0 gw gateway_ip_address
- Mount the NFS server holding the backups.
  # mkdir -p /root/mnt
  # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/location_of_backup /root/mnt
- From the backup which was created in Backing up the Management Domain dom0 Using Snapshot-Based Backup, restore the root directory (/) and the boot file system.

  # tar -pjxvf /root/mnt/backup-of-root-and-boot.tar -C /mnt
- Unmount the restored /dev/sda1 partition, and remount it on /boot.

  # umount /mnt/boot
  # mkdir -p /boot
  # mount /dev/sda1 /boot -t ext3
- Set up the grub boot loader using the command below:

  # grub --device-map=/boot/grub/device.map << DOM0_GRUB_INSTALL
  root (hd0,0)
  setup (hd0)
  quit
  DOM0_GRUB_INSTALL
- Unmount the /boot partition.

  # umount /boot
- Detach the diagnostics.iso file.

  Using the ILOM Web interface, navigate to the Storage Devices dialog and click Disconnect.

  The Storage Devices dialog is the interface that you used earlier to attach the diagnostics.iso image. See Booting a Server using the Diagnostic ISO File in Oracle Exadata System Software User's Guide.

- Check the restored /etc/fstab file and comment out any reference to /EXAVMIMAGES.

  # cd /mnt/etc

  Comment out any line that references /EXAVMIMAGES.
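The manual edit above can also be scripted. This is a minimal sketch, assuming GNU sed; it builds a small sample file here so it can run anywhere, whereas on the recovery target you would run the sed command against /mnt/etc/fstab:

```shell
# Sample fstab standing in for the restored /mnt/etc/fstab (assumption:
# entries shown are illustrative, not taken from a real system)
fstab=$(mktemp)
printf '%s\n' \
  '/dev/mapper/VGExaDb-LVDbExaVMImages /EXAVMIMAGES ocfs2 defaults 0 0' \
  'LABEL=BOOT /boot ext3 defaults,nodev 0 0' > "$fstab"

# Prefix '#' to every uncommented line that references /EXAVMIMAGES
# (assumption: GNU sed for in-place editing with -i)
sed -i '/^[^#].*\/EXAVMIMAGES/s/^/#/' "$fstab"

cat "$fstab"
```

Only the /EXAVMIMAGES entry gains a leading `#`; lines that are already commented are left untouched because the address pattern requires a non-`#` first character.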
. - Restart the system.
# shutdown -r now
This completes the restoration procedure for the management domain (dom0).
- Convert to Eighth Rack, if required.
If the recovery is on an Oracle Exadata Eighth Rack, then perform the procedure described in Configuring Oracle Exadata Database Machine Eighth Rack Oracle Linux Database Server After Recovery.
- When the server comes back up, build an OCFS2 file system on the LVDbExaVMImages logical volume.

  # mkfs -t ocfs2 -L ocfs2 -T vmstore --fs-features=local /dev/VGExaDb/LVDbExaVMImages --force
- Mount the OCFS2 partition on /EXAVMIMAGES.

  # mount -t ocfs2 /dev/VGExaDb/LVDbExaVMImages /EXAVMIMAGES
- In /etc/fstab, uncomment the references to /EXAVMIMAGES and /dev/mapper/VGExaDb-LVDbExaVMImages, which you commented out earlier.

- Mount the backup NFS server that holds the storage repository (/EXAVMIMAGES) backup to restore the /EXAVMIMAGES file system.

  # mkdir -p /root/mnt
  # mount -t nfs -o ro,intr,soft,proto=tcp,nolock nfs_ip:/location_of_backup /root/mnt
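The earlier fstab edit, uncommenting the /EXAVMIMAGES entries, can likewise be sketched as a one-liner. This sketch assumes GNU sed and uses a sample file in place of /etc/fstab:

```shell
# Sample fstab with the /EXAVMIMAGES entry still commented out
# (assumption: illustrative contents, not from a real system)
fstab=$(mktemp)
printf '%s\n' \
  '#/dev/mapper/VGExaDb-LVDbExaVMImages /EXAVMIMAGES ocfs2 defaults 0 0' \
  'LABEL=BOOT /boot ext3 defaults,nodev 0 0' > "$fstab"

# Strip the leading '#' from commented lines that reference /EXAVMIMAGES
# (assumption: GNU sed for in-place editing with -i)
sed -i '/EXAVMIMAGES/s/^#//' "$fstab"

cat "$fstab"
```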
- Restore the /EXAVMIMAGES file system.

  To restore all user domains, use this command:

  # tar -Spxvf /root/mnt/backup-of-exavmimages.tar -C /EXAVMIMAGES

  To restore a single user domain from the backup, use the following command instead:

  # tar -Spxvf /root/mnt/backup-of-exavmimages.tar -C /EXAVMIMAGES EXAVMIMAGES/<user-domain-name-to-be-restored>
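Before extracting, it can be worth listing the archive to confirm that the expected GuestImages paths are present. A minimal sketch, assuming GNU tar; it builds a tiny sample archive here, whereas on the real system you would list /root/mnt/backup-of-exavmimages.tar directly:

```shell
# Build a toy backup archive with one guest directory (assumption:
# the guest name db01vm01 and the vm.cfg contents are made up)
work=$(mktemp -d)
mkdir -p "$work/EXAVMIMAGES/GuestImages/db01vm01"
echo 'name = "db01vm01"' > "$work/EXAVMIMAGES/GuestImages/db01vm01/vm.cfg"
tar -C "$work" -cf "$work/backup-of-exavmimages.tar" EXAVMIMAGES

# List the archive entries without extracting anything
tar -tf "$work/backup-of-exavmimages.tar"
```

The listed paths are the same relative paths you would pass to tar when restoring a single user domain.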
- Bring up each user domain.
# xm create /EXAVMIMAGES/GuestImages/user_domain_hostname/vm.cfg
At this point all the user domains should come up along with Oracle Grid Infrastructure and the Oracle Database instances.
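When many guests share one dom0, the xm create step above can be wrapped in a loop. This is a sketch only: it builds a mock repository so the loop can run anywhere, and it echoes the commands rather than invoking xm (on a real dom0, point BASE at /EXAVMIMAGES/GuestImages and call xm directly):

```shell
# Mock storage repository with two guests (assumption: guest names are
# made up; one vm.cfg per guest directory, as on a real dom0)
BASE=$(mktemp -d)
mkdir -p "$BASE/db01vm01" "$BASE/db01vm02"
touch "$BASE/db01vm01/vm.cfg" "$BASE/db01vm02/vm.cfg"

# Start every guest whose vm.cfg exists under the repository
for cfg in "$BASE"/*/vm.cfg; do
    [ -f "$cfg" ] || continue
    echo xm create "$cfg"   # on a real dom0: xm create "$cfg"
done
```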