2.12.8 Clone Oracle Grid Infrastructure to the Replacement Database Server
This procedure describes how to clone Oracle Grid Infrastructure to the replacement database server.
In the following commands, working_server is a working database server, and replacement_server is the replacement database server. The commands in this procedure are run from a working database server as the Grid home owner. When the root user is needed to run a command, it is called out explicitly.
- Verify the hardware and operating system installation using the cluster verification utility (cluvfy).

$ cluvfy stage -post hwos -n replacement_server,working_server -verbose

The phrase Post-check for hardware and operating system setup was successful should appear at the end of the report. If the cluster verification utility fails to validate the storage on the replacement server, then you can ignore those messages.
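To capture the report and confirm the result non-interactively, you can redirect the output to a file and search for the success phrase. This is a minimal sketch; the log file location is arbitrary, and the exact wording of the phrase can vary between releases.

$ cluvfy stage -post hwos -n replacement_server,working_server -verbose \
    > /tmp/cluvfy_hwos.log 2>&1
$ grep "Post-check for hardware and operating system setup was successful" \
    /tmp/cluvfy_hwos.log

If the grep command prints the phrase, the post-check passed; otherwise, review the full log.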
- Verify peer compatibility.

$ cluvfy comp peer -refnode working_server -n replacement_server \
    -orainv oinstall -osdba dba | grep -B 3 -A 2 mismatched
The following is an example of the output:
Compatibility check: Available memory [reference node: dm01db02]

Node Name     Status                    Ref. node status          Comment
------------  ------------------------  ------------------------  ----------
dm01db01      31.02GB (3.2527572E7KB)   29.26GB (3.0681252E7KB)   mismatched
Available memory check failed

Compatibility check: Free disk space for "/tmp" [reference node: dm01db02]

Node Name     Status                    Ref. node status          Comment
------------  ------------------------  ------------------------  ----------
dm01db01      55.52GB (5.8217472E7KB)   51.82GB (5.4340608E7KB)   mismatched
Free disk space check failed
If the only failed components relate to physical memory, swap space, and disk space, then it is safe to continue.
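To scan the report quickly, one optional approach (a sketch, not part of the standard procedure) is to list the failed checks and exclude the categories that are safe to ignore; the exact message text can vary between cluvfy releases.

$ cluvfy comp peer -refnode working_server -n replacement_server \
    -orainv oinstall -osdba dba \
  | grep "check failed" | grep -Ev "memory|swap|disk space"

If this prints nothing, the only failed checks are in the ignorable categories.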
- Perform the requisite checks for adding the server.
- Ensure the GRID_HOME/network/admin/samples directory has permissions set to 750. A quick way to confirm this is shown in the sketch at the end of this step.
- Validate the addition of the database server.
Run the following command as the oracle user. The command prompts for the password of the root user.

$ cluvfy stage -pre nodeadd -n replacement_server -fixup -method root -verbose
Enter "ROOT" password:
If the only failed component is related to swap space, then it is safe to continue.
If the command returns an error, then set the following environment variable and rerun the command:
$ export IGNORE_PREADDNODE_CHECKS=Y
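To confirm the samples directory permissions mentioned earlier in this step, you can check them as the Grid home owner and correct them if needed. This is a minimal sketch; GRID_HOME stands for the actual Grid home path.

$ ls -ld GRID_HOME/network/admin/samples
$ chmod 750 GRID_HOME/network/admin/samples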
- Add the replacement database server to the cluster.
If you are using Oracle Grid Infrastructure release 12.1 or later, include the CLUSTER_NEW_NODE_ROLES attribute, as shown in the following example.

$ cd GRID_HOME/addnode
$ ./addnode.sh -silent "CLUSTER_NEW_NODES={replacement_server}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={replacement_server-vip}" \
  "CLUSTER_NEW_NODE_ROLES={hub}"
The second command causes Oracle Universal Installer to copy the Oracle Clusterware software to the replacement database server. A message similar to the following is displayed:
WARNING: A new inventory has been created on one or more nodes in this session.
However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at
'/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'dm01db01'.
If you do not register the inventory, you may not be able to update or patch the
products you installed.

The following configuration scripts need to be executed as the "root" user in
each cluster node:
/u01/app/oraInventory/orainstRoot.sh #On nodes dm01db01
/u01/app/12.1.0.2/grid/root.sh #On nodes dm01db01
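Optionally, before running the configuration scripts, you can confirm that the Grid home was copied to the replacement server. This is only a sanity check, not part of the documented procedure; GRID_HOME stands for the actual Grid home path.

$ ssh replacement_server "ls -d GRID_HOME/bin GRID_HOME/addnode"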
- Run the configuration scripts.

As the root user, first disable HAIP, then run the orainstRoot.sh and root.sh scripts on the replacement database server using the commands shown in the following example.

# export HAIP_UNSUPPORTED=true
# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
# GRID_HOME/root.sh
Note: Check the GRID_HOME/install/ log files for the output of the root.sh script.

If you are running Oracle Grid Infrastructure release 11.2, then the output file created by the script reports that the listener resource on the replaced database server failed to start. This is the expected output.

/u01/app/11.2.0/grid/bin/srvctl start listener -n dm01db01 \
...Failed
/u01/app/11.2.0/grid/perl/bin/perl \
  -I/u01/app/11.2.0/grid/perl/lib \
  -I/u01/app/11.2.0/grid/crs/install \
  /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
After the scripts are run, the following message is displayed:
The Cluster Node Addition of /u01/app/12.1.0.2/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
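To review the root.sh output mentioned in the note above, list the GRID_HOME/install/ directory on the replacement server and examine the most recent log file. The root_*.log name pattern shown here is typical but may differ between releases.

# ls -ltr GRID_HOME/install/
# tail -100 GRID_HOME/install/root_*.log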
- Check the cluster.
$ GRID_HOME/bin/crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
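As an additional, optional check (not part of the documented steps), you can confirm that the replacement server is now listed as an active cluster member.

$ GRID_HOME/bin/olsnodes -n -s

Each node, including the replacement server, should be reported with the status Active.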
- If you are running Oracle Grid Infrastructure release 11.2, then re-enable the listener resource.
Run the following commands on the replacement database server.
# GRID_HOME/bin/srvctl enable listener -l LISTENER \
  -n replacement_server
# GRID_HOME/bin/srvctl start listener -l LISTENER \
  -n replacement_server
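To confirm the listener after re-enabling it, you can check its status; this is an optional verification, not part of the documented steps.

$ GRID_HOME/bin/srvctl status listener -l LISTENER -n replacement_server

The output should report that the listener is running on the replacement server.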
- Start the disk groups on the replacement server.
- Check disk group status.
In the following example, notice that disk groups are offline on the replacement server.
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATAC1.dg
               ONLINE  ONLINE       node1                    STABLE
               OFFLINE OFFLINE      node2                    STABLE
ora.DBFS_DG.dg
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1                    STABLE
               ONLINE  ONLINE       node2                    STABLE
ora.RECOC1.dg
               ONLINE  ONLINE       node1                    STABLE
               OFFLINE OFFLINE      node2                    STABLE
- For each disk group that is offline, run the START DISKGROUP command from either the original server or the replacement server.

$ srvctl start diskgroup -diskgroup dgname
- Check disk group status again to confirm that the disk groups are now online on the replacement server.
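One optional way to limit the status listing to disk group resources (a sketch, assuming the standard ora.diskgroup.type resource type) is:

$ GRID_HOME/bin/crsctl stat res -t -w "TYPE = ora.diskgroup.type"

All disk groups should now show ONLINE on both the working servers and the replacement server.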