3.8.3 Adding a New Storage Server to an Eighth Rack Cluster
Perform the following steps to add a new Oracle Exadata X7 or later storage server to an existing Oracle Exadata X7 or later Eighth Rack.
- If configured, drop the PMEM cache and PMEM log.
cellcli -e drop pmemcache all
cellcli -e drop pmemlog all
- On the new storage server, drop the flash cache, flash log, and cell disks.
cellcli -e drop flashcache all
cellcli -e drop flashlog all
cellcli -e drop celldisk all
- On the new storage server, enable the eighthRack attribute.
cellcli -e alter cell eighthRack=true
- On the new storage server, create the cell disks.
cellcli -e create celldisk all
- On the new storage server, create the flash log.
cellcli -e create flashlog all
- If applicable, on the new storage server, create the PMEM log.
cellcli -e create pmemlog all
- On any of the existing storage servers, retrieve the value of the cell attribute flashcachemode.
cellcli -e list cell attributes flashcachemode
The flashcachemode attribute on the new storage server is set to WriteThrough by default. All storage servers should have the same flashcachemode attribute setting.
If the existing storage servers are using WriteBack mode, then change the flashcachemode attribute on the new storage server, as shown here:
cellcli -e alter cell flashcachemode=writeback
- On the new storage server, create the flash cache.
cellcli -e create flashcache all
- If the storage servers use PMEM cache, then retrieve the value of the cell attribute pmemcachemode.
cellcli -e list cell attributes pmemcachemode
The pmemcachemode attribute on the new storage server is set to WriteThrough by default. All storage servers should have the same pmemcachemode attribute setting.
If the existing storage servers are using WriteBack mode, then change the pmemcachemode attribute on the new storage server, as shown here:
cellcli -e alter cell pmemcachemode=writeback
- If the storage servers use PMEM cache, then, on the new storage server, create the PMEM cache.
cellcli -e create pmemcache all
- On any of the existing storage servers, obtain information on the grid disk configuration.
cellcli -e list griddisk attributes name,offset,size,cachingpolicy
- On the new storage server, create the grid disks. Repeat for each set of grid disks to match the configuration of the existing storage servers.
In the following command, replace the placeholder values with the corresponding values obtained in step 11.
cellcli -e CREATE GRIDDISK ALL HARDDISK PREFIX=matching_prefix_of_the_corresponding_existing_diskgroup, size=size_followed_by_G_or_T, cachingPolicy=\'value_from_command_above_for_this_disk_group\', comment=\"Cluster cluster_name diskgroup diskgroup_name\"
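Because the placeholder substitution is mechanical, the commands can be derived from the step 11 listing. The following is a minimal sketch that only prints the commands (it does not run cellcli); the disk group names, sizes, and caching policies in griddisk.lst are illustrative:

```shell
# Hypothetical output of step 11, saved from:
#   cellcli -e list griddisk attributes name,offset,size,cachingpolicy
# One representative disk per disk group is enough to derive prefix, size, and policy.
cat > griddisk.lst <<'EOF'
DATAC1_CD_00_celadm08 32M 2.88T default
RECOC1_CD_00_celadm08 2.912T 738G none
EOF

# Emit one CREATE GRIDDISK command per disk group prefix
awk '{
    prefix = $1; sub(/_CD_.*/, "", prefix)
    if (!(seen[prefix]++))
        printf "cellcli -e create griddisk all harddisk prefix=%s, size=%s, cachingPolicy=%s\n",
               prefix, $3, $4
}' griddisk.lst
```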
- On the new storage server, verify that the grid disks have the same configuration as the grid disks on the existing storage servers (by comparing with the information obtained in step 11).
cellcli -e list griddisk attributes name,offset,size,cachingpolicy
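The comparison with the existing servers can be automated by stripping the cell-specific name suffix and diffing the rest. A minimal sketch, assuming the listings have been saved to per-host files (the host names and values below are illustrative):

```shell
# Hypothetical listings saved from an existing cell and the new cell:
#   cellcli -e list griddisk attributes name,offset,size,cachingpolicy > <host>.lst
cat > celadm08.lst <<'EOF'
DATAC1_CD_00_celadm08 32M 2.88T default
RECOC1_CD_00_celadm08 2.912T 738G none
EOF
cat > celadm11.lst <<'EOF'
DATAC1_CD_00_celadm11 32M 2.88T default
RECOC1_CD_00_celadm11 2.912T 738G none
EOF

# Strip the trailing cell name from the grid disk name, then compare the rest
awk '{sub(/_[^_]+$/, "", $1); print}' celadm08.lst > existing.norm
awk '{sub(/_[^_]+$/, "", $1); print}' celadm11.lst > new.norm
if diff -q existing.norm new.norm >/dev/null; then
    echo "grid disk configuration matches"
else
    echo "grid disk configuration differs"
fi
```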
- (X2 to X8 servers only) If the environment has partition keys (pkeys) implemented, configure pkeys for the RDMA Network Fabric interfaces. Refer to step 6 from Implementing InfiniBand Partitioning across OVM RAC clusters on Exadata (My Oracle Support Doc ID 2075398.1) for this task.
- On the new storage server, identify the IP address for both ports for either InfiniBand Network Fabric or RoCE Network Fabric.
cellcli -e list cell attributes name,ipaddress1,ipaddress2
- Add the IP addresses from step 15 to the /etc/oracle/cell/network-config/cellip.ora file on every database server.
Perform these steps on any database server in the cluster:
cd /etc/oracle/cell/network-config
cp cellip.ora cellip.ora.orig
cp cellip.ora cellip.ora-bak
- Add the new entries to /etc/oracle/cell/network-config/cellip.ora-bak.
- Copy the edited file to the cellip.ora file on all database servers using the following command, where database_nodes refers to a file containing the names of each database server in the cluster, with each name on a separate line:
/usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
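As an illustration of the edit itself: each cellip.ora entry is a single cell="ip1;ip2" line per storage server, so adding the new cell is a one-line append. A minimal sketch, with hypothetical IP addresses for the existing cells:

```shell
# Hypothetical existing cellip.ora contents (two existing cells)
cat > cellip.ora-bak <<'EOF'
cell="192.168.17.231;192.168.17.232"
cell="192.168.17.233;192.168.17.234"
EOF

# Append the new cell's two RDMA Network Fabric IP addresses from step 15
echo 'cell="192.168.17.235;192.168.17.236"' >> cellip.ora-bak
cat cellip.ora-bak
```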
- Connect to any of the Oracle ASM instances and ensure the grid disks from the new storage server are discoverable.
SQL> set pagesize 30
SQL> set linesize 132
SQL> col path format a70
SQL> SELECT inst_id,path FROM gv$asm_disk WHERE header_status='CANDIDATE'
  2> ORDER BY inst_id,path;

   INST_ID PATH
---------- ----------------------------------------------------------------------
         1 o/192.168.17.235;192.168.17.236/DATAC1_CD_00_celadm11
         1 o/192.168.17.235;192.168.17.236/DATAC1_CD_01_celadm11
         1 o/192.168.17.235;192.168.17.236/DATAC1_CD_02_celadm11
         1 o/192.168.17.235;192.168.17.236/DATAC1_CD_03_celadm11
         1 o/192.168.17.235;192.168.17.236/DATAC1_CD_04_celadm11
         1 o/192.168.17.235;192.168.17.236/DATAC1_CD_05_celadm11
         1 o/192.168.17.235;192.168.17.236/RECOC1_CD_00_celadm11
         1 o/192.168.17.235;192.168.17.236/RECOC1_CD_01_celadm11
         1 o/192.168.17.235;192.168.17.236/RECOC1_CD_02_celadm11
         1 o/192.168.17.235;192.168.17.236/RECOC1_CD_03_celadm11
         1 o/192.168.17.235;192.168.17.236/RECOC1_CD_04_celadm11
         1 o/192.168.17.235;192.168.17.236/RECOC1_CD_05_celadm11
         2 o/192.168.17.235;192.168.17.236/DATAC1_CD_00_celadm11
         2 o/192.168.17.235;192.168.17.236/DATAC1_CD_01_celadm11
         2 o/192.168.17.235;192.168.17.236/DATAC1_CD_02_celadm11
         2 o/192.168.17.235;192.168.17.236/DATAC1_CD_03_celadm11
         2 o/192.168.17.235;192.168.17.236/DATAC1_CD_04_celadm11
         2 o/192.168.17.235;192.168.17.236/DATAC1_CD_05_celadm11
         2 o/192.168.17.235;192.168.17.236/RECOC1_CD_00_celadm11
         2 o/192.168.17.235;192.168.17.236/RECOC1_CD_01_celadm11
         2 o/192.168.17.235;192.168.17.236/RECOC1_CD_02_celadm11
         2 o/192.168.17.235;192.168.17.236/RECOC1_CD_03_celadm11
         2 o/192.168.17.235;192.168.17.236/RECOC1_CD_04_celadm11
         2 o/192.168.17.235;192.168.17.236/RECOC1_CD_05_celadm11
- Connect to one of the Oracle ASM instances and add the new disks to the existing disk groups.
SQL> ALTER DISKGROUP datac1 ADD DISK 'o/192.168.17.235;192.168.17.236/DATAC1*';
SQL> ALTER DISKGROUP recoc1 ADD DISK 'o/192.168.17.235;192.168.17.236/RECOC1*';
Note:
The rebalance operation triggered by adding the disks will run at the default Oracle Maximum Availability Architecture (MAA) best practice power (should be 4). If the application service level performance is not a concern, then consider increasing the power for a faster rebalance.
- Obtain a report of the number of disks per failure group. 6 disks per failure group are expected for High Capacity (HC) storage servers and 4 disks per failure group are expected for Extreme Flash (EF) storage servers.
SQL> SELECT d.group_number,dg.name,failgroup,mode_status,COUNT(*)
  2> FROM v$asm_disk d,v$asm_diskgroup dg
  3> WHERE d.group_number=dg.group_number
  4> AND failgroup_type='REGULAR'
  5> GROUP BY d.group_number,dg.name,failgroup,mode_status;

GROUP_NUMBER NAME                FAILGROUP            MODE_ST COUNT(*)
------------ ------------------- -------------------- ------- --------
           1 DATAC1              CELADM08             ONLINE         6
           1 DATAC1              CELADM09             ONLINE         6
           1 DATAC1              CELADM10             ONLINE         6
           1 DATAC1              CELADM11             ONLINE         6
           2 RECOC1              CELADM08             ONLINE         6
           2 RECOC1              CELADM09             ONLINE         6
           2 RECOC1              CELADM10             ONLINE         6
           2 RECOC1              CELADM11             ONLINE         6
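The expected-count check can also be scripted against a saved copy of the query result. A minimal sketch, assuming High Capacity servers (6 disks per failure group) and a hypothetical file failgroups.lst holding the disk group, failure group, and count columns:

```shell
# Hypothetical saved rows: "<diskgroup> <failgroup> <count>"
cat > failgroups.lst <<'EOF'
DATAC1 CELADM08 6
DATAC1 CELADM09 6
DATAC1 CELADM10 6
DATAC1 CELADM11 6
RECOC1 CELADM08 6
RECOC1 CELADM09 6
RECOC1 CELADM10 6
RECOC1 CELADM11 6
EOF

expected=6   # use 4 for Extreme Flash (EF) storage servers
bad=$(awk -v want="$expected" '$3 != want' failgroups.lst | wc -l)
if [ "$bad" -eq 0 ]; then
    echo "all failure groups have $expected disks"
else
    echo "$bad failure group(s) deviate from $expected disks"
fi
```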
- If Oracle Auto Service Request (ASR) alerting was set up on the existing storage servers, configure cell Oracle ASR alerting for the storage server being added.
- From any existing storage server, list the cell snmpsubscriber attribute.
CellCLI> LIST CELL ATTRIBUTES snmpsubscriber
- Apply the same snmpsubscriber attribute value to the new storage server by running the following command, replacing snmpsubscriber with the value from the previous command.
CellCLI> ALTER CELL snmpsubscriber=snmpsubscriber
Note:
In the snmpsubscriber value, enclose the host name or IP address in quotation marks if it contains non-alphanumeric characters. For example:
CellCLI> ALTER CELL snmpSubscriber=((host="asr-host.example.com",port=162,community=public,type=asr,asrmPort=16161))
- From any existing storage server, list the cell attributes required for configuring cell alerting.
CellCLI> LIST CELL ATTRIBUTES -
notificationMethod,notificationPolicy,mailServer,smtpToAddr, -
smtpFrom,smtpFromAddr,smtpUseSSL,smtpPort
- Apply the same values to the new storage server by running the following command, substituting the placeholders with the values found from the existing storage server.
CellCLI> ALTER CELL -
notificationMethod='notificationMethod', -
notificationPolicy='notificationPolicy', -
mailServer='mailServer', -
smtpToAddr='smtpToAddr', -
smtpFrom='smtpFrom', -
smtpFromAddr='smtpFromAddr', -
smtpUseSSL=smtpUseSSL, -
smtpPort=smtpPort
- From any existing storage server, list the cell