10 Managing Storage
Understand the storage options and how to manage storage for your Oracle Database Appliance deployment.
- About Managing Storage
  Understand Oracle Database Appliance storage options.
- About Managing Oracle ASM Disks
  Understand the Oracle ASM disk management features that Oracle Database Appliance supports.
- Managing Storage on Single-Node Systems
  Understand the storage options for your Oracle Database Appliance X11-S and X11-L systems.
- Managing Storage on High-Availability Systems
  Understand the storage for your Oracle Database Appliance X11-HA system.
About Managing Storage
Understand Oracle Database Appliance storage options.
Oracle Database Appliance uses raw storage to protect data in the following ways:
- Fast Recovery Area (FRA) backup. FRA is a storage area (a directory on disk or an Oracle ASM disk group) that contains redo logs, the control file, archived logs, backup pieces and copies, and flashback logs.
- Mirroring. Double or triple mirroring provides protection against mechanical issues.
The amount of available storage is determined by the location of the FRA backup (external or internal) and whether double or triple mirroring is used. External NFS storage is supported for online backups, data staging, or additional database files.
Oracle Database Appliance X11-L and X11-HA models provide storage expansion options from the base configuration. In addition, on Oracle Database Appliance X11-HA multi-node platforms, you can add an optional storage expansion shelf.
The redundancy level for FLASH is based on the DATA and RECO selection. If you choose High redundancy (triple mirroring), then FLASH is also High redundancy.
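The mirroring arithmetic above can be sketched as follows. This is an illustrative calculation only, not appliance tooling: the `usable_tb` helper and its name are assumptions, and it simply divides raw capacity by the number of data copies (two for double mirroring, three for triple mirroring).

```python
# Illustrative sketch: usable capacity under double vs. triple mirroring.
# Double mirroring keeps 2 copies of every extent, triple keeps 3, so the
# usable space is the raw capacity divided by the number of copies.
def usable_tb(raw_tb: float, mirroring: str) -> float:
    copies = {"double": 2, "triple": 3}[mirroring]
    return raw_tb / copies

# Example: 13.6 TB raw storage (2 x 6.8 TB NVMe disks)
print(usable_tb(13.6, "double"))  # 6.8
print(usable_tb(13.6, "triple"))
```

For the same raw capacity, choosing triple mirroring trades a third of the usable space for tolerance of a second simultaneous failure.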
Parent topic: Managing Storage
About Managing Oracle ASM Disks
Understand the Oracle ASM disk management features that Oracle Database Appliance supports.
Oracle Database Appliance enables you to manage your Oracle ASM disks.
Bringing Oracle ASM Disk Groups Online Automatically
Oracle Database Appliance periodically checks the status of Oracle ASM disks in disk groups. If any Oracle ASM disk is OFFLINE due to transient disk errors, then Oracle Database Appliance attempts to bring the disk ONLINE.
Optimizing Oracle ASM Disk Group Rebalance Operations
The following command options set custom threshold limits for rebalance monitoring of Oracle ASM disks:
odacli modify-agentconfig-parameters -n ASMRM_CPU_RQ -v 50 -d "CPU RUN QUEUE THRESHOLD" -u
odacli modify-agentconfig-parameters -n ASMRM_MAX_HDD_DISK_RQ -v 2 -d "HDD DISK QUEUE THRESHOLD" -u
odacli modify-agentconfig-parameters -n ASMRM_MAX_SSD_DISK_RQ -v 32 -d "SSD DISK QUEUE THRESHOLD" -u
odacli modify-agentconfig-parameters -n ASMRM_MAX_NVME_DISK_RQ -v 50 -d "NVME DISK QUEUE THRESHOLD" -u
You can monitor rebalance operations using the odacli describe-schedule -i schedule_id and odacli list-scheduled-executions commands.
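The four threshold settings above follow one command pattern; the sketch below assembles them from a mapping. The parameter names, values, and descriptions come from the examples above; the `build_commands` helper itself is hypothetical and only builds the command strings, it does not run odacli.

```python
# Illustrative sketch: assemble the odacli modify-agentconfig-parameters
# commands shown above from a mapping of parameter name -> (value, description).
thresholds = {
    "ASMRM_CPU_RQ": (50, "CPU RUN QUEUE THRESHOLD"),
    "ASMRM_MAX_HDD_DISK_RQ": (2, "HDD DISK QUEUE THRESHOLD"),
    "ASMRM_MAX_SSD_DISK_RQ": (32, "SSD DISK QUEUE THRESHOLD"),
    "ASMRM_MAX_NVME_DISK_RQ": (50, "NVME DISK QUEUE THRESHOLD"),
}

def build_commands(params: dict) -> list:
    # -n names the parameter, -v sets the value, -d describes it, -u updates it
    return [
        f'odacli modify-agentconfig-parameters -n {name} -v {value} -d "{desc}" -u'
        for name, (value, desc) in params.items()
    ]

for cmd in build_commands(thresholds):
    print(cmd)
```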
Parent topic: Managing Storage
Managing Storage on Single-Node Systems
Understand the storage options for your Oracle Database Appliance X11-S and X11-L systems.
- About Storage on Oracle Database Appliance X11-S and X11-L
  Understand the storage for your Oracle Database Appliance single-node system.
- Adding Small Form Factor (SFF) NVMe Storage Disks
  Depending on the available drives, you can expand Oracle Database Appliance X11-L storage to add Small Form Factor (SFF) NVMe disks or replace existing NVMe disks.
- Adding Add-in-Card (AIC) NVMe Storage Disks
  You can expand Oracle Database Appliance X11-L storage with two or four Add-in-Card (AIC) NVMe disks. Oracle Database Appliance X11-L supports a maximum of four AICs.
- Replacing Small Form Factor (SFF) NVMe Storage Disks
  Understand how you can replace existing SFF NVMe disks on Oracle Database Appliance.
Parent topic: Managing Storage
About Storage on Oracle Database Appliance X11-S and X11-L
Understand the storage for your Oracle Database Appliance single-node system.
Oracle Database Appliance X11-S has two 6.8 TB NVMe disks that host the DATA and RECO disk groups. Each disk has ten partitions that you can divide between DATA and RECO for Oracle ASM storage. By default, DATA uses eight partitions and RECO uses two. The storage capacity is fixed and cannot be expanded.
Oracle Database Appliance X11-L supports two NVMe disk form factors: Small Form Factor (SFF) and Add-in-Card (AIC). Both form factors provide 6.8 TB of storage capacity: an SFF disk is a single disk of 6.8 TB capacity, whereas an AIC has two NVMe disks of 3.4 TB each, with a combined storage capacity of 6.8 TB. The default configuration for Oracle Database Appliance X11-L is two 6.8 TB NVMe disks that host the DATA and RECO disk groups.
When you first deploy and configure X11-L in this release, you can set up X11-L storage in multiples of two NVMe or AIC drives, that is, 2, 4, or 6 disks, up to a maximum of 8 disks.
Oracle Database Appliance X11-L supports four SFF NVMe disks and four AIC NVMe disks. You must populate all four SFF NVMe disks before you add AIC NVMe disks to the system.
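The default 80:20 partition split described above can be checked with a little arithmetic. This is an illustrative sketch, not appliance tooling: the `split_tb` helper is an assumption, and it only shows how eight of ten partitions map 80% of a disk's raw capacity to DATA and 20% to RECO, before mirroring is applied.

```python
# Illustrative sketch: per-disk DATA/RECO share under the default split of
# 8 DATA partitions and 2 RECO partitions out of 10 total (80:20).
DISK_TB = 6.8
DATA_PARTS, RECO_PARTS, TOTAL_PARTS = 8, 2, 10

def split_tb(disk_tb: float = DISK_TB):
    # Each partition is an equal slice of the disk's raw capacity.
    data = disk_tb * DATA_PARTS / TOTAL_PARTS
    reco = disk_tb * RECO_PARTS / TOTAL_PARTS
    return data, reco

data, reco = split_tb()
print(data, reco)  # roughly 5.44 TB for DATA, 1.36 TB for RECO per disk
```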
The following table describes the NVMe storage configurations and storage expansion options for single-node systems.
Table 10-1 Storage Options for Oracle Database Appliance X11-S and X11-L
Configuration | Oracle Database Appliance X11-S | Oracle Database Appliance X11-L |
---|---|---|
Base Configuration |
2 x 6.8 TB NVMe = 13.6 TB NVMe |
2 x 6.8 TB NVMe = 13.6 TB NVMe |
Storage addition options |
None |
6 x 6.8 TB NVMe storage drives, for a total storage of 54.4 TB NVMe. You must populate all four SFF NVMe disks before you add AIC NVMe disks to the system. For the additional two SFF NVMe disks, order the following: Qty: 1 (Two 6.8 TB 2.5-inch NVMe PCIe SFF SSDs with marlin bracket for Oracle Database Appliance X11-L). For the additional two or four NVMe AIC SSDs (these are PCIe NVMe flash cards that require cover removal to install), order the following: Qty: 1 for two, Qty: 2 for four (Two 6.8 TB NVMe PCIe cards for Oracle Database Appliance X11-L). |
Parent topic: Managing Storage on Single-Node Systems
Adding Small Form Factor (SFF) NVMe Storage Disks
Depending on the available drives, you can expand Oracle Database Appliance X11-L storage to add Small Form Factor (SFF) NVMe disks or replace existing NVMe disks.
Use the ODAADMCLI commands to perform appliance storage maintenance tasks, including performing storage diagnostics and collecting diagnostic logs for storage components.
Preparing for a Storage Upgrade
Review and perform these best practices before adding storage.
- Check the disk health of the existing storage disks.
  # odaadmcli show disk
- Run the odaadmcli show disk and asmcmd lsdsk -p commands to view and review the storage disk information in OAKD and Oracle Automatic Storage Management (Oracle ASM).
  # odaadmcli show disk
  # asmcmd lsdsk -p
- Use Oracle ORAchk to confirm Oracle ASM and Oracle Clusterware health.
Adding Small Form Factor (SFF) NVMe Storage Disks
The default configuration for Oracle Database Appliance X11-S or X11-L includes two NVMe disks. You cannot expand storage for Oracle Database Appliance X11-S.
For Oracle Database Appliance X11-L, you can expand storage by adding two SFF NVMe disks followed by two or four Add-in-Cards (AIC).
Important: You must populate all four SFF slots before adding AIC.
WARNING: Pulling a drive before powering it off will crash the kernel, which can lead to data corruption. Do not pull a drive while its LED is amber or green. When you need to replace an NVMe drive, use the software to power off the drive before pulling it from the slot. If you have more than one disk to replace, complete the replacement of one disk before starting the replacement of the next disk.
Follow these steps to add SFF NVMe disks:
- Before adding the NVMe disks, ensure that the current disks are online in oakd and Oracle ASM. Otherwise, the prechecks fail. For example, for a 2-disk expansion to slots 2 and 3, the disks in slots 0 and 1 must be online in Oracle ASM and oakd.
- Insert each disk one at a time in the appropriate slot and wait for the disk to power on.
- The disk automatically powers on when you insert it in the slot. Wait for one minute, and then check the disk status. If the disk is in the ON state, then you need not power on the disk manually. If the disk status shows the disk in the OFF state, then power on the disk manually, and then check the status again.
  # odaadmcli power disk status slot_number
  # odaadmcli power disk on slot_number
  For example, to add two (2) NVMe disks, insert the disks in slots 2 and 3.
  # odaadmcli power disk status pd_02
  # odaadmcli power disk on pd_02
  # odaadmcli power disk status pd_03
  # odaadmcli power disk on pd_03
- Repeat steps 2 and 3 for each disk to be added.
- Run the odaadmcli expand storage command to add the new storage disks.
  Note: You must run this step to add the storage disks. Otherwise, the newly-added disks are not visible to OAKD and hence do not display when you run the odaadmcli show disk or odaadmcli show storage commands. The newly-added disks are recognized by OAKD after the odaadmcli expand storage command completes successfully.
  # odaadmcli expand storage -ndisk number_of_disks
  For example, to add two (2) NVMe drives:
  # odaadmcli expand storage -ndisk 2
  Running precheck, it may take a few minutes.
  Precheck passed.
  Check the progress of expansion of storage by executing 'odaadmcli show disk'
  Waiting for expansion to finish. It may take several minutes to complete depending upon the number of disks being expanded
- Check the status of the new disks in OAKD with the odaadmcli show disk command. The disks must have the status Online and Good in OAKD.
  # odaadmcli show disk
  NAME   PATH          TYPE  STATE   STATE_DETAILS
  pd_00  /dev/nvme0n1  NVD   ONLINE  Good
  pd_01  /dev/nvme1n1  NVD   ONLINE  Good
  pd_02  /dev/nvme3n1  NVD   ONLINE  Good
  pd_03  /dev/nvme2n1  NVD   ONLINE  Good
- Verify that the disks in slots 2 and 3 are added to Oracle Automatic Storage Management (Oracle ASM) as follows. The new disks in Oracle ASM must be in the CACHED MEMBER ONLINE NORMAL state.
  - Run asm_script to verify that the disks in slots 2 and 3 are added to Oracle ASM. Verify that both disks are successfully added (CACHED and MEMBER). Following is an example of the default configuration of 80:20, where eight partitions (p1 to p8) are part of the DATA disk group and two partitions (p9 and p10) are part of the RECO disk group.
    # su gridUser /opt/oracle/oak/bin/stordiag/asm_script.sh 0 6
    # su grid /opt/oracle/oak/bin/stordiag/asm_script.sh 0 6
    SQL*Plus: Release 19.0.0.0.0 - Production on Wed Dec 11 08:26:05 2024
    Version 19.25.0.0.0
    Copyright (c) 1982, 2024, Oracle. All rights reserved.
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.25.0.0.0
    PATH                         NAME                     GROUP_NUMBER STATE  MODE_ST MOUNT_S HEADER_STATU
    AFD:NVD_S02_S6UENA0TC001P1   NVD_S02_S6UENA0TC001P1   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P10  NVD_S02_S6UENA0TC001P10  2 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P2   NVD_S02_S6UENA0TC001P2   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P3   NVD_S02_S6UENA0TC001P3   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P4   NVD_S02_S6UENA0TC001P4   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P5   NVD_S02_S6UENA0TC001P5   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P6   NVD_S02_S6UENA0TC001P6   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P7   NVD_S02_S6UENA0TC001P7   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P8   NVD_S02_S6UENA0TC001P8   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S02_S6UENA0TC001P9   NVD_S02_S6UENA0TC001P9   2 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P1   NVD_S03_S6UENA0TC001P1   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P10  NVD_S03_S6UENA0TC001P10  2 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P2   NVD_S03_S6UENA0TC001P2   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P3   NVD_S03_S6UENA0TC001P3   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P4   NVD_S03_S6UENA0TC001P4   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P5   NVD_S03_S6UENA0TC001P5   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P6   NVD_S03_S6UENA0TC001P6   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P7   NVD_S03_S6UENA0TC001P7   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P8   NVD_S03_S6UENA0TC001P8   1 NORMAL ONLINE CACHED MEMBER
    AFD:NVD_S03_S6UENA0TC001P9   NVD_S03_S6UENA0TC001P9   2 NORMAL ONLINE CACHED MEMBER
    SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.25.0.0.0
    [root@node1 ~]#
- Use the odaadmcli show validation storage errors command to view hard storage errors. Hard errors include having the wrong type of disk inserted into a particular slot, an invalid disk model, or an incorrect disk size.
  # odaadmcli show validation storage errors
- Use the odaadmcli show validation storage failures command to view soft validation errors. A typical soft disk error would be an invalid version of the disk firmware.
  # odaadmcli show validation storage failures
- Confirm that the oak_storage_conf.xml file shows the number of disks added. For example, if you added 2 disks to the base configuration, then the oak_storage_conf.xml file must show numberOfDisks as 4.
  # cat /opt/oracle/oak/conf/oak_storage_conf.xml
  <!-- This file is created by the Oracle Database Appliance software as part of system provisioning based on system provisioning requests. Values of element nodes can be changed by OAK in response to storage configuration change operation. DO NOT EDIT THIS FILE. -->
  <CometConfiguration>
    <OakStorageConfigInfo type="string" dimension="vector" readonly="true" required="true" default="">
      <!-- Number of disks part of OAK -->
      <numberOfDisks>4</numberOfDisks>
      <!-- Number of partitions per disk part of DATA diskgroup in multiple partition scheme -->
      <!-- Number of partitions per disk part of RECO diskgroup in multiple partition scheme -->
      <!-- are derived from number of partitions per disk which are part of ASM DATA diskgroup -->
      <numOfDataDiskPartitionInAsm>8</numOfDataDiskPartitionInAsm>
    </OakStorageConfigInfo>
  </CometConfiguration>
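The final confirmation step above reads numberOfDisks from oak_storage_conf.xml. As a minimal sketch (not part of the appliance tooling), the same check can be scripted with the Python standard library; the `number_of_disks` helper and the trimmed sample XML below are illustrative, not the full file.

```python
# Illustrative sketch: read numberOfDisks from oak_storage_conf.xml content
# like the example shown above, using only the standard library XML parser.
import xml.etree.ElementTree as ET

SAMPLE_XML = """\
<CometConfiguration>
  <OakStorageConfigInfo type="string" dimension="vector" readonly="true" required="true" default="">
    <numberOfDisks>4</numberOfDisks>
    <numOfDataDiskPartitionInAsm>8</numOfDataDiskPartitionInAsm>
  </OakStorageConfigInfo>
</CometConfiguration>
"""

def number_of_disks(xml_text: str) -> int:
    root = ET.fromstring(xml_text)
    # findtext walks to the first matching element and returns its text
    return int(root.findtext(".//numberOfDisks"))

print(number_of_disks(SAMPLE_XML))  # 4
```

In practice you would pass the contents of /opt/oracle/oak/conf/oak_storage_conf.xml instead of the sample string.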
Parent topic: Managing Storage on Single-Node Systems
Adding Add-in-Card (AIC) NVMe Storage Disks
You can expand Oracle Database Appliance X11-L storage with two or four Add-in-Card (AIC) NVMe disks. Oracle Database Appliance X11-L supports a maximum of four AICs.
Use the ODAADMCLI commands to perform appliance storage maintenance tasks, including performing storage diagnostics and collecting diagnostic logs for storage components.
Preparing for a Storage Upgrade
Review and perform these best practices before adding storage.
- Update Oracle Database Appliance to the latest Patch Bundle before expanding storage.
  # odacli describe-component
- Check the disk health of the existing storage disks.
  # odaadmcli show disk
- Run the odaadmcli show diskgroup command to display and review Oracle Automatic Storage Management (Oracle ASM) disk group information.
- Use the asmcmd command to verify that all the disks for the four SFF slots are part of Oracle ASM.
- Use Oracle ORAchk to confirm Oracle ASM and Oracle Clusterware health.
Adding Add-in-Cards (AIC) NVMe Storage Disks
For Oracle Database Appliance X11-L, you can expand storage by adding two SFF NVMe disks followed by two or four Add-in-Cards (AIC), up to a maximum of four AIC disks. You can first add either two or four AIC cards. Adding an odd number of AIC drives is not supported.
Important: You must populate all four SFF slots before adding AIC.
WARNING: Pulling a drive before powering it off will crash the kernel, which can lead to data corruption. Do not pull a drive while its LED is amber or green. When you need to replace an NVMe drive, use the software to power off the drive before pulling it from the slot. If you have more than one disk to replace, complete the replacement of one disk before starting the replacement of the next disk.
See Also:
Chapter Installing Oracle Database Appliance Into a Rack in the Oracle Database Appliance X11 Owner's Guide in the Oracle Database Appliance Documentation Library for this release, for requirements before adding any optional PCIe add-in card storage. The AIC slots are used in the following order:
- x16-PCIe slot 2: NVMe AIC (first)
- x8-PCIe slot 3: NVMe AIC (second)
- x8-PCIe slot 9: NVMe AIC (third)
- x16-PCIe slot 1: NVMe AIC (fourth)
Follow these steps to add AIC NVMe disks:
- On successful installation of AIC, the system restarts. Check that Oracle Clusterware is up and running.
  crsctl check crs
  CRS-4638: Oracle High Availability Services is online
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online
- Verify that the oakd process is running.
  # odaadmcli show disk
  If the oakd process is not running, then start it:
  # odaadmcli start oak
- Run the odaadmcli show disk command to check that oakd
has discovered all AIC NVMe disks. For 2 AIC disks, there are four (4) NVMe disks of 3.4TB, and for four AIC disks, there are eight (8) 3.4TB NVME disks.For two AIC:# odaadmcli show storage ==== BEGIN STORAGE DUMP ======== Host Description: Oracle Corporation:ORACLE SERVER E6-2L Total number of controllers: 8 Id = 0 Pci Slot = 100 Serial Num = PHCP4246002S7P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:71:00.00 Pci Address = 71:00.0 Id = 1 Pci Slot = 101 Serial Num = PHCP4302001Y7P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:72:00.00 Pci Address = 72:00.0 Id = 2 Pci Slot = 103 Serial Num = PHCP424400EE7P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:73:00.00 Pci Address = 73:00.0 Id = 3 Pci Slot = 102 Serial Num = PHCP424600527P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:74:00.00 Pci Address = 74:00.0 Id = 4 Pci Slot = 2 Serial Num = PHAZ2233000R6P4AGN-1 Vendor = Intel Model = INTEL SSDPFCKE064T1S FwVers = 9CV1R310 strId = nvme:c1:00.00 Pci Address = c1:00.0 Id = 5 Pci Slot = 22 Serial Num = PHAZ2233000R6P4AGN-2 Vendor = Intel Model = INTEL SSDPFCKE064T1S FwVers = 9CV1R310 strId = nvme:c2:00.00 Pci Address = c2:00.0 Id = 7 Pci Slot = 3 Serial Num = PHAZ2333000R6P4AGN-1 Vendor = Intel Model = INTEL SSDPFCKE064T1S FwVers = 9CV1R310 strId = nvme:e2:00.00 Pci Address = e2:00.0 Id = 6 Pci Slot = 23 Serial Num = PHAZ2333000R6P4AGN-2 Vendor = Intel Model = INTEL SSDPFCKE064T1S FwVers = 9CV1R310 strId = nvme:e3:00.00 Pci Address = e3:00.0 Total number of expanders: 0 Total number of PDs: 8 /dev/nvme0n1 Solidigm NVD 6801gb slot: 0 pci-addr : 71 SOLIDIGM SB5PH27X076TOC SFF /dev/nvme1n1 Solidigm NVD 6801gb slot: 1 pci-addr : 72 SOLIDIGM SB5PH27X076TOC SFF /dev/nvme3n1 Solidigm NVD 6801gb slot: 2 pci-addr : 74 SOLIDIGM SB5PH27X076TOC SFF /dev/nvme2n1 Solidigm NVD 6801gb slot: 3 pci-addr : 73 SOLIDIGM SB5PH27X076TOC 
SFF /dev/nvme8n1 Solidigm NVD 3400gb slot: 4 pci-addr : f1 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme9n1 Solidigm NVD 3400gb slot: 5 pci-addr : f2 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme6n1 Solidigm NVD 3400gb slot: 6 pci-addr : e1 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme7n1 Solidigm NVD 3400gb slot: 7 pci-addr : e2 SOLIDIGM SB5PHC7E068TON AIC ==== END STORAGE DUMP ========= # # odaadmcli show disk NAME PATH TYPE STATE STATE_DETAILS pd_00 /dev/nvme0n1 NVD ONLINE Good pd_01 /dev/nvme1n1 NVD ONLINE Good pd_02 /dev/nvme2n1 NVD ONLINE Good pd_03 /dev/nvme3n1 NVD ONLINE Good pd_04_c1 /dev/nvme5n1 NVD UNKNOWN NewDiskInserted pd_04_c2 /dev/nvme4n1 NVD UNKNOWN NewDiskInserted pd_05_c1 /dev/nvme9n1 NVD UNKNOWN NewDiskInserted pd_05_c2 /dev/nvme8n1 NVD UNKNOWN NewDiskInserted
For four AIC:# odaadmcli show storage ==== BEGIN STORAGE DUMP ======== Host Description: Oracle Corporation:ORACLE SERVER E6-2L Total number of controllers: 12 Id = 8 Pci Slot = 9 Serial Num = PHCR429000036P8AGN-1 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:61:00.00 Pci Address = 61:00.0 Id = 9 Pci Slot = 29 Serial Num = PHCR429000036P8AGN-2 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:62:00.00 Pci Address = 62:00.0 Id = 0 Pci Slot = 100 Serial Num = PHCP430500077P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:71:00.00 Pci Address = 71:00.0 Id = 5 Pci Slot = 101 Serial Num = PHCP424400JU7P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:72:00.00 Pci Address = 72:00.0 Id = 6 Pci Slot = 103 Serial Num = PHCP424600977P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:73:00.00 Pci Address = 73:00.0 Id = 7 Pci Slot = 102 Serial Num = PHCP430500067P6CGN Vendor = Solidigm Model = SOLIDIGM SB5PH27X076TOC FwVers = G70YR112 strId = nvme:74:00.00 Pci Address = 74:00.0 Id = 3 Pci Slot = 1 Serial Num = PHCR4286000B6P8AGN-1 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:91:00.00 Pci Address = 91:00.0 Id = 4 Pci Slot = 21 Serial Num = PHCR4286000B6P8AGN-2 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:92:00.00 Pci Address = 92:00.0 Id = 1 Pci Slot = 2 Serial Num = PHCR4280002W6P8AGN-1 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:e1:00.00 Pci Address = e1:00.0 Id = 2 Pci Slot = 22 Serial Num = PHCR4280002W6P8AGN-2 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:e2:00.00 Pci Address = e2:00.0 Id = 10 Pci Slot = 3 Serial Num = PHCR4280004S6P8AGN-1 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:f1:00.00 Pci Address = f1:00.0 Id = 11 
Pci Slot = 23 Serial Num = PHCR4280004S6P8AGN-2 Vendor = Solidigm Model = SOLIDIGM SB5PHC7E068TON FwVers = G79YR112 strId = nvme:f2:00.00 Pci Address = f2:00.0 Total number of expanders: 0 Total number of PDs: 12 /dev/nvme0n1 Solidigm NVD 6801gb slot: 0 pci-addr : 71 SOLIDIGM SB5PH27X076TOC SFF /dev/nvme1n1 Solidigm NVD 6801gb slot: 1 pci-addr : 72 SOLIDIGM SB5PH27X076TOC SFF /dev/nvme3n1 Solidigm NVD 6801gb slot: 2 pci-addr : 74 SOLIDIGM SB5PH27X076TOC SFF /dev/nvme2n1 Solidigm NVD 6801gb slot: 3 pci-addr : 73 SOLIDIGM SB5PH27X076TOC SFF /dev/nvme10n1 Solidigm NVD 3400gb slot: 4 pci-addr : e1 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme11n1 Solidigm NVD 3400gb slot: 5 pci-addr : e2 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme8n1 Solidigm NVD 3400gb slot: 6 pci-addr : f1 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme9n1 Solidigm NVD 3400gb slot: 7 pci-addr : f2 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme6n1 Solidigm NVD 3400gb slot: 8 pci-addr : 61 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme7n1 Solidigm NVD 3400gb slot: 9 pci-addr : 62 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme12n1 Solidigm NVD 3400gb slot: 10 pci-addr : 91 SOLIDIGM SB5PHC7E068TON AIC /dev/nvme13n1 Solidigm NVD 3400gb slot: 11 pci-addr : 92 SOLIDIGM SB5PHC7E068TON AIC ==== END STORAGE DUMP =========
For example, to add two (2) AIC disks, you must specify the ndisk value as 4, because each AIC contains two NVMe disks, so two AIC disks total four NVMe disks.
# odaadmcli expand storage -ndisk 4
Running precheck, it may take a few minutes.
Precheck passed.
Check the progress of expansion of storage by executing 'odaadmcli show disk'
Waiting for expansion to finish. It may take several minutes to complete depending upon the number of disks being expanded
- Run the odaadmcli show disk command to ensure that all disks are listed, are online, and are in a good state.
  # odaadmcli show disk
  NAME      PATH           TYPE  STATE   STATE_DETAILS
  pd_00     /dev/nvme0n1   NVD   ONLINE  Good
  pd_01     /dev/nvme1n1   NVD   ONLINE  Good
  pd_02     /dev/nvme3n1   NVD   ONLINE  Good
  pd_03     /dev/nvme14n2  NVD   ONLINE  Good
  pd_04_c1  /dev/nvme10n1  NVD   ONLINE  Good
  pd_04_c2  /dev/nvme11n1  NVD   ONLINE  Good
  pd_05_c1  /dev/nvme8n1   NVD   ONLINE  Good
  pd_05_c2  /dev/nvme9n1   NVD   ONLINE  Good
- Verify that the two AIC disks are added to Oracle Automatic Storage Management (Oracle ASM) as follows:
  - Run asm_script
to verify that the disks in slots 3 and 4 are added to Oracle ASM. Verify that both disks are successfully added (CACHED and MEMBER). Following is an example of default configuration of 80:20 where eight partitions (p1 to p8) are part of the DATA disk group and two partitions (p9 and p10) are part of the RECO diskgroup.# su gridUser /opt/oracle/oak/bin/stordiag/asm_script.sh 0 6 # su grid /opt/oracle/oak/bin/stordiag/asm_script.sh 0 6 SQL*Plus: Release 19.0.0.0.0 - Production on Tue Dec 10 11:28:07 2024 Version 19.25.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.25.0.0.0 SQL> SQL> SQL> SQL> SQL> PATH NAME GROUP_NUMBER STATE MODE_ST MOUNT_S HEADER_STATU ---------------------------------------- ----------------------------------- ------------ -------- ------- ------- ------------ AFD:NVD_S04_C1_PHAZ22330P1 NVD_S04_C1_PHAZ22330P1 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHAZ22330P2 NVD_S04_C1_PHAZ22330P2 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHAZ22330P3 NVD_S04_C1_PHAZ22330P3 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHAZ22330P4 NVD_S04_C1_PHAZ22330P4 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHAZ22330P5 NVD_S04_C1_PHAZ22330P5 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHAZ22330P10 NVD_S04_C2_PHAZ22330P10 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHAZ22330P6 NVD_S04_C2_PHAZ22330P6 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHAZ22330P7 NVD_S04_C2_PHAZ22330P7 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHAZ22330P8 NVD_S04_C2_PHAZ22330P8 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHAZ22330P9 NVD_S04_C2_PHAZ22330P9 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHAZ23330P1 NVD_S05_C1_PHAZ23330P1 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHAZ23330P2 NVD_S05_C1_PHAZ23330P2 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHAZ23330P3 NVD_S05_C1_PHAZ23330P3 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHAZ23330P4 NVD_S05_C1_PHAZ23330P4 1 
NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHAZ23330P5 NVD_S05_C1_PHAZ23330P5 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C2_PHAZ23330P10 NVD_S05_C2_PHAZ23330P10 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C2_PHAZ23330P6 NVD_S05_C2_PHAZ23330P6 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C2_PHAZ23330P7 NVD_S05_C2_PHAZ23330P7 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C2_PHAZ23330P8 NVD_S05_C2_PHAZ23330P8 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C2_PHAZ23330P9 NVD_S05_C2_PHAZ23330P9 2 NORMAL ONLINE CACHED MEMBER AFD:SSD_QRMDSK_P2 0 NORMAL ONLINE CLOSED FORMER AFD:SSD_QRMDSK_P1 0 NORMAL ONLINE CLOSED FORMER SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.25.0.0.0
- Use the odaadmcli show validation storage errors command to view hard storage errors. Hard errors include having the wrong type of disk inserted into a particular slot, an invalid disk model, or an incorrect disk size.
  # odaadmcli show validation storage errors
- Use the odaadmcli show validation storage failures command to view soft validation errors. A typical soft disk error would be an invalid version of the disk firmware.
  # odaadmcli show validation storage failures
- Confirm that the oak_storage_conf.xml file shows the number of disks added. For example, if you added two AIC to four SFF, then the oak_storage_conf.xml file must show numberOfDisks as 8.
  # cat /opt/oracle/oak/conf/oak_storage_conf.xml
  <!-- This file is created by the ODA software as part of system provisioning based on system provisioning requests. Values of element nodes can be changed by OAK in response to storage configuration change operation. DO NOT EDIT THIS FILE. -->
  <CometConfiguration>
    <OakStorageConfigInfo type="string" dimension="vector" readonly="true" required="true" default="">
      <!-- Number of disks part of OAK -->
      <numberOfDisks>8</numberOfDisks>
      <!-- Number of partitions per disk part of DATA diskgroup in multiple partition scheme -->
      <!-- Number of partitions per disk part of RECO diskgroup in multiple partition scheme -->
      <!-- are derived from number of partitions per disk which are part of ASM DATA diskgroup -->
      <numOfDataDiskPartitionInAsm>8</numOfDataDiskPartitionInAsm>
    </OakStorageConfigInfo>
  </CometConfiguration>
For example, to add four (4) AIC disks, you must specify the ndisk value as 8, because each AIC contains two NVMe disks, so four AIC disks total eight NVMe disks.
# odaadmcli expand storage -ndisk 8
Running precheck, it may take a few minutes.
Precheck passed.
Check the progress of expansion of storage by executing 'odaadmcli show disk'
Waiting for expansion to finish. It may take several minutes to complete depending upon the number of disks being expanded
- Run the odaadmcli show disk command to ensure that all disks are listed, are online, and are in a good state.
  # odaadmcli show disk
  pd_00     /dev/nvme0n1   NVD  ONLINE  Good
  pd_01     /dev/nvme1n1   NVD  ONLINE  Good
  pd_02     /dev/nvme2n1   NVD  ONLINE  Good
  pd_03     /dev/nvme3n1   NVD  ONLINE  Good
  pd_04_c1  /dev/nvme4n1   NVD  ONLINE  Good
  pd_04_c2  /dev/nvme5n1   NVD  ONLINE  Good
  pd_05_c1  /dev/nvme9n1   NVD  ONLINE  Good
  pd_05_c2  /dev/nvme8n1   NVD  ONLINE  Good
  pd_06_c1  /dev/nvme11n1  NVD  ONLINE  Good
  pd_06_c2  /dev/nvme10n1  NVD  ONLINE  Good
  pd_07_c1  /dev/nvme12n1  NVD  ONLINE  Good
  pd_07_c2  /dev/nvme13n1  NVD  ONLINE  Good
- Verify that the disks are added to Oracle Automatic Storage Management (Oracle ASM) as follows:
  - Run asm_script
to verify that the AIC disks are added to Oracle ASM. Verify that both disks are successfully added (CACHED and MEMBER). Following is example of default configuration of 80:20 where eight partitions (p1 to p8) are part of the DATA disk group and two partitions (p9 and p10) are part of the RECO diskgroup.# su gridUser /opt/oracle/oak/bin/stordiag/asm_script.sh 0 6 # su grid /opt/oracle/oak/bin/stordiag/asm_script.sh 0 6 SQL*Plus: Release 19.0.0.0.0 - Production on Tue Dec 10 11:56:05 2024 Version 19.25.0.0.0 Copyright (c) 1982, 2024, Oracle. All rights reserved. Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production Version 19.25.0.0.0 SQL> SQL> SQL> SQL> SQL> AFD:NVD_S04_C1_PHCR42800P1 NVD_S04_C1_PHCR42800P1 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHCR42800P2 NVD_S04_C1_PHCR42800P2 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHCR42800P3 NVD_S04_C1_PHCR42800P3 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHCR42800P4 NVD_S04_C1_PHCR42800P4 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C1_PHCR42800P5 NVD_S04_C1_PHCR42800P5 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHCR42800P10 NVD_S04_C2_PHCR42800P10 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHCR42800P6 NVD_S04_C2_PHCR42800P6 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHCR42800P7 NVD_S04_C2_PHCR42800P7 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHCR42800P8 NVD_S04_C2_PHCR42800P8 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S04_C2_PHCR42800P9 NVD_S04_C2_PHCR42800P9 2 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHCR42800P1 NVD_S05_C1_PHCR42800P1 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHCR42800P2 NVD_S05_C1_PHCR42800P2 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHCR42800P3 NVD_S05_C1_PHCR42800P3 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHCR42800P4 NVD_S05_C1_PHCR42800P4 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C1_PHCR42800P5 NVD_S05_C1_PHCR42800P5 1 NORMAL ONLINE CACHED MEMBER AFD:NVD_S05_C2_PHCR42800P10 NVD_S05_C2_PHCR42800P10 2 NORMAL ONLINE CACHED MEMBER 
AFD:NVD_S05_C2_PHCR42800P6 NVD_S05_C2_PHCR42800P6 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S05_C2_PHCR42800P7 NVD_S05_C2_PHCR42800P7 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S05_C2_PHCR42800P8 NVD_S05_C2_PHCR42800P8 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S05_C2_PHCR42800P9 NVD_S05_C2_PHCR42800P9 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C1_PHCR42900P1 NVD_S06_C1_PHCR42900P1 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C1_PHCR42900P2 NVD_S06_C1_PHCR42900P2 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C1_PHCR42900P3 NVD_S06_C1_PHCR42900P3 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C1_PHCR42900P4 NVD_S06_C1_PHCR42900P4 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C1_PHCR42900P5 NVD_S06_C1_PHCR42900P5 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C2_PHCR42900P10 NVD_S06_C2_PHCR42900P10 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C2_PHCR42900P6 NVD_S06_C2_PHCR42900P6 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C2_PHCR42900P7 NVD_S06_C2_PHCR42900P7 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C2_PHCR42900P8 NVD_S06_C2_PHCR42900P8 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S06_C2_PHCR42900P9 NVD_S06_C2_PHCR42900P9 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C1_PHCR42860P1 NVD_S07_C1_PHCR42860P1 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C1_PHCR42860P2 NVD_S07_C1_PHCR42860P2 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C1_PHCR42860P3 NVD_S07_C1_PHCR42860P3 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C1_PHCR42860P4 NVD_S07_C1_PHCR42860P4 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C1_PHCR42860P5 NVD_S07_C1_PHCR42860P5 1 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C2_PHCR42860P10 NVD_S07_C2_PHCR42860P10 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C2_PHCR42860P6 NVD_S07_C2_PHCR42860P6 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C2_PHCR42860P7 NVD_S07_C2_PHCR42860P7 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C2_PHCR42860P8 NVD_S07_C2_PHCR42860P8 2 NORMAL ONLINE CACHED MEMBER
AFD:NVD_S07_C2_PHCR42860P9 NVD_S07_C2_PHCR42860P9 2 NORMAL ONLINE CACHED MEMBER
AFD:SSD_QRMDSK_P2 0 NORMAL ONLINE CLOSED FORMER
AFD:SSD_QRMDSK_P1 0 NORMAL ONLINE CLOSED FORMER
- Use the odaadmcli show validation storage errors command to view hard storage errors. Hard errors include having the wrong type of disk inserted into a particular slot, an invalid disk model, or an incorrect disk size.
# odaadmcli show validation storage errors
- Use the odaadmcli show validation storage failures command to view soft validation errors. A typical soft disk error would be an invalid version of the disk firmware.
# odaadmcli show validation storage failures
- Confirm that the oak_storage_conf.xml file shows the number of disks added. For example, if you added four AIC to four SFF, then the oak_storage_conf.xml file must show numberOfDisks as 12, that is, four SFF NVMe disks and eight NVMe disks for four AIC.
# cat /opt/oracle/oak/conf/oak_storage_conf.xml
<!-- This file is created by the ODA software as part of system provisioning
     based on system provisioning requests. Values of element nodes can be
     changed by OAK in response to storage configuration change operation.
     DO NOT EDIT THIS FILE. -->
<CometConfiguration>
  <OakStorageConfigInfo type="string" dimension="vector" readonly="true" required="true" default="">
    <!-- Number of disks part of OAK -->
    <numberOfDisks>12</numberOfDisks>
    <!-- Number of partitions per disk part of DATA diskgroup in multiple partition scheme -->
    <!-- Number of partitions per disk part of RECO diskgroup in multiple partition scheme -->
    <!-- are derived from number of partitions per disk which are part of ASM DATA diskgroup -->
    <numOfDataDiskPartitionInAsm>12</numOfDataDiskPartitionInAsm>
  </OakStorageConfigInfo>
</CometConfiguration>
#
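As a sketch of the confirmation step above, the following parses an oak_storage_conf.xml document and checks numberOfDisks against the expected count (four SFF NVMe disks plus two NVMe disks per AIC). The sample XML is embedded for illustration; on an appliance the file is generated by ODA provisioning and must not be edited.

```python
import xml.etree.ElementTree as ET

# Minimal sample mirroring /opt/oracle/oak/conf/oak_storage_conf.xml
# (illustrative only; the real file is created by ODA provisioning).
SAMPLE = """<CometConfiguration>
  <OakStorageConfigInfo type="string" dimension="vector"
                        readonly="true" required="true" default="">
    <numberOfDisks>12</numberOfDisks>
    <numOfDataDiskPartitionInAsm>12</numOfDataDiskPartitionInAsm>
  </OakStorageConfigInfo>
</CometConfiguration>"""

def disks_configured(xml_text: str) -> int:
    """Return the numberOfDisks value recorded by OAK."""
    root = ET.fromstring(xml_text)
    return int(root.find(".//numberOfDisks").text)

# Four SFF NVMe disks plus eight NVMe disks for four AIC (two per card).
expected = 4 + 4 * 2
print(disks_configured(SAMPLE) == expected)
```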
Parent topic: Managing Storage on Single-Node Systems
Replacing Small Form Factor (SFF) NVMe Storage Disks
Understand how you can replace existing SFF NVMe disks on Oracle Database Appliance.
Preparing for a Storage Upgrade
-
Check the disk health of the existing storage disks.
# odaadmcli show disk
-
Run the odaadmcli show disk and asmcmd lsdsk -p commands to view and review the storage disk information in OAKD and Oracle Automatic Storage Management (Oracle ASM).
# odaadmcli show disk
# asmcmd lsdsk -p
-
Use ORAchk to confirm Oracle ASM and Oracle Clusterware health.
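The health check in the steps above can be sketched as a small parser over odaadmcli show disk output. The column layout and sample values below are assumptions for illustration; check the output of your own appliance for the actual format.

```python
def unhealthy_disks(output: str) -> list:
    """Return names of disks whose state is not ONLINE/Good.

    Assumes a whitespace-separated layout of
    NAME PATH TYPE STATE STATE_DETAILS (a hypothetical example).
    """
    bad = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        name, state, detail = fields[0], fields[3], fields[4]
        if state != "ONLINE" or detail != "Good":
            bad.append(name)
    return bad

# Hypothetical sample output with one disk offline.
SAMPLE = """NAME PATH TYPE STATE STATE_DETAILS
pd_00 /dev/nvme0n1 NVD ONLINE Good
pd_01 /dev/nvme1n1 NVD OFFLINE Good"""

print(unhealthy_disks(SAMPLE))
```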
Replacing NVMe Storage Disks
Follow all these steps to replace NVMe storage disks:
WARNING:
Pulling a drive before powering it off will crash the kernel, which can lead to data corruption. Do not pull a drive while its LED is amber or green. When you need to replace an NVMe drive, use the software to power off the drive before pulling it from the slot. If you have more than one disk to replace, complete the replacement of one disk before starting the replacement of the next disk.
- Power OFF the NVMe disk before removing it from the slot.
- Wait for one minute for OAKD to complete the operation for disk removal.
- Insert the new disk in the slot.
- Wait at least two to three minutes after inserting each disk for OAKD to complete the operation to add the disk to Oracle ASM and OAK.
- Check the status of the new disk in OAKD with the odaadmcli show disk command. The disk must have the status Online and Good in OAKD. Check the status of the new disk in Oracle ASM with the asmcmd lsdsk -p command. The disk must be in the CACHED MEMBER ONLINE NORMAL state.
# odaadmcli show disk
# asmcmd lsdsk -p
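The final status check above can be sketched as a simple test over each asmcmd lsdsk -p output line: a healthy member shows all four tokens CACHED, MEMBER, ONLINE, and NORMAL, whereas a closed former disk does not. The sample lines are taken from the lsdsk output shown earlier in this chapter.

```python
# Tokens that must all appear on a healthy lsdsk -p line.
REQUIRED = {"CACHED", "MEMBER", "ONLINE", "NORMAL"}

def disk_ready(lsdsk_line: str) -> bool:
    """True when the line shows the CACHED MEMBER ONLINE NORMAL state."""
    return REQUIRED <= set(lsdsk_line.split())

member = ("AFD:NVD_S05_C2_PHCR42800P6 NVD_S05_C2_PHCR42800P6 "
          "2 NORMAL ONLINE CACHED MEMBER")
former = "AFD:SSD_QRMDSK_P2 0 NORMAL ONLINE CLOSED FORMER"

print(disk_ready(member), disk_ready(former))
```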
Parent topic: Managing Storage on Single-Node Systems
Managing Storage on High-Availability Systems
Understand the storage for your Oracle Database Appliance X11-HA system.
- About Storage Options for Oracle Database Appliance X11-HA
Oracle Database Appliance High-Availability systems have options for high performance and high capacity storage configurations. - Adding Solid-State Drives (SSDs) for Data Storage
Add a pack of solid-state drives (SSDs) for data storage into the existing Oracle Database Appliance X11-HA base configuration to fully populate the base storage shelf. - Adding the Storage Expansion Shelf
After the base storage shelf is fully populated, you can add the storage expansion shelf to expand your data storage on your high-availability platform.
Parent topic: Managing Storage
About Storage Options for Oracle Database Appliance X11-HA
Oracle Database Appliance High-Availability systems have options for high performance and high capacity storage configurations.
The base configuration of the Oracle Database Appliance X11-HA hardware model has six slots (slots 0-5) populated with 7.68 TB SSDs of raw storage. If you choose to order and deploy the full storage capacity, then you can fill the remaining 18 slots (slots 6-23) with either SSD or HDD drives. For even more storage, you can add a storage expansion shelf to double the storage capacity of your appliance.
In all configurations, the base storage and the storage expansion shelf each have six SSDs for DATA/RECO in the SSD option or FLASH in the HDD option.
Oracle Database Appliance X11-HA does not allocate dedicated SSD drives for REDO disk groups. Instead, the space for REDO logs is allocated on SSD drives as required.
For Oracle ASM storage, the REDO logs are stored in the available disk group space during database creation, based on the database shape selected. For Oracle ACFS storage, the space for REDO logs is allocated during the database storage creation assuming the minimum db shape (odb1s). If you create the database storage without database, then the space allocated for REDO logs is 4 GB, assuming the minimum db shape (odb1s). Subsequently, when you create a database with your required database shape on the existing database storage, the REDO logs space is extended based on shape of the database.
On Oracle Database Appliance X11-HA High Performance configurations, with only SSD drives, the DATA and RECO disk groups use all the SSD drives whether 6, 12, 18, 24, or 48 with storage expansion shelf. REDO logs are stored in the RECO disk group.
On Oracle Database Appliance X11-HA High Capacity configurations, with both HDD and SSD drives, the DATA and RECO disk groups use the HDD drives, and the SSD drives store the FLASH disk group. REDO logs are stored in the FLASH disk group.
On both High Performance and High Capacity configurations, REDO logs are always created on SSD drives, similar to earlier Oracle Database Appliance hardware models. REDO logs are always created with high redundancy irrespective of the redundancy level of the disk group, whether RECO or FLASH.
High Performance
A high performance configuration uses solid state drives (SSDs) for DATA and RECO storage. The base configuration has six disks, each with 7.68 TB SSD raw storage for DATA and RECO.
You can add up to three (3) 6-Pack SSDs on the base configuration, for a total of 184.32 TB SSD raw storage. If you need more storage, you can double the capacity by adding an expansion shelf of SSD drives. The expansion shelf provides an additional 24 SSDs, each with 7.68TB raw storage for DATA and RECO, for a total of another 184.32 TB SSD raw storage.
Adding an expansion shelf requires that the base storage shelf and expansion shelf are fully populated with SSD drives. When you expand the storage, there is no downtime.
A system fully configured for high performance has 368.64 TB SSD raw storage for DATA and RECO.
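The raw-capacity figures above follow directly from the 7.68 TB per-SSD size; a minimal sketch of the arithmetic:

```python
SSD_TB = 7.68  # raw capacity per SSD in the X11-HA high performance option

def hp_raw_tb(num_ssds: int) -> float:
    """Raw SSD storage in TB for a high performance configuration."""
    return round(num_ssds * SSD_TB, 2)

# Fully populated base shelf: 24 SSDs; with the expansion shelf: 48 SSDs.
print(hp_raw_tb(24), hp_raw_tb(48))
```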
High Capacity
A high capacity configuration uses a combination of SSD and HDD drives.
The base configuration has six disks, each with 7.68 TB SSD raw storage for FLASH.
The following expansion options are available:
-
Base shelf: additional 396 TB HDD raw storage for DATA and RECO (18 HDDs, each with 22 TB storage)
-
Expansion Storage shelf: additional shelf storage configuration must be identical to the storage configuration of the base shelf.
A system fully configured for high capacity has a total of 884.16 TB raw storage for DATA, RECO, and FLASH, with 92.16 TB SSD and 792 TB HDD.
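The fully configured high capacity total can be checked the same way: two shelves each contribute six 7.68 TB SSDs for FLASH and eighteen 22 TB HDDs for DATA and RECO.

```python
# Two shelves (base + expansion), each with 6 SSDs and 18 HDDs.
ssd_tb = round(2 * 6 * 7.68, 2)   # FLASH raw storage
hdd_tb = 2 * 18 * 22              # DATA/RECO raw storage
total_tb = round(ssd_tb + hdd_tb, 2)
print(ssd_tb, hdd_tb, total_tb)
```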
Table 10-2 Storage Options for Oracle Database Appliance X11-HA
Configuration | Oracle Database Appliance X11-HA SSD-Only Configuration for High Performance | Oracle Database Appliance X11-HA SSD and HDD Configuration for High Capacity
---|---|---
Base configuration | Base storage shelf contains 6 SSDs of 7.68 TB. | Base storage shelf is fully populated with 6-pack SSDs of 7.68 TB and 18 HDDs of 22 TB.
Storage addition options | Base shelf contains 6 SSDs. Additional 18 SSDs must be added in packs of 6. | Not applicable. Base storage shelf is fully populated.
Storage shelf expansion options | Fully populated storage expansion shelf with the same configuration as the base shelf. | Fully populated storage expansion shelf with the same configuration as the base shelf.
Converting High Performance to High Capacity system
- Take a backup of your database on the default storage of the 6-SSD configuration on the provisioned system.
- Run the cleanup utility with the --erasedata option on both nodes to erase the OAK and Oracle ASM headers on all SSD disks. The --erasedata option completely erases the SSD disks, so any data on the disks is lost and becomes unrecoverable.
- After cleanup of both nodes, add 18 HDD disks to the base storage shelf. These 18 HDDs must be brand new disks from Oracle without any OAK and Oracle ASM header.
- Reimage both nodes with the required OAK version.
- Reprovision the system again.
Parent topic: Managing Storage on High-Availability Systems
Adding Solid-State Drives (SSDs) for Data Storage
Add a pack of solid-state drives (SSDs) for data storage into the existing Oracle Database Appliance X11-HA base configuration to fully populate the base storage shelf.
If you need to add storage to the base configuration, you can order one, two, or three 6-pack of SSDs to complete the base configuration on Oracle Database Appliance X11-HA.
You must fully populate the base configuration before you can add an expansion shelf to Oracle Database Appliance X11-HA. If you add an expansion shelf, the shelf must have the same disk storage configuration as the base configuration.
Note:
For a high-performance configuration, you can add SSDs to the base storage shelf or add a storage expansion shelf. For a high-capacity base configuration with 6 SSDs, if you want to expand storage to use HDDs, then you must reimage and deploy the appliance.
Parent topic: Managing Storage on High-Availability Systems
Adding the Storage Expansion Shelf
After the base storage shelf is fully populated, you can add the storage expansion shelf to expand your data storage on your high-availability platform.
The expansion shelf is available on Oracle Database Appliance high-availability platforms, such as Oracle Database Appliance X11-HA. Adding the storage expansion shelf includes checks across both nodes, so confirm that SSH works between the nodes and that all users can connect as expected using their shared password.
You must fully populate the base configuration before you can add an expansion shelf. If you add an expansion shelf, the shelf must have the same disk storage configuration as the base storage shelf.
Note:
Oracle recommends that you add a storage expansion shelf when you have relatively little activity on your databases. When the system discovers the new storage, Oracle Automatic Storage Management (Oracle ASM) automatically rebalances the disk groups. The rebalance operation may degrade database performance until the operation completes.
Parent topic: Managing Storage on High-Availability Systems