4 Known Issues with Oracle Database Appliance in This Release
The following are known issues deploying, updating, and managing Oracle Database Appliance in this release.
- Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
Known Issues When Patching Oracle Database Appliance
Understand the known issues when patching Oracle Database Appliance to this release.
- Error in server patching
When patching the server on Oracle Database Appliance, an error may be encountered.
- Error in attaching a vdisk after DB system patching
After upgrading a DB system on Oracle Database Appliance, the vdisks attached to the DB system may no longer be attached.
- Free space issue during database patching
When patching the database on Oracle Database Appliance, an error may be encountered.
- Error in running patching prechecks
When running patching prechecks on Oracle Database Appliance, an error may be encountered.
- Error in DB system after server patching
After patching the server on Oracle Database Appliance, an error may be encountered on the DB system.
- Error in server patching
When patching the server on Oracle Database Appliance, an error may be encountered.
- Error in server patching
When patching the server on Oracle Database Appliance, an error may be encountered.
- Error in upgrading a database
When upgrading a database, an error may be encountered.
- Error in database patching
When patching a database on Oracle Database Appliance, an error may be encountered.
- Component version not updated after patching
After patching the Oracle Database Appliance server, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.
- Error in server patching
When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.
- AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.27, the odacli update-dbhome command may fail.
- Errors when running ORAchk or the odacli create-prepatchreport command
When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.
- Error in patching prechecks report
The patching prechecks report may display an error.
- Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message may be displayed.
- Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
- Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
Error in server patching
When patching the server on Oracle Database Appliance, an error may be encountered.
Failure Message
The following error message is displayed:
DCS-10001:Internal error encountered: Failed to install/update rpms
Command Details
# odacli update-server
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Rebuild the RPM database:
mkdir /var/lib/rpm_backup
cp -a /var/lib/rpm/__db* /var/lib/rpm_backup/
rm -f /var/lib/rpm/__db*
rpm -qa
rpm --rebuilddb
rpm -qa
- Verify that ipmitool is present on the system. If the following command returns no output, then the ipmitool RPM is not installed:
rpm -qa | grep ipmitool
- If ipmitool is not present, then install the RPM as follows:
rpm -ivh /opt/oracle/oak/pkgrepos/os/19.27/osrpms/hmpipmitool-1.8.18.0-29.el8.x86_64.rpm
- Start Oracle Clusterware on all nodes. For high-availability systems, run the following command on both nodes:
grid_home/crsctl start crs
- Run the odacli update-server command again:
odacli update-server -v 19.27.0.0.0
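The backup step in the workaround above can be wrapped in a small, reversible helper. This is a minimal sketch under the assumption that the RPM database uses the /var/lib/rpm layout shown in step 1; the `backup_and_clear` function name is illustrative and not part of any Oracle tooling.

```shell
#!/bin/sh
# Sketch of step 1 above: back up the RPM Berkeley DB files before
# removing them, so the rebuild is reversible. Paths are parameters so
# the function can be exercised outside /var/lib/rpm.
backup_and_clear() {
    rpmdir="$1"; backup="$2"
    mkdir -p "$backup"
    cp -a "$rpmdir"/__db* "$backup"/ 2>/dev/null || true
    rm -f "$rpmdir"/__db*
}

# Intended use (as root), followed by the rebuild and sanity queries:
#   backup_and_clear /var/lib/rpm /var/lib/rpm_backup
#   rpm --rebuilddb
#   rpm -qa | wc -l
```

If the rebuild fails, the saved `__db*` files can be copied back from the backup directory.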
Bug Number
This issue is tracked with Oracle bug 37967861.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in attaching a vdisk after DB system patching
After upgrading a DB system on Oracle Database Appliance, the vdisks attached to the DB system may no longer be attached.
Problem Description
After DB system upgrade, the existing vdisks are not attached. Only the vdisk metadata associated with the DB system is preserved. The virtual device name may be different from the name before you run the odacli upgrade-dbsystem command.
Command Details
# odacli upgrade-dbsystem
Hardware Models
Oracle Database Appliance X9-2, X8-2, and X7-2 hardware models
Workaround
Detach the vdisk manually from the VM with the --force option to reconcile the metadata. Then attach the vdisk to the respective VM, and manually mount the file system on the device in the DB system.
Bug Number
This issue is tracked with Oracle bug 36885595.
Parent topic: Known Issues When Patching Oracle Database Appliance
Free space issue during database patching
When patching the database on Oracle Database Appliance, an error may be encountered.
Problem Description
When patching the database or dbhome on Oracle Database Appliance, the datapatch sanity check or the datapatch application may fail because of insufficient free space for TEMP tablespace.
Failure Message
The following error message may be displayed in the sqlpatch_debug.log file:
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP_ENC
Or, in the sanity_checks.log file:
Check: Tablespace Status - ERROR
Command Details
# odacli update-dbhome
# odacli update-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Add free space to the TEMP tablespace (TEMP_ENC in the example above) and then resume the patching operation using the command odacli update-database. For example:
alter database tempfile 4 resize 400M;
alter session set container=CHSTPDB;
alter database tempfile 5 resize 400M;
Bug Number
This issue is tracked with Oracle bug 37616088.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in running patching prechecks
When running patching prechecks on Oracle Database Appliance, an error may be encountered.
Problem Description
On an Oracle Database Appliance DB system running Oracle Database 23.8, the patching prechecks report for the DB home fails at creating the destination DB home. Error DCS-10267 is observed in the patching prechecks report.
Failure Message
The following error message is displayed:
ProvDbHome by using RHP Failure
When you describe the generated prepatch report, the failed precheck item is displayed as follows:
Evaluate DBHome patching with RHP precheck with Oracle FPP for Oracle home ID <UUID>: Failed
DCS-10267 - failed to run the patch
For input string: "<dbhome_name>"
Command Details
# odacli create-prepatchreport -d
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Create a destination DB home with the corresponding version and DB edition:
odacli create-dbhome -v version -de dbEdition
- Generate the patching prechecks report for the database to be patched, one database at a time:
odacli create-prepatchreport -db -dbid database_ID -to dest_dbhome_ID
- If there are no critical failures in the patching prechecks report, then proceed with the patching operation:
odacli update-database -i database_ID -to dest_dbhome_ID
Bug Number
This issue is tracked with Oracle bug 38013437.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in DB system after server patching
After patching the server on Oracle Database Appliance, an error may be encountered on the DB system.
Failure Message
The following error message is displayed:
DCS-10172:DCS infrastructure is not ready: The infrastructure is still initializing
Command Details
# odacli update-server
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- On the DB system, stop the DCS agent service:
# systemctl stop initdcsagent
- Delete the entry HWADDR=null from the /etc/sysconfig/network-scripts/ifcfg-ib* configuration files:
# sed -i '/HWADDR=null/d' /etc/sysconfig/network-scripts/ifcfg-ib*
- Restart the network service:
# systemctl restart network
- Start the DCS agent service:
# systemctl start initdcsagent
- Wait for about 5 minutes and then verify that the DCS agent infrastructure is initialized, and both Oracle HAMI members are ONLINE:
# /opt/oracle/dcs/hami/bin/hamictl.sh status
- Verify that the Oracle Clusterware service is online:
# CRS_HOME/bin/crsctl check cluster -all
**************************************************************
Node0:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
- If Oracle Clusterware is not online on any DB system, then restart Oracle Clusterware on that DB system:
# CRS_HOME/bin/crsctl stop crs -f
# CRS_HOME/bin/crsctl start crs
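After the services are restarted, the agent can take a few minutes to respond. The wait in step 5 can be sketched as a polling loop around odacli ping-agent (the check used elsewhere in this document); `wait_for_agent` is an illustrative helper name, not an odacli command.

```shell
#!/bin/sh
# Poll the DCS agent until it responds, up to tries * 10 seconds.
wait_for_agent() {
    tries="${1:-30}"              # default: about 5 minutes
    while [ "$tries" -gt 0 ]; do
        if odacli ping-agent >/dev/null 2>&1; then
            return 0              # agent answered
        fi
        tries=$((tries - 1))
        sleep 10
    done
    return 1                      # agent never came up
}

# wait_for_agent 30 && echo "DCS agent is up"
```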
Bug Number
This issue is tracked with Oracle bug 38064361.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
When patching the server on Oracle Database Appliance, an error may be encountered.
Problem Description
When patching the server on Oracle Database Appliance, the kdump service may fail to start during node restart, and an error message may be displayed.
Failure Message
There may be an error locating the modules.dep file for the newly installed kernel, and the following error message is displayed:
# systemctl status kdump -l
kdump.service - Crash recovery kernel arming
Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2024-10-15 11:51:15 IST; 8min ago
Process: 6280 ExecStart=/usr/bin/kdumpctl start (code=exited, status=1/FAILURE)
Main PID: 6280 (code=exited, status=1/FAILURE)
Oct 15 11:51:12 systemd[1]: Starting Crash recovery kernel arming...
Oct 15 11:51:12 kdumpctl[6471]: kdump: No kdump initial ramdisk found.
Oct 15 11:51:12 kdumpctl[6471]: kdump: Rebuilding /boot/initramfs-5.4.17-2136.335.4.el8uek.x86_64kdump.img
Oct 15 11:51:13 kdumpctl[6566]: kdump: Warning: There might not be enough space to save a vmcore.
Oct 15 11:51:13 kdumpctl[6566]: kdump: The size of /dev/mapper/VolGroupSys-LogVolRoot should be greater than 393610208 kilo bytes.
Oct 15 11:51:15 dracut[8055]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth dash resume ifcfg earlykdump" --compress=xz --mount "/dev/mapper/VolGroupSys-LogVolRoot /sysroot ext4 rw,relatime,nofail,x-systemd.before=initrd-fs.target" --no-hostonly-default-device --add-device /dev/md0 -f /boot/initramfs-5.4.17-2136.335.4.el8uek.x86_64kdump.img 5.4.17-2136.335.4.el8uek.x86_64
Oct 15 11:51:15 kdumpctl[7997]: dracut: /lib/modules/5.4.17-2136.335.4.el8uek.x86_64//modules.dep is missing. Did you run depmod?
Oct 15 11:51:15 dracut[8055]: /lib/modules/5.4.17-2136.335.4.el8uek.x86_64//modules.dep is missing. Did you run depmod?
Oct 15 11:51:15 kdumpctl[6471]: kdump: mkdumprd: failed to make kdump initrd
Oct 15 11:51:15 kdumpctl[6471]: kdump: Starting kdump: [FAILED]
Oct 15 11:51:15 systemd[1]: kdump.service: Main process exited, code=exited, status=1/FAILURE
Oct 15 11:51:15 systemd[1]: kdump.service: Failed with result 'exit-code'.
Oct 15 11:51:15 systemd[1]: Failed to start Crash recovery kernel arming.
Command Details
# odacli update-server
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Restart the kdump service and verify that it starts successfully:
# systemctl restart kdump
# systemctl status kdump -l
kdump.service - Crash recovery kernel arming
Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
Active: active (exited) since Sat 2024-10-19 09:34:23 IST; 8s ago
Process: 2028 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
Main PID: 2028 (code=exited, status=0/SUCCESS)
Oct 19 09:34:21 dracut[2762]: rd.lvm.lv=VolGroupSys/LogVolRoot
Oct 19 09:34:21 dracut[2762]: rd.md.uuid=1e7140f4:2f5386a9:3093dd8d:ee3b9b29
Oct 19 09:34:22 dracut[2762]: *** Install squash loader ***
Oct 19 09:34:22 dracut[2762]: *** Squashing the files inside the initramfs ***
Oct 19 09:34:23 dracut[2762]: *** Squashing the files inside the initramfs done ***
Oct 19 09:34:23 dracut[2762]: *** Creating image file '/boot/initramfs-5.4.17-2136.335.4.el8uek.x86_64kdump.img' ***
Oct 19 09:34:23 dracut[2762]: *** Creating initramfs image file '/boot/initramfs-5.4.17-2136.335.4.el8uek.x86_64kdump.img' done ***
Oct 19 09:34:23 kdumpctl[2104]: kdump: kexec: loaded kdump kernel
Oct 19 09:34:23 kdumpctl[2104]: kdump: Starting kdump: [OK]
Oct 19 09:34:23 systemd[1]: Started Crash recovery kernel arming.
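The documented workaround is simply to restart kdump. As an additional, hedged check (an assumption based on the "Did you run depmod?" line in the failure log above, not an Oracle-documented step), you can verify that modules.dep exists for the running kernel before restarting:

```shell
#!/bin/sh
# need_depmod: succeed (exit 0) if modules.dep is missing for a kernel.
# The module root is a parameter so the check can be tested in a
# sandbox; on a real system it is /lib/modules.
need_depmod() {
    kver="$1"; modroot="${2:-/lib/modules}"
    [ ! -f "$modroot/$kver/modules.dep" ]
}

# Intended use (as root):
#   KVER=$(uname -r)
#   need_depmod "$KVER" && depmod "$KVER"
#   systemctl restart kdump
```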
Bug Number
This issue is tracked with Oracle bug 36998253.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
When patching the server on Oracle Database Appliance, an error may be encountered.
Problem Description
When patching the server on Oracle Database Appliance, the scheduler service may fail to start when the DCS agent loads, and an error message may be displayed.
Failure Message
The dcs-agent.log file displays the following error message:
-----------------------
2024-07-29 14:24:30,351 WARN [backgroundjob-zookeeper-pool-7-thread-2] [] o.j.s.JobZooKeeper: JobRunr encountered a problematic exception. Please create a bug report (if possible, provide the code to reproduce this and the stacktrace) - Processing will continue.
java.lang.NullPointerException: null
at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.pollIntervalInSecondsTimeBoxIsAboutToPass(ZooKeeperTask.java:93)
at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.getJobsToProcess(ZooKeeperTask.java:84)
at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.processJobList(ZooKeeperTask.java:57)
at org.jobrunr.server.zookeeper.tasks.ProcessOrphanedJobsTask.runTask(ProcessOrphanedJobsTask.java:29)
at org.jobrunr.server.zookeeper.tasks.ZooKeeperTask.run(ZooKeeperTask.java:47)
at org.jobrunr.server.JobZooKeeper.lambda$runMasterTasksIfCurrentServerIsMaster$0(JobZooKeeper.java:76)
at java.util.Arrays$ArrayList.forEach(Arrays.java:3880)
at org.jobrunr.server.JobZooKeeper.runMasterTasksIfCurrentServerIsMaster(JobZooKeeper.java:76)
at org.jobrunr.server.JobZooKeeper.run(JobZooKeeper.java:56)
-----------------------
Command Details
# odacli update-server
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the DCS agent:
systemctl restart initdcsagent
- Verify that the DCS agent is running:
odacli ping-agent
odacli list-jobs
odacli describe-component
Bug Number
This issue is tracked with Oracle bug 36896020.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in upgrading a database
When upgrading a database, an error may be encountered.
Problem Description
When you create Oracle ASM databases, the RECO directory may not have been created on systems provisioned with the OAK stack. This directory is created when the first RECO record is written. After successfully upgrading these systems using Data Preserving Reprovisioning to Oracle Database Appliance release 19.15 or later, if you attempt to upgrade the database, an error message may be displayed.
Failure Message
When the odacli upgrade-database command is run, the following error message is displayed:
# odacli upgrade-database -i 16288932-61c6-4a9b-beb0-4eb19d95b2bd -to b969dd9b-f9cb-4e49-8e0d-575a0940d288
DCS-10001:Internal error encountered: dbStorage metadata not in place:
DCS-12013:Metadata validation error encountered: dbStorage metadata missing
Location info for database database_unique_name.
Command Details
# odacli upgrade-database
Hardware Models
All Oracle Database Appliance X6-2HA and X5-2 hardware models
Workaround
- Verify that the odacli list-dbstorages command displays null for the RECO location for the database that reported the error. For example, the following output displays a null or empty value for the database with unique name F:
# odacli list-dbstorages
ID                                   Type DBUnique Name Status     Destination Location Total     Used     Available
------------------------------------ ---- ------------- ---------- ----------- -------- --------- -------- ---------
...
198678d9-c7c7-4e74-9bd6-004485b07c14 ASM  F             CONFIGURED DATA        +DATA/F  4.89 TB   1.67 GB  4.89 TB
                                                                   REDO        +REDO/F  183.09 GB 3.05 GB  180.04 GB
                                                                   RECO                 8.51 TB
...
In the above output, the RECO record has a null value in the Location column.
- Manually create the RECO directory for this database. If the database unique name is dbuniq, then run the asmcmd command as the grid user:
asmcmd
- Run the mkdir command:
asmcmd> mkdir +RECO/dbuniq
- Verify that the odacli list-dbstorages command output does not display a null or empty value for the database.
- Rerun the odacli upgrade-database command.
Bug Number
This issue is tracked with Oracle bug 34923078.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in database patching
When patching a database on Oracle Database Appliance, an error may be encountered.
Problem Description
When applying the datapatch during patching of database on Oracle Database Appliance, an error message may be displayed.
Failure Message
When the odacli update-database command is run, the following error message is displayed:
Failed to execute sqlpatch for database …
Command Details
# odacli update-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the following SQL*Plus command:
alter system set nls_sort='BINARY' SCOPE=SPFILE;
- Restart the database using the srvctl command.
- Retry applying the datapatch with dbhome/OPatch/datapatch -verbose -db dbUniqueName.
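The restart-and-retry steps above can be sketched as one helper. This is a hedged sketch, not Oracle's documented procedure: `restart_and_datapatch` is an illustrative name, the database unique name is a placeholder, and ORACLE_HOME is assumed to point at the database home whose OPatch/datapatch is to be rerun.

```shell
#!/bin/sh
# Restart the database with srvctl so the SPFILE change to nls_sort
# takes effect, then rerun datapatch from the database home.
restart_and_datapatch() {
    db="$1"
    srvctl stop database -d "$db"
    srvctl start database -d "$db"
    "$ORACLE_HOME/OPatch/datapatch" -verbose -db "$db"
}

# restart_and_datapatch dbUniqueName
```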
Bug Number
This issue is tracked with Oracle bug 35060742.
Parent topic: Known Issues When Patching Oracle Database Appliance
Component version not updated after patching
After patching the Oracle Database Appliance server, the odacli describe-component command does not display the correct Intel Model 0x1528 Ethernet Controller version, if the current version is 8000047B or 8000047C.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Manually update the Ethernet controllers to version 800005DD or 800005DE using the fwupdate command.
This issue is tracked with Oracle bug 34402352.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in server patching
When patching Oracle Database Appliance which already has STIG V1R2 deployed, an error may be encountered.
When you patch the server using the command odacli update-server -f version, an error may be displayed.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
The STIG V1R2 rule OL7-00-040420 tries to change the permission of the file /etc/ssh/ssh_host_rsa_key from '640' to '600', which causes the error. During patching, run the chmod 600 /etc/ssh/ssh_host_rsa_key command on both nodes.
This issue is tracked with Oracle bug 33168598.
Parent topic: Known Issues When Patching Oracle Database Appliance
AHF error in prepatch report for the update-dbhome command
When you patch the server to Oracle Database Appliance release 19.27, the odacli update-dbhome command may fail.
The prepatch report displays the following failed check:
Verify the Alternate Archive Destination is Configured to Prevent Database Hangs: Failed
AHF-4940: One or more log archive destination and alternate log archive destination settings are not as recommended
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the odacli update-dbhome command with the -f option:
/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 7c67c5b4-f585-4ba9-865f-c719c63c0a6e -v 19.27.0.0.0 -f
This issue is tracked with Oracle bug 33144170.
Parent topic: Known Issues When Patching Oracle Database Appliance
Errors when running ORAchk or the odacli create-prepatchreport command
When you run ORAchk or the odacli create-prepatchreport command, an error is encountered.
The following error messages may be displayed:
One or more log archive destination and alternate log archive destination settings are not as recommended
Software home check failed
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
Run the odacli update-dbhome, odacli create-prepatchreport, or odacli update-server command with the -sko option. For example:
odacli update-dbhome -j -v 19.27.0.0.0 -i dbhome_id -sko
This issue is tracked with Oracle bugs 30931017, 31631618, and 31921112.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error in patching prechecks report
The patching prechecks report may display an error.
Failure in the pre-patch report caused by “AHF-5190: operating system boot device order is not configured as recommended”
Hardware Models
Oracle Database Appliance X7 hardware models
Workaround
Run the odacli update-server or odacli update-dbhome command with the -f option.
This issue is tracked with Oracle bug 33631256.
Parent topic: Known Issues When Patching Oracle Database Appliance
Error message displayed even when patching Oracle Database Appliance is successful
Although patching of Oracle Database Appliance was successful, an error message may be displayed.
The following error message may be displayed when you run the odacli update-dcscomponents command:
# time odacli update-dcscomponents -v 19.27.0.0.0
DCS-10008:Failed to update DCScomponents: 19.27.0.0.0
Internal error while patching the DCS components :
DCS-10231:Cannot proceed. Pre-checks for update-dcscomponents failed. Refer
to /opt/oracle/dcs/log/-dcscomponentsPreCheckReport.log on node 1 for
details.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
This is a timing issue with setting up the SSH equivalence.
Run the odacli update-dcscomponents command again and the operation completes successfully.
This issue is tracked with Oracle bug 32553519.
Parent topic: Known Issues When Patching Oracle Database Appliance
Server status not set to Normal when patching
When patching Oracle Database Appliance, an error is encountered.
When patching the appliance, the odacli update-server command fails with the following error:
DCS-10001:Internal error encountered: Server upgrade state is not NORMAL node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Run the command:
Grid_home/bin/cluvfy stage -post crsinst -collect cluster -gi_upgrade -n all
- Ignore the following two warnings:
Verifying OCR Integrity ...WARNING PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Verifying Single Client Access Name (SCAN) ...WARNING PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses
- Run the command again till the output displays only the two warnings above. The status of Oracle Clusterware should be Normal again.
- You can verify the status with the command:
Grid_home/bin/crsctl query crs activeversion -f
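Step 3 (rerun until only the two expected warnings remain) can be checked mechanically by filtering the cluvfy output. This is a hedged sketch: `unexpected_warnings` is an illustrative helper, and it assumes each warning and its PRVG code appear on the same line, as in the output above.

```shell
#!/bin/sh
# Print WARNING lines from a saved cluvfy report, excluding the two
# expected codes; an empty result means only ignorable warnings remain.
unexpected_warnings() {
    grep 'WARNING' "$1" | grep -v -e 'PRVG-6017' -e 'PRVG-11368'
}

# Grid_home/bin/cluvfy stage -post crsinst ... > /tmp/cluvfy.out
# unexpected_warnings /tmp/cluvfy.out || echo "only expected warnings left"
```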
This issue is tracked with Oracle bug 30099090.
Parent topic: Known Issues When Patching Oracle Database Appliance
Patching of M.2 drives not supported
Patching of M.2 drives (local disks SSDSCKJB48 and SSDSCKJB480G7) is not supported.
These drives are displayed when you run the odacli describe-component command. Patching is not supported for either of the two known versions, 0112 and 0121, of the M.2 disk.
Hardware Models
Oracle Database Appliance bare metal deployments
Workaround
None
This issue is tracked with Oracle bug 30249232.
Parent topic: Known Issues When Patching Oracle Database Appliance
Known Issues When Deploying Oracle Database Appliance
Understand the known issues when provisioning or deploying Oracle Database Appliance.
- Error in enabling high-availability on a TDE-enabled database
When enabling high-availability on a TDE-enabled database on Oracle Database Appliance, an error may be encountered.
- Error in provisioning bare metal and DB system
If the NTP servers used to provision the bare metal system or DB system are provided with the FQDN, then an error is encountered during the provisioning job.
- Error in provisioning job due to NTP server unavailability
If the NTP servers used to provision the bare metal system or DB system are unavailable, then an error is encountered during the provisioning job.
- Error in creating DB system
When creating a DB system, an error may be encountered.
- Error in Oracle Data Guard operation after modifying the Oracle ASM port
When running the odacli modify-asmport command on Oracle Database Appliance configured with Oracle Data Guard, an error may be encountered.
- Error in database creation on multi-user access enabled system
When creating a database on a multi-user access enabled system on Oracle Database Appliance, an error may be encountered.
- Error in configuring Oracle ASR
When configuring Oracle ASR, an error may be encountered when registering Oracle ASR Manager due to an issue while contacting the transport server.
- Error in creating database
When creating a database on Oracle Database Appliance, an error may be encountered.
- Error in creating two DB systems
When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.
- Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
- Error in provisioning appliance after running cleanup.pl
Errors encountered in provisioning the appliance after running cleanup.pl.
- Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl.
- Errors in clone database operation
Clone database operation fails due to errors.
Error in enabling high-availability on a TDE-enabled database
When enabling high-availability on a TDE-enabled database on Oracle Database Appliance, an error may be encountered.
Problem Description
When you enable high-availability on a TDE-enabled database that uses Oracle Key Vault to store TDE keys, an error message may be displayed.
Failure Message
DCS-12721:OKV command "okv admin endpoint create" failed to run: Failed to create endpoint endpoint_name
Command Details
# odacli modify-database -n dbname -ha
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Instead of enabling high-availability after creating the database, enable high-availability during database creation itself.
Bug Number
This issue is tracked with Oracle bug 37182129.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in provisioning bare metal and DB system
If the NTP servers used to provision the bare metal system or DB system are provided with the FQDN, then an error is encountered during the provisioning job.
Problem Description
In the bare metal system provisioning job, the failure occurs in the "Install oracle-ahf" task. In the DB system creation job, the following error message is displayed:
DCS-10001:Internal error encountered: Job 'Provision DB System 'db_system_name'' failed.
DCS-10001:Internal error encountered: Chronyd failed to sync the clock. :
Failure Message
DCS-10001:Internal error encountered: Job 'Provision DB System 'db_system_name'' failed.
DCS-10001:Internal error encountered: Chronyd failed to sync the clock. :
Command Details
odacli create-appliance -r path_to_json_payload
odacli create-dbsystem -p path_to_json_payload
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Provide the NTP servers in the JSON payload for the bare metal system and DB systems in the IP form, instead of the FQDN.
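Since the workaround is to supply IP addresses rather than FQDNs, each NTP hostname can be resolved up front before writing the JSON payload. A hedged sketch using getent; `resolve_ipv4` is an illustrative helper and ntp.example.com a placeholder hostname.

```shell
#!/bin/sh
# Resolve a hostname to its first IPv4 address, for use in the JSON
# payload's NTP server field.
resolve_ipv4() {
    getent ahostsv4 "$1" | awk '{print $1; exit}'
}

# resolve_ipv4 ntp.example.com
```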
Bug Number
This issue is tracked with Oracle bug 37837999.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in provisioning job due to NTP server unavailability
If the NTP servers used to provision the bare metal system or DB system are unavailable, then an error is encountered during the provisioning job.
Problem Description
In the bare metal system provisioning job, the failure occurs in the "Install oracle-ahf" task. In the DB system creation job, the following error message is displayed:
DCS-10001:Internal error encountered: Job 'Provision DB System 'db_system_name'' failed.
DCS-10001:Internal error encountered: Chronyd failed to sync the clock. :
Failure Message
DCS-10001:Internal error encountered: Job 'Provision DB System 'db_system_name'' failed.
DCS-10001:Internal error encountered: Chronyd failed to sync the clock. :
Command Details
odacli create-appliance -r path_to_json_payload
odacli create-dbsystem -p path_to_json_payload
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Workaround 1: Do not provide any NTP server details in the JSON file when you provision the bare metal system or the DB system.
Workaround 2: Follow these steps:
- Run the provisioning job using the IP address of one NTP server:
# /sbin/chronyd -Q "server 1.1.1.1 iburst"
- If the server is reachable and valid, the chronyd example output will show the difference between the system clock and the NTP clock:
2025-04-03T20:50:11Z System clock wrong by -0.000065 seconds (ignored)
- Check the validity of the agent certificates:
# openssl x509 -noout -in /opt/oracle/dcs/odamysqlcert/client/dcsagent-client-cert.pem -startdate -enddate
notBefore=May 15 02:11:13 2025 GMT
notAfter=Mar 24 02:11:13 2035 GMT
- If the correct time falls out of the valid dates from the certificates, follow these steps if you are provisioning a bare metal system:
- Stop the agent and MySQL services:
# systemctl stop initdcsagent.service
# systemctl stop oda-mysql.service
- Manually set the clock on both
nodes:
# date --set="correct_date"
- Make a backup of the existing MySQL certificates on both nodes:
# mv /opt/oracle/dcs/odamysqlcert /opt/oracle/dcs/odamysqlcert_old
- Run the command /opt/oracle/dcs/mysql/cert/gencerts.sh.
- Confirm that the new certificates have the correct dates on both nodes:
# openssl x509 -noout -in /opt/oracle/dcs/odamysqlcert/client/dcsagent-client-cert.pem -startdate -enddate
notBefore=May 15 02:11:13 2025 GMT
notAfter=Mar 24 02:11:13 2035 GMT
- Start the MySQL and agent services on both nodes:
# systemctl start oda-mysql.service
# systemctl start initdcsagent.service
- Confirm that the agent is working:
# odacli describe-component
- Proceed with the provisioning.
- Stop the agent and MySQL
services.
- Follow these steps if you are provisioning a DB system:
- Make sure the bare metal system has the correct date on both nodes. If not, verify that the bare metal system certificates will not become invalid after adjusting the clock. If the certificates are invalid, follow step 4 to generate new bare metal system certificates. If the certificates are valid, fix the date on both bare metal system nodes, then restart the services:
# date --set="correct_date"
# systemctl restart oda-mysql.service
# systemctl restart initdcsagent.service
- Proceed with the DB system creation.
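The date comparison in step 3 can be automated with openssl's -checkend option instead of reading the notBefore/notAfter dates by hand. A hedged sketch; the certificate path below is the one used in the steps above, and `cert_valid_now` is an illustrative name.

```shell
#!/bin/sh
# cert_valid_now: exit 0 if the certificate has not yet expired.
# "-checkend 0" asks openssl whether the certificate expires within
# 0 seconds, i.e. whether it is already expired.
cert_valid_now() {
    openssl x509 -noout -checkend 0 -in "$1"
}

# cert_valid_now /opt/oracle/dcs/odamysqlcert/client/dcsagent-client-cert.pem \
#   || echo "expired - regenerate with /opt/oracle/dcs/mysql/cert/gencerts.sh"
```

Note that -checkend only tests the expiry (notAfter) date; if the clock was set far in the past, the notBefore date still needs the manual check shown in step 3.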
Bug Number
This issue is tracked with Oracle bug 37763394.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating DB system
When creating a DB system, an error may be encountered.
Problem Description
The DB system creation may fail with the following error:
DCS-10001:THE CONNECTION IS CLOSED
This error may occur when the bare metal system is provisioned with NTP configured, or there is a time difference between the bare metal system and the standard NTP server, or the DB system is created after NTP is configured.
Failure Message
[DB System n1 creation] - DCS-10001:Internal error encountered: Job 'Provision DB System 'n1'' (f91fd1db-78ec-452d-bcdb-975947849370) failed.
Command Details
odacli create-dbsystem
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Provision the bare metal system without configuring NTP.
- If there is a time difference between the bare metal system and the standard NTP server, then add several minutes to the current date.
- Enable chrony:
- Before enabling chrony, add or update the chrony configuration as follows:
# cat /etc/chrony.conf
server 10.246.6.36 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 -1
rtcsync
logdir /var/log/chrony
- Run the systemctl command to enable and start the chronyd service:
date
systemctl enable chronyd
systemctl start chronyd
systemctl status chronyd
sleep 10
date
- Create the DB system with NTP configured.
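The underlying cause here is clock offset between the appliance and the NTP server, so it can help to check the offset before creating the DB system. Below is a minimal sketch; the helper name and the 5-second threshold are illustrative assumptions, and on a live system the offset would come from chronyc tracking as shown in the comment.

```shell
# offset_ok OFFSET_SECONDS THRESHOLD_SECONDS: succeed when the absolute
# clock offset is within the threshold.
offset_ok() {
  awk -v o="$1" -v t="$2" 'BEGIN { if (o < 0) o = -o; exit !(o <= t) }'
}

# Example on a live system (offset read from chrony, threshold assumed):
# offset=$(chronyc tracking | awk -F': *' '/System time/ {print $2+0}')
# offset_ok "$offset" 5 || echo "fix the clock before odacli create-dbsystem"
```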
Bug Number
This issue is tracked with Oracle bug 37166091.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in Oracle Data Guard operation after modifying the Oracle ASM port
When running the odacli modify-asmport command on Oracle Database Appliance configured with Oracle Data Guard, an error may be encountered.
Problem Description
If you run the odacli modify-asmport command on an appliance configured with Oracle Data Guard that uses MAX PROTECTION mode, then the operation can cause a disruption on the primary site, because the standby Oracle Clusterware is restarted as part of the Oracle ASM port change.
Failure Message
The following error message may be displayed in the alert logs for the database on the primary host:
ORA-16072: a minimum of one standby database destination is required
terminating the instance due to ORA error 16072
Task Level Failure Message
The job may fail at the Stop CRS on DB System(s) step. The complete details of the error are displayed in the Message section of the command output.
Command Details
# odacli modify-asmport
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Start the database instance on the primary host.
Bug Number
This issue is tracked with Oracle bug 36931905.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in database creation on multi-user access enabled system
When creating a database on multi-user access enabled system on Oracle Database Appliance, an error may be encountered.
Problem Description
When you create a database on a multi-user access enabled system, an error message may be displayed.
Failure Message
When the user name of the database owner contains both lowercase and uppercase letters, the error message may be as follows:
[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - [FATAL] Error in Process: /u01/app/KvEl6/product/19.0.0.0/dbhome_2/bin/orapwd
[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - Enter password for SYS:
[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - OPW-00010: Could not create the password file.
[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-00600: internal error code, arguments: [kfzpCreate02], [0], [], [], [], [], [], [], [], [], [], []
[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-15260: permission denied on ASM disk group
[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 679
[jobid-74f31148-ebe0-4507-9296-b9ad4ca7e03b] - ORA-06512: at line 2
PRCZ-4001 : failed to execute command "/u01/app/6RXNI/product/19.0.0.0/dbhome_15//bin/dbca" using the privileged execution plugin "odaexec" on nodes "scaoda901c7n1" within 5,000 seconds
PRCZ-2103 : Failed to execute command "/u01/app/6RXNI/product/19.0.0.0/dbhome_15//bin/dbca" on node "scaoda901c7n1" as user "6RXNI". Detailed error:
[FATAL] [DBT-05801] There are no ASM disk groups detected.
CAUSE: ASM may not be configured, or ASM disk groups are not created yet.
ACTION: Create ASM disk groups, or change the storage location to File System.
[FATAL] [DBT-05801] There are no ASM disk groups detected.
CAUSE: ASM may not be configured, or ASM disk groups are not created yet.
ACTION: Create ASM disk groups, or change the storage location to File System.
Command Details
# odacli create-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not start the custom user name with a digit, and do not use mixed-case letters in the custom user name.
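A minimal sketch of the naming rule above (the helper is hypothetical, not an odacli check): reject names that start with a digit or mix upper- and lowercase letters.

```shell
# valid_dbuser NAME: succeed only when NAME does not start with a digit and
# does not mix upper- and lowercase letters.
valid_dbuser() {
  case "$1" in
    [0-9]*) return 1 ;;   # starts with a digit
  esac
  if printf '%s' "$1" | grep -q '[A-Z]' && printf '%s' "$1" | grep -q '[a-z]'; then
    return 1              # mixed case
  fi
  return 0
}

# "KvEl6" (mixed case) and "6RXNI" (leading digit) from the failure message
# above would both be rejected; "oracle" or "DBUSER1" would pass.
```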
Bug Number
This issue is tracked with Oracle bug 36878796.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in configuring Oracle ASR
When configuring Oracle ASR, an error may be encountered when registering Oracle ASR Manager due to an issue while contacting the transport server.
Failure Message
The following error message is displayed:
DCS-10045:Validation error encountered: Registration failed : Please check the agent logs for details.
Command Details
# odacli configure-asr
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Retry configuring Oracle ASR using the odacli configure-asr command.
Bug Number
This issue is tracked with Oracle bug 36363437.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating database
When creating a database on Oracle Database Appliance, an error may be encountered.
Problem Description
When creating a database on Oracle Database Appliance, the operation may fail after the createDatabaseByRHP task. However, the odacli list-databases command displays the status as CONFIGURED for the failed database in the job results.
Failure Message
When you run the odacli create-database command, the following error message is displayed:
DCS-10001:Internal error encountered: Failed to clear all listeners from database
Command Details
# odacli create-database
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Check the job description of the odacli create-database command using the odacli describe-job command. Fix the issue for the task failure in the odacli create-database command. Delete the database with the command odacli delete-database -n db_name and retry the odacli create-database command.
Bug Number
This issue is tracked with Oracle bug 34709091.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in creating two DB systems
When creating two DB systems concurrently in two different Oracle ASM disk groups, an error is encountered.
CRS-2672: Attempting to start 'vm_name.kvm' on 'oda_server'
CRS-5017: The resource action "vm_name.kvm start" encountered the following error:
CRS-29200: The libvirt virtualization library encountered the following error:
Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainCreate).
For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/<oda_server>/crs/trace/crsd_orarootagent_root.trc".
CRS-2674: Start of 'vm_name.kvm' on 'oda_server' failed
CRS-2679: Attempting to clean 'vm_name.kvm' on 'oda_server'
CRS-2681: Clean of 'vm_name.kvm' on 'oda_server' succeeded
CRS-4000: Command Start failed, or completed with errors.
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Do not create two DB systems concurrently. Instead, complete the creation of one DB system and then create the other.
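The workaround can also be enforced mechanically with a lock wrapper, sketched below. The wrapper name and lock file path are assumptions; flock is the standard util-linux tool, and the second invocation simply waits until the first finishes.

```shell
# serialize CMD...: run CMD under an exclusive file lock so that two
# DB system creations cannot run concurrently.
serialize() {
  flock /tmp/odacli-create-dbsystem.lock "$@"
}

# serialize odacli create-dbsystem -p dbsystem1.json
# serialize odacli create-dbsystem -p dbsystem2.json   # waits for the first
```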
This issue is tracked with Oracle bug 33275630.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in adding JBOD
When you add a second JBOD to your Oracle Database Appliance deployment on which a DB system is running, an error is encountered.
ORA-15333: disk is not visible on client instance
Hardware Models
All Oracle Database Appliance hardware models, bare metal and DB system deployments
Workaround
Shut down the DB system before adding the second JBOD. Then restart the DCS agent:
# systemctl restart initdcsagent
This issue is tracked with Oracle bug 32586762.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error in provisioning appliance after running cleanup.pl
Errors encountered in provisioning the appliance after running cleanup.pl.
After running cleanup.pl, provisioning the appliance fails because of the missing Oracle Grid Infrastructure image (IMGGI191100). The following error message is displayed:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
After running cleanup.pl, and before provisioning the appliance, update the repository as follows:
# odacli update-repository -f /**gi**
This issue is tracked with Oracle bug 32707387.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Error encountered after running cleanup.pl
Errors encountered in running odacli commands after running cleanup.pl.
After running cleanup.pl, when you try to use odacli commands, the following error is encountered:
DCS-10042:User oda-cliadmin cannot be authorized.
Hardware Models
All Oracle Database Appliance hardware models for bare metal deployments
Workaround
Run the following commands to set up the credentials for the user oda-cliadmin on the agent wallet:
# rm -rf /opt/oracle/dcs/conf/.authconfig
# /opt/oracle/dcs/bin/setupAgentAuth.sh
This issue is tracked with Oracle bug 29038717.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Errors in clone database operation
Clone database operation fails due to errors.
If the source database is single-instance or Oracle RAC One Node, or running on the remote node, the clone database operation fails, because the paths are not created correctly in the control file.
Clone database operation may also fail with errors if the source database creation time stamp is too close to the clone operation (within 60 minutes).
Hardware Models
All Oracle Database Appliance high-availability hardware models for bare metal deployments
Workaround
Create the clone database from the source database instance that is running on the same node from which the clone database creation is triggered. If the source database creation time stamp is within 60 minutes of the clone operation, force a checkpoint on the source database first:
SQL> alter system checkpoint;
This issue is tracked with Oracle bugs 29002563, 29002004, 29001906, 29001855, 29001631, 28995153, 28986643, 30309971, and 30228362.
Parent topic: Known Issues When Deploying Oracle Database Appliance
Known Issues When Managing Oracle Database Appliance
Understand the known issues when managing or administering Oracle Database Appliance.
- Error in interconnect network
DCS agent may not be able to run jobs because of an interconnect network issue. - Error in configuring multiple standby databases on Oracle Data Guard
When configuring multiple standby databases for Oracle Data Guard on Oracle Database Appliance, an error may be encountered. - Error in upgrading Oracle Data Guard
When upgrading Oracle Data Guard, an error may be encountered. - Error in configuring two standby databases on Oracle Data Guard
When configuring two standby databases for Oracle Data Guard on Oracle RAC One Node databases or single-instance high-availability databases on Oracle Database Appliance, an error may be encountered. - Error in deconfiguring Oracle Data Guard
When deconfiguring multiple standby databases for Oracle Data Guard on Oracle Database Appliance, an error may be encountered. - Error in relocating and re-keying a TDE-enabled database
When relocating and re-keying a TDE-enabled database on Oracle Database Appliance, an error may be encountered. - Error in deleting a TDE-enabled database
When deleting a TDE-enabled database on Oracle Database Appliance, an error may be encountered. - Error in deleting database home
When deleting a database home on Oracle Database Appliance, an error may be encountered. - Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered. - Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered. - Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance, an error is encountered. - Error in display of file log path
File log paths are not displayed correctly on the console but all the logs that were generated for a job have actually logged the correct paths. - Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered. - Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role. - Inconsistency in ORAchk summary and details report page
ORAChk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page. - The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Error in interconnect network
DCS agent may not be able to run jobs because of an interconnect network issue.
Problem Description
When you run the odacli ping-agent command, an error may be encountered.
Failure Message
DCS-10033:Service DCS agent is down.
Command Details
# odacli ping-agent
Hardware Models
All Oracle Database Appliance hardware models, high-availability deployments
Workaround
- Validate that the issue is due to the interconnect not working. From the first node, run the command:
# arping -I icbond0 192.168.16.25 -c 10
The output is similar to the following:
ARPING 192.168.16.25 from 192.168.16.24 icbond0
Sent 10 probes (10 broadcast(s))
Received 0 response(s)
- On both nodes, modify the /etc/sysconfig/network-scripts/ifcfg-icbond0 file to add arp_interval=100 to BONDING_OPTS. The update is as follows:
BONDING_OPTS="mode=active-backup miimon=100 primary=p1p1 arp_interval=100"
- On both nodes, restart the network:
# systemctl restart network
- On both nodes, restart the agent and wait for a few minutes:
# systemctl restart initdcsagent
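Step 1 above decides whether the interconnect is down from the arping response count; that check can be sketched as follows (the helper name is hypothetical):

```shell
# arping_responses: read arping output on stdin and print the number of
# responses received, taken from the "Received N response(s)" line.
arping_responses() {
  sed -n 's/^Received \([0-9][0-9]*\) response(s).*/\1/p'
}

# arping -I icbond0 192.168.16.25 -c 10 | arping_responses
# prints 0 when the interconnect is down, as in the output shown above.
```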
Bug Number
This issue is tracked with Oracle bug 37611921.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring multiple standby databases on Oracle Data Guard
When configuring multiple standby databases for Oracle Data Guard on Oracle Database Appliance, an error may be encountered.
Problem Description
When you configure Oracle Data Guard for multiple standby databases, that is, two standby databases, the operation fails at the step Update Data Guard status (Existing standby site), but Oracle Data Guard is configured successfully with no issue. The command DGMGRL> SHOW CONFIGURATION; shows success status for all standby databases. The command odacli list-dataguardstatus on all sites shows correct Oracle Data Guard information.
Failure Message
The following error message is displayed:
DCS-10001:Internal error encountered: Unable to update dg config
The dcs-agent.log file shows the temporary error: "Error: ORA-16532: Oracle Data Guard broker configuration does not exist."
Hardware Models
All Oracle Database Appliance hardware models, high-availability deployments
Workaround
Ignore the error. Oracle Data Guard was actually configured successfully.
Bug Number
This issue is tracked with Oracle bug 37780488.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in upgrading Oracle Data Guard
When upgrading Oracle Data Guard, an error may be encountered.
Problem Description
If you configured Oracle Data Guard as the odaadmin user on a multi-user access enabled Oracle Database Appliance release 19.19 system, then this Oracle Data Guard configuration may not display when you run the odacli list-dataguardstatus command. If you upgrade this system to Oracle Database Appliance release 19.23 using Data Preserving Reprovisioning, then the Validate Database Service presence step in the create-preupgradereport precheck may fail for the Oracle Data Guard database.
One or more pre-checks failed for [DB]
Command Details
# odacli create-preupgradereport
# odacli describe-preupgradereport
Task Level Failure Message
"The following services [TDG1yn_ro, TDG1yn_rw, Y6Z_ro, Y6Z_rw] created on database 'TDG1yn' can result in a failure in 'detach-node'"
Hardware Models
Oracle Database Appliance X9-2, X8-2, and X7-2 hardware models
Workaround
- Stop the reported service:
srvctl stop service -d db_unique_name -service service_name
- Remove the service:
srvctl remove service -d db_unique_name -service service_name
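The two srvctl steps above can be wrapped in one helper, sketched below. The function name is an assumption; the srvctl options are the ones shown in the workaround.

```shell
# stop_and_remove_service DB_UNIQUE_NAME SERVICE_NAME: stop the service,
# then remove it, as in the workaround steps above.
stop_and_remove_service() {
  srvctl stop service -d "$1" -service "$2" &&
  srvctl remove service -d "$1" -service "$2"
}

# stop_and_remove_service TDG1yn TDG1yn_ro
```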
Bug Number
This issue is tracked with Oracle bug 36610040.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring two standby databases on Oracle Data Guard
When configuring two standby databases for Oracle Data Guard on Oracle RAC One Node databases or single-instance high-availability databases on Oracle Database Appliance, an error may be encountered.
Problem Description
When you configure Oracle Data Guard for multiple standby databases, that is, two standby databases, the operation fails on Oracle RAC One Node databases or single-instance high-availability databases on Oracle Database Appliance.
Failure Message
The following error message is displayed:
"DCS-10001:Internal error encountered: Job 'Update Data Guard status (Primary site)' failed with error:
DCS-10001:Internal error encountered: Unable to update dg config"
Command Details
odacli configure-dataguard
Hardware Models
All Oracle Database Appliance hardware models, high-availability deployments
Workaround
- Check that Oracle Data Guard was successfully configured for two standby databases:
DGMGRL> SHOW CONFIGURATION;
- Check Oracle Data Guard status on primary and all standby sites:
odacli list-dataguardstatus -j
- Delete Oracle Data Guard status if it does not have information about all standby sites:
DEVMODE=true odacli delete-dataguardstatus -i dataguardstatus_id
- Check that Oracle Data Guard status is correct:
odacli list-dataguardstatus -j
- Run the odacli create-dataguardstatus command on both standby sites from the active node of the database.
DEVMODE=true odacli create-dataguardstatus -i database_id -r config_dg.json -n dataguardstatus_id_of_primary
Example of config_dg.json file:
{
  "name": "dgname",
  "protectionMode": "MAX_PERFORMANCE",
  "enableFlashback": true,
  "enableActiveDg": false,
  "replicationGroups": [
    {
      "sourceEndPoints": [
        { "endpointType": "PRIMARY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "primary", "serviceName": "primary.com", "sysPassword": "***", "ipAddress": "x.x.x.x" },
        { "endpointType": "PRIMARY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "primary", "serviceName": "primary.com", "sysPassword": "***", "ipAddress": "x.x.x.x" }
      ],
      "targetEndPoints": [
        { "endpointType": "STANDBY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "standby1", "serviceName": "standby1.com", "sysPassword": "***", "ipAddress": "x.x.x.x" },
        { "endpointType": "STANDBY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "standby1", "serviceName": "standby1.com", "sysPassword": "***", "ipAddress": "x.x.x.x" }
      ],
      "transportType": "ASYNC"
    },
    {
      "sourceEndPoints": [
        { "endpointType": "PRIMARY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "primary", "serviceName": "primary.com", "sysPassword": "***", "ipAddress": "x.x.x.x" },
        { "endpointType": "PRIMARY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "primary", "serviceName": "primary.com", "sysPassword": "***", "ipAddress": "x.x.x.x" }
      ],
      "targetEndPoints": [
        { "endpointType": "STANDBY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "standby2", "serviceName": "standby2.com", "sysPassword": "***", "ipAddress": "x.x.x.x" },
        { "endpointType": "STANDBY", "hostName": "xxx.com", "listenerPort": 1521, "databaseUniqueName": "standby2", "serviceName": "standby2.com", "sysPassword": "***", "ipAddress": "x.x.x.x" }
      ],
      "transportType": "ASYNC"
    }
  ]
}
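Because the create-dataguardstatus step consumes config_dg.json directly, it is worth confirming the file parses before running it. A minimal sketch (the helper name is an assumption; python3 -m json.tool is the stock well-formedness checker):

```shell
# json_ok FILE: succeed only when FILE parses as well-formed JSON.
json_ok() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

# json_ok config_dg.json || echo "fix config_dg.json before create-dataguardstatus"
```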
Bug Number
This issue is tracked with Oracle bug 38021930.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in deconfiguring Oracle Data Guard
When deconfiguring multiple standby databases for Oracle Data Guard on Oracle Database Appliance, an error may be encountered.
Problem Description
If you specify an incorrect primary site address for the "Standby site address" details when deconfiguring Oracle Data Guard, then the operation fails at the step Delete Dataguard Status(Standby site).
Failure Message
The following error message is displayed:
DCS-10001:Internal error encountered: Error creating job 'Delete Dataguard Status(Standby site)': com.oracle.pic.commons.client.exceptions.RestClientException: DCS-10000:Resource dgConfig with ID xxx is not found
Hardware Models
All Oracle Database Appliance hardware models, high-availability deployments
Workaround
On the standby site, run the command DEVMODE=true odacli delete-dataguardstatus -i dataguard_status_id to remove the Oracle Data Guard status metadata. Run the command odacli list-dataguardstatus to confirm removal.
Bug Number
This issue is tracked with Oracle bug 37782833.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in relocating and re-keying a TDE-enabled database
When relocating and re-keying a TDE-enabled database on Oracle Database Appliance, an error may be encountered.
Problem Description
When you relocate a TDE-enabled database that uses Oracle Key Vault to store TDE keys with the option --target-node,-tn, and re-key with the option --rekey-tde,-rkt at the same time, an error may be encountered when setting the TDE master encryption key.
Failure Message
DCS-10164:Failed to configure TDE: Failed to set TDE Master Encryption key
Command Details
# odacli modify-database -n dbname -rkt -tn target_node_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Perform the relocation and re-key operations separately, one after another.
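The workaround can be sketched as two separate odacli calls using the flags from the Command Details above; the database name, node name, and wrapper function are placeholders.

```shell
# relocate_then_rekey DB_NAME TARGET_NODE: first relocate the database,
# then re-key it, rather than combining -tn and -rkt in one command.
relocate_then_rekey() {
  odacli modify-database -n "$1" -tn "$2" &&
  odacli modify-database -n "$1" -rkt
}

# relocate_then_rekey dbname target_node_name
```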
Bug Number
This issue is tracked with Oracle bug 37155404.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in deleting a TDE-enabled database
When deleting a TDE-enabled database on Oracle Database Appliance, an error may be encountered.
Problem Description
When you delete a TDE-enabled database that uses Oracle Key Vault release 21.8 to store TDE keys, an error message may be displayed during the OKV delete task.
Failure Message
DCS-10001:Internal error encountered: Failed to delete Wallet <wallet_name> : okv.log.0 (Permission denied)
{
"result" : "Failure",
"message" : "Insufficient privileges on wallet"
}.
Command Details
# odacli delete-database -n db_name
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Log in as the Oracle Key Vault administrator to the Oracle Key Vault server where the Oracle Key Vault wallet is present.
- Navigate to the Keys & Wallets tab.
- Click the edit icon for the wallet that you want to delete.
- In the Select Endpoint/User Group section, select the Type as Users from the drop-down list.
- Select the user that owns the Oracle Key Vault wallet.
- In the Select Access Level section, select Read and Modify, and then Manage Wallet.
- Click Save.
- Delete the database.
Bug Number
This issue is tracked with Oracle bug 36640379.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in deleting database home
When deleting a database home on Oracle Database Appliance, an error may be encountered.
Problem Description
When you delete a database home, the database home is not deleted completely. The subfolders and files remain in the corresponding database home location, and the database home entry remains in the /u01/app/oraInventory/ContentsXML/inventory.xml file.
Failure Message
When the odacli update-database command is run, the following error message is displayed:
Failed to execute sqlpatch for database …
Command Details
# odacli delete-dbhome
Hardware Models
All Oracle Database Appliance hardware models
Workaround
Before you run the odacli delete-dbhome command, confirm that the wOraDBversion_homeidx file exists in the /opt/oracle/rhp/RHPCheckpoints/ location on the same node where you run the command.
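The pre-check above can be scripted as follows. The helper name is an assumption, and the wOraDB prefix follows the wOraDBversion_homeidx pattern named above; the exact file name varies by version and home index.

```shell
# has_rhp_checkpoint DIR: succeed when DIR contains a wOraDB* checkpoint
# entry, as required before running odacli delete-dbhome.
has_rhp_checkpoint() {
  ls "$1" 2>/dev/null | grep -q '^wOraDB'
}

# has_rhp_checkpoint /opt/oracle/rhp/RHPCheckpoints/ || echo "checkpoint missing"
```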
Bug Number
This issue is tracked with Oracle bug 36864228.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered.
Problem Description
When configuring Oracle Data Guard, an error may be encountered when locking the SYS DB user for Oracle RAC and Oracle RAC One Node database.
Failure Message
The following error message is displayed:
SQL> ALTER USER SYS ACCOUNT LOCK;
ALTER USER SYS ACCOUNT LOCK
*
ERROR at line 1:
ORA-40365: The SYS user cannot be locked while the password file is in its
current format.
Hardware Models
All Oracle Database Appliance hardware models high-availability deployments
Workaround
- Get the current password file location of the database.
srvctl config database -d dbUniqueName | grep -i password
+DATA/dbUniqueName/PASSWORD/pwddbnamenumbers
- Set the password file location of the database to empty.
srvctl modify database -d dbUniqueName -pwfile ''
- Recreate the password file of the database from the existing password file in step 1.
orapwd file='new_password_file_location' format=12.2 input_file='output_in_step_1' dbuniquename=dbUniqueName
output_in_step_1 is similar to +DATA/dbUniqueName/PASSWORD/pwddbnamenumbers. new_password_file_location can be +DATA/dbUniqueName/PASSWORD/orapwdbname, or another Oracle ASM location, preferably under +DATA/dbUniqueName.
- Confirm the new password file is in 12.2 format.
orapwd describe file='step3_new_password_file_location'
Password file Description : format=12.2
- Set the password file location of the database to the new location in step 3 with srvctl.
srvctl modify database -d dbUniqueName -pwfile 'step3_new_password_file_location'
- Check that the previous password can still be used to log in to the database.
sqlplus sys/"password"@dbUniqueName as sysdba
- Lock the SYS user.
SQL> alter user sys account lock;

User altered.

The following is a complete example:
[root@n1 ~]# odacli list-databases
333cd996-4de4-472c-a290-7907b0bd8313 ccc RAC 19.26.0.0.250121 false OLTP EE odb1 ASM CONFIGURED 572a21bb-5912-4bad-a217-34611c821a89
...
[root@n1 ~]# odacli describe-database -n ccc -j
{
  "id" : "333cd996-4de4-472c-a290-7907b0bd8313",
  "name" : "ccc",
  "dbName" : "ccc",
  "databaseUniqueName" : "ccc1",
...
[root@n1 ~]# odacli describe-dbhome -i 572a21bb-5912-4bad-a217-34611c821a89
DB Home details
----------------------------------------------------------------
ID: 572a21bb-5912-4bad-a217-34611c821a89
Name: OraDB19000_home1
Version: 19.26.0.0.250121
Home Location: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
[root@n1 ~]# su - oracle
[oracle@n1 ~]$ . oraenv
ORACLE_SID = [oracle] ? ccc1
ORACLE_HOME = [/home/oracle] ? /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1
The Oracle base has been set to /u01/app/odaorabase/oracle
[oracle@n1 ~]$ srvctl config database -d ccc1 | grep -i password
Password file: +DATA/CCC1/PASSWORD/pwdccc1.305.1201883837
[oracle@n1 ~]$ srvctl modify database -d ccc1 -pwfile ''
[oracle@n1 ~]$ orapwd file='+DATA/CCC1/PASSWORD/orapwccc' format=12.2 input_file='+DATA/CCC1/PASSWORD/pwdccc1.305.1201883837' dbuniquename=ccc1
[oracle@n1 ~]$ orapwd describe file='+DATA/CCC1/PASSWORD/orapwccc'
Password file Description : format=12.2
[oracle@n1 ~]$ srvctl modify database -d ccc1 -pwfile '+DATA/CCC1/PASSWORD/orapwccc'
[oracle@n1 ~]$ sqlplus sys/"password"@ccc1 as sysdba
SQL> alter user sys account lock;

User altered.
Bug Number
This issue is tracked with Oracle bug 37997268.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in configuring Oracle Data Guard
When configuring Oracle Data Guard on Oracle Database Appliance, an error may be encountered.
Problem Description
When you configure Oracle Data Guard on the second node of the standby system on an Oracle Database Appliance high-availability deployment, the operation may fail at the step Configure Standby database (Standby site) in the task Reset Db sizing and hidden parameters for ODA best practice.
Command Details
odacli configure-dataguard
Hardware Models
All Oracle Database Appliance hardware models, high-availability deployments
Workaround
Run odacli configure-dataguard on the first node of the standby system in the high-availability deployment.
Bug Number
This issue is tracked with Oracle bug 33401667.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in cleaning up a deployment
When cleaning up an Oracle Database Appliance, an error is encountered.
Hardware Models
All Oracle Database Appliance hardware models with DB systems
Workaround
- Stop the NFS service on both nodes:
service nfs stop
- Clean up the bare metal system. See the Oracle Database Appliance Deployment and User's Guide for your hardware model for the steps.
This issue is tracked with Oracle bug 33289742.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in display of file log path
File log paths are not displayed correctly on the console, but all the logs that were generated for a job have logged the correct paths.
Hardware Models
All Oracle Database Appliance hardware models with virtualized platform
Workaround
None.
This issue is tracked with Oracle bug 33580574.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in the enable apply process after upgrading databases
When running the enable apply process after upgrading databases in an Oracle Data Guard deployment, an error is encountered.
Error: ORA-16664: unable to receive the result from a member
Hardware Models
All Oracle Database Appliance hardware models
Workaround
- Restart the standby database in upgrade mode:
srvctl stop database -d <db_unique_name>
Run the PL/SQL command: STARTUP UPGRADE;
- Continue the enable apply process and wait for log apply process to refresh.
- After some time, check the Oracle Data Guard status with the DGMGRL command:
SHOW CONFIGURATION;
This issue is tracked with Oracle bug 32864100.
Parent topic: Known Issues When Managing Oracle Database Appliance
Error in updating Role after Oracle Data Guard operations
When performing operations with Oracle Data Guard on Oracle Database Appliance, an error is encountered in updating the Role.
The output of the odacli describe-database command is not updated after Oracle Data Guard switchover, failover, and reinstate operations on Oracle Database Appliance.
Hardware Models
All Oracle Database Appliance hardware models with Oracle Data Guard configuration
Workaround
Run the odacli update-registry -n db --force/-f command to update the database metadata. After the job completes, run the odacli describe-database command and verify that dbRole is updated.
This issue is tracked with Oracle bug 31378202.
Parent topic: Known Issues When Managing Oracle Database Appliance
Inconsistency in ORAchk summary and details report page
The ORAchk report summary on the Browser User Interface may show different counts of Critical, Failed, and Warning issues than the report detail page.
Hardware Models
All Oracle Database Appliance hardware models, bare metal deployments
Workaround
Ignore counts of Critical, Failed, and Warning issues in the ORAchk report summary on the Browser User Interface. Check the report detail page.
This issue is tracked with Oracle bug 30676674.
Parent topic: Known Issues When Managing Oracle Database Appliance
The odaeraser tool does not work if oakd is running in non-cluster mode
After cleaning up the deployment, the Secure Eraser tool does not work if oakd is running in non-cluster mode.
Hardware Models
All Oracle Database Appliance hardware models, bare metal systems
Workaround
After cleanup of the deployment, oakd is started in non-cluster mode, and it cannot be stopped using the odaadmcli stop oak command. In such a case, if the Secure Eraser tool is run, then the odaeraser command fails.
Use the command odaadmcli shutdown oak to stop oakd.
This issue is tracked with Oracle bug 28547433.
Parent topic: Known Issues When Managing Oracle Database Appliance