Verifying the Store
Use the Admin CLI verify command to complete these tasks:

- Perform general troubleshooting of the store.

  The verify command inspects all store components. It also checks whether all store services are available and, for the available services, checks for any version or metadata mismatches.

- Check the status of a long-running plan.

  Some plans require many steps and can take some time to execute. The administrator can run verify to check on plan progress. For example, you can verify a plan deploy-sn command while it is running against many Storage Nodes. The verify command can report at each iteration to confirm that additional nodes have been created and have come online. For more about managing plans, see Using Plans.

- Get additional information to help diagnose a plan in an ERROR state.
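All of these tasks use the same command from the Admin CLI prompt. If you are not already at the kv-> prompt, start the CLI first. A minimal example, assuming KVHOME points at your Oracle NoSQL Database installation and that a Storage Node registry is reachable at node01:5000 (substitute your own host and port):

java -Xmx64m -Xms64m -jar KVHOME/lib/kvstore.jar runadmin -host node01 -port 5000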
You verify your store using the verify command in the CLI. The command requires no parameters and runs in verbose mode by default. For example:
kv-> verify configuration
Output:
Verify: starting verification of store MetroArea based upon
topology sequence #117
100 partitions and 6 storage nodes
Time: 2024-04-05 06:57:10 UTC Version: 24.1.11
See node01:Data/virtualroot/datacenter1/kvroot/MetroArea/
log/MetroArea_{0..N}.log for progress messages
Verify: Shard Status: healthy:2 writable-degraded:0
read-only:0 offline:0
Verify: Admin Status: healthy
Verify: Zone [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
RN Status: online:2 offline: 0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
RN Status: online:2 offline: 0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
RN Status: online:2 offline: 0
Verify: == checking storage node sn1 ==
Verify: Storage Node [sn1] on node01:5000
Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Admin [admin1] Status: RUNNING,MASTER
Verify: Rep Node [rg1-rn2] Status: RUNNING,REPLICA
sequenceNumber:127 haPort:5011 available storage size:14 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn2 ==
Verify: Storage Node [sn2] on node02:6000
Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Rep Node [rg2-rn2] Status: RUNNING,REPLICA
sequenceNumber:127 haPort:6010 available storage size:24 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn3 ==
Verify: Storage Node [sn3] on node03:7000
Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Admin [admin2] Status: RUNNING,REPLICA
Verify: Rep Node [rg1-rn3] Status: RUNNING,REPLICA
sequenceNumber:127 haPort:7011 available storage size:22 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn4 ==
Verify: Storage Node [sn4] on node04:8000
Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Rep Node [rg2-rn3] Status: RUNNING,REPLICA
sequenceNumber:127 haPort:8010 available storage size:24 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn5 ==
Verify: Storage Node [sn5] on node05:9000
Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Admin [admin3] Status: RUNNING,REPLICA
Verify: Rep Node [rg1-rn1] Status: RUNNING,MASTER
sequenceNumber:127 haPort:9011 available storage size:18 GB
Verify: == checking storage node sn6 ==
Verify: Storage Node [sn6] on node06:10000
Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Rep Node [rg2-rn1] Status: RUNNING,MASTER
sequenceNumber:127 haPort:10010 available storage size:16 GB
Verification complete, no violations.
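When you are watching a long-running plan, it can be convenient to run the same check non-interactively from the operating system shell and simply repeat it. A sketch, assuming the Admin CLI accepts a single command appended to its invocation (as it does for commands such as ping) and the same host and port as above:

java -jar KVHOME/lib/kvstore.jar runadmin -host node01 -port 5000 verify configuration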
Use the optional -silent mode to show only problems or completion.
kv-> verify configuration -silent
Output:
Verify: starting verification of store MetroArea based upon
topology sequence #117
100 partitions and 6 storage nodes
Time: 2024-04-05 06:57:10 UTC Version: 24.1.11
See node01:Data/virtualroot/datacenter1/kvroot/MetroArea/
log/MetroArea_{0..N}.log for progress messages
Verification complete, no violations.
The verify command clearly reports any problems with the store. For example, if a Storage Node is unavailable, using -silent mode displays that problem as follows:
kv-> verify configuration -silent
Output:
Verify: starting verification of store MetroArea based upon
topology sequence #117
100 partitions and 6 storage nodes
Time: 2024-04-05 06:57:10 UTC Version: 24.1.11
See node01:Data/virtualroot/datacenter1/kvroot/MetroArea/
log/MetroArea_{0..N}.log for progress messages
Verification complete, 2 violations, 0 notes found.
Verification violation: [rg2-rn2] ping() failed for rg2-rn2 :
Unable to connect to the storage node agent at host node02, port 6000,
which may not be running; nested exception is:
java.rmi.ConnectException: Connection refused to host: node02;
nested exception is:
java.net.ConnectException: Connection refused
Verification violation: [sn2] ping() failed for sn2 : Unable to connect
to the storage node agent at host node02, port 6000,
which may not be running; nested exception is:
java.rmi.ConnectException: Connection refused to host: node02;
nested exception is:
java.net.ConnectException: Connection refused
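A quick first check for a violation like this is whether the Storage Node Agent process is still running on the affected host. One way, assuming a JDK is installed on node02, is the standard jps utility; the Storage Node Agent and its managed services normally appear in the listing:

jps -m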
Using the default mode (verbose), verify configuration shows the same problem as follows:
kv-> verify configuration
Output:
Verify: starting verification of store MetroArea based upon
topology sequence #117
100 partitions and 6 storage nodes
Time: 2024-04-05 06:57:10 UTC Version: 24.1.11
See node01:Data/virtualroot/datacenter1/kvroot/MetroArea/
log/MetroArea_{0..N}.log for progress messages
Verify: Shard Status: healthy:1 writable-degraded:1
read-only:0 offline:0
Verify: Admin Status: healthy
Verify: Zone [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
RN Status: online:1 offline: 1 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
RN Status: online:2 offline: 0 maxDelayMillis:1 maxCatchupTimeSecs:0
Verify: Zone [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
RN Status: online:2 offline: 0
Verify: == checking storage node sn1 ==
Verify: Storage Node [sn1] on node01:5000
Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Admin [admin1] Status: RUNNING,MASTER
Verify: Rep Node [rg1-rn2] Status: RUNNING,REPLICA
sequenceNumber:127 haPort:5011 available storage size:18 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn2 ==
Verify: sn2: ping() failed for sn2 :
Unable to connect to the storage node agent at host node02, port 6000,
which may not be running; nested exception is:
java.rmi.ConnectException: Connection refused to host: node02;
nested exception is:
java.net.ConnectException: Connection refused
Verify: Storage Node [sn2] on node02:6000
Zone: [name=Manhattan id=zn1 type=PRIMARY allowArbiters=false masterAffinity=false]
UNREACHABLE
Verify: rg2-rn2: ping() failed for rg2-rn2 :
Unable to connect to the storage node agent at host node02, port 6000,
which may not be running; nested exception is:
java.rmi.ConnectException: Connection refused to host: node02;
nested exception is:
java.net.ConnectException: Connection refused
Verify: Rep Node [rg2-rn2] Status: UNREACHABLE
Verify: == checking storage node sn3 ==
Verify: Storage Node [sn3] on node03:7000
Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Admin [admin2] Status: RUNNING,REPLICA
Verify: Rep Node [rg1-rn3] Status: RUNNING,REPLICA
sequenceNumber:127 haPort:7011 available storage size:12 GB delayMillis:1 catchupTimeSecs:0
Verify: == checking storage node sn4 ==
Verify: Storage Node [sn4] on node04:8000
Zone: [name=JerseyCity id=zn2 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Rep Node [rg2-rn3] Status: RUNNING,REPLICA
sequenceNumber:127 haPort:8010 available storage size:11 GB delayMillis:0 catchupTimeSecs:0
Verify: == checking storage node sn5 ==
Verify: Storage Node [sn5] on node05:9000
Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Admin [admin3] Status: RUNNING,REPLICA
Verify: Rep Node [rg1-rn1] Status: RUNNING,MASTER
sequenceNumber:127 haPort:9011 available storage size:14 GB
Verify: == checking storage node sn6 ==
Verify: Storage Node [sn6] on node06:10000
Zone: [name=Queens id=zn3 type=PRIMARY allowArbiters=false masterAffinity=false]
Status: RUNNING
Ver: 24.1.11 2024-04-05 09:33:45 UTC Build id: a72484b8b33c
Verify: Rep Node [rg2-rn1] Status: RUNNING,MASTER
sequenceNumber:127 haPort:10010 available storage size:16 GB
Verification complete, 2 violations, 0 notes found.
Verification violation: [rg2-rn2] ping() failed for rg2-rn2 :
Unable to connect to the storage node agent at host node02, port 6000,
which may not be running; nested exception is:
java.rmi.ConnectException: Connection refused to host: node02;
nested exception is:
java.net.ConnectException: Connection refused
Verification violation: [sn2] ping() failed for sn2 :
Unable to connect to the storage node agent at host node02, port 6000,
which may not be running; nested exception is:
java.rmi.ConnectException: Connection refused to host: node02;
nested exception is:
java.net.ConnectException: Connection refused
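If the Storage Node Agent on node02 is down, restarting it typically clears both violations. A minimal sketch, assuming KVHOME and KVROOT on node02 match the values used when the node was originally deployed:

java -Xmx64m -Xms64m -jar KVHOME/lib/kvstore.jar start -root KVROOT &

Once the agent and its services are back online, rerun verify configuration to confirm that the violations are gone.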
Note:
The verify output is only displayed in the shell after the command completes. Use tail or grep on the Oracle NoSQL Database log file to get a sense of how the verification is progressing. Look for the string Verify. For example:

grep Verify /KVRT1/mystore/log/mystore_0.log
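To follow the progress continuously rather than polling the log, you can combine tail with grep, assuming the same log path as above:

tail -f /KVRT1/mystore/log/mystore_0.log | grep Verify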
Violations and Solutions Table

The table below lists some of the violations and verification notes that can arise when running the verify configuration command, together with their solutions.

Table 4-1 Violations and Solutions Table

| Violation / Verification Note | Solution |
|---|---|
| The zone needs more Admins to meet the required Replication Factor. | Add more Admins in the zone using the plan deploy-admin command. |
| There are fewer Replication Nodes than expected by the Replication Factor. | |
| The zone is empty. | Remove the zone manually using the plan remove-zone command. |
| The zone has more Admins than required by the Replication Factor. | Remove the Admins that are not needed using the plan remove-admin command. |
| The storage size is not defined for the root directory of a Storage Node. | Add or change the storage directory size using the plan change-storagedir command. |
| The storage size is not defined for the storage directory of a Storage Node. | Add or change the storage directory size using the plan change-storagedir command. |
| The Replication Nodes in a particular shard have significantly different sizes. | |
| More than one Replication Node is present in the root directory of the Storage Node. | Create and specify a storage directory size for individual Replication Nodes using the plan change-storagedir command. |

Note:
The first three rows in the table above are violations; the others are verification notes.
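For example, the storage-size notes can be resolved by setting an explicit size on the Storage Node's storage directory. A hypothetical sketch, assuming sn2 keeps its data under /disk1/kvdata and has roughly 100 GB available there (both values are placeholders; adjust them for your deployment):

kv-> plan change-storagedir -sn sn2 -storagedir /disk1/kvdata -storagedirsize 100_gb -add -wait

Rerun verify configuration afterward to confirm that the corresponding notes no longer appear.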