Support for MongoDB APIs, Operations, and Data Types — Reference
MongoDB APIs, operations, and data types supported by Oracle AI Database are listed, together with information about their support.
Unsupported MongoDB constructs raise an error. A construct that is ignored is listed in this documentation as a no-op (it does not raise an error). A construct can be ignored because it is not meaningful or not needed in the Oracle architecture.
Only server commands are covered here, not client-side wrapper functions. Client-side wrapper functions such as deleteMany() and updateMany() use the server commands delete and update internally.
Oracle Database API for MongoDB supports GridFS, a specification for storing large files in a MongoDB database.
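With GridFS, a client driver or the mongofiles tool stores a file's metadata and content chunks in ordinary collections, which you can then query from mongosh. The following is a minimal sketch, not taken from this documentation: the default bucket name fs is assumed, and the file name report.pdf is illustrative.

// List metadata documents for files stored through GridFS.
db.getCollection("fs.files").find({}, { filename : 1, length : 1, uploadDate : 1 })

// Count the content chunks that back one stored file.
const f = db.getCollection("fs.files").findOne({ filename : "report.pdf" });
db.getCollection("fs.chunks").countDocuments({ files_id : f._id });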
Database Commands
Support of MongoDB database commands is described. This includes commands for administration, aggregation, authentication, diagnostic, query and write operations, role management, replication, sessions, user management, and sharding.
See Also: Database Commands in the MongoDB Reference manual
Table 1 Administration Commands
| Command | Support (Since) | Notes |
|---|---|---|
| Capped Collections | No | |
| cloneCollectionAsCapped | No | |
| collMod | No | |
| collMod, expireAfterSeconds | No | |
| convertToCapped | No | |
| create | 19c | Creates a collection in the current Oracle Database schema. If the specified collection already exists then this is a no-op. |
| createView | 26ai | |
| createIndexes | 26ai. No-op (19c) | |
| currentOp | No | |
| drop | 19c | |
| dropDatabase | 19c | Deletes all collections in the current Oracle Database schema. Does not delete (drop) the schema itself. The command is available only to a user who is logged in with role |
| dropIndexes | 26ai. No-op (19c) | |
| filemd5 | No | This command is deprecated in MongoDB and its use is discouraged, as MD5 is no longer considered cryptographically secure. |
| getParameter | 19c | Parameter supported: authenticationMechanisms |
| killCursors | 19c | Supported field: cursors. |
| killOp | No | |
| listCollections | 19c | Lists collections in the current Oracle Database schema. |
| listDatabases | 19c | Lists Oracle Database schemas enabled for access by Oracle Database API for MongoDB and for Simple Oracle Document Access (SODA). |
| listIndexes | 19c | Lists Oracle Database indexes relevant for the specified collection. |
| reIndex | No-op | Deprecated since MongoDB version 6.0. |
| renameCollection | No | You can create JSON collection views using aggregation pipeline stage $sql. |
| setParameter | No-op | Ignored (no error). |
| validate | 19c | |
| repairDatabase | No-op | Not applicable. |
Besides creating a collection with explicit use of command create, a collection is automatically created upon its first insertion of a document. That is, to create a collection it is sufficient to refer to it by name when inserting a document into it.
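For example, the following mongosh snippet creates a collection implicitly; the collection name orders is an assumption used only for illustration.

db.orders.insertOne({order_id : 1, status : "open"})   // creates collection orders if it does not already exist
db.orders.find()                                       // the collection now exists and contains one document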
See Also: Administration Commands in the MongoDB Reference manual
Table 2 Aggregation Commands
| Command | Support (Since) | Notes |
|---|---|---|
| aggregate | 26ai | |
| count | 19c | Supported field: query. |
| distinct | 19c | Supported fields: Returns the distinct scalar values targeted by the path specified by |
| mapReduce | No | |
See Also: Aggregation Commands in the MongoDB Reference manual
Table 3 Authentication Commands
| Command | Support (Since) | Notes |
|---|---|---|
| logout | 19c | Logs out the current user of an Oracle Database schema on a specific port. |
See Also: Authentication Commands in the MongoDB Reference manual
Table 4 Diagnostic Commands
| Command | Support (Since) | Notes |
|---|---|---|
| buildInfo | 19c | Returns information about the current build of Oracle Database API for MongoDB. |
| collStats | 19c | |
| compact | No-op | Ignored (no error). |
| connPoolStats | No | |
| connectionStatus | 19c | |
| dataSize | 26ai | Supported fields: estimate, keyPattern, min, max. |
| dbHash | No | |
| dbStats | 19c | Supported field: Lists statistics about an Oracle Database schema: its collections and relevant indexes. |
| explain | 19c | |
| explain, executionStats | 19c | |
| features | No | |
| getLog | No-op | Ignored. Not applicable. |
| hostInfo | 19c | |
| listCommands | 19c | |
| ping | 19c | |
| profiler | No | Use SQL*Monitor for query profiling. |
| serverStatus | 19c | |
| top | No | |
| whatsmyuri | 19c | |
See Also: Diagnostic Commands in the MongoDB Reference manual
Table 5 Query and Write Operation Commands
| Command | Support (Since) | Notes |
|---|---|---|
| Change Streams | No | |
| delete | 19c | |
| find | 19c | See Support for command find. |
| findAndModify | 19c | |
| getLastError | 19c | |
| getMore | 19c | Supported fields: batchSize, collection. |
| getPrevError | No | |
| GridFS | 19c | |
| insert | 19c | Supported field: documents. |
| parallelCollectionScan | No | Consider using hint $service. |
| resetError | 19c | |
| update | 19c | Returned response contains fields |
Note:
Support for command find.
- Supported operators: see Supported query operators for commands delete, find, findAndModify, and update.
- Supported fields: batchSize, filter, limit, projection, returnKey, singleBatch, skip, sort.

  Field returnKey can only return the primary key (for example, the ObjectId) associated with the documents found. You cannot use it to return only the index key if an index is used to support the query.

  $ cannot be used in a projection specification. Only simple field selections or omissions can be performed.
Sorting JSON values:
- Oracle AI Database 26ai or later: JSON values are sorted using a canonical sort order. See Comparison and Sorting of JSON Data Type Values.
- Oracle Database 19c: By default, sorting is lexicographical: JSON values are serialized to obtain strings, which are then compared.

  To request a numeric ordering, date ordering, or timestamp ordering, you use a hint, providing the relevant JSON scalar type with $type.

For example, the following code requests an ascending lexicographical sort on field name, then an ascending numeric sort on field age, and then a descending date-time (that is, reverse chronological) sort on field birthday. (A positive number, such as 1, means ascending; a negative number, such as -1, means descending.)

find().sort({"name":1, "age":1, "birthday":-1}).hint({"$type":{"age":"number", "birthday":"dateTime"}})
Note:
Supported query operators for commands delete, find, findAndModify, and update.
- Comparison and logical: $eq, $gt, $gte, $in, $lt, $lte, $ne, $nin, $and, $not, $nor, and $or.
- Element and evaluation: $type, $regex, and $text.
- Geospatial: $geoIntersects, $geoWithin, $near, $nearSphere.
- Array: $all, $elemMatch.
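For example, the following mongosh query combines several of these operators with a simple projection. It is only a sketch; the collection products and its fields are assumptions, not names taken from this documentation.

db.products.find(
  { $and : [ { price    : { $gte : 10, $lte : 100 } },                      // comparison operators
             { category : { $in : ["book", "toy"] } },                      // $in
             { reviews  : { $elemMatch : { stars : { $gt : 4 } } } } ] },   // array operator
  { name : 1, price : 1, _id : 0 } );                                       // simple field selection (projection)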
See Also: Query and Write Operation Commands in the MongoDB Reference manual
Table 6 Role Management Commands
| Command | Support (Since) | Notes |
|---|---|---|
| createRole | No | |
| dropRole | No | |
| dropAllRolesFromDatabase | No | |
| grantRolesToRole | No | |
| revokePrivilegesFromRole | No | |
| updateRole | No | |
| rolesInfo | No | |
See Also: Role Management Commands in the MongoDB Reference manual
Table 7 Replication Commands
| Command | Support (Since) | Notes |
|---|---|---|
| hello | 19c | |
| isMaster | 19c | |
| replSetGetStatus | No-op | Ignored. Not applicable. |
See Also: Replication Commands in the MongoDB Reference manual
Table 8 Sessions Commands
| Command | Support (Since) | Notes |
|---|---|---|
| abortTransaction | 19c | |
| commitTransaction | 19c | |
| endSessions | 19c | |
| killAllSessions | 19c | |
| killAllSessionsByPattern | 19c | |
| killSessions | 19c | |
| refreshSessions | 19c | |
| startSession | 19c | Starts a server-side session. Uses a UUID created by the client, if provided, or a secure random UUID. Returns the UUID used. |
See Also: Sessions Commands in the MongoDB Reference manual
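Because startSession, commitTransaction, abortTransaction, and endSessions are supported, a client can group writes into a transaction. The following mongosh sketch is illustrative only; the collection accounts and the field balance are assumptions.

const session = db.getMongo().startSession();
const accounts = session.getDatabase(db.getName()).getCollection("accounts");
session.startTransaction();
try {
  accounts.updateOne({ _id : "A" }, { $inc : { balance : -100 } });
  accounts.updateOne({ _id : "B" }, { $inc : { balance :  100 } });
  session.commitTransaction();   // sends the commitTransaction command
} catch (e) {
  session.abortTransaction();    // sends the abortTransaction command
  throw e;
} finally {
  session.endSession();          // ends the server-side session
}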
The security model of the Oracle Database API for MongoDB leverages the built-in security model of the Oracle AI Database.
Table 9 User Management Commands
| Command | Support (Since) | Notes |
|---|---|---|
| createUser | No | |
| dropAllUsersFromDatabase | No | |
| dropUser | No | |
| grantRolesToUser | No | |
| revokeRolesFromUser | No | |
| updateUser | No | |
| userInfo | No | |
See Also: User Management Commands in the MongoDB Reference manual
Table 10 Sharding Commands
| Command | Support (Since) | Notes |
|---|---|---|
| abortReshardCollection | No | Not applicable. |
| addShard | No | Not applicable. |
| addShardZone | No | Not applicable. |
| balancerCollectionStatus | No | Not applicable. |
| balancerStart | No | Not applicable. |
| balancerStatus | No | Not applicable. |
| balancerStop | No | Not applicable. |
| checkShardingIndex | No | Not applicable. |
| clearJumboFlag | No | Not applicable. |
| cleanupOrphaned | No | Not applicable. |
| cleanupReshardCollection | No | Not applicable. |
| commitReshardCollection | No | Not applicable. |
| enableSharding | No | Not applicable. |
| flushRouterConfig | No | Not applicable. |
| getShardMap | No | Not applicable. |
| getShardVersion | No | Not applicable. |
| isdbGrid | No | Not applicable. |
| listShards | No | Not applicable. |
| medianKey | No | Not applicable. |
| moveChunk | No | Not applicable. |
| movePrimary | No | Not applicable. |
| mergeChunks | No | Not applicable. |
| refineCollectionShardKey | No | Not applicable. |
| removeShard | No | Not applicable. |
| removeShardFromZone | No | Not applicable. |
| reshardCollection | No | Not applicable. |
| setAllowMigrations | No | Not applicable. |
| setShardVersion | No | Not applicable. |
| shardCollection | No | Not applicable. |
| shardingState | No | Not applicable. |
| split | No | Not applicable. |
| splitVector | No | Not applicable. |
| unsetSharding | No | Not applicable. |
| updateZoneKeyRange | No | Not applicable. |
See Also: Sharding Commands in the MongoDB Reference manual
Query and Projection Operators
Support of MongoDB query and projection operators is described. This includes array, bitwise, comment, comparison, element, evaluation, geospatial, and logical query operators, as well as projection operators.
See Also: Query and Projection Operators in the MongoDB Reference manual
Table 11 Array Query Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $all | 19c | |
| $elemMatch | 19c | |
| $size | 19c | |
See Also: Array Query Operators in the MongoDB Reference manual
Table 12 Bitwise Query Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $bitsAllSet | No | |
| $bitsAnySet | No | |
| $bitsAllClear | No | |
| $bitsAnyClear | No | |
See Also: Bitwise Query Operators in the MongoDB Reference manual
Table 13 Comparison Query Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $eq | 19c | |
| $gt | 19c | |
| $gte | 19c | |
| $lt | 19c | |
| $lte | 19c | |
| $ne | 19c | |
| $in | 19c | |
| $nin | 19c | |
See Also: Comparison Query Operators in the MongoDB Reference manual
Table 14 Element Query Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $exists | 19c | |
| $type | 19c | |
See Also: Element Query Operators in the MongoDB Reference manual
Table 15 Evaluation Query Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $expr | No | |
| $jsonSchema | No | |
| $mod | 26ai | |
| $regex | 19c | |
| $text | 19c | |
| $where | No | Server-side JavaScript. Deprecated; see the MongoDB documentation. |
See Also: Evaluation Query Operators in the MongoDB Reference manual
Table 16 Geospatial Query Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $box | No | |
| $center | No | |
| $centerSphere | No | |
| $geoIntersects | 19c | |
| $geometry | No | |
| $geoWithin | 19c | |
| $maxDistance | No | |
| $near | 19c | |
| $nearSphere | 19c | |
| $polygon | No | |
| $uniqueDocs | No | |
Table 17 Logical Query Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $and | 19c | |
| $nor | 19c | |
| $not | 19c | |
| $or | 19c | |
See Also: Logical Query Operators in the MongoDB Reference manual
Table 18 Projection Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $elemMatch | 19c | |
| $meta | No | |
| $slice | No | |
See Also: Projection Operators in the MongoDB Reference manual
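For example, projection operator $elemMatch limits an array field in the result to its first matching element. This mongosh sketch assumes a collection students with an array field scores; the names are illustrative only.

db.students.find(
  { scores : { $elemMatch : { $gte : 90 } } },               // query-side $elemMatch (Table 11)
  { name : 1, scores : { $elemMatch : { $gte : 90 } } } );   // projection-side $elemMatch (this table)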
Update Operators
Support of MongoDB update operators is described. This includes array, bitwise, field, and modifier update operators.
Table 19 Array Update Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $ | 19c | |
| $[] | 19c | |
| $[<identifier>] | 19c | |
| $addToSet | 19c | |
| $pop | 19c | |
| $pull | 19c | |
| $pullAll | 19c | |
| $push | 19c | |
See Also: Update Array in the MongoDB Reference manual
Table 20 Bitwise Update Operator
| Operator | Support (Since) | Notes |
|---|---|---|
| $bit | 19c | |
See Also: Update Bitwise in the MongoDB Reference manual
Table 21 Field Update Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $currentDate | 19c | |
| $inc | 19c | |
| $max | 19c | |
| $min | 19c | |
| $mul | 19c | |
| $rename | 19c | |
| $set | 19c | |
| $setOnInsert | 19c | |
| $unset | 19c | |
See Also: Update Field in the MongoDB Reference manual
Table 22 Modifier Update Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $each | 19c | |
| $position | 19c | |
| $slice | 19c | |
| $sort | 19c | |
See Also: Update Operators in the MongoDB Reference manual
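For example, the following mongosh update combines field, array, and modifier operators from the tables above. The collection employees and its fields are assumptions used only for illustration; updateOne is a client-side wrapper that uses the update command internally.

db.employees.updateOne(
  { ename : "SMITH" },
  { $inc  : { sal : 100 },                                      // field update operator
    $set  : { updated : true },
    $push : { reviews : { $each  : [ "2025-01", "2025-06" ],    // array update operator with modifiers
                          $sort  : 1,
                          $slice : -10 } } } );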
Cursor Methods
Support of MongoDB cursor methods is described.
Table 23 Cursor Methods
| Method | Support (Since) | Notes |
|---|---|---|
| cursor.batchSize() | 19c | |
| cursor.close() | 19c | |
| cursor.collation() | No | |
| cursor.comment() | 19c | |
| cursor.count() | 19c | |
| cursor.explain() | 19c | |
| cursor.forEach() | 19c | |
| cursor.hasNext() | 19c | |
| cursor.hint() | 19c | |
| cursor.isClosed() | 19c | |
| cursor.isExhausted() | 19c | |
| cursor.itcount() | 19c | |
| cursor.limit() | 19c | |
| cursor.map() | 19c | |
| cursor.max() | 19c | |
| cursor.maxScan() | No | |
| cursor.maxTimeMS() | 19c | |
| cursor.min() | 19c | |
| cursor.next() | 19c | |
| cursor.noCursorTimeout() | 19c | |
| cursor.objsLeftInBatch() | 19c | |
| cursor.pretty() | 19c | |
| cursor.readConcern() | 19c | |
| cursor.readPref() | 19c | |
| cursor.returnKey() | 19c | |
| cursor.showRecordId() | 19c | |
| cursor.size() | 19c | |
| cursor.skip() | 19c | |
| cursor.sort() | 19c | |
| cursor.tailable() | 19c | |
| cursor.toArray() | 19c | |
See Also: Cursor Methods in the MongoDB Reference manual
Aggregation Pipeline Stages
Support of MongoDB aggregation pipeline stages is described.
See Also: Aggregation Pipeline Stages in the MongoDB Reference manual.
Table 24 Stages
| Stage | Support (Since) | Notes |
|---|---|---|
| $addFields | 26ai | Alias: $set. |
| $bucket | 26ai | |
| $bucketAuto | No | |
| $collStats | 19c | Lists statistics about the specified collection and the Oracle Database indexes relevant for it. Supported fields: |
| $count | 19c | |
| $currentOp | No | |
| $documents | 26ai | |
| $external | 26ai | See $external Aggregation Pipeline Stage. |
| $facet | 26ai | |
| $geoNear | No | |
| $graphLookup | No | |
| $group | 26ai | |
| $indexStats | 26ai | |
| $limit | 19c | |
| $listLocalSessions | No | |
| $listSessions | No | |
| $lookup | 26ai | See $lookup Aggregation Pipeline Stage. |
| $match | 19c | |
| $merge | 26ai | |
| $out | 26ai | |
| $planCacheStats | No | |
| $project | 19c | |
| $redact | No | |
| $replaceRoot | 26ai | Alias: $replaceWith. |
| $replaceWith | 26ai | Alias for $replaceRoot. |
| $sample | 26ai | |
| $setWindowFields | No | |
| $set | 26ai | Alias for $addFields. |
| $skip | 19c | |
| $sort | 26ai | |
| $sortByCount | 26ai | |
| $sql | 19c | See $sql Aggregation Pipeline Stage. |
| $unionWith | 26ai | |
| $unset | 19c | |
| $unwind | 26ai | |
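For example, the following pipeline uses only stages listed above as supported ($match and $limit since 19c; $unwind, $group, and $sort since 26ai). It is a sketch; the collection orders and its fields are assumptions.

db.orders.aggregate([
  { $match  : { status : "closed" } },
  { $unwind : "$items" },
  { $group  : { _id : "$items.product", total : { $sum : "$items.quantity" } } },
  { $sort   : { total : -1 } },
  { $limit  : 5 }
]);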
$sql Aggregation Pipeline Stage
You can use a $sql stage to execute Oracle SQL and PL/SQL code.
Here is an example that uses shell mongosh to execute, as user user100, an aggregation pipeline with a simple $sql stage from a MongoDB client.
insertMany is used to create a collection called emps and to insert three employee documents into it (Footnote 1).
user100> db.emps.insertMany([
  {"ename" : "SMITH", "job" : "CLERK", "sal" : 800},
  {"ename" : "ALLEN", "job" : "SALESMAN", "sal" : 1600},
  {"ename" : "WARD", "job" : "SALESMAN", "sal" : 1250}
]);
Result shown by mongosh:
{
acknowledged: true,
insertedIds: {
'0': ObjectId("6595eb06e0fc41db6de93a6d"),
'1': ObjectId("6595eb06e0fc41db6de93a6e"),
'2': ObjectId("6595eb06e0fc41db6de93a6f")
}
}
A SQL SELECT query is used to compute the average of the employee salaries for each job. The average is computed using SQL function AVG.
user100> db.aggregate([ {$sql :
  `SELECT e.data.job, AVG(e.data.sal) average
     FROM emps e
     GROUP BY e.data.job`
} ]);
The query returns two JSON objects with fields JOB and AVERAGE.
[
  { JOB: 'CLERK', AVERAGE: 800 },
{ JOB: 'SALESMAN', AVERAGE: 1425 }
]
A $sql stage has the following syntax. The fields other than $sql are described in Table 3-25.
{$sql : {statement : <SQL statement>,
         binds : <variables>,
         dialect : <dialect>,
         format : <format>}}
The abbreviated syntax {$sql : <SQL statement>} is equivalent to this syntax: {$sql : {statement : <SQL statement>}}.
- If $sql is the only stage in the pipeline and the pipeline has no starting collection, then <SQL statement> can be any Oracle SQL or PL/SQL code, including SQL data definition language (DDL) and data manipulation language (DML) code.

  For example, this code uses a SQL UPDATE statement to increase the salaries of all employees by 10 percent:

  db.aggregate([ {$sql : {statement : "UPDATE employees SET salary = salary * 1.1"}} ]);

- Otherwise, either the pipeline is executed on a collection or it has multiple stages. In this case:
  - <SQL statement> must be a SELECT statement that projects a single JSON-type column.
  - The SELECT statement can refer to the output from the input collection or the previous stage using the database view (row source) named INPUT, which has a single JSON-type column DATA containing the input documents. See also Query JSON Data in Oracle AI Database JSON Developer's Guide.
  For example, the following code acts on starting collection orders. It has three stages:

  - Stage $match filters collection orders, choosing only the documents with a status field that has value closed.
  - Stage $sql takes as input the filtered documents output from stage $match. It obtains them from column data of view input (alias v). While selecting the documents, it uses Oracle SQL Function JSON_MERGEPATCH to add a system timestamp to them as the value of new field updated. The resulting timestamped documents are returned as the output from stage $sql.
  - Stage $out creates a new collection, closed_orders, using the output of stage $sql, that is, the documents returned as the result of the SQL SELECT statement.

  db.orders.aggregate([ {$match : {status : "closed"}},
                        {$sql : `SELECT json_mergepatch(v.data, JSON {'updated' : SYSTIMESTAMP})
                                   FROM input v`},
                        {$out : "closed_orders"} ]);

  This query returns a document from the new collection, closed_orders:

  db.closed_orders.findOne()

  { _id: ObjectId('65e8b973ca4d0a3a255794c8'),
    order_id: 12382,
    product: 'Autonomous Database',
    status: 'closed',
    updated: ISODate('2024-03-06T18:44:23.275Z') }
These SQL statements are not supported by stage $sql:
- Statements that use OUT parameters or invoke stored procedures directly (see Subprogram Parameter Modes and SQL Statements for Stored PL/SQL Units)
- Data Manipulation Language (DML) statements that use a RETURNING clause and return variables (see DML Returning)
All stages return zero or more JSON objects as their result. The result for a $sql stage depends on whether or not the SQL statement executed is a SELECT statement.
- For a SELECT statement, each row in the query result set is mapped to a JSON object in the $sql stage result. See $sql Stage Result for a SELECT Statement.
- For a non-SELECT statement, the $sql stage result is a JSON object with the single field result, whose value indicates the number of table rows that the statement changed. See $sql Stage Result for a Non-SELECT Statement.
Table 25 $sql Fields
| Field | Type | Description | Required? |
|---|---|---|---|
| statement | string | The SQL statement to execute. | Yes. |
| binds | Any type | SQL variable bindings, each being a variable and its value. See binds Field. | No. |
| dialect | string | The dialect of the SQL statement (statement). The value must be "oracle" (otherwise, an error is raised). | No. |
| format | string | The format of the output documents for stage $sql. The value must be "oracle" (otherwise, an error is raised). | No. |
| resetSession | boolean | | No. Default: false. |
binds Field
The optional binds field in a $sql stage specifies one or more sets of SQL variable bindings (placeholder expressions). Each binding specifies a variable used in the SQL statement and the value to replace it with. When multiple binding sets are specified, the statement is executed once for each set.
There are three ways to specify a single set of bindings:
- Specify a set of bindings as an object, each of whose members has a variable's name as its field name and the variable's value as its field value.

  For example, here variable empno is bound to value "E123", and variable ename is bound to value "Abdul J.":

  db.aggregate([ {$sql : {statement : `INSERT INTO emp(empno, ename) VALUES(:empno, :ename)`,
                          binds : {"empno" : "E123", "ename" : "Abdul J."}}} ]);
Specify a set of bindings as an array, each of whose elements is an object with any of these fields:
index,name,value,dataType. Each object represents a binding.For example, here the bind variable
:empnohas value"E123", and variable:ename, has value"Abdul J.":db.aggregate([ {$sql : {statement : `INSERT INTO emp(empno, ename) VALUES (**:empno**, :ename)`, binds : **[** {**name : empno**, **value** : **"E123"**}, {name : "ename", value : "Abdul J."} **]** }} ]); -
Specify a set of bindings as an array, each of whose elements is a bind-variable value. Each value is bound according to its position in the array: the first array element (
"E123", here) is the value of the first bind variable,:empno,and the second element is the value of the second variable. (The array elements need not be of the same type.)db.aggregate([ {$sql : {statement : `INSERT INTO emp(empno, ename) VALUES (**:empno**, :ename)`, binds : [ **"E123"**, "Abdul J." ] }} ]);
To specify multiple sets of bindings you just use an array of values that each specify a single set of bindings. Each of the array elements can specify a binding set using any of the ways described above: (1) an object whose members are variable name–value pairs, (2) an array of objects with optional fields index, name, value, and dataType, (3) an array of variable values whose array positions correspond to the variable indexes in the VALUES clause.
The following three examples illustrate this. They are semantically equivalent. The INSERT statement of each example is executed three times:
- Once for the first set of bindings: variable :empno as "E123" and variable :ename as "Abdul J."
- Once for the second set of bindings: variable :empno as "E456" and variable :ename as "Elena H."
- Once for the third set of bindings: variable :empno as "E789" and variable :ename as "Francis K."
In the first example, the array elements are objects, each of which specifies a set of bindings. Each member of an object specifies the value of an individual (named) binding.
db.aggregate([ {$sql :
  {statement :
     `INSERT INTO emp(empno, ename)
      VALUES (:empno, :ename)`,
   binds :
     [ {"empno" : "E123", "ename" : "Abdul J."},
       {"empno" : "E456", "ename" : "Elena H."},
       {"empno" : "E789", "ename" : "Francis K."} ]}} ]);
In the second example, the array elements are themselves arrays, each of which specifies a set of variable bindings. But in this case each element of the inner arrays is an object with the fields name and value, specifying the value of an individual (named) binding.
db.aggregate([ {$sql :
  {statement :
     `INSERT INTO emp(empno, ename)
      VALUES (:empno, :ename)`,
   binds : [ [ {name : "empno",
                value : "E123"},
               {name : "ename",
                value : "Abdul J."} ],
             [ {name : "empno",
                value : "E456"},
               {name : "ename",
                value : "Elena H."} ],
             [ {name : "empno",
                value : "E789"},
               {name : "ename",
                value : "Francis K."} ] ]}} ]);
In the third example, the array elements are themselves arrays, each of which specifies a set of variable bindings. Each element of the inner arrays specifies the value of an individual (positional) binding.
db.aggregate([ {$sql :
  {statement :
     `INSERT INTO emp(empno, ename)
      VALUES (:empno, :ename)`,
   binds : [ [ "E123", "Abdul J." ],
             [ "E456", "Elena H." ],
             [ "E789", "Francis K." ] ]}} ]);
See also Example 3-5.
Table 26 Fields of binds Object
| Field | JSON (BSON) Type | Description | Required? |
|---|---|---|---|
| index | number | The index (one-based position) of the given variable binding in the SQL statement. | No. If absent, it is inferred from the value's position in the array. Fields |
| name | string | The name of the bind variable. | No. Fields |
| value | Any type | The value of the bind variable. | No. If absent, the object itself is the bind value. |
| dataType | string | The SQL data type to use for a given variable binding. | No. If absent, the default type for the given BSON value is used. See Supported SQL Data Types for Field dataType. |
Supported SQL Data Types for Field dataType
The allowed values for field dataType are described.
BSON types not listed are not supported; their use raises an error.
Starting with Oracle AI Database 26ai, JSON type is supported for each of the supported BSON types. Prior to release 26ai, an error is raised if field dataType has value JSON.
Table 27 Field dataType Values
| Input BSON Type | Supported SQL Type | Default SQL Type |
|---|---|---|
| String | JSON, VARCHAR2 | VARCHAR2 |
| Double | JSON, BINARY_DOUBLE | BINARY_DOUBLE |
| Decimal128, Int32, or Int64 | JSON, NUMBER | NUMBER |
| Boolean | JSON, VARCHAR2, BOOLEAN | Oracle AI Database 26ai: Oracle Database 19c: Error (no default type) |
| ObjectId or Binary | JSON, RAW | RAW |
| DateTime | JSON, TIMESTAMP WITH TIME ZONE | TIMESTAMP WITH TIME ZONE |
| Object | JSON, VARCHAR2 | Oracle AI Database 26ai: Oracle Database 19c: Error (no default type) |
| Array | JSON, VARCHAR2 | Oracle AI Database 26ai: Oracle Database 19c: Error (no default type) |
| Null | Any SQL type mentioned above. For | VARCHAR2 |
$sql Stage Result for a SELECT Statement
For a SELECT statement, each row in the query result set is mapped to a JSON object in the $sql stage result. (The MongoDB shell output encloses the objects in brackets ([, ]); the result is not a JSON array.)
The query can return a single column of JSON data, or it can return data from multiple columns, each of which can be of any type.
- In the former case, the JSON object in the $sql-stage result is the JSON data returned by the SQL query. This is illustrated in Example 3-1.
- In the latter case, the JSON object in the result is constructed from the multiple column values. The column aliases in the query are used as the object field names. This is illustrated in Example 3-2.
For the second case (query returning multiple columns), the query results are mapped to new BSON documents. If a given SQL column is known to be JSON data (because it is JSON type or it has an IS JSON constraint) then it is used directly, as a BSON (JSON) value. Otherwise, the SQL-to-BSON type mappings for the column values are as shown in Table 3-28. Selection of a value from a column of any other type raises an error.
Table 28 SELECT: Mappings of Non-JSON SQL Column Types to BSON Types
| SQL Column Type | BSON (JSON Scalar) Type |
|---|---|
| BINARY_DOUBLE, BINARY_FLOAT | double |
| BLOB | raw |
| RAW | binary |
| CLOB, VARCHAR2 | string |
| DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE | date (UTC is assumed for |
| NUMBER | If scale is zero then int32 or int64, depending on the precision. Otherwise, double. |
Example 3-1 Result for SELECT Query that Returns a Single Column of JSON Data
This example shows two queries that select columns from table dept and return a single column of JSON data. They both use SQL construction JSON{…} to produce a JSON-type object.
This first query uses a wildcard (*) to select all columns from table dept. The column names are used as the resulting object field names.
Query:
SELECT JSON {*} data FROM dept   (Footnote 2)
Result:
[ {DEPTNO : 10, DNAME : 'ACCOUNTING', LOC : 'NEW YORK'},
{DEPTNO : 20, DNAME : 'RESEARCH', LOC : 'DALLAS'},
{DEPTNO : 30, DNAME : 'SALES', LOC : 'CHICAGO'},
{DEPTNO : 40, DNAME : 'OPERATIONS', LOC : 'BOSTON'} ]
This second query selects columns deptno and dname from table dept. It uses JSON {…} to produce a JSON-type object with the column values as the values of fields _id and name, respectively.
Query:
SELECT JSON {'_id' : deptno, 'name' : dname} data FROM dept   (Footnote 3)
Result:
[ {_id : 10, name : 'ACCOUNTING'},
{_id : 20, name : 'RESEARCH'},
{_id : 30, name : 'SALES'},
{_id : 40, name : 'OPERATIONS'} ]
Example 3-2 Result for SELECT Query that Returns Data from Multiple Columns (Any Types)
This example shows two queries that select columns from table dept and construct a JSON object. (These queries do not use construction JSON{…}.)
This first query selects columns deptno, dname, and loc. The field names of the resulting object are the aliases of the selected columns and the field values are the corresponding column values.
Query:
SELECT deptno, dname, loc FROM dept
Result:
[ {DEPTNO : 10, DNAME : 'ACCOUNTING', LOC : 'NEW YORK'},
{DEPTNO : 20, DNAME : 'RESEARCH', LOC : 'DALLAS'},
{DEPTNO : 30, DNAME : 'SALES', LOC : 'CHICAGO'},
{DEPTNO : 40, DNAME : 'OPERATIONS', LOC : 'BOSTON' } ]
This second query selects columns deptno and loc, and it uses SQL function SYSTIMESTAMP to produce a timestamp. The query provides field names id, location, and ts for the resulting object (as column aliases), instead of using the default column names. mongosh wraps the ISO timestamp value with the ISODate helper.
Query:
SELECT deptno "id", loc "location", SYSTIMESTAMP "ts" FROM dept
Result:
[ {id : 10,
   location : 'NEW YORK',
   ts : ISODate("2023-12-01T20:44:17.118Z")},
{id : 20,
location : 'DALLAS',
ts : ISODate("2023-12-01T20:44:17.118Z")},
{id : 30,
location : 'CHICAGO',
ts : ISODate("2023-12-01T20:44:17.118Z")},
{id : 40,
location : 'BOSTON',
ts : ISODate("2023-12-01T20:44:17.118Z")} ]
$sql Stage Result for a Non-SELECT Statement
The result of a $sql stage whose statement is not a SELECT statement is a JSON object with the single field result, whose value indicates the number of rows of data that were changed by the statement (that is, inserted, deleted, or updated). When such a stage uses multiple sets of bind variables, the result is an array of such numbers (of rows changed).
Example 3-3, Example 3-4, Example 3-5, and Example 3-6 illustrate the result for non-SELECT statements.
Example 3-3 Result for a DDL Statement — No Rows Are Modified
A DDL statement, such as this CREATE TABLE statement, changes no rows.
db.aggregate([{$sql:`CREATE TABLE employee (name VARCHAR2(4000), job VARCHAR2(4000))`}])

[ {result : 0} ]
Example 3-4 Result for a DML Statement That Modifies One Row
The INSERT statement in this $sql stage inserts one row, so result is 1.
db.aggregate([ {$sql : "INSERT INTO employee VALUES ('Bob', 'Programmer')"} ]);

[ {result : 1} ]
Example 3-5 Result for a DML Statement That Modifies Three Rows
The INSERT statement in this $sql stage inserts three rows, one for each of the three sets of bind variables.
db.aggregate([ {$sql :
{statement : "INSERT INTO employee VALUES (:name, :job)",
    binds : [ {"name" : "John", "job" : "Programmer"},
              {"name" : "Jane", "job" : "Manager"},
              {"name" : "Francis", "job" : "CEO"} ]}}]);

[ {result : [ 1, 1, 1 ]} ]
Example 3-6 Result for a DML Statement That Modifies Two Rows
This DELETE statement deletes two rows, so result is 2.
db.aggregate([ {$sql : `DELETE FROM employee e WHERE e.job = 'Programmer'`} ])
[ {result : 2} ]
$external Aggregation Pipeline Stage
You can use an $external stage to access data from external files.
Using JSON data stored in an external file you can, for example:
- Use stage $match to filter documents in the file. See Example 3-10.
- Use stage $group to group documents in the file.
- Use stage $out to store the output of stage $external in a new JSON collection. See Example 3-10.
An $external stage has the following syntax. The fields are described in Table 3-29.
{$external : {location : <URL or file name>,
              directory : <database directory name>,
              credential : <credential name>,
              path : <SQL/JSON path expression>}}
The abbreviated syntax {$external : <pre-authenticated URI>} is equivalent to this syntax: {$external : {location : <pre-authenticated URI>}}.
Table 29 $external Fields
| Field | Type | Description | Required? |
|---|---|---|---|
| location | string | The location of the external JSON file to use. (Footnote 4) The string is either a URL, such as | Yes |
| directory | string | The database directory object that contains the file specified by field location. See Location of Data Files and Output Files in Oracle AI Database Utilities. | No, unless the value of field location is a file within a directory object. |
| credential | string | The credential object to use when accessing a file in an object-store private bucket. You create a credential object using PL/SQL subprogram DBMS_CLOUD.create_credential (see Example 3-7). | No |
| path | string | A SQL/JSON path expression that targets JSON objects to extract from the JSON file as separate documents. One typical case is a file with a single JSON array of objects. Another is a file with multiple objects at top level. | No |
Footnote 4 Accessing external files using the Internet requires PL/SQL package DBMS_CLOUD. It is pre-installed for Oracle Autonomous AI Database, but you must install and configure it for a non-autonomous database.
If you don't need a credential (no authentication is needed) and you don't need to specify a path, then you can use the location URL directly as the value of field $external. See Example 3-10. No authentication is needed for public repositories, files in public buckets, or pre-authenticated URIs.
If fields location and directory are both present and both values are valid, then field directory is ignored; otherwise, an error is raised.
Example 3-7 Creating a Credential
This use of stage $sql uses PL/SQL subprogram DBMS_CLOUD.create_credential to create credential MYCRED for user myuser@example.com. PL/SQL package DBMS_CLOUD is pre-installed for Oracle Autonomous AI Database, but you must install and configure it for a non-autonomous database.
db.aggregate([ {$sql :
`BEGIN
DBMS_CLOUD.create_credential(
credential_name => 'MYCRED',
username => 'myuser@example.com',
password => 'XXXXXXXXX');
END;`} ])
Example 3-8 Extracting JSON Documents From a Private Bucket
This example extracts the elements of the array in file array.json, in a private object-store bucket, as document rows, using credential MYCRED. The path, $[*], matches each array element.
Assume that this is the content of array.json:
[ {"_id" : {"$oid" : "663bce1c219cb9c411e8a719"},
"a" : {"b" : [ {"z" : {"b" : 1, "c" : 99}},
{"z" : {"b" : 2}},
{"z" : {"a" : 5}},
{"z" : {"b" : 1, "a" : 5}} ]}},
{"_id" : {"$oid" : "663bce1c219cb9c411e8a71a"},
"a" : {"b" : [ {"z" : {"b" : 1, "c" : 99}},
{"z" : {"b" : 2}},
{"z" : {"a" : 5}} ]}} ]
db.aggregate([ {$external :
{location :
"https://private-repo.example.com/.../array.json",
credential : "MYCRED",
path : "$[*]"}} ])
The stage returns the elements (two objects) from the array:
[ {_id : ObjectId("663bce1c219cb9c411e8a719"),
a : {b : [ {z : {b : 1, c : 99}},
{z : {b : 2}},
{z : {a : 5}},
{z : {b : 1, a : 5}} ]}}
{_id : ObjectId("663bce1c219cb9c411e8a71a"),
a : {b : [ {z : {b : 1, c : 99}}, {z : {b : 2}}, {z : {a : 5}} ]}} ]
Example 3-9 Extracting JSON Documents From an External File In a Directory
This example uses a path expression to extract only the objects that are values of field z from external file array.json (defined in Example 3-8). The example assumes that file array.json exists in directory object DEMO.
db.aggregate([ {$external :
{location : "array.json",
directory : "DEMO",
path : `$.a.b[*].z`}}])
The stage returns these objects from the array:
[ {b : 1, c : 99},
{b : 2},
{a : 5},
{b : 1, a : 5},
{b : 1, c : 99},
{b : 2},
{a : 5} ]
Example 3-10 Create a New Collection From Selected Documents
This example extracts the purchase orders with status closed from a public repository, using stage $out to create collection closed-orders for them.
Because no path is needed and the repository is public (so no credential is needed), we can use the abbreviated form for the value of field $external: just the repository location. (You can use the abbreviated form with any pre-authenticated URI.)
db.aggregate([ {$external :
                 "https://public-repo.example.com/.../orders.json"},
               {$match : {status : "closed"}},
               {$out : "closed-orders"} ])
$lookup Aggregation Pipeline Stage
Restrictions on the use of stage $lookup are described.
You can use a $lookup stage to join documents from another collection on given fields. The following restrictions apply to the fields of stage $lookup:
- Field let is not supported. An error is raised if you use it.
- If the value of field localField is a field that's missing from any input documents then an error is raised. (The missing field is not treated as if it were present with a value of null.)
- If the value of field localField is an array, or if it is a field that occurs more than once in an input document (thus producing multiple values), then an error is raised.
- If the value of field foreignField is a field that's missing from the input documents then no documents match (no error is raised). This includes the case where the document field specified by field localField is missing or has a null value.
- Field pipeline can contain only stages $lookup, $project, and $sort; otherwise, an error is raised. Stages $lookup and $project must not alter any fields referenced by field foreignField; otherwise, an error is raised.

See Also: $lookup (aggregation) in the MongoDB Reference manual.
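For example, the following $lookup stage stays within these restrictions: it has no let field, and its pipeline contains only $project and $sort, neither of which alters the field named by foreignField. The collections orders and customers and their fields are assumptions for illustration.

db.orders.aggregate([
  { $lookup : {
      from         : "customers",
      localField   : "customer_id",   // must be present, and single-valued, in every input document
      foreignField : "cust_id",
      pipeline     : [ { $project : { cust_id : 1, name : 1, city : 1 } },   // keeps cust_id unchanged
                       { $sort    : { name : 1 } } ],
      as           : "customer" } }
]);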
Aggregation Pipeline Operators
Support of MongoDB aggregation pipeline operators is described.
See Also: Aggregation Pipeline Operators in the MongoDB Reference manual
Table 30 Arithmetic Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $abs | 26ai | |
| $add | 26ai | |
| $ceil | 26ai | |
| $divide | 26ai | |
| $exp | 26ai | |
| $floor | 26ai | |
| $ln | 26ai | |
| $log | 26ai | |
| $log10 | 26ai | |
| $mod | 26ai | |
| $multiply | 26ai | |
| $pow | 26ai | |
| $round | 26ai | |
| $sqrt | 26ai | |
| $subtract | 26ai | |
| $trunc | 26ai | |
See Also: Arithmetic Expression Operators in the MongoDB Reference manual
Table 31 Trigonometry Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $sin | 26ai | |
| $cos | 26ai | |
| $tan | 26ai | |
| $asin | 26ai | |
| $acos | 26ai | |
| $atan | 26ai | |
| $atan2 | 26ai | |
| $sinh | 26ai | |
| $tanh | 26ai | |
| $degreesToRadians | 26ai | |
| $radiansToDegrees | 26ai | |
Table 32 Array Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $arrayElemAt | 26ai | |
| $arrayToObject | 26ai | |
| $concatArrays | 26ai | |
| $filter | 26ai | |
| $first | 26ai | |
| $firstN | 26ai | |
| $in | 26ai | |
| $indexOfArray | 26ai | |
| $isArray | 26ai | |
| $last | 26ai | |
| $lastN | 26ai | |
| $objectToArray | 26ai | |
| $range | 26ai | |
| $reduce | 26ai | |
| $reverseArray | 26ai | |
| $size | 26ai | |
| $slice | 26ai | |
| $sortArray | 26ai | |
| $zip | 26ai | |
See Also: Array Expression Operators in the MongoDB Reference manual
Table 33 Boolean Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $and | 26ai | |
| $not | 26ai | |
| $or | 26ai | |
See Also: Boolean Expression Operators in the MongoDB Reference manual
Table 34 Comparison Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $cmp | 26ai | |
| $eq | 26ai | |
| $gt | 26ai | |
| $gte | 26ai | |
| $lt | 26ai | |
| $lte | 26ai | |
| $ne | 26ai | |
See Also: Comparison Expression Operators in the MongoDB Reference manual
Table 35 Conditional Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $cond | 26ai | |
| $ifNull | 26ai | |
| $switch | 26ai | |
See Also: Conditional Expression Operators in the MongoDB Reference manual
Table 36 Date Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $dateAdd | No | |
| $dateDiff | No | |
| $dateFromParts | 26ai | |
| $dateFromString | 26ai | |
| $dateSubtract | No | |
| $dateToParts | 26ai | |
| $dateToString | 26ai | |
| $dateTrunc | No | |
| $dayOfMonth | 26ai | |
| $dayOfWeek | 26ai | |
| $dayOfYear | 26ai | |
| $hour | 26ai | |
| $isoDayOfWeek | 26ai | |
| $isoWeek | 26ai | |
| $isoWeekYear | 26ai | |
| $millisecond | 26ai | |
| $minute | 26ai | |
| $month | 26ai | |
| $second | 26ai | |
| $week | 26ai | |
| $year | 26ai | |
See Also: Date Expression Operators in the MongoDB Reference manual
Table 37 Literal Expression Operator
| Operator | Support (Since) | Notes |
|---|---|---|
| $literal | 26ai | |
See Also: Literal Expression Operator in the MongoDB Reference manual
Table 38 Object Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $mergeObjects | 26ai | |
| $objectToArray | 26ai | |
| $setField | No | |
See Also: Object Expression Operators in the MongoDB Reference manual
Table 39 Set Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $anyElementFalse | No | |
| $anyElementTrue | No | |
| $setDifference | No | |
| $setEquals | No | |
| $setIntersection | 26ai | |
| $setIsSubset | No | |
| $setUnion | 26ai | |
See Also: Set Expression Operators in the MongoDB Reference manual
Table 40 String Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $concat | 26ai | |
| $indexOfBytes | No | |
| $indexOfCP | 26ai | |
| $ltrim | 26ai | |
| $regexFind | No | |
| $regexFindAll | No | |
| $regexMatch | No | |
| $replaceAll | No | |
| $replaceOne | No | |
| $rtrim | 26ai | |
| $split | 26ai | |
| $strcasecmp | 26ai | |
| $strLenBytes | No | |
| $strLenCP | 26ai | |
| $substr | 26ai | |
| $substrBytes | No | |
| $substrCP | 26ai | |
| $toLower | 26ai | |
| $toUpper | 26ai | |
| $trim | 19c | |
See Also: String Expression Operators in the MongoDB Reference manual
Table 41 Text Expression Operator
| Operator | Support (Since) | Notes |
|---|---|---|
| $meta | No | |
See Also: Text Expression Operator in the MongoDB Reference manual
Table 42 Type Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $convert | 26ai | |
| $isNumber | 26ai | |
| $toBool | 26ai | |
| $toDate | 26ai | |
| $toDecimal | No | |
| $toDouble | 26ai | |
| $toInt | 26ai | |
| $toLong | 26ai | |
| $toObjectId | 26ai | |
| $toString | 26ai | |
| $type | 26ai | |
See Also: Type Expression Operators in the MongoDB Reference manual
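For example, the supported type operators can normalize a mixed-type field in a $project stage. This is a sketch; the collection readings and its fields are assumptions.

db.readings.aggregate([
  { $project : {
      value_as_int : { $toInt  : "$value" },                 // $toInt
      value_type   : { $type   : "$value" },                 // $type (aggregation expression)
      taken_at     : { $toDate : "$timestamp_millis" } } }   // $toDate
]);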
Table 43 Accumulator Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $accumulator | No | Server-side JavaScript. Deprecated; see the MongoDB documentation. |
| $addToSet | 26ai | |
| $avg | 26ai | |
| $bottom | 26ai | |
| $bottomN | No | |
| $count | 26ai | |
| $first | 26ai | |
| $firstN | No | |
| $last | 26ai | |
| $lastN | No | |
| $max | 26ai | |
| $maxN | No | |
| $min | 26ai | |
| $push | 26ai | |
| $stdDevPop | 26ai | |
| $stdDevSamp | 26ai | |
| $sum | 26ai | |
| $top | 26ai | |
| $topN | No | |
See Also: Accumulators ($group) and Accumulators ($project) in the MongoDB Reference manual
Table 44 Variable Expression Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $let | 26ai | |
See Also: Variable Expression Operators in the MongoDB Reference manual
Table 45 System Variables
| Variable | Support (Since) | Notes |
|---|---|---|
| $$CURRENT | 26ai | |
| $$DESCEND | No | |
| $$KEEP | No | |
| $$PRUNE | No | |
| $$REMOVE | No | |
| $$ROOT | 26ai | |
See Also: Variables in Aggregation Expressions in the MongoDB Reference manual
Table 46 Miscellaneous Operators
| Operator | Support (Since) | Notes |
|---|---|---|
| $binarySize | 26ai | |
| $getField | No | |
| $rand | 26ai | |
| $sampleRate | No | |
| $map | 26ai | |
| $function | No | Server-side JavaScript. Deprecated; see the MongoDB documentation. |
Hint $service: Application-Connection Service (Consumer Group)
You can use any of the following application-connection services (consumer groups) with any aggregation pipeline expression, by adding a $service hint to the expression. Service LOW is used by default. LOW, MEDIUM, and HIGH are typically used for reporting and batch processing; TP and TPURGENT are typically used for transaction processing.
- LOW — Low-priority service for reporting and batch processing. Operations are not run in parallel.
- MEDIUM — Medium-priority service for reporting and batch operations. All operations run in parallel and are subject to queuing.
- HIGH — High-priority service for reporting and batch operations. All operations run in parallel and are subject to queuing.
- TP — Typical service for transaction processing. Operations are not run in parallel.
- TPURGENT — Highest-priority service, for time-critical transaction processing. Supports manual parallelism.
For example, the hint here specifies that operator $count should use service HIGH.
db.foo.aggregate([ {"$count":"cnt"} ], {"hint":{"$service":"HIGH"}});
Hint $service uses PL/SQL procedure CS_SESSION.switch_service to switch the consumer group from the default of LOW. Because of this, a user of $service must be granted privilege EXECUTE on package CS_SESSION. Otherwise, the consumer group remains LOW, and no error is raised to indicate that the group status is unchanged.
For example, while connected as an administrator, you can use this command to grant privilege EXECUTE to user myuser:
db.aggregate([{$sql : `grant execute on cs_session to myuser`}]);
Related Topics
See Also:
- Which Database Service Should I Choose for My Connection, Application, or Tool? in Using Oracle Autonomous AI Database Serverless
- CS_SESSION Package in Using Oracle Autonomous AI Database Serverless
Data Types
Support of MongoDB data types is described.
Table 47 Data Types
| Data Type and Alias | Support (Since) | Notes |
|---|---|---|
| 32-Bit Integer (int) | 19c | |
| 64-Bit Integer (long) | 19c | |
| Array (array) | 19c | |
| Binary Data (binData) | 19c | |
| Boolean (bool) | 19c | |
| Date (date) | 19c | |
| DBPointer (dbPointer) | No | |
| Decimal128 (decimal) | 19c | |
| Double (double) | 19c | |
| JavaScript (javascript) | No | |
| MaxKey (maxKey) | No | |
| MinKey (minKey) | No | |
| Null (null) | 19c | |
| Object (object) | 19c | |
| ObjectId (objectId) | 19c | |
| Regular Expression (regex) | No | |
| String (string) | 19c | |
| Symbol (symbol) | No | |
| Timestamp (timestamp) | No | |
| Undefined (undefined) | No | |
See Also: $type in the MongoDB Reference manual
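For example, a single document can combine several of the supported types. This mongosh sketch is illustrative only; the collection measurements and its fields are assumptions.

db.measurements.insertOne({
  _id      : ObjectId(),                      // ObjectId
  sensor   : "S-42",                          // String
  reading  : NumberDecimal("23.75"),          // Decimal128
  count    : NumberLong("9007199254740993"),  // 64-bit integer
  active   : true,                            // Boolean
  taken_at : new Date(),                      // Date
  tags     : ["lab", "calibrated"],           // Array
  meta     : { unit : "C", precision : 2 }    // Object (embedded document)
});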
Indexes and Index Properties
Support of MongoDB indexes and index properties is described.
In Oracle Database 19c, all indexes except the text index are ignored, as are all index options. Index support begins with Oracle AI Database 26ai.
Table 48 Indexes
| Index Type | Support (Since) | Notes |
|---|---|---|
| 2d Index | No | |
| 2dsphere Index | No | You can create an Oracle AI Database spatial index using SQL CREATE INDEX on the backing table of the collection. |
| Compound Multikey Index | 26ai | See Note, below. |
| Hashed Index | No | |
| Single Field Multikey Index | 26ai | See Note, below. |
| Text Index | 19c |
Note:
You can create a suitable Oracle AI Database index using SQL CREATE INDEX on the backing table of the collection. See Indexes for JSON Data.
If the field can never have an array value, then create a json_value function-based index. Otherwise, use an index over a materialized view. See JSON Query Rewrite To Use a Materialized View Over JSON_TABLE.
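For example, such a function-based index can be created with SQL, shown here through aggregation stage $sql (described earlier in this chapter). This is a sketch under the assumption that the collection is backed by table emps with JSON-type column data (as in Footnote 1); the index name is illustrative.

db.aggregate([ {$sql :
  `CREATE INDEX emps_ename_idx
     ON emps (json_value(data, '$.ename'))`} ]);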
See Also: Index Types in the MongoDB Reference manual
Table 49 Index Options
| Index Option | Support (Since) | Notes |
|---|---|---|
| background | No-op | Deprecated by MongoDB. Ignored by Oracle AI Database. |
| collation | No | |
| expireAfterSeconds | 26ai | |
| hidden | No | |
| online | 26ai | Specific to Oracle AI Database. Value can be true (default) or false. True means DML operations on the table are allowed during index creation. |
| partialFilterExpression | No | |
| sparse | 26ai | |
| storageEngine | No | |
| unique | 26ai |
See Also: Index Properties in the MongoDB Reference manual
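For example, this mongosh sketch creates a unique index and a TTL-style index using options listed above as supported in Oracle AI Database 26ai (createIndex is a client-side wrapper for the createIndexes command). The collection and field names are assumptions.

db.orders.createIndex({ order_id : 1 }, { unique : true, online : true });      // online is specific to Oracle AI Database
db.session_log.createIndex({ last_seen : 1 }, { expireAfterSeconds : 3600 });   // last_seen is assumed to be a date field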
Footnote Legend
Footnote 1: In Oracle AI Database the collection is table emps with a single JSON-type column data.
Footnote 2: On Oracle Database 19c use this query instead: SELECT json_object(*) data FROM dept;
Footnote 3: On Oracle Database 19c use this query instead: SELECT json_object('_id':deptno, 'name':dname) data FROM dept;