DBMS_CATALOG Package
The DBMS_CATALOG package includes a wide range of subprograms for catalog management, grouped by functionality.
- Catalog Mounting and Unmounting:
You can add a new catalog to your Autonomous AI Database, either by mounting an existing connection (for example an existing database link to a database), or by adding a new connection and registering the external system as a new catalog.
You can enable or disable a catalog.
Note: Disabled catalogs keep their connection details and any current metadata, but they are not updated and are not accessible from either UIs or APIs.
You can also remove or detach a catalog from your Autonomous AI Database.
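As a sketch, a mount call might look like the following. The MOUNT_ICEBERG subprogram and its credential arguments are described later in this document; the catalog_name and rest_endpoint parameters here are illustrative assumptions, not a confirmed signature.

```sql
BEGIN
  -- Mount an Iceberg REST catalog as a new catalog named 'iceberg_cat'.
  -- catalog_name and rest_endpoint are assumed parameter names.
  DBMS_CATALOG.MOUNT_ICEBERG(
    catalog_name            => 'iceberg_cat',
    rest_endpoint           => 'https://...',
    catalog_credential      => 'UNITY_CRED',
    data_storage_credential => 'BUCKET_CREDENTIAL');
END;
/
```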
- Catalog Property Management:
The subprograms mentioned here allow you to retrieve and update various properties of catalogs, schemas, and tables within your system. Specifically, they provide subprograms to:
- Get the value of a property for a catalog, schema, or table, including options to handle large text values via CLOB format.
- Update the value of properties for catalogs, schemas, or tables, with support for both regular and CLOB data types.
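As an illustration only: the names GET_CATALOG_PROPERTY and UPDATE_CATALOG_PROPERTY below are hypothetical stand-ins for the getter and setter subprograms described above, and the parameter names are assumptions.

```sql
DECLARE
  l_value VARCHAR2(4000);
BEGIN
  -- Hypothetical getter: read a property of a catalog
  l_value := DBMS_CATALOG.GET_CATALOG_PROPERTY(
    catalog_name  => 'iceberg_cat',
    property_name => 'some_property');

  -- Hypothetical setter: update the same property
  DBMS_CATALOG.UPDATE_CATALOG_PROPERTY(
    catalog_name   => 'iceberg_cat',
    property_name  => 'some_property',
    property_value => 'new_value');
END;
/
```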
- Credential Management:
A mounted catalog will have zero or more required credentials. These are credentials that need to be specified before the catalog can be used. Iceberg catalogs, for example, typically have two required credentials.
- ICEBERG_CREDENTIAL: A credential used to call the Iceberg REST Catalog endpoints. This is usually a bearer token credential.
- DATA_STORAGE_CREDENTIAL: A credential used to access the Iceberg files in the bucket.
You can think of a required credential as a parameter that needs to be specified. A local credential is an actual CREDENTIAL object in the user's schema that plays the role of a required credential. You can specify the local credentials when you mount the catalog, using arguments like catalog_credential and data_storage_credential in DBMS_CATALOG.MOUNT_ICEBERG. You can also specify them later by calling SET_LOCAL_CREDENTIAL.

```sql
BEGIN
  -- Create a new credential
  DBMS_CLOUD.CREATE_CREDENTIAL('BUCKET_CREDENTIAL', ...);

  -- Make the new credential play the role of the required 'DATA_STORAGE_CREDENTIAL'
  DBMS_CATALOG.SET_LOCAL_CREDENTIAL(
    catalog_name        => 'iceberg_cat',
    required_credential => 'DATA_STORAGE_CREDENTIAL',
    local_credential    => 'BUCKET_CREDENTIAL');
END;
```

You can see the current mapping between required and local credentials with the GET_LOCAL_CREDENTIAL_MAP function.

```sql
SELECT JSON_QUERY(dbms_catalog.get_local_credential_map('iceberg_cat'), '$' PRETTY)
FROM dual;

JSON_QUERY(DBMS_CATALOG.GET_LOCAL_CREDENTIAL_MAP('ICEBERG_CAT'),'$'PRETTY)
--------------------------------------------------------------------------
{
  "DATA_STORAGE_CREDENTIAL" : "BUCKET_CREDENTIAL",
  "ICEBERG_CREDENTIAL" : "UNITY_CRED"
}
```

- Catalog Entity Management:
The subprograms listed here manage various entities within a catalog, including the creation, retrieval, and deletion of storage links, schemas, tables, and other catalog objects. They provide subprograms to:
- Create and drop cloud storage links linked to catalogs.
- Create and drop external tables within catalogs.
- Create and drop schemas and their contents.
- Retrieve lists of tables, schemas, and other catalog objects with optional filtering.
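A hedged sketch of these operations follows; CREATE_STORAGE_LINK and CREATE_EXTERNAL_TABLE are hypothetical names standing in for the creation subprograms listed above, and all parameter names are assumptions.

```sql
BEGIN
  -- Hypothetical: create a cloud storage link for the catalog
  DBMS_CATALOG.CREATE_STORAGE_LINK(
    catalog_name      => 'iceberg_cat',
    storage_link_name => 'my_link',
    uri               => 'https://...');

  -- Hypothetical: create an external table over a catalog table
  DBMS_CATALOG.CREATE_EXTERNAL_TABLE(
    catalog_name => 'iceberg_cat',
    schema_name  => 'sales',
    table_name   => 'orders');
END;
/
```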
- Data Access and Caching:
This section encompasses functions that handle querying data, managing result cursors, and controlling metadata caching within catalogs. It includes capabilities to:
- Generate SELECT statements for logical tables with customizable options.
- Open cursors to fetch data from logical tables.
- Flush and prefill the metadata cache for catalogs to keep data up to date.
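As a sketch only: GENERATE_SELECT and FLUSH_METADATA_CACHE below are hypothetical names for the statement-generation and cache-control subprograms described above, with assumed parameters.

```sql
DECLARE
  l_sql CLOB;
BEGIN
  -- Hypothetical: generate a SELECT statement for a logical table
  l_sql := DBMS_CATALOG.GENERATE_SELECT(
    catalog_name => 'iceberg_cat',
    schema_name  => 'sales',
    table_name   => 'orders');

  -- Hypothetical: refresh cached metadata so queries see current definitions
  DBMS_CATALOG.FLUSH_METADATA_CACHE(catalog_name => 'iceberg_cat');
END;
/
```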
- Synchronization and Sharing:
This section lists subprograms that enable the management and maintenance of consistent, up-to-date catalogs and schemas across different systems or environments. They provide the following functionality:
- CREATE_SYNCHRONIZED_SCHEMAS: Creates local schemas on the database that are synchronized with remote catalog definitions, ensuring that the local environment reflects the remote source.
- DROP_SYNCHRONIZED_SCHEMAS: Removes one or more schemas that were previously synchronized.
- UPDATE_SYNCHRONIZED_SCHEMA_PROPERTY: Modifies properties of synchronized schemas, such as change tracking or synchronization behavior, by updating properties with new values in CLOB format.
- GRANT_TO_RECIPIENT & REVOKE_FROM_RECIPIENT: Manage access permissions by granting or revoking access to catalogs for specific share recipients, controlling who can view or modify the catalog data.
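The subprogram names below come from this section, but the parameter names are illustrative assumptions rather than a confirmed signature:

```sql
BEGIN
  -- Create local schemas synchronized with the remote catalog
  -- (catalog_name is an assumed parameter name)
  DBMS_CATALOG.CREATE_SYNCHRONIZED_SCHEMAS(
    catalog_name => 'iceberg_cat');

  -- Grant a share recipient access to the catalog
  -- (recipient_name is an assumed parameter name)
  DBMS_CATALOG.GRANT_TO_RECIPIENT(
    catalog_name   => 'iceberg_cat',
    recipient_name => 'PARTNER_RECIPIENT');
END;
/
```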