Augment Your Data
Extract and load data from your custom transactions and make it readily available in tables populated in the autonomous data warehouse.
You can use the system-provided or customer-provided source tables that correspond to the custom transaction objects that you created in NetSuite. The system-provided tables are pre-validated by Oracle NetSuite Analytics Warehouse. The customer-provided tables are other source tables that are available for extraction but aren't validated by Oracle NetSuite Analytics Warehouse. As a user with the functional administrator or system administrator application role, you can allow usage of a particular table that isn't pre-validated by Oracle NetSuite Analytics Warehouse. However, Oracle can't ensure that such custom tables are processed successfully or without performance impacts, such as delays in the daily refresh of data.
If you enable the SME Options for Data Augmentation under the Generally Available Features tab on the Enable Features page, then you can augment your reports with datasets created by extending an existing entity or group of facts, by adding a new dimension in the target instance, or by adding a new fact in the target instance. When you run these data augmentation pipeline jobs, they publish these datasets to the semantic model. However, this isn't the recommended practice.

The recommended method is to leave the SME Options for Data Augmentation feature disabled and use the default Dataset augmentation type to bring varied data into the warehouse. When you run the Dataset data augmentation pipeline job, it doesn't publish anything to the semantic model. You can then use semantic model extensions to create your own semantic model. This method supports complex semantic modeling to meet your business requirements.

Use the Data Augmentation capability to bring data into the warehouse, and then use the Semantic Model Extensibility capability to create the joins and expose that data in the subject areas that you want. This separation provides flexibility and better performance for both capabilities. It also simplifies lifecycle management: if you need to adjust the semantic model, you can make the changes directly in the semantic model without changing the data augmentation that brought the data into the warehouse.
The Dataset augmentation type isn't associated with any other augmentations, and unlike other augmentation types, you can't designate its attributes as dimensions or measures. Based on the incremental schedule, the data in this dataset is refreshed during the scheduled pipeline refresh. The dataset isn't associated with any subject area, because it simply copies the dataset from the source and creates a warehouse table. You can perform semantic model extension after the table is created. Before you use this dataset to build joins or incorporate one of its objects into your semantic model, you must run an incremental load, because the incremental load populates the dataset.
The actions available for a data augmentation depend on its activation status:
- Activation in Progress - You can’t edit, delete, or schedule a data augmentation pipeline job while activation is in progress.
- Activation Completed - You can edit the data augmentation to add or delete VO attributes and save the changes. You can’t modify the schedule in this status.
- Activation Scheduled - You can edit the data augmentation to add VO attributes, save the changes while retaining the existing schedule, reschedule the execution date and time, or execute the plan immediately.
Note:
You can change the names of the columns that you've added from the various data sources in your data augmentation. If you later delete a data augmentation, you must wait for the daily incremental run to complete before the change appears in the reports, cards, and decks.