2 Configure Oracle NetSuite Analytics Warehouse Data
You can set up data transfer using the NetSuite user interface, or use the Data Configuration tile on the Oracle NetSuite Analytics Warehouse Console.
To set up data transfer using the NetSuite user interface, see Setting Up the Data Transfer.
- Sign in to your NetSuite account.
- Click Setup, click Integration, and then click Configuration.
- On the NetSuite Analytics Warehouse Configuration page, click Log in to NetSuite Analytics Warehouse. You see the Oracle NetSuite Analytics Warehouse home page.
- On the home page, open the Navigator menu, click Console, and then click Data Configuration under Data Administration. You see the Data Configuration page on which you can perform all the tasks discussed in this section.
Topics:
- Typical Workflow to Configure Data
- About Transaction Entities
- About Data Pipelines for Functional Areas
- About Data Refresh Performance
- About Pipeline Parameters
- Set Up the Pipeline Parameters
- About Reporting Configurations
- Set Up the Reporting Configurations
- Create a Data Pipeline for a Functional Area
- Edit a Data Pipeline for a Functional Area
- Activate a Data Pipeline for a Functional Area
- Activate Value-added Datasets
- Deactivate a Data Pipeline for a Functional Area
- Delete a Data Pipeline for a Functional Area
- Refresh a Data Pipeline for a Functional Area
- Reload Data for a Data Pipeline
- Reset the Data Warehouse
- Reset the Cache
- View Load Request History
- View the Audit Log
- About Augmenting Your Data
- Augment Your Data
- Create a Dimension Alias
- Connect to an Autonomous Data Warehouse
- Disable Data Pipeline
- About Managing Data Connections
- Schedule Frequent Refreshes of Data
- Schedule Periodic Full Reload of Functional Area Data
- Schedule Frequent Refreshes of Warehouse Tables (Preview)
- Override Data Pipeline Schedules for Functional Areas (Preview)
- Prioritize Datasets for Incremental Refresh (Preview)
- Reset and Reload the Data Source
- Extend Data with Custom Data Configurations
Typical Workflow to Configure Data
Follow these tasks as a guide to configure Oracle NetSuite Analytics Warehouse data.
Task | Description | More Information |
---|---|---|
Specify the pipeline parameters | Set up the pipeline parameters for your data model file before running your data pipelines for the functional areas. | Set Up the Pipeline Parameters |
Create a data pipeline for a functional area | To start analyzing your data, create data pipelines for the functional areas to copy data to the data warehouse. | Create a Data Pipeline for a Functional Area |
Activate a data pipeline for a functional area | You must activate the data pipeline for a functional area to run and load data into the data warehouse. | Activate a Data Pipeline for a Functional Area |
Specify the reporting configurations | Set up the reporting configurations for your data model file after activating a data pipeline for a functional area. | Set Up the Reporting Configurations |
View request history | View the load request history by functional area, load type, and status. | View Load Request History |
Augment your data | Augment your reports by using datasets that you create with specific columns from various data stores. | Augment Your Data |
About Transaction Entities
Oracle NetSuite Analytics Warehouse is organized into logical business groupings called functional areas, which contain tables specific to the transaction types in each functional area.
Table Types
- <Transaction Type> tables are transaction-type stripes. In the NetSuite universal module definition (UMD), the transaction table is striped by transaction type using the TYPE column, and NetSuite Analytics Warehouse creates one table per stripe. A transaction type can be SalesOrd, CustInvc, and so on, but <Transaction Type> uses an expanded, readable format such as _SALES_ORDER_ or _CUSTOMER_INVOICE_ respectively.
- <NS Table Name> refers to NetSuite tables that store dimensional data in the source.
Table 2-1 NetSuite Analytics Warehouse Table Types
Table Type | Naming Pattern | Source |
---|---|---|
Fact — Transaction Header | DW_NS_<Transaction Type>_F | Derived from NetSuite source tables transaction |
Fact — Transaction Lines | DW_NS_<Transaction Type>_LINES_F | Derived from NetSuite source tables transaction, transactionline, and transactionaccountingline |
Fact — Snapshot | DW_NS_<Transaction Type>_SNAPSHOT_F | Derived from transaction, systemnote, transactionhistory, and other transactions which act as snapshot events for the transaction |
Dimensions | DW_NS_<NS Table Name>_D | Usually derived directly from source table |
General Helper Table | DW_NS_<NS Table Name>_G | Usually derived directly from source table |
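To make the striping concrete, here's a minimal illustrative query against the NetSuite source; the TYPE value and target table name come from the examples above, while the selected columns are assumptions for illustration:

```sql
-- Illustration only: source transaction rows with TYPE = 'SalesOrd' form
-- the stripe that feeds DW_NS_SALES_ORDER_F; rows with TYPE = 'CustInvc'
-- feed DW_NS_CUSTOMER_INVOICE_F, and so on.
SELECT id, trandate, foreigntotal
FROM   transaction
WHERE  type = 'SalesOrd';
```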
Fact Denormalization
Fact — Transaction Lines tables are usually denormalized and contain additional attributes and dimension keys from the Fact — Transaction Header table to simplify reporting. For example, the Sales Order Lines fact contains status and trandate, which are denormalized from the Sales Order Header fact.
Fact — Transaction Header tables also contain dimension keys or attributes from the mainline of the Fact — Transaction Lines in the NetSuite source tables that are non-transactional in nature.
Dimensions
The dimension keys used in a fact are usually named after the dimension table with which they are supposed to join. For example, DW_NS_SALES_ORDER_F.CUSTOMER joins to DW_NS_CUSTOMER_D.ID.
Dimensions can be common across functional areas and transaction types. For example, customer and item are some of the common dimensions in NetSuite Analytics Warehouse.
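For instance, here's a minimal join sketch based on the documented key convention; the customer name column is hypothetical, so verify actual column names in your warehouse:

```sql
-- Join the Sales Order header fact to the Customer dimension:
-- the fact key CUSTOMER joins to the dimension column ID.
SELECT f.trandate,
       f.foreigntotal,
       c.companyname        -- hypothetical customer attribute
FROM   DW_NS_SALES_ORDER_F f
JOIN   DW_NS_CUSTOMER_D c
  ON   f.customer = c.id;
```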
Dimension History
The dimensions used in NetSuite Analytics Warehouse are type 1 (no history is kept). Any change in the source is reflected in NetSuite Analytics Warehouse, either incrementally (for high-volume dimensions) or in full update mode.
Base and Accounting Book Amounts for Transaction Amounts
Usually, transaction currency-based amount columns extracted from a source are converted to a subsidiary or base currency in NetSuite Analytics Warehouse fact lines tables. For example, foreignamount has a corresponding base_amount, estgrossprofit has a corresponding base_estgrossprofit, and so on.
Similarly, in line facts, the accounting book currency-based amounts are also converted into base and transaction currency when applicable. For example, accountingbook_credit and accountingbook_debit have base_creditamount and base_debitamount (in base or subsidiary currency) along with credit and debit (in transaction currency). The line facts also contain the accountingbook_currency and base_currency keys as role-playing dimensions. This helps in identifying the currency used for these measure columns.
Header facts contain conversions to the base or subsidiary currency only for corresponding transaction currency amounts, such as foreigntotal, which has a corresponding base_total along with the base_currency key. Header facts contain the base_currency key as a role-playing dimension. Header facts don't contain any accounting book amounts.
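The following sketch shows how these measure and currency-key columns might be queried side by side on a lines fact; the column names come from the examples above, and the choice of fact table is illustrative:

```sql
-- Transaction-currency, base-currency, and accounting-book measures,
-- with the role-playing currency keys that identify each currency.
SELECT l.foreignamount,           -- transaction currency
       l.base_amount,             -- base (subsidiary) currency
       l.accountingbook_credit,   -- accounting book currency
       l.base_creditamount,       -- converted to base currency
       l.base_currency,           -- role-playing currency key
       l.accountingbook_currency  -- role-playing currency key
FROM   DW_NS_SALES_ORDER_LINES_F l;
```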
Flags Attributes
Both fact and dimension tables in NetSuite Analytics Warehouse contain flag attributes that are useful for reporting purposes. The typical values in these columns are either T or F, indicating true or false, or alternatively Y or N, indicating yes or no.
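Because flags are stored as character literals, filter on the literal value. A small sketch follows; the mainline flag column is an assumption based on the NetSuite source structure mentioned earlier:

```sql
-- Count non-mainline lines; flag columns hold 'T'/'F' (or 'Y'/'N'),
-- so compare against the character literal, not a boolean.
SELECT COUNT(*)
FROM   DW_NS_SALES_ORDER_LINES_F
WHERE  mainline = 'F';
```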
Snapshot Facts
NetSuite Analytics Warehouse contains snapshot facts for many transaction types. These tables track daily status and amount changes for a transaction type. The granularity of the snapshot tables is a combination of transaction identifier and date. Snapshots usually contain one row for each day from the transaction start date to the transaction close date. If the transaction is still open on the ETL run date, then snapshots are built up to one day before the ETL run date.
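For example, here's a hedged sketch of reading a snapshot fact at its documented grain (transaction identifier plus date); the table name follows the documented pattern, and the snapshot_date and status column names are assumptions:

```sql
-- One row per day for a single transaction, from its start date to its
-- close date (or to the day before the ETL run date if still open).
SELECT snapshot_date, status, foreigntotal
FROM   DW_NS_SALES_ORDER_SNAPSHOT_F
WHERE  transaction = 32451
ORDER  BY snapshot_date;
```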
Delete Transactions
NetSuite Analytics Warehouse supports deletion of transactions. When a transaction is deleted in the source system, it is also deleted in NetSuite Analytics Warehouse after the next incremental refresh.
Warehouse Table Grain Identification
The granularity of each table (fact, dimension, and so on) can be determined by checking the source_record_id column: when filtered to a single value, the table should return only one record. As a convention, source_record_id is placed immediately after the columns that constitute the grain of the table. For example, the granularity of DW_NS_SALES_ORDER_LINES_F consists of transaction, id, and accountingbook, so source_record_id is placed as the fourth column, after these three grain columns.
The source_record_id column contains the concatenated, string-converted values of the grain columns. Hence, for the above example, DW_NS_SALES_ORDER_LINES_F.source_record_id = transaction~id~accountingbook (for example, 32451~2~1).
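You can verify a table's grain with a query like this sketch; the table name is the documented example, and source_record_id is the documented convention:

```sql
-- A correctly grained table returns no rows here: each source_record_id
-- (the '~'-concatenated grain values, for example '32451~2~1')
-- identifies exactly one row.
SELECT source_record_id, COUNT(*) AS row_count
FROM   DW_NS_SALES_ORDER_LINES_F
GROUP  BY source_record_id
HAVING COUNT(*) > 1;
```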
Extraction Dates and Load Volume Control
Both Fact — Transaction Header and Fact — Transaction Lines tend to be high-volume tables and may impact data extract performance. To control the extract data volume, you can set an initial extract date as part of the pipeline parameters while performing the setup. This controls how much data the initial load extracts. Incremental loads are controlled using the last modified date attribute in the transaction and transactionline tables. Many dimensions also extract data incrementally using the last modified date attribute, while some dimensions always extract in full; the ones that load in full are usually smaller dimensions.
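Conceptually, the incremental extract applies a last-modified-date filter like the sketch below; the actual extractor logic is internal to the product, and both the column name and the bind variable are assumptions:

```sql
-- Conceptual incremental-extract filter: pull only source rows modified
-- since the last successful run. The initial extract date bounds the
-- first full load in the same way.
SELECT *
FROM   transaction
WHERE  lastmodifieddate >= :last_extract_date;
```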
Reporting Parameters
NetSuite Analytics Warehouse has one reporting parameter that needs to be set: subsidiary. The subsidiary parameter is used for consolidating transaction values from other subsidiaries' currencies to this reporting subsidiary's currency.
About Data Pipelines for Functional Areas
Data pipelines for functional areas load data specific to a functional area into the data warehouse.
These pipelines hold configuration parameters specific to a functional area, such as the General Ledger functional area under Finance or the Sales functional area under NetSuite.
Allowed Actions for Data Pipelines
Here's a list of the actions allowed for data pipelines when they are in various states. This information enables you to know which actions you can and can't perform when a data pipeline is in a particular state.
Pipeline status | Available actions | Additional information |
---|---|---|
Saved | Edit and Delete | You can activate a data pipeline using the Edit wizard or delete the data pipeline. |
InActive | Edit, Activate, and Delete | You can activate a data pipeline using the Edit wizard or delete the data pipeline. |
Activation Scheduled | Edit and Delete | If you edit a scheduled data pipeline, then the previous data pipeline is unscheduled and a new data pipeline is scheduled. |
Activation in Progress | View | You can view the data pipeline in Read-only mode. |
Activation Complete | Edit, Delete, Reset, Deactivate, and Refresh Data | You can edit, delete, reset, or deactivate the data pipeline, or refresh its data. |
About Data Refresh Performance
Oracle strives constantly to improve performance for data loading in pipelines.
The performance of loading data for your instance varies. The time to complete data processing, both full warehouse loads and incremental data loads, depends on various factors. A data pipeline load includes the following:
- Extracting data from the NetSuite source.
- Loading the data into Oracle Autonomous Data Warehouse.
- Transforming the data into the prebuilt schema.
The time to complete these steps depends on the following factors:
- The availability of the source system.
- The size and complexity of the source data.
- The activated functional areas.
- Custom SQL queries that impact Oracle Autonomous Data Warehouse.
- Your queries running concurrently.
- Customizations made on the source system objects (which require a full load for those objects).
- The patching of source and target systems.
To plan around the expected refresh completion time, you can:
- Turn on the pipeline execution estimation using the Data Refresh Estimate preview feature on the Enable Features page. The estimated refresh completion details are visible on the Pipeline Settings page.
- Trigger any subsequent operations based on the estimated refresh completion time.
- Reach out to Oracle Support if the refresh isn't completed by the estimated completion time.
Data refresh is typically completed daily unless the data pipeline jobs are turned off or stuck. You can observe data loading times for your specific source and warehouse configuration to estimate the time it takes for an incremental daily refresh and for a full warehouse load. This information can help you plan for the optimal time in the day to start your daily data refresh. You may want to schedule the data load to run during off-peak hours, for example, run initial full warehouse loads during weekends and incremental loads during weeknights to ensure that users aren't impacted.
You can view the estimated refresh completion time for daily pipelines on the Pipeline Settings page in the Estimated Refresh Completion field as a Preview feature. This enables you to plan your tasks in the application.
The incremental data refresh process refreshes objects as base and derived datasets. The base datasets are objects that are part of the core pipeline, sourced directly from the NetSuite source, and needed for core content (for example, transactions and dimensions). The derived datasets are based on data that requires additional processing (for example, datasets used in prebuilt machine learning use cases). The incremental refresh process refreshes the base datasets first to ensure that essential data is made available quickly. The system predicts and displays the incremental load completion time for the base datasets but doesn't do the same for derived datasets. If you haven't activated content that has derived datasets, then you won't see this information in the Data Pipeline section of the Pipeline Settings page.
About Pipeline Parameters
The pipeline parameters apply to all functional areas.
- Initial Extract Date: The initial extract date is used when you extract data for a full load. Transactional data created after the initial extract date is processed and loaded to the warehouse. This reduces the initial data load volume. After extracting the data for a functional area, avoid changing the initial extract date. If you must change the initial extract date, then after changing it, reset the data warehouse and reactivate the functional areas. See Reset the Data Warehouse.
- Data Refresh Schedule: Specify the frequency and when you want the incremental data load to happen. When specifying the timezone, the recommendation is to use city names to handle daylight saving time. For example, instead of selecting a timezone such as EST or PST, select Europe/Bucharest or America/Los_Angeles. In this case, the data refresh process calculates the value in the Time field based on the local time, irrespective of daylight saving time.
Set Up the Pipeline Parameters
Set up the pipeline parameters for your data model file before running your data pipelines for the functional areas.
- In Oracle NetSuite Analytics Warehouse Console, click Data Configuration.
- On the Data Configuration page, under Configurations, click Pipeline Settings.
- Under Global Parameters, indicate the initial extract date from which to load the transaction data.
- Under Data Refresh Schedule, select the interval and depending on the selected interval, specify the time, day, and month when you want the incremental data load to happen.
- Click Save.
About Reporting Configurations
Set up the reporting configurations to specify how data is presented on the key metrics, workbooks, visualizations, dashboards, and reports in Oracle NetSuite Analytics Warehouse.
You must activate a data pipeline for a functional area to be able to set up the reporting configurations. Based on the functional area for which you've activated a data pipeline, you see the applicable reporting configurations. For example, if you've activated a data pipeline for a functional area in Oracle Fusion ERP Analytics, then you see the reporting configurations applicable for Oracle Fusion ERP Analytics.
About the Reporting Configuration for NetSuite Analytics Warehouse
You specify values for these parameters as you configure reporting for Oracle NetSuite Analytics Warehouse:
- Subsidiary: This parameter controls the subsidiary or subsidiaries used for reporting. If you select parent, all the subsidiaries are covered in the reporting layer.
Set Up the Reporting Configurations
Specify the report parameters for your data model file.
- In Oracle NetSuite Analytics Warehouse Console, click Reporting Configuration.
- On the Reporting Configuration page, under Global Parameters, in Subsidiary, select a subsidiary to transfer data from that subsidiary.
- Click Save.
Create a Data Pipeline for a Functional Area
As a functional administrator, create a data pipeline for a functional area to copy data to the data warehouse. Use these steps to select and schedule runs to update the Oracle Autonomous Data Warehouse instance with the latest data.
Perform this task only after you have set up the pipeline and report parameters. You can schedule an activation of a functional area even if activation of another functional area is in progress. However, you can't update the activation of a functional area that's in progress.
- Sign in to your service.
- In Oracle NetSuite Analytics Warehouse Console, click Data Configuration under Application Administration.
- On the Data Configuration page, click your service. For example, under Applications, click NetSuite Warehouse.
- On the service page, for example, the Data Configuration: Oracle NetSuite Analytics page, click Create.
- On the NetSuite Warehouse page, use the wizard to select your application areas to transfer data to the warehouse, and then click Next.
- Review the parameters and click one of the options:
- Cancel: To cancel the data pipeline for the functional area.
- Save: To save the data pipeline for the functional area but not activate it.
- Activate: To schedule when to run the data pipeline for the functional area. See Activate a Data Pipeline for a Functional Area.
Edit a Data Pipeline for a Functional Area
You can change any details of your data pipeline for a functional area prior to activating it.
- Sign in to your service.
- In Oracle NetSuite Analytics Warehouse Console, click Data Configuration under Application Administration.
- On the Data Configuration page, click your service. For example, under Applications, click NetSuite Warehouse.
- On the service page, for example, the Data Configuration: Oracle NetSuite Analytics page, click the Action menu for the data pipeline for a functional area that you want to edit, and click Edit.
- Make the changes, review the changes, and then click Save or Activate.
Activate a Data Pipeline for a Functional Area
You must activate the data pipeline for a functional area to run it and load the data into the data warehouse.
A data refresh request isn't processed in these cases:
- Load in progress: An incremental load is in progress.
- An impending load: An incremental load is scheduled to run in the next hour.
- Exceeded the number of daily refresh requests: The maximum number of ad hoc data refresh requests per day is four. If you've exceeded this number, then you can submit a request the following day.
Activate Value-added Datasets
Oracle NetSuite Analytics Warehouse provides ready-to-use value-added datasets that load data independently into Oracle NetSuite Analytics Warehouse and aren't dependent on activation of any other functional areas. You must specifically activate these datasets.
Deactivate a Data Pipeline for a Functional Area
You can deactivate all the incremental setup and jobs of a data pipeline for a functional area.
Delete a Data Pipeline for a Functional Area
You can delete a data pipeline for a functional area prior to activating it.
Refresh a Data Pipeline for a Functional Area
Refresh an activated data pipeline for a functional area to bring in new data for the selected functional area.
When you refresh the data pipeline, the system executes an incremental load immediately. The next scheduled run could take additional time to complete because of any new data. If you had reset a data pipeline for the functional area before refreshing data, then the system runs a full load immediately for that functional area.
Note:
Executing an on-demand data refresh while working on the data could cause temporary inconsistencies.
Reload Data for a Data Pipeline
When you reload data for a data pipeline, you start the process of completely refreshing the data in the data warehouse for the functional area.
Note:
If you're reloading data for a functional area that depends on other functional areas, then you must also reload the data for the dependent functional areas to avoid incremental refresh failures. Alternatively, you can reset and reload the data source rather than reloading data for individual functional areas. See Reset and Reload the Data Source.
When you issue a request to reload data, the Request History page shows that the request is received and accepted. After you reload the data, the previous data still exists in the data warehouse related to the functional area. On the next data refresh, Oracle NetSuite Analytics Warehouse discards the existing data and loads new data. After reloading the data for the data pipeline, you see the data pipeline for the functional area on the Data Configuration page with the Completed status.
In a 24-hour period, you can reload the data for the data pipelines of only four functional areas.
Reset the Data Warehouse
Reset your data warehouse when your data source instance is refreshed from another environment, for example, when the environment is refreshed from production to a test environment. You may also want to reset the data warehouse when the data becomes corrupt.
Prior to resetting the data warehouse, you must have at least one activated functional area. After resetting the data warehouse, the last refresh date shows the last increment details. You must reactivate all the data pipelines for the functional areas to load data. See Activate a Data Pipeline for a Functional Area.
Note:
Your data isn't available until Oracle NetSuite Analytics Warehouse completes all the activations after resetting the data warehouse.
Reset the Cache
The data augmentation source column information is cached in the browser cache and is maintained for a week. If your source has new columns and they don't display in the Data Augmentation wizard, then you can reset the browser cache and retry data augmentation.
View Load Request History
You can view the data pipeline load request history by functional area, load type, and status along with other details for tracking and reference purposes.
The request history doesn't display the incremental loads. Because the request history shows only past load requests, any unscheduled loads don’t affect the existing pipelines and you can reactivate any functional area or augmentation.
About Request Types and Statuses
When you perform tasks on a functional area, you can review the different types of requests involved with those tasks in the Request History area.
Request Types
Request Type | Description |
---|---|
Batch Request | An operation that performs multiple functions, such as mapping attributes or importing and exporting objects. |
Content Patch | A content patch upgrade run. |
Full Load (Adhoc) | A full load run that happens immediately upon request. |
Full Load (Scheduled) | A request to schedule a full load run. |
Map Attribute | A NetSuite-specific request run. |
Module Reset | A request to delete an active functional area or source table. |
Refresh Data | The system ran a request to refresh data. |
Reset | A request to refresh the data in the data warehouse for the functional area. |
Reset Data Warehouse | A request to reset the warehouse. This reset deletes all the customer-added warehouse data. |
Target Model Upgrade | A request to upgrade the target model to the latest available version. |
Request Statuses
Request Status | Description |
---|---|
Activation Completed | The job ran successfully and is now complete. |
Activation in Progress | The job is currently running. |
Activation Scheduled | The job is scheduled to run at a future date and time. |
Deactivation Complete | The job is removed from Active status. |
InActive | The job isn't saved or scheduled to run. |
Received | The job request is submitted. |
Saved | Job is saved but not scheduled to run. |
Troubleshooting | The job is taking a long time and Oracle is investigating it. |
View the Audit Log
You can view the data configuration-related actions that were performed such as activation of a data pipeline for a functional area, setting up of reporting parameters, and saving a data augmentation.
About Augmenting Your Data
Augment your reports by choosing specific columns from various custom transaction objects.
You can augment your data using these custom object types:
- Custom attributes or records that reference a custom segment and custom list.
- Custom attributes that don't reference other objects.
- Custom references that don't reference other objects.
Data augmentation allows you to seamlessly extract and load data from these custom transaction types and make it readily available in tables populated in Oracle Autonomous Data Warehouse. It also extends the subject areas in the semantic model and makes the custom transaction types available for reporting.
Augment Your Data
Extract and load data from your custom transactions and make it readily available in tables populated in the autonomous data warehouse.
You can use the system-provided or customer-provided source tables, which are the custom transaction objects that you created in NetSuite. The system-provided tables are pre-validated by Oracle NetSuite Analytics Warehouse. The customer-provided tables are other source tables that are available for extraction but aren't validated by Oracle NetSuite Analytics Warehouse. As a user with the functional administrator or system administrator application role, you can allow usage of a particular table that isn't pre-validated. However, Oracle can't ensure the successful processing of such custom tables, nor can it prevent performance impacts, such as delays in the daily refreshing of data.
If you enable the SME Options for Data Augmentation under the Generally Available Features tab on the Enable Features page, then you can augment your reports with datasets created by extending an existing entity or group of facts, by adding a new dimension in the target instance, and by adding a new fact in the target instance. When you run these data augmentation pipeline jobs, they publish these datasets to the semantic model. However, this isn't the recommended practice.
The recommended method is to not enable the SME Options for Data Augmentation feature and instead use the default Dataset augmentation type to bring varied data into the warehouse. When you run the Dataset data augmentation pipeline job, it doesn't publish anything to the semantic model. You can then use the semantic model extensions to create your own semantic model. This method supports complex semantic modeling to meet your business requirements. Use the Data Augmentation capability to bring data into the warehouse, and then use the Semantic Model Extensibility capability to create the joins and expose that data to the subject areas that you want. This enables flexibility and better performance of both capabilities. Additionally, this method allows better lifecycle management. For example, if you need to make any adjustments to the semantic model, then you can make the changes directly in the semantic model; you don't need to adjust the data augmentation that brought the data into the warehouse.
The Dataset augmentation type isn't associated with any other augmentations. Based on the incremental schedule, the data in this dataset gets refreshed during the scheduled pipeline refresh. But unlike other augmentations, this augmentation isn't linked to other augmentations, and you can't mark its attributes as dimensions or measures. This dataset isn't associated with any subject area, because it simply copies the dataset from the source and creates a warehouse table. You can perform semantic model extension after the table is created. To use this dataset to build joins or incorporate an object from the dataset into your semantic model, you must run an incremental load first, because the incremental load populates the dataset.
You can perform these actions based on the status of the data augmentation pipeline job:
- Activation in Progress - You can't edit, delete, or schedule a data augmentation pipeline job while activation is in progress.
- Activation Completed - You can edit the data augmentation to add or delete VO attributes and save the changes. You can’t modify the schedule in this status.
- Activation Scheduled - You can edit the data augmentation to add VO attributes, save the changes while retaining the existing schedule, reschedule the execution date and time, or execute the plan immediately.
Note:
You can change the names of the columns that you've added from the various data sources in your data augmentation. Later, if you delete a data augmentation, then you must wait for the daily incremental run to complete to see the change in the reports, cards, and decks.
Create a Dimension Alias
Dimension aliases are alternative names for the warehouse dimension tables. You can specify alias names for Dimension type augmentations and for data that's already in the warehouse as dimensions. The alias names enable you to reuse the existing warehouse tables under different names in the subject areas.
You create dimension aliases as a type of augmentation. This is useful when you want to reuse a dimension for analysis under another name. For example, if you would like to reuse the Vendor dimension as a Shipper dimension as well, you can create an alias for Vendor.
You can also select complex custom attributes as dimension aliases and retain the name defined in NetSuite. Ensure that you enable Dimension Alias in the Custom Attribute Mapper section under the Preview Features tab on the Enable Features page.
When you create the aliases, Oracle NetSuite Analytics Warehouse doesn’t copy the data for the dimension aliases. These augmentations don’t have a pipeline status, but they are visible in the subject area.
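Conceptually, a dimension alias behaves like a view over the existing dimension table, exposed under a new name. The sketch below only illustrates the idea; you create aliases from the UI, and the table names here follow the documented naming pattern but are assumptions:

```sql
-- The Shipper "dimension" is the existing Vendor dimension exposed
-- under another name, with no data copied.
CREATE VIEW DW_NS_SHIPPER_D AS
SELECT * FROM DW_NS_VENDOR_D;
```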
Connect to an Autonomous Data Warehouse
NetSuite Analytics Warehouse provides three ways to connect to an Autonomous Data Warehouse: Oracle Analytics Cloud Connector, Database Actions (SQL Developer Web), and SQL Developer desktop client.
The Oracle Analytics Cloud connector provides a way to query database objects for analysis, visualization, and reporting. You can query database objects by configuring Oracle Analytics Cloud datasets or writing SQL manually.
Database Actions (SQL Developer Web) and SQL Developer desktop client allow you to create, edit, and query database objects. You can also use them to perform administrative tasks such as creating users and database schemas and resetting passwords. These tools help you investigate, debug, and validate data.
Oracle recommends using Database Actions (SQL Developer Web) to leverage the modern cloud interface and its additional features.
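For example, once connected, a quick sanity query using the standard Oracle data dictionary lists the warehouse tables you can browse; the DW_NS_ prefix follows the table-naming patterns described earlier in this chapter:

```sql
-- List warehouse tables visible to the connected user.
SELECT table_name
FROM   all_tables
WHERE  table_name LIKE 'DW_NS_%'
ORDER  BY table_name;
```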
Configure Autonomous Data Warehouse Credentials
You need to configure credentials of the OAX_USER and ADMIN users and download a wallet file to connect to an Autonomous Data Warehouse.
All connection methods initially require credentials for ADMIN and OAX_USER. You must reset these credentials annually, or configure them if you're performing this task for the first time.
The ADMIN user can perform certain important tasks, such as changing the OAX_USER password, but can't see pipeline objects or data. The OAX_USER user can browse and query pipeline objects and data. The following steps ensure that both users have correct credentials that you can use for connecting to an Autonomous Data Warehouse.
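For example, once connected as ADMIN in Database Actions or SQL Developer, you can reset the OAX_USER password with standard Oracle SQL; the password shown is a placeholder:

```sql
-- ADMIN can change the OAX_USER password; use a password that meets
-- your database's complexity rules.
ALTER USER OAX_USER IDENTIFIED BY "Example#Passw0rd1";
```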
About Autonomous Data Warehouse Wallet and Administrator Credentials
The Autonomous Data Warehouse Wallet contains certificate keys for accessing your Autonomous Data Warehouse instance.
The client credentials .zip file that you download contains the following files:
- cwallet.sso - Oracle auto-login wallet.
- ewallet.p12 - PKCS #12 wallet file associated with the auto-login wallet.
- sqlnet.ora - SQL*Net profile configuration file that includes the wallet location and TNSNAMES naming method.
- tnsnames.ora - The SQL*Net configuration file that contains network service names mapped to connect descriptors for the local naming method.
- Java Key Store (JKS) files - Key store files for use with JDBC Thin connections.
- README - Contains information about database tools and resources.
Note:
Wallet files, along with the database user ID and password, provide access to data in your Autonomous Data Warehouse. Store wallet files in a secure location. Share wallet files only with authorized users. If wallet files are transmitted in a way that might be accessed by unauthorized users (for example, over public email), transmit the wallet password separately and securely.
Reset the ADMIN User Password
Administrator credentials allow you to access and manage your Autonomous Data Warehouse. You can reset the ADMIN user's credentials, but if you reset the password, that might affect other users who log in as ADMIN.
- Sign in to NetSuite Analytics Warehouse.
- In Oracle NetSuite Analytics Warehouse Console, click Warehouse.
- On the Warehouse page, under Administrator Credentials, click Reset Credentials, then provide and confirm the new password.
Download the Autonomous Data Warehouse Wallet
The Autonomous Data Warehouse Wallet contains certificate keys for accessing your Autonomous Data Warehouse instance.
Get the Database Actions URL
You need to locate the URL for Database Actions to be able to reset the OAX_USER's password.
- Unzip the downloaded wallet file, and open the README file in a text editor.
- Find and copy the URL ending with /sql-developer listed in the Database Actions section of the README file.
- Paste the URL in a browser, then sign in to the Database Actions web page.
Connect to an Autonomous Data Warehouse with Database Actions (SQL Developer Web)
Follow these steps to connect to Autonomous Data Warehouse with Database Actions (SQL Developer Web).
Connect to an Autonomous Data Warehouse with Oracle Analytics Cloud
Follow these steps to connect to an Autonomous Data Warehouse with Oracle Analytics Cloud.
Configure the Connection for OAX_USER in Oracle Analytics Cloud
You can connect to an Autonomous Data Warehouse with Oracle Analytics Cloud.
When you configure the connection, select the service name with the "low" prefix in its name. Connecting with the "high" or "medium" service might lead to performance issues in reports in NetSuite Analytics Warehouse and delays in completion of daily incremental data pipeline loads. See Database Service Names for Autonomous Transaction Processing and Autonomous JSON Database for more information.
- Sign in to your NetSuite Analytics Warehouse instance.
- Click Go to Home Page.
- Click Navigator and then Data.
- Click the Connections tab.
- On the Oracle Analytics page, click Create, click Connection, then click Oracle Autonomous Data Warehouse.
- Enter these values and then click Save:
- Connection Name: ADW - OAX_USER
- Description: ADW - OAX_USER
- Client Credentials: Click Select, then browse to select the wallet.zip file.
- Username: OAX_USER (default)
- Password: The OAX_USER password.
- Service Name: Select <service name>-low. Using low prevents pipeline conflicts.
- In the Connections tab, click the ellipsis icon to the right of the name of the connection you just created to open the Actions menu and click Inspect.
- Click Access and then Roles.
- Type ServiceAdmin in the Search By Name field, then click it to add it.
- Click the Full Control radio button for ServiceAdmin.
- Click Save and then Close.
Connect to an Autonomous Data Warehouse with SQL Developer Desktop Client
You can optionally use the SQL Developer desktop client to connect to Autonomous Data Warehouse. You need to configure credentials, and then set up connections for both ADMIN and OAX_USER.
Note:
Oracle recommends using Database Actions (SQL Developer Web) to leverage the modern cloud interface and its additional features.
Configure the Connection to Autonomous Data Warehouse for ADMIN with SQL Developer Desktop Client
You can use the SQL Developer desktop client to connect to Autonomous Data Warehouse. However, this connection method is optional, and using Database Actions (SQL Developer Web) is the preferred method.
When you configure the connection, select the service name with the "low" prefix in its name. Connecting with the "high" or "medium" service might lead to performance issues in reports in NetSuite Analytics Warehouse and delays in completion of daily incremental data pipeline loads. See Database Service Names for Autonomous Transaction Processing and Autonomous JSON Database for more information.
Configure the Connection to Autonomous Data Warehouse for OAX_USER with SQL Developer Desktop Client
You can use the SQL Developer desktop client to connect to Autonomous Data Warehouse.
When you configure the connection, select the service name with the "low" prefix in its name. Connecting with the "high" or "medium" service might lead to performance issues in reports in NetSuite Analytics Warehouse and delays in completion of daily incremental data pipeline loads. See Database Service Names for Autonomous Transaction Processing and Autonomous JSON Database for more information.
Disable Data Pipeline
As the cloud account administrator with the functional administrator or system administrator application role, you can disable the data pipeline and enable it again. You might want to disable the data pipeline in these cases:
- If you don't want to run pipeline jobs for a particular source.
- If you don't want to run pipeline jobs for a particular duration, such as a quiet time in your business activities.
In cases where the pipeline jobs are failing due to issues, Oracle disables the data pipeline from the backend to investigate and resolve them. If the resolution needs to happen on your side, Oracle provides a reason and an action for you on the Pipeline Parameters page. After you resolve the issue, as an administrator you can enable the data pipeline yourself using the Data Pipeline Disabled toggle.
About Managing Data Connections
You can connect to a variety of data sources and remote applications to provide the background information for reports. You can blend the additional data from the various data sources with the prebuilt datasets to enhance business analysis.
Oracle NetSuite Analytics Warehouse can connect to other pre-validated data sources such as Oracle Object Storage, cloud applications such as Google Analytics, and on-premises applications such as Oracle E-Business Suite.
You can view the usage of capacity for custom data that's loaded into Oracle NetSuite Analytics Warehouse through the connectors in the Custom Data Usage dashboard available in the Common folder. The dashboard shows data loaded daily and monthly from each of the activated external data sources.
Topics
- About Date and Timestamp Formatting for CSV File-based Extractors
- Create a Data Connection Type
- Edit a Data Connection Type
- Delete a Data Connection Type
- Create a Data Connection
- Test a Data Connection
- Update a Data Connection
- Delete a Data Connection
- Connect With On-premises Sources
- Connect with Cloud File Storage Sources
- Connect With Cloud Sources
Create a Data Connection Type
Connection Type specifies the source to which you're connecting. A connection type can have multiple connections.
Edit a Data Connection Type
If the properties or parameters for a data connection type change, you can edit them.
Delete a Data Connection Type
You can delete a data connection type if you don't need it anymore.
Note:
After you delete a connection type, you can't create new data connections to it.
- Sign in to your service.
- In Oracle NetSuite Analytics Warehouse Console, click Data Configuration under Application Administration.
- On the Data Configuration page, click Manage Connections under Configurations.
- On the Manage Connections page, click Connections and then select or search for the connection you want to delete.
- Click the Action menu for the connection and select Delete.
- In the Delete Connection dialog box, click Delete.
Create a Data Connection
You create a data connection to a data source to load data from that source into Oracle NetSuite Analytics Warehouse. You can create a connection for any available connection type.
While creating the connection, the system populates the connection name based on the connection source and you can’t change it while creating the connection or edit it later.
Update a Data Connection
When you first make a data connection, or when you make changes, you need to initialize and refresh it.
Test a Data Connection
After you create a data connection, you should test it to ensure it works properly.
- Sign in to your service.
- In Oracle NetSuite Analytics Warehouse Console, click Data Configuration under Application Administration.
- On the Data Configuration page, click Manage Connections under Configurations.
- On the Manage Connections page, click Connections, then select or search for the connection you want to test.
- Click the Action menu for the connection and select Test Connection.
- On the Request History page, check the status of the request to test the connection.
Delete a Data Connection
You can delete a custom data connection if you don't need it anymore.
Ensure that you delete the functional areas, data augmentations, and custom data configurations related to the data connection before deleting it.
Note:
You can't update or load data from deleted data connections to the warehouse.
- Sign in to your service.
- In Oracle NetSuite Analytics Warehouse Console, click Data Configuration under Application Administration.
- On the Data Configuration page, click Manage Connections under Configurations.
- On the Manage Connections page, click Connections, then select or search for the connection you want to delete.
- Click the Action menu for the connection and select Delete.
- In the Delete Connection dialog box, click Delete.
Connect With On-premises Sources
Connect with your on-premises applications to provide the background information for reports.
You can blend the additional data from these data sources with the prebuilt datasets to enhance business analysis.
Set up the Remote Agent to Load Data into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use an extract service remote agent to connect to your on-premises systems such as E-Business Suite, Peoplesoft, and JD Edwards, load data from these on-premises systems into Oracle NetSuite Analytics Warehouse, and then use the on-premises data to create data augmentations.
Note:
After configuring the remote agent on the Data Configuration page, wait for a few minutes, refresh the remote agent page, and when you see the Agent Initialised message, you can proceed with other operations, such as testing the connection to the remote agent, testing the connection to the remote source like E-Business Suite, and refreshing the metadata. This enables you to run these jobs without timeout failures because the data pipeline has a default timeout of 15 minutes.
Ensure that Remote Agent is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
Load Data from On-premises E-Business Suite into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use an extract service remote agent to connect to your on-premises Oracle E-Business Suite system.
After connecting to your on-premises system, the remote agent extracts the data and loads it into the autonomous data warehouse associated with your Oracle NetSuite Analytics Warehouse instance. The remote agent pulls the metadata through the public extract service REST API and pushes data into object storage using the object storage REST API. You can extract and load the on-premises data into Oracle NetSuite Analytics Warehouse only once a day. Ensure that the user credentials you provide have access to the specific tables within the EBS schema from which they need to extract data; you provide the schema URL while creating the connection.
Ensure that Oracle E-Business Suite On-Prem is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
Load Data from On-premises JD Edwards into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use an extract service remote agent to connect to your on-premises JD Edwards system and use the JD Edwards data to create data augmentations.
Ensure that Remote Agent and Oracle JD Edwards On-Prem are enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
Load Data from On-premises PeopleSoft into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use an extract service remote agent to connect to your on-premises Oracle PeopleSoft system.
Ensure that Remote Agent and the applicable PeopleSoft feature are enabled on the Enable Features page prior to creating this connection:
- Oracle PeopleSoft On-Prem - Campus Solutions
- Oracle PeopleSoft On-Prem - Financials
- Oracle PeopleSoft On-Prem - Human Resources
- Oracle PeopleSoft On-Prem - Learning Management
Load Data from On-premises MySQL Database into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use an extract service remote agent to connect to your on-premises MySQL database.
Ensure that MySQL On-Prem is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
Load Data from SQL Server into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from SQL Server and use it to create data augmentations.
Ensure that SQL Server is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
Connect with Cloud File Storage Sources
Connect with your file storage-based cloud sources to provide the background information for reports.
You can blend the additional data from these data sources with the prebuilt datasets to enhance business analysis. The file-based connectors support only UTF-8 encoding for data files that you upload.
About Date and Timestamp Formatting for CSV File-based Extractors
Extractors such as Secure FTP (SFTP), Amazon Simple Storage Service (AWS S3), and Oracle Object Storage Service use CSV data files that have date and timestamp fields.
Note:
Ensure that the date and timestamp formats for the data files match the date and timestamp formats in your source. For example, if you've used MM/dd/yyyy and MM/dd/yyyy hh:mm:ss in your source, then you must specify the same formats while creating the applicable data connections.
Example | Pattern |
---|---|
1/23/1998 | MM/dd/yyyy |
1/23/1998 12:00:20 | MM/dd/yyyy hh:mm:ss |
12:08 PM | h:mm a |
01-Jan-1998 | dd-MMM-yyyy |
2001-07-04T12:08:56.235-0700 | yyyy-MM-dd'T'HH:mm:ss.SSSZ |
Letter | Meaning |
---|---|
M | Month |
d | Day |
y | Year |
h | Hour (0-12) |
H | Hour (0-23) |
m | Minute |
s | Second |
S | Millisecond |
a | AM/PM |
Z | Timezone |
Load Data from Amazon Simple Storage Service into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from Amazon Simple Storage Service (AWS S3) and use it to create data augmentations.
Ensure that AWS S3 is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In Oracle NetSuite Analytics Warehouse, create the AWS S3 data connection using these instructions:
- On the Manage Connections page, select Actions for the AWS S3 connection and then select Test Connection.
- After the connections are successfully established, navigate to the Data Configuration page, select the connection in Data Source, then click the Data Augmentation tile, and create a data augmentation using the AWS S3 data. Select the applicable AWS S3 source tables. See Augment Your Data.
Load Data from Oracle Object Storage into NetSuite Analytics Warehouse
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from Oracle Object Storage Service and use it to create data augmentations.
The recommended approach is to create one augmentation from one source table after acquiring data from Oracle Object Storage Service. After the augmentation completes, NetSuite Analytics Warehouse renames the source table; if you create more than one augmentation from the same source, the other augmentations may fail with a message that the source file wasn't found.
- Store the following details in a text file to use while creating the connection to Oracle Object Storage Service in NetSuite Analytics Warehouse:
- In Oracle Object Storage Service, create the Remote Host Extract Files directory as the base folder in which you must place all your data files. Note down the name of this directory. See the "To create a folder or subfolder" section in Using the Console.
- Obtain the URL of the Oracle Object Storage Service by signing in to the Oracle Cloud Infrastructure Console and navigating to the bucket to get the details of the region, namespace, and bucket name. For example, the URL must be in the https://objectstorage.<region>.oraclecloud.com/n/<namespace>/b/<name of the bucket> format. See the "To view bucket details" section in Using the Console.
- Obtain a user's OCID by navigating in the Oracle Cloud Infrastructure Console to Identity & Security, and then Users. On the Users page, search for a user who has access to the bucket used in the connector and copy the OCID. Obtain the tenancy ID by clicking your profile icon and then Tenancy in the Oracle Cloud Infrastructure Console. Under Tenancy information, copy the OCID. See Where to Get the Tenancy's OCID and User's OCID.
- Obtain the fingerprint for a user from the Oracle Cloud Infrastructure Console. Navigate to API Keys under Resources on the user page, and then click Add API Keys. In the Add API Keys dialog, ensure that Generate API Key Pair is selected. Download the private and public keys using the Download Private Key and Download Public Key options. You must copy the entire text of the private key along with the comments before and after the actual key. These comments could be as simple as: "---------------Begin RSA Private Key --------" and "-----------End of RSA Private Key----------". Don't copy only the alphanumeric key without the header and footer comments. In the Add API Keys dialog, select Choose Public Key File to upload your file, or Paste Public Key if you prefer to paste it into a text box, and then click Add. Copy the fingerprint that you see after you upload the public key in the Console. It looks something like this: 12:34:56:78:90:ab:cd:ef:12:34:56:78:90:ab:cd:ef.
- In NetSuite Analytics Warehouse, create the Oracle Object Storage connection using these instructions:
- In Oracle Object Storage Service:
- In NetSuite Analytics Warehouse, on the Manage Connections page, select Actions for the Oracle Object Storage Service connection and then select Test Connection.
- After the connections are successfully established, navigate to the Data Configuration page, select the connection in Data Source, then click the Data Augmentation tile, and create a data augmentation using the data from the Oracle Object Storage Service. Select the applicable source tables from the Oracle Object Storage Service data. See Augment Your Data.
Load Data from a Secure FTP Source into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from a secure FTP source (SFTP) and use it to create data augmentations.
Ensure that SFTP is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
Load Data from Azure Storage into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from Azure Storage and use it to create data augmentations.
Ensure that Azure Storage is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In Oracle NetSuite Analytics Warehouse, create the Azure Storage data connection using these instructions:
- On the Manage Connections page, select Actions for the Azure Storage connection and then select Test Connection.
- After the connections are successfully established, navigate to the Data Configuration page, select the connection in Data Source, then click the Data Augmentation tile, and create a data augmentation using the Azure Storage data. Select the applicable Azure Storage source tables. See Augment Your Data.
Connect With Cloud Sources
Connect with your cloud applications to provide the background information for reports.
You can blend the additional data from these data sources with the prebuilt datasets to enhance business analysis. To know about the date and timestamp formatting for the CSV file-based extractors, see About Date and Timestamp Formatting for CSV File-based Extractors.
Create Additional NetSuite Connections
You can create additional connections to the NetSuite source based on your customer tier. These additional connections enable you to bring in data from multiple NetSuite accounts. The supported scenarios are:
- One NetSuite primary account with live data and other NetSuite accounts having static data.
- More than one NetSuite account with live data but no data mash-up required.
- More than one NetSuite account with live data requiring data mash-up.
As a premium license tier Oracle NetSuite Analytics Warehouse customer, you can connect to two additional NetSuite accounts. If you're an enterprise license tier customer, then you can connect to ten additional NetSuite accounts. Based on your license tier, Oracle displays the additional NetSuite connections on the Create Connection dialog.
Load Data from Azure SQL into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from Azure SQL and use it to create data augmentations.
Ensure that Azure SQL is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In Oracle NetSuite Analytics Warehouse, create the Azure SQL data connection using these instructions:
- On the Manage Connections page, select Actions for the Azure SQL connection and then select Test Connection.
- After the connections are successfully established, navigate to the Data Configuration page, select the connection in Data Source, then click the Data Augmentation tile, and create a data augmentation using the Azure SQL data. Select the applicable Azure SQL source tables. See Augment Your Data.
Load Data from Oracle Autonomous Database into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from Oracle Autonomous Database and use it to create data augmentations.
Note:
Currently, you can't connect to a private autonomous transaction processing database (ATP database).
- In Oracle NetSuite Analytics Warehouse, create the autonomous database connection using these instructions:
- On the Manage Connections page, select Actions for the autonomous database connection and then select Test Connection.
- After the connections are successfully established, navigate to the Data Configuration page, select the connection in Data Source, then click the Data Augmentation tile, and create a data augmentation using the autonomous database data. Select the applicable autonomous database source tables. See Augment Your Data.
Load Data from Enterprise Data Management Cloud into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Enterprise Data Management Cloud instance and use it to create data augmentations.
The extracts created in the Enterprise Data Management Cloud service need to be public; hence, you must promote your private extracts to public. Review the documentation and error messages for metadata refresh failures related to private extracts. This connector supports only the CSV data format.
Ensure that Oracle Enterprise Data Management Cloud is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In NetSuite Analytics Warehouse, create the Enterprise Data Management Cloud data connection using these instructions:
- On the Manage Connections page, select Actions for the Enterprise Data Management Cloud connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the Enterprise Data Management Cloud data. Select the applicable Enterprise Data Management Cloud source tables. See Augment Your Data.
Load Data from Enterprise Performance Management into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Enterprise Performance Management (EPM) SaaS instance and use it to create data augmentations for various Enterprise Resource Planning and Supply Chain Management use cases. You can acquire data from these EPM modules:
- Financial Close and Consolidation (FCCS)
- Planning and Budgeting (PBCS)
- Profitability and Cost Management (PCMCS)
Note:
The EPM connectors display the default datatype and size; you must edit these values as applicable while creating data augmentations.
- Oracle EPM - Financial Close and Consolidation
- Oracle EPM - Planning and Budgeting
- Oracle EPM - Profitability and Cost Management
Load Data from Google Analytics into NetSuite Analytics Warehouse
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Google Analytics SaaS instance and use it to create data augmentations for various Enterprise Resource Planning and Supply Chain Management use cases.
- NetSuite Analytics Warehouse supports the Google Analytics extractor for GA4 properties and doesn't support the previous version, Google Universal Analytics (UA) properties.
- DataStores are the list of GA4 properties.
- DataStore columns are the list of Dimensions and Metrics for a GA4 property.
- DataExtract runs the report for a GA4 property as the DataStore, based on your selection of Dimensions and Metrics as the DataStore columns.
- MetaExtract fetches metadata for all the available GA4 properties (DataStores) and their Dimensions and Metrics (DataStore columns).
- This connector supports a limited number of Google Analytics metrics. See the Dimensions & Metrics Explorer to know what's available.
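To make the extractor's terms concrete, here is a hedged sketch of the equivalent GA4 Data API report: the property plays the role of the DataStore, and the chosen Dimensions and Metrics are its columns. It uses the google-analytics-data package; the property ID, field names, and date range are placeholders.

```python
# Sketch of the report shape behind DataExtract: one GA4 property
# (the DataStore) queried for selected Dimensions and Metrics (the
# DataStore columns). Property ID and field choices are placeholders.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()  # uses GOOGLE_APPLICATION_CREDENTIALS
request = RunReportRequest(
    property="properties/123456789",
    dimensions=[Dimension(name="country")],
    metrics=[Metric(name="activeUsers")],
    date_ranges=[DateRange(start_date="2024-01-01", end_date="2024-01-31")],
)
response = client.run_report(request)
for row in response.rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)
```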
Load Data from Mongo Database into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Mongo database and use it to create data augmentations.
Ensure that MongoDB is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- Check the port number for your Mongo database and create a service request with server host and port details to enable network connectivity to the Mongo database server.
- In NetSuite Analytics Warehouse, create the Mongo database connection using these instructions:
- On the Manage Connections page, select Actions for the Mongo database connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the Mongo database data. Select the applicable Mongo database source tables. See Augment Your Data.
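After the service request is fulfilled, you can optionally confirm connectivity to the Mongo database from a machine on the same network before testing the connection in the console. A minimal pymongo sketch, with placeholder host, port, and credentials:

```python
# Optional pre-check: confirm the Mongo database server is reachable on the
# host and port you supplied in the service request. Values are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://myuser:mypassword@mongo.example.com:27017/")
client.admin.command("ping")              # raises quickly if unreachable
db = client["sourcedb"]
print(db.list_collection_names())         # candidate source collections
client.close()
```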
Load Data from MySQL Cloud Database into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the MySQL Cloud database and use it to create data augmentations.
Ensure that MySQL Cloud is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- Check the port number for your MySQL Cloud database and create a service request with server host and port details to enable network connectivity to the MySQL server.
- In NetSuite Analytics Warehouse, create the MySQL Cloud database connection using these instructions:
- On the Manage Connections page, select Actions for the MySQL Cloud database connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the MySQL Cloud database data. Select the applicable MySQL Cloud database source tables. See Augment Your Data.
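As with the other sources, you can optionally confirm that the MySQL Cloud database accepts connections on the host and port from your service request. A minimal sketch with mysql-connector-python; all values are placeholders.

```python
# Optional pre-check against the MySQL Cloud database. Host, port, and
# credentials are placeholders; use the details from your service request.
import mysql.connector

conn = mysql.connector.connect(
    host="mysql.example.com",
    port=3306,
    user="myuser",
    password="mypassword",
    database="sourcedb",
)
cur = conn.cursor()
cur.execute("SHOW TABLES")                # candidate source tables
for (table_name,) in cur:
    print(table_name)
conn.close()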
Load Data from Oracle Analytics Publisher into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Analytics Publisher reports and use it to create data augmentations for various use cases.
This connector supports:
- The Analytics Publisher in Oracle Fusion Cloud Applications for data augmentation.
- Only those reports that complete within the Analytics Publisher report execution timeout limit, which is typically 300 seconds.
The BI Publisher Reports connector workflow must observe the security rules of Oracle Fusion Cloud Applications. You must ensure that the password rotation and update are done on time before executing the BI Publisher Reports connector pipeline. Otherwise, the pipeline jobs hang, eventually get deleted, and the data source is disabled until you update the password and resubmit the job.
Ensure that BI Publisher Reports is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In BI Publisher, build a comma separated values (CSV) report for the targeted tables. See Create Reports and Generate CSV Output.
- In NetSuite Analytics Warehouse, create the data connection to the BI Publisher reports using these instructions:
- On the Manage Connections page, select Actions for the BI Publisher reports connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the BI Publisher reports data. Select the applicable BI Publisher source tables. See Augment Your Data.
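Because only reports that finish within the roughly 300-second execution limit are supported, it can help to time the CSV report output outside the connector first. The sketch below is an assumption-heavy illustration: the report URL is a placeholder, not a documented endpoint, and basic authentication stands in for your actual credential setup.

```python
# Illustrative timing check for a BI Publisher CSV report. The URL is a
# placeholder; substitute the delivery or service URL that applies to your
# Fusion Applications instance. The 300-second timeout mirrors the typical
# report execution limit mentioned above.
import time
import requests

REPORT_URL = "https://myinstance.example.com/path/to/csv/report"  # placeholder

start = time.monotonic()
resp = requests.get(REPORT_URL, auth=("integration_user", "password"), timeout=300)
resp.raise_for_status()
print(f"report returned {len(resp.content)} bytes in {time.monotonic() - start:.1f}s")
```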
Load Data from Oracle Database Using JDBC into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use an extract service remote agent to connect to an Oracle database using JDBC and use the data to create data augmentations.
Ensure that Remote Agent and Oracle JDBC are enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
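The remote agent handles the JDBC connectivity itself, but you can sanity-check the same JDBC URL and credentials from any machine with network access to the database. A minimal sketch using the jaydebeapi package and an Oracle JDBC driver jar; the URL, credentials, and jar path are placeholders.

```python
# Optional pre-check of the JDBC URL and credentials the remote agent will
# use. Requires a JVM, the jaydebeapi package, and an Oracle JDBC driver jar.
import jaydebeapi

conn = jaydebeapi.connect(
    "oracle.jdbc.OracleDriver",
    "jdbc:oracle:thin:@dbhost.example.com:1521/ORCLPDB1",  # placeholder URL
    ["myuser", "mypassword"],
    "/path/to/ojdbc8.jar",
)
cur = conn.cursor()
cur.execute("SELECT table_name FROM user_tables FETCH FIRST 5 ROWS ONLY")
print(cur.fetchall())
conn.close()
```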
Load Data from Salesforce into NetSuite Analytics Warehouse
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Salesforce SaaS instance and use it to create data augmentations.
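If you want to preview the Salesforce objects you plan to bring in before setting up the augmentation, a minimal sketch with the simple-salesforce package looks like this; the username, password, and security token are placeholders.

```python
# Optional preview of a Salesforce object you intend to use as an
# augmentation source. Credentials are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="user@example.com",
    password="mypassword",
    security_token="mytoken",
)
result = sf.query("SELECT Id, Name FROM Account LIMIT 5")
for record in result["records"]:
    print(record["Id"], record["Name"])
```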
Load Data from Shopify into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Shopify SaaS instance and use it to create data augmentations for various Enterprise Resource Planning and Supply Chain Management use cases.
Ensure that Shopify is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In NetSuite Analytics Warehouse, create the Shopify data connection using these instructions:
- On the Manage Connections page, select Actions for the Shopify connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the Shopify data. Select the applicable Shopify source tables. See Augment Your Data.
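To confirm that your store domain and access token work before creating the connection, you can call the Shopify Admin REST API directly. A minimal sketch follows; the store domain, token, and the API version in the path are placeholders.

```python
# Optional pre-check of Shopify Admin API access. Store domain, token, and
# API version are placeholders; use a version your store supports.
import requests

STORE = "mystore.myshopify.com"
resp = requests.get(
    f"https://{STORE}/admin/api/2024-01/orders.json",
    headers={"X-Shopify-Access-Token": "shpat_placeholder_token"},
    params={"limit": 5},
    timeout=30,
)
resp.raise_for_status()
for order in resp.json()["orders"]:
    print(order["id"], order["created_at"])
```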
Connect with Your Oracle Eloqua Data Source (Preview)
If you've subscribed to Oracle Fusion CX Analytics and want to load data from your Oracle Eloqua source into NetSuite Analytics Warehouse, then create a connection using the Eloqua connection type.
The Oracle Eloqua data that you load into NetSuite Analytics Warehouse enables you to augment the data in your warehouse and create varied customer experience-related analytics. Ensure that Oracle Eloqua is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
Load Data from QuickBooks Online into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from QuickBooks Online and use it to create data augmentations.
Ensure that QuickBooks Online is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In Oracle NetSuite Analytics Warehouse, create the QuickBooks Online data connection using these instructions:
- On the Manage Connections page, select Actions for the QuickBooks Online connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the QuickBooks Online data. Select the applicable QuickBooks Online source tables. See Augment Your Data.
Load Data from Snowflake into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from a Snowflake instance.
Note:
Snowflake sometimes requires API calls to originate from a known IP address. If you're experiencing connection issues due to an unauthorized IP address, then submit an Oracle Support ticket to obtain the necessary Oracle IP address for your Snowflake allowlist.
Ensure that Snowflake is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In NetSuite Analytics Warehouse, create the Snowflake data connection:
- On the Manage Connections page, select Actions for the Snowflake connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the Snowflake data. Select the applicable Snowflake source tables. See Augment Your Data.
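A quick way to reproduce (or rule out) the unauthorized-IP symptom described in the note above is to connect with the Snowflake Python connector from a known machine. A minimal sketch; the account identifier, credentials, and object names are placeholders.

```python
# Optional pre-check with the Snowflake Python connector. If the equivalent
# call from NetSuite Analytics Warehouse fails with an unauthorized-IP error,
# add the Oracle IP address to your Snowflake network policy allowlist.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="myuser",
    password="mypassword",
    warehouse="COMPUTE_WH",
    database="SOURCEDB",
    schema="PUBLIC",
)
cur = conn.cursor()
cur.execute("SHOW TABLES")            # candidate source tables
for row in cur.fetchmany(5):
    print(row[1])                     # second column is the table name
conn.close()
```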
Load Data from Taleo into NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the NetSuite Analytics Warehouse extract service to acquire data from the Taleo instance and use it to create data augmentations for various Enterprise Resource Planning and Supply Chain Management use cases.
Ensure that Taleo is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In NetSuite Analytics Warehouse, create the Taleo data connection using these instructions:
- On the Manage Connections page, select Actions for the Taleo connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the Taleo data. Select the applicable Taleo source tables. See Augment Your Data.
Load Data from Oracle Transportation Management Cloud Service into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from an Oracle Transportation Management Cloud Service SaaS instance.
You can later use this data to create data augmentations for various Enterprise Resource Planning and Supply Chain Management use cases. Establish the connection from NetSuite Analytics Warehouse to your Oracle Transportation Management Cloud Service instance to start data acquisition followed by augmentation.
Note:
Oracle Fusion SCM Analytics is a prerequisite to use the "Oracle Transportation Management" connector.
Load Data from Oracle Warehouse Management Cloud into Oracle NetSuite Analytics Warehouse (Preview)
As a service administrator, you can use the Oracle NetSuite Analytics Warehouse extract service to acquire data from Oracle Warehouse Management Cloud and use it to create data augmentations.
Ensure that Oracle Warehouse Management Cloud is enabled on the Enable Features page prior to creating this connection. See Make Preview Features Available.
- In Oracle NetSuite Analytics Warehouse, create the Oracle Warehouse Management Cloud data connection using these instructions:
- On the Manage Connections page, select Actions for the Oracle Warehouse Management Cloud connection and then select Test Connection.
- After the connection is successfully established, navigate to the Data Configuration page, select the connection in Data Source, click the Data Augmentation tile, and create a data augmentation using the Oracle Warehouse Management Cloud data. Select the applicable Oracle Warehouse Management Cloud source tables. See Augment Your Data.
Schedule Frequent Data Refreshes
You can schedule frequent refreshes of the applicable functional areas, prioritized datasets, and warehouse tables to ensure that the incremental data in these is the latest.
About Scheduling Frequent Data Refreshes
Currently, you can refresh a specific set of tables; support for more functional areas and datasets is planned for future releases.
When you refresh certain functional areas, prioritized datasets, and warehouse tables, be sure you understand which tables are safe to refresh, because selectively refreshing some tables can lead to functional inconsistencies when combining data from multiple subject areas. The frequent data refresh capability is designed for refreshing base tables that capture transactional data; it isn't meant for derived datasets that require aggregations, snapshots, or complex transformation logic, because such processing creates data latency that can't support a high volume of frequent refreshes. For Oracle NetSuite Analytics Warehouse, you can schedule frequent refreshes for functional areas that are visible in the Frequent Data Refresh Schedule tab on the Pipeline Settings page.
If you've enabled the "Prioritized Data Refresh" preview feature and selected datasets for a prioritized incremental refresh, then those specific datasets are available for a frequent data refresh. See Prioritize Datasets for Incremental Refresh (Preview). If you've enabled the "Frequent Refresh Tables" preview feature and saved your selection of the warehouse tables, then the selected tables are available as "Warehouse Tables" for a frequent data refresh. See Schedule Frequent Refreshes of Warehouse Tables (Preview). If you want to select the warehouse tables created by the custom data configurations that were generated prior to release Platform 23.R4, then you must regenerate the applicable custom data configurations for these tables to be visible for selection. From release Platform 23.R4, the warehouse tables created by the custom data configurations are available for a frequent data refresh under the Frequent Refresh tab.
When you select the functional areas for a frequent refresh, you won’t be able to refresh the data pipeline for the applicable functional area using the Refresh Data option on the Data Configuration page. The frequent data refresh process doesn't refresh data from external sources through the data augmentation connectors. Oracle NetSuite Analytics Warehouse processes the data augmentations as part of the incremental loads. If you change a data augmentation after including it in the frequent data refresh schedule, then you must remove that data augmentation and let the next incremental refresh finish. Otherwise, the frequent data refresh might fail. After the incremental refresh is complete, you can add the updated data augmentation back to the frequent data refresh schedule.
For frequent data refreshes, the semantic model isn't updated. Semantic model update jobs don't run as part of frequent data refreshes; for data augmentations, they run only during incremental loads.
- Performance of frequent data refreshes depends on the following:
- Size of data.
- Data change such as what data has changed, and which pipeline gets triggered.
- Number of extracted records, which can result in a very different number of published records; for example, 44 extracted records resulted in 1060 published records in 70 minutes, while 395 extracted records resulted in 55 published records in 35 minutes.
- The frequent data refresh process doesn't get executed in the following scenarios:
- In the 180-minute window before the scheduled start of the daily incremental data refresh.
- If any release upgrade is in progress.
- Until the previous frequent data refresh process is completed. You can set a frequency of at most 1 hour; however, if a refresh takes more than 1 hour to complete, the next frequent data refresh process starts at the next hour.
- For dataset-level (warehouse tables) frequent data refresh:
- You must know which exact datasets to refresh.
- There is a limit of 20 datasets for each run.
- Dependencies aren’t automatically incorporated. You must determine the dependencies and include the applicable tables.
Schedule Frequent Refreshes of Data
You may want to frequently refresh the data in these sample scenarios:
- During the last five days of the month or quarter, general ledger data changes quickly, with entries updated more than once a day. In this scenario, a financial analyst wants to ensure that more recent and accurate data goes into the analysis and report generation. Because some of the data pipelines can take several hours to execute from end to end, the refresh frequency needs to be every 4 hours, or 6 times in a 24-hour period.
- A retailer has strict guidelines for returns and refunds on purchase orders and needs to bring in the most recent order data for computation and report generation, especially at the beginning of a fiscal quarter. To cover the time differences between Europe and the East and West Coasts of the U.S., a financial analyst needs to refresh data every 8 hours, or 3 times a day.
Ensure that Frequent Refresh is enabled in Pipeline Features under the Generally Available Features tab on the Enable Features page prior to scheduling frequent data refreshes. See Enable Generally Available Features.
Schedule Periodic Full Reload of Functional Area Data
You can schedule a periodic full reload of the applicable functional areas to ensure that all the data in these functional areas is completely reloaded.
For Oracle NetSuite Analytics Warehouse, you can schedule a periodic full reload for functional areas such as Financials and Sales. When you select the functional areas for full reload, ensure that you select the dependent functional areas as well.
WARNING:
When you schedule a full reload, the incremental loads take longer, and depending on the initial extract date, the incremental data load may take days to complete the extraction.
- The Finance department wants to do a full reload of the general ledger (GL) data at the beginning of the month to make sure all data is fresh and clean for the monthly GL entries and the analytics and report generation.
- At the beginning of every quarter, all data related to purchase orders is updated with returns, refunds, and any adjustments. In this scenario, revenue recognition needs to take all the latest purchase order data to compute the numbers and create accurate reports. Hence, a full reload of the order data is required on the first day of every fiscal quarter, which can differ from the calendar quarter.
You can track the progress of a full reload in these ways:
- View details of the requests on the Request History page.
- View the DW_WH_REFRESH_SUMMARY data warehouse table.
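If you prefer to inspect DW_WH_REFRESH_SUMMARY directly rather than through the Request History page, a minimal python-oracledb sketch against the Autonomous Data Warehouse follows. The connection details are placeholders, and no particular column layout is assumed, so the sketch prints whatever columns the table exposes.

```python
# Inspect the DW_WH_REFRESH_SUMMARY table directly in the Autonomous Data
# Warehouse. Connection details are placeholders; the column list is
# discovered at runtime rather than assumed.
import oracledb

conn = oracledb.connect(
    user="myuser",
    password="mypassword",
    dsn="myadw_low",
    config_dir="/path/to/wallet",
    wallet_location="/path/to/wallet",
    wallet_password="wallet_password",
)
with conn.cursor() as cur:
    cur.execute("SELECT * FROM DW_WH_REFRESH_SUMMARY FETCH FIRST 10 ROWS ONLY")
    print([col.name for col in cur.description])   # show available columns
    for row in cur:
        print(row)
conn.close()
```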
Ensure that Scheduled Full Data Reload is enabled in Pipeline Features under the Generally Available Features tab on the Enable Features page prior to scheduling periodic full reload. See Enable Generally Available Features.
Schedule Frequent Refreshes of Warehouse Tables (Preview)
You can select warehouse data tables that are part of functional areas, descriptive flexfields custom configurations, and data augmentations for frequent data refresh ahead of other data in the regularly scheduled batch.
For a table to be eligible for a frequent data refresh, ensure that its supporting functional areas, data augmentations, and descriptive flexfields custom configurations are activated. The custom data configurations activated prior to release 23.R4 won't be available automatically. You must regenerate and redeploy them to schedule for frequent refresh. After you select the warehouse tables for frequent data refresh and save your selection, the selected tables are available as "Warehouse Tables" under the Frequent Refresh tab. You can set the frequent refresh schedule for these tables in the Frequent Refresh tab. See Schedule Frequent Refreshes of Data.
Ensure that Frequent Refresh Tables is enabled in Pipeline Features under the Preview Features tab on the Enable Features page prior to scheduling frequent refreshes of the tables. See Make Preview Features Available.
Override Data Pipeline Schedules for Functional Areas (Preview)
You can remove a functional area from the standard incremental schedule and set a separate schedule with a different run frequency to prioritize your data pipelines and run timings.
You can select one or more functional areas and schedule to run incremental at a different frequency than the standard incremental job. If the incremental refresh is scheduled for Daily, then you can choose one or more functional areas to schedule them to run Weekly, Monthly, Quarterly, or Yearly. If the incremental refresh is scheduled for Weekly, then you can choose one or more functional areas to schedule them to run Monthly, Quarterly, or Yearly.
If the incremental refresh is weekly, it runs on a specific day of the week. If you then create a monthly override for a specific day of the month, the override can't piggyback on the incremental run, because the weekly incremental (say, every Monday) may not fall on the desired day of the month (say, the 1st). In this scenario, you can create schedules that override different modules on different days of the week, or that override different modules on the same day of the week.
Ensure that Functional Area Schedule Override is enabled in Pipeline Features under the Preview Features tab on the Enable Features page. See Make Preview Features Available.
Prioritize Datasets for Incremental Refresh (Preview)
As a functional administrator, you can select datasets to prioritize for incremental refresh over others. For example, at the end of the quarter, you want to prioritize some of the datasets that have bookings data over other datasets.
Reset and Reload the Data Source
As a functional administrator, you can reset and reload the data source for several reasons.
- If you need to change the data source for some reason, then you may want to reset the present data source back to the original state.
- If there is any data issue with the source system and you need to start the entire pipeline all over again.
- If you initially set up the connection and the pipeline to test some things and then want to start from the beginning again.
- Whenever you perform a Production to Test (P2T) refresh, you must complete a full reload of the source to avoid data issues.
You can reset and reload data for all the activated functional areas, augmentations, and custom data configurations. This request may take a long time as it requires all data to be discarded and fetched again. Other requests like the incremental and full load won't start until this request completes.
Ensure that Reset and Reload the Data Source is enabled in Pipeline Features under the Generally Available Features tab on the Enable Features page prior to resetting and reloading the data source. See Enable Generally Available Features.
Extend Data with Custom Data Configurations
As a functional administrator, you can use the available custom data configurations to extend your data for detailed and varied analytics.
About Extending Data with Custom Data Configurations
Custom data configurations are prebuilt templates that enable you to create custom functional areas with the data that you bring into the Autonomous Data Warehouse. You later use these custom functional areas to augment the data from the source or enrich the data available in the Autonomous Data Warehouse.
When you start using a custom data configuration for the first time and deploy it, Oracle NetSuite Analytics Warehouse assigns the custom data configuration a version number. You can then edit the deployed version, make changes, and save it; this saved version gets a new version number. While you're changing the definition, the deployed version continues to fetch new data. An example of the currently available custom data configurations is NetSuite Account Analysis.
About Extending Data with NetSuite Account Analysis
NetSuite Account Analysis is a custom data configuration that enables you to build custom content by choosing data across 42 transaction types.
It also supports posting custom transactions, custom attributes based on the Custom Attribute Mapper, and prebuilt subject areas. This feature supports granularity at the transaction line level. After the custom data configuration is deployed, the created content manifests as a subject area. This eliminates the need to go to the raw tables and link them using SQL to build cross-transaction-type content for those 42 transaction types, as illustrated in the sketch that follows.
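To make the saving concrete, the sketch below shows the style of hand-written, per-stripe SQL the template replaces: without it, you would union each transaction type's rows yourself and repeat the pattern for all 42 types. The table and column names are hypothetical, purely for illustration, and the connection details are placeholders.

```python
# Illustration only: the kind of manual cross-transaction-type SQL the
# NetSuite Account Analysis template makes unnecessary. Table and column
# names are hypothetical; connection details are placeholders.
import oracledb

UNION_SQL = """
SELECT transaction_id, account_id, amount, 'SALES_ORDER' AS txn_type
  FROM dw_sales_order_lines            -- hypothetical per-stripe table
UNION ALL
SELECT transaction_id, account_id, amount, 'CUSTOMER_INVOICE' AS txn_type
  FROM dw_customer_invoice_lines       -- hypothetical per-stripe table
-- ...and so on for each of the 42 transaction types
"""

conn = oracledb.connect(user="myuser", password="mypassword", dsn="myadw_low")
with conn.cursor() as cur:
    cur.execute(UNION_SQL)
    for row in cur.fetchmany(5):
        print(row)
conn.close()
```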
Prerequisites for Setting up NetSuite Account Analysis
Ensure these prerequisites are met:
- Prior to configuring the NetSuite account analysis template, ensure that you've activated at least one functional area such as Sales. See Activate a Data Pipeline for a Functional Area.
- If you want to use the custom transactions and custom attributes while creating the account analysis subject area, then contact Oracle Support to have the custom transactions and custom attributes enabled in your instance.
- If you want to use custom attributes in NetSuite account analysis, then ensure that the Custom Attribute Mapper (CAM) entities are published before any other action.
Enable NetSuite Account Analysis (Preview)
Prior to configuring the NetSuite account analysis template to build the account analysis reports in Oracle NetSuite Analytics Warehouse, you must enable it.
Set up NetSuite Account Analysis
Set up the NetSuite Account Analysis subject area to build the account analysis reports in Oracle NetSuite Analytics Warehouse.
- Certain columns are mandatory for the account analysis report, hence these columns are selected by default and you can't deselect them.
- Certain columns are recommended and selected by default but you can deselect them based on your requirements for the account analysis reports.
- Certain columns are optional and you can select them based on your requirements for the account analysis reports.
- Certain columns are custom attributes and custom transactions-related. They're available for selection if you've completed the prerequisite task. See Prerequisites for Setting up Netsuite Account Analysis. After completing the prerequisite task, you must select the Include custom transaction applicable for Account Analysis check box to have the columns from custom transactions available for selection.
After creating the custom account analysis subject area, you can create the account analysis reports based on your business requirements.
You can perform these actions on a deployed custom data configuration:
- Undeploy
- Reload Data
- Edit
- Deactivate
- Refresh Data
- Delete