Interface ModelDeploymentAsync

  • All Superinterfaces:
    AutoCloseable

  • All Known Implementing Classes:
    ModelDeploymentAsyncClient

    @Generated(value="OracleSDKGenerator",
               comments="API Version: 20240424")
    public interface ModelDeploymentAsync
    extends AutoCloseable
    Model deployments are a managed resource in the OCI Data Science service, used to deploy machine learning models as HTTP endpoints in OCI.

    Deploying machine learning models as web applications (HTTP API endpoints) serving predictions in real time is the most common way that models are productionized. HTTP endpoints are flexible and can serve requests for model predictions.

    For more information, see [Model Deployments](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm).
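
    The snippet below is a minimal sketch of constructing, using, and closing the asynchronous client. It assumes the standard OCI Java SDK config-file authentication provider and client builder pattern; the com.oracle.bmc.modeldeployment package name is an assumption and should be checked against the generated client.

      // Sketch only: the client's package name and the builder details are
      // assumptions based on common OCI Java SDK conventions.
      import com.oracle.bmc.Region;
      import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
      import com.oracle.bmc.modeldeployment.ModelDeploymentAsyncClient; // assumed package

      public class BuildClientExample {
          public static void main(String[] args) throws Exception {
              // Reads ~/.oci/config and uses the DEFAULT profile.
              ConfigFileAuthenticationDetailsProvider provider =
                      new ConfigFileAuthenticationDetailsProvider("DEFAULT");

              // ModelDeploymentAsyncClient implements ModelDeploymentAsync and extends
              // AutoCloseable, so try-with-resources releases it cleanly.
              try (ModelDeploymentAsyncClient client =
                           ModelDeploymentAsyncClient.builder()
                                   .region(Region.US_PHOENIX_1)
                                   .build(provider)) {
                  System.out.println("Client endpoint: " + client.getEndpoint());
              }
          }
      }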

    • Method Detail

      • refreshClient

        void refreshClient()
        Rebuilds the client from scratch.

        Useful to refresh certificates.

      • setEndpoint

        void setEndpoint​(String endpoint)
        Sets the endpoint to call (ex, https://www.example.com).
        Parameters:
        endpoint - The endpoint of the service.
      • getEndpoint

        String getEndpoint()
        Gets the set endpoint for REST calls (ex, https://www.example.com).
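
        As a small illustrative sketch, an explicit endpoint can be set and read back as follows (the helper class and method names are illustrative, and the interface's package name is an assumption):

          import com.oracle.bmc.modeldeployment.ModelDeploymentAsync; // assumed package

          public class EndpointExample {
              // Point a client at an explicit endpoint and read it back.
              public static void overrideEndpoint(ModelDeploymentAsync client) {
                  client.setEndpoint("https://www.example.com"); // example URL from this reference
                  System.out.println("Calls will be sent to: " + client.getEndpoint());
              }
          }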
      • setRegion

        void setRegion​(Region region)
        Sets the region to call (ex, Region.US_PHOENIX_1).

        Note, this will call setEndpoint after resolving the endpoint. If the service is not available in this region, an IllegalArgumentException will be raised.

        Parameters:
        region - The region of the service.
      • setRegion

        void setRegion​(String regionId)
        Sets the region to call (ex, ‘us-phoenix-1’).

        Note, this will first try to map the region ID to a known Region and call setRegion.

        If no known Region could be determined, it will create an endpoint based on the default endpoint format (Region.formatDefaultRegionEndpoint(Service, String)) and then call setEndpoint.

        Parameters:
        regionId - The public region ID.
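
        The sketch below exercises both setRegion overloads (the helper class is illustrative and the interface's package name is an assumption):

          import com.oracle.bmc.Region;
          import com.oracle.bmc.modeldeployment.ModelDeploymentAsync; // assumed package

          public class RegionExample {
              public static void configureRegion(ModelDeploymentAsync client) {
                  // Typed overload: resolves the endpoint and calls setEndpoint; raises
                  // IllegalArgumentException if the service is unavailable in the region.
                  client.setRegion(Region.US_PHOENIX_1);

                  // String overload: maps the ID to a known Region when possible, otherwise
                  // falls back to the default endpoint format.
                  client.setRegion("us-phoenix-1");
              }
          }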
      • useRealmSpecificEndpointTemplate

        void useRealmSpecificEndpointTemplate​(boolean realmSpecificEndpointTemplateEnabled)
        Determines whether the realm-specific endpoint template should be used.

        Set realmSpecificEndpointTemplateEnabled to "true" to enable use of the realm-specific endpoint template; otherwise set it to "false".

        Parameters:
        realmSpecificEndpointTemplateEnabled - flag to enable the use of realm specific endpoint template
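
        A one-line sketch of opting in to the realm-specific endpoint template before issuing calls (the wrapper class is illustrative and the interface's package name is an assumption):

          import com.oracle.bmc.modeldeployment.ModelDeploymentAsync; // assumed package

          public class RealmTemplateExample {
              public static void enableRealmTemplate(ModelDeploymentAsync client) {
                  // Enable the realm-specific endpoint template for subsequent calls.
                  client.useRealmSpecificEndpointTemplate(true);
              }
          }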
      • predict

        Future<PredictResponse> predict​(PredictRequest request,
                                        AsyncHandler<PredictRequest,​PredictResponse> handler)
        Invoking a model deployment calls the predict endpoint of the model deployment URI.

        This endpoint takes sample data as input, which is processed using the predict() function in the score.py model artifact file.

        Parameters:
        request - The request object containing the details to send
        handler - The request handler to invoke upon completion, may be null.
        Returns:
        A Future that can be used to get the response if no AsyncHandler was provided. Note, if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be able to be read in both places as the underlying stream may only be consumed once.
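
        The sketch below shows both completion styles described above: supplying an AsyncHandler callback, and passing null and blocking on the returned Future. The request and response package names and the request builder fields (modelDeploymentId, requestBody) are assumptions and should be checked against the generated request classes; the OCID and payload are placeholders.

          import java.util.concurrent.Future;

          import com.oracle.bmc.responses.AsyncHandler;
          import com.oracle.bmc.modeldeployment.ModelDeploymentAsync;       // assumed package
          import com.oracle.bmc.modeldeployment.requests.PredictRequest;    // assumed package
          import com.oracle.bmc.modeldeployment.responses.PredictResponse;  // assumed package

          public class PredictExample {
              public static void invoke(ModelDeploymentAsync client) throws Exception {
                  // Builder fields below are assumptions: the model deployment OCID and a
                  // JSON payload handled by predict() in the score.py model artifact.
                  PredictRequest request = PredictRequest.builder()
                          .modelDeploymentId("ocid1.datasciencemodeldeployment.oc1..exampleuniqueID")
                          .requestBody("{\"input\": [1, 2, 3]}")
                          .build();

                  // Style 1: callback-based completion via AsyncHandler.
                  client.predict(request, new AsyncHandler<PredictRequest, PredictResponse>() {
                      @Override
                      public void onSuccess(PredictRequest req, PredictResponse res) {
                          System.out.println("Prediction returned: " + res);
                      }

                      @Override
                      public void onError(PredictRequest req, Throwable error) {
                          error.printStackTrace();
                      }
                  });

                  // Style 2: pass a null handler and block on the Future instead. Avoid
                  // reading streaming response bodies from both the handler and the Future.
                  Future<PredictResponse> future = client.predict(request, null);
                  PredictResponse response = future.get();
                  System.out.println("Prediction returned: " + response);
              }
          }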
      • predictWithResponseStream

        Future<PredictWithResponseStreamResponse> predictWithResponseStream​(PredictWithResponseStreamRequest request,
                                                                            AsyncHandler<PredictWithResponseStreamRequest,​PredictWithResponseStreamResponse> handler)
        Invoking a model deployment calls the predictWithResponseStream endpoint of the model deployment URI to get the streaming result.

        This endpoint takes sample data as input, which is processed using the predict() function in the score.py model artifact file.

        Parameters:
        request - The request object containing the details to send
        handler - The request handler to invoke upon completion, may be null.
        Returns:
        A Future that can be used to get the response if no AsyncHandler was provided. Note, if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be able to be read in both places as the underlying stream may only be consumed once.
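
        A sketch mirroring the predict example, using the Future-only style; the package names and builder fields are again assumptions, and the getter for the streamed body is not shown in this reference, so the response is only printed here. The streamed body should be consumed (and closed) exactly once.

          import java.util.concurrent.Future;

          import com.oracle.bmc.modeldeployment.ModelDeploymentAsync;                         // assumed package
          import com.oracle.bmc.modeldeployment.requests.PredictWithResponseStreamRequest;    // assumed package
          import com.oracle.bmc.modeldeployment.responses.PredictWithResponseStreamResponse;  // assumed package

          public class PredictStreamExample {
              public static void invokeStreaming(ModelDeploymentAsync client) throws Exception {
                  // Builder fields are assumptions, mirroring the predict example above.
                  PredictWithResponseStreamRequest request = PredictWithResponseStreamRequest.builder()
                          .modelDeploymentId("ocid1.datasciencemodeldeployment.oc1..exampleuniqueID")
                          .requestBody("{\"input\": [1, 2, 3]}")
                          .build();

                  // No handler is supplied, so the Future is the single consumer of the
                  // response and its streamed body.
                  Future<PredictWithResponseStreamResponse> future =
                          client.predictWithResponseStream(request, null);
                  PredictWithResponseStreamResponse response = future.get();
                  System.out.println("Streaming predict call completed: " + response);
              }
          }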