Interface ModelDeploymentAsync
-
- All Superinterfaces:
AutoCloseable
- All Known Implementing Classes:
ModelDeploymentAsyncClient
@Generated(value="OracleSDKGenerator", comments="API Version: 20240424") public interface ModelDeploymentAsync extends AutoCloseable
Model deployments are a managed resource in the OCI Data Science service used to deploy machine learning models as HTTP endpoints in OCI. Deploying machine learning models as web applications (HTTP API endpoints) that serve predictions in real time is the most common way models are productionized. HTTP endpoints are flexible and can serve requests for model predictions.
For more information, see [Model Deployments](https://docs.oracle.com/en-us/iaas/data-science/using/model-dep-about.htm)
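For orientation, a minimal sketch of building the async client named under "All Known Implementing Classes". The package imports, the ConfigFileAuthenticationDetailsProvider auth flow, and the builder() pattern are assumptions based on common OCI Java SDK conventions and may differ in your SDK version:

```java
import com.oracle.bmc.Region;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;

public class CreateClientSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: a standard ~/.oci/config file with a DEFAULT profile.
        ConfigFileAuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider("DEFAULT");

        // ModelDeploymentAsyncClient is the implementing class listed above;
        // its package path depends on the SDK version you are using.
        // The interface extends AutoCloseable, so try-with-resources applies.
        try (ModelDeploymentAsyncClient client = ModelDeploymentAsyncClient.builder()
                .region(Region.US_PHOENIX_1)
                .build(provider)) {
            // use client here, e.g. client.predict(...)
        }
    }
}
```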
-
-
Method Summary
All Methods Instance Methods Abstract Methods

Modifier and Type / Method / Description

String getEndpoint()
Gets the set endpoint for REST calls (ex, https://www.example.com).

Future<PredictResponse> predict(PredictRequest request, AsyncHandler<PredictRequest,PredictResponse> handler)
Invoking a model deployment calls the predict endpoint of the model deployment URI.

Future<PredictWithResponseStreamResponse> predictWithResponseStream(PredictWithResponseStreamRequest request, AsyncHandler<PredictWithResponseStreamRequest,PredictWithResponseStreamResponse> handler)
Invoking a model deployment calls the predictWithResponseStream endpoint of the model deployment URI to get the streaming result.

void refreshClient()
Rebuilds the client from scratch.

void setEndpoint(String endpoint)
Sets the endpoint to call (ex, https://www.example.com).

void setRegion(Region region)
Sets the region to call (ex, Region.US_PHOENIX_1).

void setRegion(String regionId)
Sets the region to call (ex, 'us-phoenix-1').

void useRealmSpecificEndpointTemplate(boolean realmSpecificEndpointTemplateEnabled)
Determines whether the realm-specific endpoint should be used or not.
-
Methods inherited from interface java.lang.AutoCloseable
close
-
-
-
-
Method Detail
-
refreshClient
void refreshClient()
Rebuilds the client from scratch. Useful to refresh certificates.
-
setEndpoint
void setEndpoint(String endpoint)
Sets the endpoint to call (ex, https://www.example.com).
- Parameters:
endpoint - The endpoint of the service.
-
getEndpoint
String getEndpoint()
Gets the set endpoint for REST calls (ex, https://www.example.com).
-
setRegion
void setRegion(Region region)
Sets the region to call (ex, Region.US_PHOENIX_1).
Note, this will call setEndpoint after resolving the endpoint. If the service is not available in this region, however, an IllegalArgumentException will be raised.
- Parameters:
region - The region of the service.
-
setRegion
void setRegion(String regionId)
Sets the region to call (ex, 'us-phoenix-1').
Note, this will first try to map the region ID to a known Region and call setRegion(Region). If no known Region could be determined, it will create an endpoint based on the default endpoint format (Region.formatDefaultRegionEndpoint(Service, String)) and then call setEndpoint.
- Parameters:
regionId - The public region ID.
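The interplay between the two setRegion overloads and setEndpoint described above can be sketched as follows. The method names come from this interface; the helper itself and the example endpoint (taken from the docs above) are illustrative only:

```java
import com.oracle.bmc.Region;

public class RegionConfigSketch {
    // Illustrative helper: shows the three ways this interface resolves
    // which endpoint the client will call.
    static void configure(ModelDeploymentAsync client) {
        // 1. Known region enum: resolves the endpoint, then calls setEndpoint
        //    internally; throws IllegalArgumentException if the service is
        //    not available in that region.
        client.setRegion(Region.US_PHOENIX_1);

        // 2. Region ID string: mapped to a known Region when possible;
        //    otherwise an endpoint is built from the default endpoint format.
        client.setRegion("us-phoenix-1");

        // 3. Explicit endpoint: bypasses region-based resolution entirely.
        client.setEndpoint("https://www.example.com");
    }
}
```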
-
useRealmSpecificEndpointTemplate
void useRealmSpecificEndpointTemplate(boolean realmSpecificEndpointTemplateEnabled)
Determines whether the realm-specific endpoint should be used or not. Set realmSpecificEndpointTemplateEnabled to "true" to enable use of the realm-specific endpoint template; otherwise set it to "false".
- Parameters:
realmSpecificEndpointTemplateEnabled
- flag to enable the use of realm specific endpoint template
-
predict
Future<PredictResponse> predict(PredictRequest request, AsyncHandler<PredictRequest,PredictResponse> handler)
Invoking a model deployment calls the predict endpoint of the model deployment URI. This endpoint takes sample data as input, which is processed using the predict() function in the score.py model artifact file.
- Parameters:
request - The request object containing the details to send
handler - The request handler to invoke upon completion, may be null.
- Returns:
- A Future that can be used to get the response if no AsyncHandler was provided. Note, if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be able to be read in both places as the underlying stream may only be consumed once.
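A hedged sketch of the async invocation pattern described above. The AsyncHandler callback shape follows the common OCI Java SDK convention; the PredictRequest builder field (modelDeploymentId) and the example OCID are assumptions and may differ by SDK version:

```java
import java.util.concurrent.Future;
import com.oracle.bmc.responses.AsyncHandler;

public class PredictSketch {
    static void invoke(ModelDeploymentAsync client) throws Exception {
        // Assumption: PredictRequest follows the usual OCI SDK builder pattern;
        // the OCID below is a hypothetical placeholder.
        PredictRequest request = PredictRequest.builder()
                .modelDeploymentId("ocid1.datasciencemodeldeployment.oc1..example")
                .build();

        Future<PredictResponse> future = client.predict(request,
                new AsyncHandler<PredictRequest, PredictResponse>() {
                    @Override
                    public void onSuccess(PredictRequest req, PredictResponse res) {
                        // Handle the prediction result here.
                    }

                    @Override
                    public void onError(PredictRequest req, Throwable error) {
                        // Handle the failure here.
                        error.printStackTrace();
                    }
                });

        // With handler == null, future.get() would be used instead. Per the
        // Returns note above, avoid reading stream-typed response bodies in
        // both the handler and the Future: the stream is consumed once.
    }
}
```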
-
predictWithResponseStream
Future<PredictWithResponseStreamResponse> predictWithResponseStream(PredictWithResponseStreamRequest request, AsyncHandler<PredictWithResponseStreamRequest,PredictWithResponseStreamResponse> handler)
Invoking a model deployment calls the predictWithResponseStream endpoint of the model deployment URI to get the streaming result. This endpoint takes sample data as input, which is processed using the predict() function in the score.py model artifact file.
- Parameters:
request - The request object containing the details to send
handler - The request handler to invoke upon completion, may be null.
- Returns:
- A Future that can be used to get the response if no AsyncHandler was provided. Note, if you provide an AsyncHandler and use the Future, some types of responses (like java.io.InputStream) may not be able to be read in both places as the underlying stream may only be consumed once.
-
-