inference_config: Create an inference configuration for model deployments

View source: R/model.R

Description

An inference configuration describes how to set up your model to make predictions. It references your scoring script (entry_script) and is used to locate all the resources required for the deployment. Inference configurations use Azure Machine Learning environments (see r_environment()) to define the software dependencies needed for your deployment.

Usage

inference_config(
  entry_script,
  source_directory = ".",
  description = NULL,
  environment = NULL
)

Arguments

entry_script

A string of the path to the local file that contains the code to run for making predictions.

source_directory

A string of the path to the local folder that contains the files to package and deploy alongside your model, such as helper files for your scoring script (entry_script). The folder must contain the entry_script.

description

(Optional) A string of the description to give this configuration.

environment

An Environment object to use for the deployment. The environment does not have to be registered.

Value

The InferenceConfig object.
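
For example, you can create a configuration by pairing an environment with your scoring script. The following is a minimal sketch; the environment name "deploy-env" and the entry script filename "score.R" are placeholders:

library(azuremlsdk)

# Define the software dependencies for the deployment
env <- r_environment(name = "deploy-env")

# Point the configuration at the scoring script and environment
config <- inference_config(
  entry_script = "score.R",
  source_directory = ".",
  environment = env
)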

Defining the entry script

To deploy a model, you must provide an entry script that accepts requests, scores the requests by using the model, and returns the results. The entry script is specific to your model. It must understand the format of the incoming request data, the format of the data expected by your model, and the format of the data returned to clients. If the request data is in a format that is not usable by your model, the script can transform it into an acceptable format. It can also transform the response before returning it to the client.

The entry script must contain an init() method that loads your model and then returns a function that uses the model to make a prediction based on the input data passed to the function. Azure ML runs the init() method once, when the Docker container for your web service is started. The prediction function returned by init() is run every time the service is invoked to make a prediction on some input data. The inputs and outputs of this prediction function typically use JSON for serialization and deserialization. A complete example entry script appears at the end of this section.

To locate the model in your entry script (when you load the model in the script's init() method), use AZUREML_MODEL_DIR, an environment variable containing the path to the model location. The environment variable is created during service deployment, and you can use it to find the location of your deployed model(s).

To get the path to a file in a model, combine the environment variable with the filename you're looking for. The filenames of the model files are preserved during registration and deployment.

Single model example:

model_path <- file.path(Sys.getenv("AZUREML_MODEL_DIR"), "my_model.rds")

Multiple model example (when more than one model is deployed, AZUREML_MODEL_DIR contains a folder for each model, named <model name>/<model version>):

model1_path <- file.path(Sys.getenv("AZUREML_MODEL_DIR"), "my_model/1/my_model.rds")
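
Putting this together, a minimal entry script might look like the following sketch. The model filename "my_model.rds" is a placeholder, and the sketch assumes the jsonlite package is available in the deployment environment:

library(jsonlite)

init <- function() {
  # Load the model once, when the container starts
  model_path <- file.path(Sys.getenv("AZUREML_MODEL_DIR"), "my_model.rds")
  model <- readRDS(model_path)

  # Return the prediction function that runs on every request
  function(data) {
    input <- as.data.frame(fromJSON(data))
    prediction <- predict(model, input)
    toJSON(prediction)
  }
}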

See Also

r_environment(), deploy_model()

