mlflow_rfunc_serve: Serve an RFunc MLflow Model

View source: R/model-serve.R


Serve an RFunc MLflow Model

Description

Serves an RFunc MLflow model as a local REST API server. This interface provides similar functionality to the 'mlflow models serve' CLI command; however, it can only be used to deploy models that include the RFunc flavor. The deployed server supports the standard MLflow models interface with /ping and /invocations endpoints. In addition, R function models also support the deprecated /predict endpoint for generating predictions; the /predict endpoint will be removed in a future version of MLflow.
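
For example, once the server is running, its liveness can be checked with a plain HTTP request (a minimal sketch, assuming the default host and port shown under Usage):

httr::GET("http://127.0.0.1:8090/ping")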

Usage

mlflow_rfunc_serve(
  model_uri,
  host = "127.0.0.1",
  port = 8090,
  daemonized = FALSE,
  browse = !daemonized,
  ...
)

Arguments

model_uri

The location, in URI format, of the MLflow model.

host

Address to use to serve model, as a string.

port

Port to use to serve model, as numeric.

daemonized

Makes the 'httpuv' server run as a daemon so that requests are handled without blocking the interactive R session. To terminate a daemonized server, call 'httpuv::stopDaemonizedServer()' with the handle returned from this call.
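
A minimal sketch of daemonized serving, assuming the 'mlflow_constant' model directory created in the Examples below:

handle <- mlflow_rfunc_serve("mlflow_constant", daemonized = TRUE)
# ... interact with the running server, then shut it down:
httpuv::stopDaemonizedServer(handle)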

browse

Launch browser with serving landing page?

...

Optional arguments passed to 'mlflow_predict()'.

Details

The URI scheme must be supported by MLflow, i.e. there has to be an MLflow artifact repository corresponding to the scheme of the URI. The URI is expected to point to a directory containing an MLmodel file. The following are examples of valid model URIs:

- file:///absolute/path/to/local/model
- file:relative/path/to/local/model
- s3://my_bucket/path/to/model
- runs:/<mlflow_run_id>/run-relative/path/to/model
- models:/<model_name>/<model_version>
- models:/<model_name>/<stage>

For more information about supported URI schemes, see the Artifacts Documentation at https://www.mlflow.org/docs/latest/tracking.html#artifact-stores.
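
For instance, a model logged under an MLflow run can be served directly from its runs:/ URI (a sketch; <mlflow_run_id> stands in for an actual run id):

mlflow_rfunc_serve("runs:/<mlflow_run_id>/run-relative/path/to/model")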

Examples

## Not run: 
library(mlflow)

# save simple model with constant prediction
mlflow_save_model(function(df) 1, "mlflow_constant")

# serve an existing model over a web interface
mlflow_rfunc_serve("mlflow_constant")

# request prediction from server
httr::POST("http://127.0.0.1:8090/predict/")
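
# hedged sketch: send an input data frame to the /predict endpoint; the exact
# JSON payload format expected by the server is an assumption here
httr::POST(
  "http://127.0.0.1:8090/predict/",
  body = jsonlite::toJSON(data.frame(x = 1)),
  httr::content_type_json()
)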

## End(Not run)
