predict_savedmodel: Predict using a SavedModel

View source: R/predict_interface.R

Description

Runs a prediction over a saved model file, a web API, or a graph object.

Usage

predict_savedmodel(instances, model, ...)

Arguments

instances

A list of prediction instances to be passed as input tensors to the service. Even for single predictions, a list with one entry is expected.

model

The model as a local path, a REST url or graph object.

A local path can be exported using export_savedmodel(), a REST URL can be created using serve_savedmodel(), and a graph object can be loaded using load_savedmodel().

A type parameter can be specified to explicitly choose the type of model performing the prediction. Valid values are "export", "webapi", and "graph" (see the sketch after this argument list).

...

See predict_savedmodel.export_prediction(), predict_savedmodel.graph_prediction(), predict_savedmodel.webapi_prediction() for additional options.
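
As a rough sketch only, the three model types map onto calls like these. The model path, input shape, and URL below are placeholders, not values shipped with the package:

# Minimal sketch; assumes a model exported to "savedmodel/" that takes a
# single 784-element input tensor. All paths and URLs are hypothetical.

# 1. Local export created with export_savedmodel():
predict_savedmodel(list(rep(0, 784)), "savedmodel", type = "export")

# 2. REST endpoint created with serve_savedmodel() (placeholder URL):
predict_savedmodel(
  list(rep(0, 784)),
  "http://localhost:8089/serving_default/predict",
  type = "webapi"
)

# 3. Graph object loaded with load_savedmodel():
sess <- tensorflow::tf$Session()
graph <- load_savedmodel(sess, "savedmodel")
predict_savedmodel(list(rep(0, 784)), graph, type = "graph")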

Implementations

  • predict_savedmodel.export_prediction()

  • predict_savedmodel.graph_prediction()

  • predict_savedmodel.webapi_prediction()

See Also

export_savedmodel(), serve_savedmodel(), load_savedmodel()

Examples

## Not run: 
# perform prediction based on an existing model
tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)

## End(Not run)
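
A served-model variant might look roughly like this; the endpoint URL (host, port, and path) is an assumption, not a value documented here:

## Not run: 
# in one R session, serve the bundled MNIST model over HTTP (this call blocks):
tfdeploy::serve_savedmodel(
  system.file("models/tensorflow-mnist", package = "tfdeploy")
)

# in another session, predict against the REST endpoint (placeholder URL):
tfdeploy::predict_savedmodel(
  list(rep(9, 784)),
  "http://localhost:8089/serving_default/predict",
  type = "webapi"
)

## End(Not run)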
