Description Usage Arguments Value Examples
View source: R/webservice-local.R
Description

You can deploy a model locally for limited testing and troubleshooting. To do so, you will need Docker installed on your local machine. If you are using an Azure Machine Learning compute instance for development, you can also deploy locally on your compute instance.
Usage

local_webservice_deployment_config(port = NULL)
Arguments

port    An int of the local port on which to expose the service's HTTP endpoint.
Value

The LocalWebserviceDeploymentConfiguration object.
Examples

## Not run:
deployment_config <- local_webservice_deployment_config(port = 8890)
## End(Not run)
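A deployment configuration on its own does nothing; it is passed to a deployment call. The sketch below shows one plausible way to use it with the package's deploy_model() and wait_for_deployment() functions. The objects ws, model, and inference_config are placeholders for an existing workspace, a registered model, and an inference configuration, and a running local Docker daemon is assumed.

```r
library(azuremlsdk)

# Expose the local service on port 8890.
deployment_config <- local_webservice_deployment_config(port = 8890)

# Placeholders: `ws`, `model`, and `inference_config` must already exist.
service <- deploy_model(ws,
                        name = "local-test-service",
                        models = list(model),
                        inference_config = inference_config,
                        deployment_config = deployment_config)

# Block until the local container is up, printing progress.
wait_for_deployment(service, show_output = TRUE)
```

Because the service runs in a local Docker container, it can be exercised at http://localhost:8890 without provisioning cloud compute.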