This section covers how to set up modeltime.gluonts to use GPUs.
You must have a CUDA-compatible NVIDIA GPU with working CUDA software installed. Refer to MXNet's Official GPU Documentation for details on using GPUs.
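To find which `mxnet-cuXX` build matches your system, one common check is the NVIDIA driver tool (this assumes an NVIDIA GPU with drivers already installed):

```shell
# The header of the output reports the driver and CUDA version,
# e.g. "CUDA Version: 9.2" -> use the "mxnet-cu92" package
nvidia-smi
```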
Create a Custom GluonTS Python Environment. You will need to install a version of mxnet
that is compatible with your CUDA software.
```r
reticulate::py_install(
  envname        = "my_gluonts_env",
  python_version = "3.7.1",
  packages       = c(
    # IMPORTANT
    "mxnet-cu92",   # replace `cu92` according to your CUDA version
    "gluonts==0.8.0",
    "pandas",
    "numpy",
    "pathlib"
  ),
  method = "conda",
  pip    = TRUE
)
```
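After installation, you can confirm that the conda environment was created. A minimal check, assuming conda is discoverable by reticulate:

```r
# List the available conda environments;
# "my_gluonts_env" should appear in the `name` column
reticulate::conda_list()
```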
Follow the instructions to set the path and check your custom GluonTS environment. You will need to point modeltime.gluonts at the new environment and verify that it is connecting to your GPU-enabled GluonTS Python environment. You're now ready to start using GPUs. Just start training as normal.
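The connection step above can be sketched as follows, assuming the conda environment is named `my_gluonts_env` as in the installation code (the `GLUONTS_PYTHON` environment variable must be set before the package is loaded):

```r
# Point modeltime.gluonts at the custom environment *before* loading the package
Sys.setenv(GLUONTS_PYTHON = reticulate::conda_python("my_gluonts_env"))

library(modeltime.gluonts)

# Confirm reticulate is bound to the intended python environment
reticulate::py_config()
```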
```r
model_fit_deepar <- deep_ar(
  id                    = "id",
  freq                  = "M",
  prediction_length     = 24,
  lookback_length       = 36,
  epochs                = 10,
  num_batches_per_epoch = 500,
  learn_rate            = 0.001,
  num_layers            = 3,
  num_cells             = 80,
  dropout               = 0.10
) %>%
  set_engine("gluonts_deepar") %>%
  fit(value ~ date + id, m750)
```
One final point: if you have multiple GPUs, you can configure how to distribute the work using the MXNet Context (`ctx`). For example, if you have two GPUs, you can specify to use both of them by adding `ctx` to `set_engine()`.
```r
mxnet <- reticulate::import("mxnet")

# Modify your set_engine()
... %>%
  set_engine("gluonts_deepar", ctx = list(mxnet$gpu(0), mxnet$gpu(1)))
```
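Before training on multiple GPUs, it can help to check how many GPUs MXNet can actually see. A short sketch using MXNet's Python context API via reticulate:

```r
mxnet <- reticulate::import("mxnet")

# Number of GPUs visible to MXNet; 0 means training will fall back to CPU
mxnet$context$num_gpus()
```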