mlr_callback_set.checkpoint (R Documentation)
Saves the optimizer and network states during training. The final network and optimizer are always stored.
The learner itself (with its trained model) cannot be saved from within the callback, because the model slot is only set after the last callback step has been executed.
mlr3torch::CallbackSet -> CallbackSetCheckpoint
new(): Creates a new instance of this R6 class.
CallbackSetCheckpoint$new(path, freq, freq_type = "epoch")
path (character(1))
The path to a folder where the models are saved.
freq (integer(1))
How often the model is saved.
The frequency is counted either per step or per epoch, which can be configured through the freq_type parameter.
freq_type (character(1))
Can be either "epoch" (default) or "step".
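As an illustration (not part of the package documentation), a callback that checkpoints every 5 training steps could be built via the documented constructor; the path, freq and freq_type values below are placeholders:

# save the network and optimizer state every 5 training steps
ckpt = CallbackSetCheckpoint$new(path = tempfile(), freq = 5, freq_type = "step")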
on_epoch_end(): Saves the network and optimizer state dict.
Does nothing if the freq_type or freq conditions are not met.
CallbackSetCheckpoint$on_epoch_end()
on_batch_end(): Saves the selected objects defined in save.
Does nothing if the freq_type or freq conditions are not met.
CallbackSetCheckpoint$on_batch_end()
on_exit(): Saves the learner.
CallbackSetCheckpoint$on_exit()
clone(): The objects of this class are cloneable with this method.
CallbackSetCheckpoint$clone(deep = FALSE)
deep: Whether to make a deep clone.
Other Callback:
TorchCallback,
as_torch_callback(),
as_torch_callbacks(),
callback_set(),
mlr3torch_callbacks,
mlr_callback_set,
mlr_callback_set.progress,
mlr_callback_set.tb,
mlr_callback_set.unfreeze,
mlr_context_torch,
t_clbk(),
torch_callback()
library(mlr3)
library(mlr3torch)
# checkpoint callback that saves after every epoch
cb = t_clbk("checkpoint", freq = 1)
task = tsk("iris")
pth = tempfile()
learner = lrn("classif.mlp", epochs = 3, batch_size = 1, callbacks = cb)
# set the folder in which the checkpoints are stored
learner$param_set$set_values(cb.checkpoint.path = pth)
learner$train(task)
# list the saved checkpoint files
list.files(pth)
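The state dicts written to pth can be read back with torch::torch_load(). This is a minimal sketch, not part of the package example; the exact file names depend on the callback, so inspect the output of list.files(pth) first:

files = list.files(pth, full.names = TRUE)
# load the first stored state dict back into R (assumes at least one checkpoint file exists)
state = torch::torch_load(files[1])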