mlr_learners.mlp (R Documentation)
Fully connected feed forward network with dropout after each activation function.
The features can either be a single lazy_tensor
or one or more numeric columns (but not both).
This Learner can be instantiated using the sugar function lrn():
lrn("classif.mlp", ...)
lrn("regr.mlp", ...)
Supported task types: 'classif', 'regr'
Predict Types:
classif: 'response', 'prob'
regr: 'response'
Feature Types: 'integer', 'numeric', 'lazy_tensor'
Parameters from LearnerTorch, as well as:
activation :: nn_module
The activation function. Is initialized to nn_relu.
activation_args :: named list()
A named list with initialization arguments for the activation function. This is initialized to an empty list.
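As an illustration of how these two parameters interact, here is a sketch that swaps the default activation for a leaky ReLU (this assumes mlr3torch and torch are installed; nn_leaky_relu and its negative_slope argument come from the torch package):

```r
library(mlr3torch)

# Use a leaky ReLU instead of the default nn_relu.
# activation_args is passed on when the activation module is constructed.
learner = lrn("classif.mlp",
  activation = torch::nn_leaky_relu,
  activation_args = list(negative_slope = 0.1)
)
```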
neurons :: integer()
The number of neurons per hidden layer. By default there is no hidden layer. Setting this to c(10, 20) would create a first hidden layer with 10 neurons and a second with 20.
n_layers :: integer()
The number of layers. This parameter must only be set when neurons has length 1.
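A minimal sketch of the two ways to specify the architecture (assuming mlr3torch is installed): explicit per-layer sizes via neurons, or one size repeated via n_layers.

```r
library(mlr3torch)

# Three hidden layers with explicit sizes:
wide = lrn("classif.mlp", neurons = c(64, 32, 16))

# Three hidden layers of 32 neurons each; n_layers is only
# allowed when neurons has length 1:
deep = lrn("classif.mlp", neurons = 32, n_layers = 3)
```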
p :: numeric(1)
The dropout probability. Is initialized to 0.5.
shape :: integer() or NULL
The input shape of length 2, e.g. c(NA, 5). Only needs to be present when there is a lazy tensor input with unknown shape (NULL). Otherwise the input shape is inferred from the number of numeric features.
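For illustration, a sketch of setting shape explicitly (assuming mlr3torch is installed; NA marks the batch dimension):

```r
library(mlr3torch)

# Five input features per observation; NA is the batch dimension.
learner = lrn("regr.mlp", shape = c(NA, 5))
```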
mlr3::Learner -> mlr3torch::LearnerTorch -> LearnerTorchMLP
mlr3::Learner$base_learner()
mlr3::Learner$configure()
mlr3::Learner$encapsulate()
mlr3::Learner$help()
mlr3::Learner$predict()
mlr3::Learner$predict_newdata()
mlr3::Learner$reset()
mlr3::Learner$selected_features()
mlr3::Learner$train()
mlr3torch::LearnerTorch$dataset()
mlr3torch::LearnerTorch$format()
mlr3torch::LearnerTorch$marshal()
mlr3torch::LearnerTorch$print()
mlr3torch::LearnerTorch$unmarshal()
new()
Creates a new instance of this R6 class.
LearnerTorchMLP$new(task_type, optimizer = NULL, loss = NULL, callbacks = list())
task_type (character(1))
The task type, either "classif" or "regr".
optimizer (TorchOptimizer)
The optimizer to use for training. Per default, adam is used.
loss (TorchLoss)
The loss used to train the network. Per default, mse is used for regression and cross_entropy for classification.
callbacks (list() of TorchCallbacks)
The callbacks. Must have unique ids.
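A sketch of direct construction with these arguments (assuming mlr3torch is installed; t_opt() and t_loss() are mlr3torch's sugar functions for TorchOptimizer and TorchLoss):

```r
library(mlr3torch)

# Like lrn("classif.mlp"), but with an explicit optimizer and loss.
learner = LearnerTorchMLP$new(
  task_type = "classif",
  optimizer = t_opt("sgd", lr = 0.01),
  loss = t_loss("cross_entropy")
)
```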
clone()
The objects of this class are cloneable with this method.
LearnerTorchMLP$clone(deep = FALSE)
deep
Whether to make a deep clone.
Gorishniy Y, Rubachev I, Khrulkov V, Babenko A (2021). “Revisiting Deep Learning for Tabular Data.” arXiv, 2106.11959.
Other Learner: mlr_learners.tab_resnet, mlr_learners.torch_featureless, mlr_learners_torch, mlr_learners_torch_image, mlr_learners_torch_model
# Define the Learner and set parameter values
learner = lrn("classif.mlp")
learner$param_set$set_values(
epochs = 1, batch_size = 16, device = "cpu",
neurons = 10
)
# Define a Task
task = tsk("iris")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()