makeLearner: Create learner object
For a classification learner the predict.type can be set to “prob” to predict class probabilities; the class with the maximum probability is then selected as the label. The threshold used to assign the label can be changed later using the setThreshold function.
For a list of all possible learner properties, see LearnerProperties.
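As a brief sketch of that workflow (sonar.task is one of mlr's built-in example tasks; the rpart package must be installed):

library(mlr)

# Learner that predicts class probabilities instead of hard labels
lrn = makeLearner("classif.rpart", predict.type = "prob")
mod = train(lrn, sonar.task)
pred = predict(mod, task = sonar.task)

# Adjust the threshold used to turn probabilities into labels afterwards
pred = setThreshold(pred, 0.7)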
makeLearner(
  cl,
  id = cl,
  predict.type = "response",
  predict.threshold = NULL,
  fix.factors.prediction = FALSE,
  ...,
  par.vals = list(),
  config = list()
)
cl (character(1))
  Class of learner. By convention, all classification learners start with “classif.”, all regression learners with “regr.”, all survival learners with “surv.”, all clustering learners with “cluster.”, and all multilabel learners with “multilabel.”.

id (character(1))
  Id string for the object, used to display it. Default is cl.

predict.type (character(1))
  “response” (default) predicts class labels for classification or the mean response for regression; “prob” predicts class probabilities; “se” additionally predicts standard errors (where supported).

predict.threshold (numeric)
  Threshold to produce class labels from predicted probabilities; see setPredictThreshold. Default is NULL.

fix.factors.prediction (logical(1))
  Some underlying learners fail during prediction when a factor feature is missing levels that were present during training. Setting this to TRUE fixes the factor levels in the prediction data to match those seen in training. Default is FALSE.

... (any)
  Optional named (hyper)parameters passed to the learner.

par.vals (list)
  Optional list of named (hyper)parameters. The arguments in ... take precedence over values in this list. Default is an empty list.

config (named list)
  Named list of config option overrides for this learner.
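A short sketch combining several of these arguments (on.learner.error is an option from configureMlr; that it can be overridden per learner here is an assumption):

lrn = makeLearner("classif.rpart",
  id = "my.rpart",
  fix.factors.prediction = TRUE,
  minsplit = 10,                            # rpart hyperparameter, passed via ...
  config = list(on.learner.error = "warn")  # learner-local config override (assumed supported)
)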
Value: (Learner).
par.vals vs. ...

The former aims at specifying default hyperparameter settings from mlr which differ from the actual defaults in the underlying learner. For example, respect.unordered.factors is set to “order” in mlr, while the default in ranger::ranger depends on the argument splitrule.

getHyperPars(<learner>) can be used to query hyperparameter defaults that differ from the underlying learner. This function also shows all hyperparameters set by the user during learner creation (if these differ from the learner defaults).
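The interplay can be inspected directly. A small sketch, assuming the ranger package is installed:

lrn = makeLearner("classif.ranger",
  num.trees = 100,            # passed via ...
  par.vals = list(mtry = 3)   # passed via par.vals
)
# Lists mtry and num.trees as set by the user, plus mlr defaults that
# differ from ranger's own, such as respect.unordered.factors = "order"
getHyperPars(lrn)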
For the regr.randomForest learner, mlr provides additional uncertainty estimation functionality (predict.type = "se") that is not offered by the underlying randomForest package.
Currently implemented methods are:

If se.method = "jackknife", the standard error of a prediction is estimated by computing the jackknife-after-bootstrap, the mean-squared difference between the prediction made by only using trees which did not contain said observation and the ensemble prediction.

If se.method = "bootstrap", the standard error of a prediction is estimated by bootstrapping the random forest, where the number of bootstrap replicates and the number of trees in the ensemble are controlled by se.boot and se.ntree respectively, and then taking the standard deviation of the bootstrap predictions. The "brute force" bootstrap is executed when ntree = se.ntree, the latter of which controls the number of trees in the individual random forests which are bootstrapped. The "noisy bootstrap", which is less computationally expensive, is executed when se.ntree < ntree. A Monte-Carlo bias correction may make the latter option preferable in many cases. Defaults are se.boot = 50 and se.ntree = 100.

If se.method = "sd", the default, the standard deviation of the predictions across trees is returned as the variance estimate. This can be computed quickly but is also a very naive estimator.
For both “jackknife” and “bootstrap”, a Monte-Carlo bias correction is applied and, in the case that this results in a negative variance estimate, the values are truncated at 0.
Note that when using the “jackknife” procedure for se estimation, using a small number of trees can lead to training data observations that are never out-of-bag. The current implementation ignores these observations, but in the original definition, the resulting se estimation would be undefined.
Please note that none of the mentioned se.method variants affect the computation of the posterior mean “response” value; it is always the same as from the underlying randomForest.
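A minimal sketch of requesting standard errors (bh.task is mlr's built-in Boston Housing regression task; the randomForest package must be installed):

lrn = makeLearner("regr.randomForest",
  predict.type = "se",
  se.method = "jackknife",  # or "bootstrap" / "sd" (the default)
  ntree = 200
)
mod = train(lrn, bh.task)
pred = predict(mod, task = bh.task)
head(getPredictionSE(pred))  # per-observation standard-error estimates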
The featureless learner is a very basic baseline method that is useful for model comparisons (if you cannot beat it, you very likely have a problem). It does not consider any features of the task and only uses the target feature of the training data to make predictions. Using observation weights is currently not supported.

Methods “mean” and “median” always predict a constant value for each new observation, corresponding to the observed mean or median of the target feature in the training data, respectively. “mean” is the default method for regression and corresponds to the ZeroR algorithm from WEKA.

Method “majority” always predicts the majority class for each new observation. In the case of ties, one randomly sampled tied class is predicted, the same for all observations in the test set. This is the default method for classification. It is very similar to the ZeroR classifier from WEKA; the only difference is that ZeroR always predicts the first of the tied class values instead of sampling one randomly.

Method “sample-prior” always samples a random class for each individual test observation according to the prior probabilities observed in the training data.

If you opt to predict probabilities, the class probabilities always correspond to the prior probabilities observed in the training data.
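A quick sketch of this baseline (iris.task is mlr's built-in iris classification task):

lrn = makeLearner("classif.featureless", method = "majority", predict.type = "prob")
mod = train(lrn, iris.task)
pred = predict(mod, task = iris.task)
# The predicted probabilities equal the class priors observed in training
head(getPredictionProbabilities(pred))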
Other learner: LearnerProperties, getClassWeightParam(), getHyperPars(), getLearnerId(), getLearnerNote(), getLearnerPackages(), getLearnerParVals(), getLearnerParamSet(), getLearnerPredictType(), getLearnerShortName(), getLearnerType(), getParamSet(), helpLearner(), helpLearnerParam(), makeLearners(), removeHyperPars(), setHyperPars(), setId(), setLearnerId(), setPredictThreshold(), setPredictType()
makeLearner("classif.rpart")
makeLearner("classif.lda", predict.type = "prob")
lrn = makeLearner("classif.lda", method = "t", nu = 10)
getHyperPars(lrn)