Description
To assess model performance in the training and testing sets, we need to:

- Compute the predicted values of each base learner in each data set.
- If a stack model is built on top of the individual base learners, also compute the predicted values of the stack model in both sets.
- Compute performance metrics.
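The workflow above can be sketched with plain caret calls (a minimal illustration using the built-in iris data and a single rpart base learner; PredVal itself is not shown here):

```r
library(caret)

set.seed(1)
data(iris)
idx <- createDataPartition(iris$Species, p = 0.7, list = FALSE)
train_set <- iris[idx, ]
test_set  <- iris[-idx, ]

# Fit one base learner; in practice each base learner gets the same treatment.
fit <- train(Species ~ ., data = train_set, method = "rpart")

# Predicted values in each data set, then performance metrics.
pred_train <- predict(fit, newdata = train_set)
pred_test  <- predict(fit, newdata = test_set)
postResample(pred_test, test_set$Species)  # Accuracy and Kappa on the test set
```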
Arguments
ref.lv
  Reference level for categorical variables.

method
  A string specifying which classification or regression model to use.
  Possible values are found using names(getModelInfo()).

metric
  A string that specifies what summary metric will be used to select the
  optimal model. By default, possible values are "RMSE" and "Rsquared"
  for regression and "Accuracy" and "Kappa" for classification. If custom
  performance metrics are used (via the summaryFunction argument of
  trainControl), the value of metric should match one of the arguments.

stack.wt
  ???

trControl
  A list of values that define how this function acts. See trainControl.

tuneLength
  An integer denoting the amount of granularity in the tuning parameter
  grid. By default, this argument is the number of levels for each tuning
  parameter that should be generated by train.
Details

For consistency (with the stacking predictions), I use defaultSummary(pred) to compute the performance metrics.
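The metric computation can be illustrated directly with caret's defaultSummary, which expects a data frame with obs and pred columns (the values below are made up for illustration):

```r
library(caret)

# Toy regression predictions (illustrative values only).
pred <- data.frame(obs  = c(1.2, 3.4, 2.1, 4.0),
                   pred = c(1.0, 3.0, 2.5, 4.2))

# Returns the default regression metrics: RMSE, Rsquared, and MAE.
defaultSummary(pred)
```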
In PredVal, you can:

- specify a stacking method
- specify a weight for each ML algorithm
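Since the Usage section is not reproduced above, the exact signature of PredVal is unknown; the sketch below is a hypothetical call built only from the argument names documented on this page, with illustrative model names and weights:

```r
# Hypothetical call sketch -- argument names come from this help page;
# "rf"/"glmnet", the weights, and all values are assumptions, not the
# function's documented defaults.
res <- PredVal(method     = c("rf", "glmnet"),
               metric     = "RMSE",
               stack.wt   = c(0.6, 0.4),   # one weight per ML algorithm (semantics undocumented)
               ref.lv     = NULL,
               trControl  = caret::trainControl(method = "cv", number = 5),
               tuneLength = 3)
```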