View source: R/Visualization.R

Description

This function evaluates many different machine learning models and returns plots comparing them.
Usage

## S3 method for class 'ModelComparison'
plot(object, labels, training.data, predictions,
     plot.type = c("All"), format.data = TRUE, ...)
Arguments

object         The ModelComparison object.

labels         The labels of the training set.

training.data  The data set the models were trained on. Not needed if
               predictions are provided.

predictions    A list of predictions from the models; optional if
               training.data is provided (see the sketch after this
               list).

plot.type      A character vector of metric names to show in the plot
               (examples include "ROC", "AUC", "Accuracy", etc.).
               Note: ROC cannot be plotted together with other metrics.

format.data    Whether the data should be converted to one-hot encoding
               when needed. The default is TRUE. If you would like to
               predict on unchanged data that is not in the expected
               format, set this to FALSE at your own risk (see the
               sketch after the examples).

...            Other arguments passed to plotting.
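Because training.data and predictions are alternatives, a comparison can be plotted from stored predictions without re-supplying the training set. The sketch below assumes the fitted models are reachable through a model.list element of the ModelComparison object and that each list entry is whatever predict() returns for that model; both of these names are assumptions, not documented API.

# Sketch only: plot from precomputed predictions instead of training.data.
# 'comp$model.list' and the structure of the predictions list are assumptions.
titanic <- PrepareNumericTitanic()
comp <- GetModelComparisons(titanic[, -1], titanic[, 1], model.list = "all")

# collect one prediction vector per model (accessor assumed)
preds <- lapply(comp$model.list, function(m) predict(m, titanic[, -1]))

# training.data can be omitted when predictions are supplied
plot(comp, titanic[, 1], predictions = preds, plot.type = c("All"))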
Examples

# prepare the dataset
titanic <- PrepareNumericTitanic()

# create the models
comp <- GetModelComparisons(titanic[, -1], titanic[, 1], model.list = "all")

# default: plot AUC, Accuracy, Recall, and Precision
plot(comp, titanic[, 1], titanic[, -1], plot.type = c("All"))

# choose specific metrics
plot(comp, titanic[, 1], titanic[, -1],
     plot.type = c("Specificity", "Precision", "AUC", "Recall", "Detection Rate"))

# plot overlapping ROC lines
plot(comp, titanic[, 1], titanic[, -1], plot.type = "roc")
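If the data passed in is already in the format the models expect, the automatic one-hot transformation can be skipped through format.data. A minimal sketch; per the argument description, passing FALSE is at your own risk:

# skip the one-hot transformation for data that is already encoded
plot(comp, titanic[, 1], titanic[, -1], plot.type = "roc", format.data = FALSE)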