plot.ModelComparison: Plot Comparisons


View source: R/Visualization.R

Description

This function evaluates many different machine learning models and returns plots comparing them.

Usage

## S3 method for class 'ModelComparison'
plot(object, labels, training.data = "none",
  predictions = "empty", plot.type = "ROC", format.data = TRUE, ...)

Arguments

object

The ModelComparison object

labels

The labels of the training set

training.data

The dataset to be trained on. If predictions are provided, this is not needed.

predictions

The list of predictions from the models; not needed if training.data is provided.

plot.type

A character vector of metric names to display in the plot (e.g., "ROC", "AUC", "Accuracy"). Note: ROC cannot be plotted together with other metrics.

format.data

Whether the data should be transformed into one-hot encoding if needed. The default is TRUE. If you would like to predict on unchanged data that is not in the expected format, set this to FALSE at your own risk.

...

Other arguments for plotting

Examples

# prepare the dataset
titanic <- PrepareNumericTitanic()

# create the models
comp <- GetModelComparisons(titanic[, -1], titanic[, 1], model.list = "all")

# plot the "All" metric set: AUC, Accuracy, Recall, and Precision
plot(comp, titanic[, 1], titanic[, -1], plot.type=c("All"))

# Choose specific metrics
plot(comp, titanic[, 1], titanic[, -1], plot.type=c("Specificity", "Precision", "AUC",
"Recall", "Detection Rate"))

# plot overlapping ROC lines
plot(comp, titanic[, 1], titanic[, -1], plot.type="roc")
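
# --- Illustrative sketches, not from the package documentation ---

# Skip the automatic one-hot encoding; only safe if the data is already in
# the format the models expect (see the format.data argument)
plot(comp, titanic[, 1], titanic[, -1], plot.type = "ROC", format.data = FALSE)

# Plot from precomputed predictions instead of the training data.
# Assumption: `preds` is a list with one prediction per model in `comp`;
# how to obtain it is not covered by this help page, so the call is left
# commented out.
# plot(comp, titanic[, 1], predictions = preds, plot.type = c("AUC", "Accuracy"))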
