model_metrics: Model Metrics and Performance

View source: R/model_metrics.R

Description

This function returns a confusion matrix and accuracy for any classification model and, for binary classification models, also AUC, Precision, Sensitivity, and Specificity, given the expected values (tags) and predicted values (scores).

Usage

model_metrics(
  tag,
  score,
  multis = NA,
  abc = TRUE,
  thresh = 10,
  auto_n = TRUE,
  thresh_cm = 0.5,
  target = "auto",
  type = "test",
  model_name = NA,
  plots = TRUE,
  quiet = FALSE,
  subtitle = NA
)

Arguments

tag

Vector. Real known labels.

score

Vector. Predicted values or model's results.

multis

Data.frame. Contains one column per category with its score (only used when more than two categories coexist).

abc

Boolean. Arrange columns and rows alphabetically when values are categorical?

thresh

Integer. Threshold for distinguishing between binary/classification and regression models: the number of unique values in 'tag' is compared against this threshold (more unique values: regression; fewer: classification).

auto_n

Boolean. Add n_ before digits when the tag is categorical rather than numerical, even though it seems numerical?

thresh_cm

Numeric. Cutoff value used to split the scores for the confusion matrix. Range of values: (0, 1).

target

Value. Which is your target positive value? If set to 'auto', the target with the largest mean(score) will be selected. Change the value to overwrite. Only used in binary classification models.

type

Character. One of: "train", "test".

model_name

Character. Model's name for reference.

plots

Boolean. Create plots objects?

quiet

Boolean. Quiet all messages, warnings, recommendations?

subtitle

Character. Subtitle for plots.

Value

List. Multiple performance metrics that vary depending on the type of model (classification or regression). If plots = TRUE, multiple plots are also returned.

See Also

Other Machine Learning: ROC(), conf_mat(), export_results(), gain_lift(), h2o_automl(), h2o_predict_MOJO(), h2o_selectmodel(), impute(), iter_seeds(), lasso_vars(), model_preprocess(), msplit()

Other Model metrics: ROC(), conf_mat(), errors(), gain_lift(), loglossBinary()

Other Calculus: corr(), dist2d(), quants()

Examples

data(dfr) # Results for AutoML Predictions
lapply(dfr, head)

# Metrics for Binomial Model
met1 <- model_metrics(dfr$class2$tag, dfr$class2$scores,
  model_name = "Titanic Survived Model",
  plots = FALSE
)
print(met1)
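
The confusion-matrix cutoff can be moved away from the default 0.5 via thresh_cm, and the positive class can be forced with target instead of the 'auto' selection. A hedged sketch on the same binary data (the 0.3 cutoff is arbitrary, chosen only for illustration, not a recommendation):

```r
# Same binary model, but label an observation as positive when its
# score passes 0.3 instead of the default 0.5 cutoff
met1b <- model_metrics(dfr$class2$tag, dfr$class2$scores,
  thresh_cm = 0.3,
  model_name = "Titanic Survived Model (0.3 cutoff)",
  plots = FALSE
)
print(met1b)
```

Lowering the cutoff typically trades Specificity for Sensitivity, which the returned metrics make easy to compare against met1.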

# Metrics for Multi-Categorical Model
met2 <- model_metrics(dfr$class3$tag, dfr$class3$score,
  multis = subset(dfr$class3, select = -c(tag, score)),
  model_name = "Titanic Class Model",
  plots = FALSE
)
print(met2)

# Metrics for Regression Model
met3 <- model_metrics(dfr$regr$tag, dfr$regr$score,
  model_name = "Titanic Fare Model",
  plots = FALSE
)
print(met3)
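
The thresh heuristic described in Arguments can be illustrated in base R. This is a simplified sketch of the selection logic, not the package's internal code; detect_type is a hypothetical helper:

```r
# Simplified illustration of the 'thresh' heuristic:
# more unique values in 'tag' than 'thresh' -> treated as regression,
# otherwise -> treated as classification.
detect_type <- function(tag, thresh = 10) {
  if (length(unique(tag)) > thresh) "regression" else "classification"
}

detect_type(runif(100))          # continuous values: "regression"
detect_type(c("a", "b", "a"))    # two labels: "classification"
```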

laresbernardo/lares documentation built on Oct. 23, 2024, 12:05 p.m.