performance_metric: Calculate metrics for model evaluation

View source: R/evaluate.R

performance_metric    R Documentation

Calculate metrics for model evaluation

Description

Calculate representative metrics for evaluating a binary classification model.

Usage

performance_metric(
  pred,
  actual,
  positive,
  metric = c("ZeroOneLoss", "Accuracy", "Precision", "Recall", "Sensitivity",
    "Specificity", "F1_Score", "Fbeta_Score", "LogLoss", "AUC", "Gini", "PRAUC",
    "LiftAUC", "GainAUC", "KS_Stat", "ConfusionMatrix"),
  cutoff = 0.5,
  beta = 1
)
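
As a minimal, self-contained sketch (assuming the alookr package is loaded; the data below are synthetic and purely illustrative):

# Synthetic example data: a binary target and simulated predicted probabilities
# that tend to be higher for the positive class.
set.seed(123)
actual <- factor(sample(c("absent", "present"), 100, replace = TRUE),
                 levels = c("absent", "present"))
pred <- ifelse(actual == "present", rbeta(100, 4, 2), rbeta(100, 2, 4))

# Accuracy at the default cutoff of 0.5.
performance_metric(pred, actual, positive = "present", metric = "Accuracy")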

Arguments

pred

numeric. Predicted probabilities of the positive class of the target variable.

actual

factor. The value of the actual target variable.

positive

character. The level of the target variable that represents the positive class.

metric

character. The performance metric(s) you want to calculate. See Details.

cutoff

numeric. Threshold for classifying predicted probability values into positive and negative classes.

beta

numeric. Weight of precision in the harmonic mean for the F-Beta Score; beta = 1 corresponds to the F1 Score.

Details

The cutoff argument applies only when the metric argument is one of "ZeroOneLoss", "Accuracy", "Precision", "Recall", "Sensitivity", "Specificity", "F1_Score", "Fbeta_Score", or "ConfusionMatrix".
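
For example, reusing the synthetic pred and actual from the sketch under Usage, changing cutoff affects a threshold-based metric such as Accuracy, while a rank-based metric such as AUC is unaffected:

# Accuracy depends on the classification threshold.
performance_metric(pred, actual, "present", "Accuracy", cutoff = 0.5)
performance_metric(pred, actual, "present", "Accuracy", cutoff = 0.7)

# AUC is computed from the ranking of the probabilities, so cutoff does not apply.
performance_metric(pred, actual, "present", "AUC")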

Value

numeric or table object. The "ConfusionMatrix" metric is returned as a table object; all other metrics are returned as numeric values (a short sketch follows the list below). The performance metrics calculated are as follows:

  • ZeroOneLoss : Normalized Zero-One Loss (Classification Error Loss).

  • Accuracy : Accuracy.

  • Precision : Precision.

  • Recall : Recall.

  • Sensitivity : Sensitivity.

  • Specificity : Specificity.

  • F1_Score : F1 Score.

  • Fbeta_Score : F-Beta Score.

  • LogLoss : Log loss / Cross-Entropy Loss.

  • AUC : Area Under the Receiver Operating Characteristic Curve (ROC AUC).

  • Gini : Gini Coefficient.

  • PRAUC : Area Under the Precision-Recall Curve (PR AUC).

  • LiftAUC : Area Under the Lift Chart.

  • GainAUC : Area Under the Gain Chart.

  • KS_Stat : Kolmogorov-Smirnov Statistic.

  • ConfusionMatrix : Confusion Matrix.
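
The following sketch (again reusing the synthetic pred and actual from the sketch under Usage) illustrates the two return types and the beta argument:

# Most metrics return a single numeric value.
performance_metric(pred, actual, "present", "F1_Score")
performance_metric(pred, actual, "present", "Fbeta_Score", beta = 2)

# "ConfusionMatrix" returns a table object instead of a single number.
cm <- performance_metric(pred, actual, "present", "ConfusionMatrix")
class(cm)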

Examples


library(dplyr)

# Divide the train data set and the test data set.
sb <- rpart::kyphosis %>%
  split_by(Kyphosis)

# Extract the train data set from the original data set.
train <- sb %>%
  extract_set(set = "train")

# Extract the test data set from the original data set.
test <- sb %>%
  extract_set(set = "test")

# Sampling for the unbalanced data set using SMOTE (synthetic minority over-sampling technique).
train <- sb %>%
  sampling_target(seed = 1234L, method = "ubSMOTE")

# Cleaning the set.
train <- train %>%
  cleanse

# Run the model fitting.
result <- run_models(.data = train, target = "Kyphosis", positive = "present")
result

# Predict on the test set with the fitted models.
pred <- run_predict(result, test)
pred

# Calculate Accuracy.
performance_metric(attr(pred$predicted[[1]], "pred_prob"), test$Kyphosis,
  "present", "Accuracy")
# Calculate Confusion Matrix.
performance_metric(attr(pred$predicted[[1]], "pred_prob"), test$Kyphosis,
  "present", "ConfusionMatrix")
# Calculate Confusion Matrix by cutoff = 0.55.
performance_metric(attr(pred$predicted[[1]], "pred_prob"), test$Kyphosis,
  "present", "ConfusionMatrix", cutoff = 0.55)

   
