mrIMLperformance: Calculate general performance metrics of a mrIML model

View source: R/mrIMLperformance.R


Calculate general performance metrics of a mrIML model

Description

Summarizes the performance of a mrIML object created using mrIMLpredicts() in a way that allows for easy comparison of different models. For regression models, root mean squared error (RMSE) and R-squared are reported, while for classification models, area under the ROC curve (AUC), Matthews correlation coefficient (MCC), positive predictive value (PPV), specificity, and sensitivity are reported.

Usage

mrIMLperformance(mrIMLobj)

Arguments

mrIMLobj

A list object created by mrIMLpredicts() containing multi-response models.

Value

A list with two slots:

  • $model_performance: A tibble of commonly used performance metrics for each response model, allowing easy comparison between models. The metrics are calculated on the test data defined during mrIMLpredicts().

  • $global_performance_summary: A single global performance metric: the chosen metric averaged over all response models (MCC for classification models, RMSE for regression models); see the short sketch below.
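
For example, a minimal sketch of accessing the two slots (the object name perf is only illustrative; see the Examples below):

  perf$model_performance           # per-response tibble of test-set metrics
  perf$global_performance_summary  # single value averaged across the response models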

Examples

library(tidymodels)

data <- MRFcov::Bird.parasites

# Prepare data for mrIML
Y <- data %>%
  select(-scale.prop.zos) %>%
  select(order(everything()))
X <- data %>%
  select(scale.prop.zos)
X1 <- Y # responses also included as predictors for the graphical network (GN) model

# Specify a random forest model (tidymodels)
model_rf <- rand_forest(
  trees = 50, # 50 trees are set for brevity. Aim to start with 1000
  mode = "classification",
  mtry = tune(),
  min_n = tune()
) %>%
  set_engine("randomForest")

# Fit the multi-response GN model
GN_model_rf <- mrIMLpredicts(
  X = X,
  Y = Y,
  X1 = X1,
  Model = model_rf,
  prop = 0.7,
  k = 2,
  racing = FALSE
)

perf <- mrIMLperformance(GN_model_rf)
perf$model_performance           # per-response metrics on the test data
perf$global_performance_summary  # average MCC across the response models
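
# As an additional, hedged sketch (not part of the original example), a second
# model specification can be fitted on the same data and compared through the
# global summary. logistic_reg() and set_engine("glm") are standard parsnip
# calls; whether mrIMLpredicts() accepts an un-tuned specification with these
# arguments is an assumption based on the package vignettes.
model_lm <- logistic_reg() %>%
  set_engine("glm")

GN_model_lm <- mrIMLpredicts(
  X = X,
  Y = Y,
  X1 = X1,
  Model = model_lm,
  prop = 0.7,
  k = 2,
  racing = FALSE
)

perf_lm <- mrIMLperformance(GN_model_lm)

# Compare the averaged MCC of the two models (higher is better)
perf$global_performance_summary
perf_lm$global_performance_summary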

