View source: R/mrIMLperformance.R
mrIMLperformance (R Documentation)

Summarizes the performance of an mrIML object created using mrIMLpredicts() in a way that allows for easy comparison of different models.
For regression models, root mean squared error (RMSE) and R-squared are
reported, while for classification models, area under the ROC curve (AUC),
Matthews correlation coefficient (MCC), positive predictive value (PPV),
specificity, and sensitivity are reported.
Usage:

mrIMLperformance(mrIMLobj)

Arguments:

mrIMLobj: A list object created by mrIMLpredicts().
Value:

A list with two slots:

$model_performance: A tibble of commonly used metrics that can be used to compare the performance of the response models. Performance metrics are based on the test data defined during mrIMLpredicts().

$global_performance_summary: A single global performance metric, computed as the average of a performance metric over all response models. MCC is used for classification models and RMSE for regression models.
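Because every response model is summarised in the same tibble, the outputs of two different mrIML fits can be stacked for a side-by-side comparison. The sketch below is illustrative only: fit_rf and fit_lm are hypothetical objects returned by mrIMLpredicts() for two different learners fitted to the same responses.

library(dplyr)

# Hypothetical: two mrIMLpredicts() fits of the same responses with different learners
perf_rf <- mrIMLperformance(fit_rf)
perf_lm <- mrIMLperformance(fit_lm)

# Stack the per-response metric tibbles and label which learner each row came from
comparison <- bind_rows(
  rf = perf_rf$model_performance,
  lm = perf_lm$model_performance,
  .id = "learner"
)
comparison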
Examples:

library(mrIML)
library(tidymodels)

data <- MRFcov::Bird.parasites
# Prepare data for mrIML
Y <- data %>%
select(-scale.prop.zos) %>%
select(order(everything()))
X <- data %>%
select(scale.prop.zos)
X1 <- Y # Responses are also used as predictors in a graph network (GN) model

# Specify the random forest learner
model_rf <- rand_forest(
trees = 50, # 50 trees are set for brevity. Aim to start with 1000
mode = "classification",
mtry = tune(),
min_n = tune()
) %>%
set_engine("randomForest")
# Fit the GN model
GN_model_rf <- mrIMLpredicts(
X = X,
Y = Y,
X1 = X1,
Model = model_rf,
prop = 0.7,
k = 2,
racing = FALSE
)
perf <- mrIMLperformance(GN_model_rf)
perf[[1]] # $model_performance: per-response metrics on the test data
perf[[2]] # $global_performance_summary: average MCC across the response models
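The same results can be retrieved by slot name, and the per-response tibble can be sorted to see which responses are modelled best. This is a hedged sketch: the mcc column name is assumed for a classification fit and may differ between mrIML versions.

# Equivalent access by slot name
perf$model_performance
perf$global_performance_summary

# Sort responses from best to worst MCC ('mcc' column name is assumed)
perf$model_performance %>%
  dplyr::arrange(dplyr::desc(mcc))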