Description

Generates a tibble of performance measures by model and measure, arranged by score.
Usage

get_performance(models, test_x, test_y, format = "long")
Arguments

models: list of models of class
test_x: 'data.frame' or 'tibble' of explanatory variables
test_y: vector of the target variable
format: "long" for long format (default), or "wide" for wide format
Value

This function returns a tibble of model performance including columns:

model: model name
measure: machine learning metric
score: double, usually in a range between 0 and 1

When format = "wide", the function returns a tibble including columns:

Type
Accuracy - overall performance of the model
Accuracy Lower
Accuracy Null
Accuracy P Value
Accuracy Upper
Balanced Accuracy
Detection Prevalence
Detection Rate
F1 - hybrid metric, useful for unbalanced classes
Kappa - compares an observed accuracy with an expected accuracy
McNemar P Value
Negative Predictive Value
Positive Predictive Value
Precision - how accurate the positive predictions are
Prevalence
Recall - true positive rate; the number of instances from the positive class that were actually predicted correctly
Sensitivity - same as recall
Specificity - the number of instances from the negative class that were actually predicted correctly
Method - the algorithm used to train each particular model
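To illustrate how such long-format scores relate to raw predictions, the sketch below computes a few of the measures listed above in base R and arranges them by score. This is an assumed illustration, not the package's actual implementation; the vectors `pred` and `truth` and the name "example_model" are hypothetical.

```r
# Hypothetical predicted and actual class labels for one model
pred  <- factor(c(1, 1, 0, 1, 0, 0), levels = c(0, 1))
truth <- factor(c(1, 0, 0, 1, 0, 1), levels = c(0, 1))

# Confusion matrix counts
tab <- table(Predicted = pred, Actual = truth)
tp <- tab["1", "1"]; tn <- tab["0", "0"]
fp <- tab["1", "0"]; fn <- tab["0", "1"]

# Long-format measure table: one row per (model, measure) pair
measures <- data.frame(
  model   = "example_model",
  measure = c("Accuracy", "Precision", "Recall", "F1"),
  score   = c(
    (tp + tn) / sum(tab),        # overall accuracy
    tp / (tp + fp),              # precision
    tp / (tp + fn),              # recall / sensitivity
    2 * tp / (2 * tp + fp + fn)  # F1
  )
)

# Arranged by score, mirroring get_performance's ordering
measures[order(-measures$score), ]
```
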
Author(s)

Dallin Webb <dallinwebb@byui.edu>
See Also

extract_measures
Examples

## Not run:
# Long format
p <- get_performance(models_list, test_x, test_y)
p
p %>% filter(measure == "F1")
# Wide format
get_performance(models_list, test_x, test_y, format = "wide")
## End(Not run)