eval_chunks <- as.logical(Sys.getenv("local_vignette_build", FALSE))
# Change this via `Sys.setenv(local_vignette_build = "TRUE")`
if (!eval_chunks)
  cat(
    "(These documents take a long time to create, so only the code",
    "is shown here. The full version is at",
    "[https://tidymodels.github.io/tidyposterior](https://tidymodels.github.io/tidyposterior).)"
  )
library(tidyposterior)
library(ggplot2)
library(tidyverse)

theme_set(theme_bw())
options(width = 100, digits = 3)
The example that we will use here is from the analysis of a fairly large classification data set using 10-fold cross-validation with three models. Looking at the accuracy values, the differences are pretty clear. For the area under the ROC curve:
library(tidyposterior)

data("precise_example")

library(tidyverse)

rocs <- precise_example %>%
  select(id, contains("ROC")) %>%
  setNames(tolower(gsub("_ROC$", "", names(.))))
rocs

library(ggplot2)

rocs_stacked <- gather(rocs)

ggplot(rocs_stacked, aes(x = model, y = statistic, group = id, col = id)) +
  geom_line(alpha = .75) +
  theme(legend.position = "none")
Since the lines are fairly parallel, there is likely to be a strong resample-to-resample effect. Note that the variation is fairly small; the within-model results don't vary much and are not near the ceiling of performance (i.e., an AUC of one). It also seems pretty clear that the models produce different levels of performance, but we'll use this package to quantify that. Finally, the variation appears roughly equal for each model despite the differences in performance.
Since `rocs` was produced by the `rsample` package, it is ready to use with `tidyposterior`, which has a method for these objects.
The main question is:
When looking at resampling results, are the differences between models "real"?
To answer that, a model can be created where the outcome is the resampling statistics (the area under the ROC curve in this example). These values are explained by the model types. In doing this, we can get parameter estimates for each model's effect on the resampled ROC values and make statistical (and practical) comparisons between models.
We will try a simple linear model with Gaussian errors that has a random effect for the resamples so that the within-resample correlation can be estimated. Although the outcome is bounded on the interval [0, 1], these estimates are far enough from the boundary, and precise enough, that a Gaussian model should fit reasonably well.
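To make the structure of that model concrete, here is a rough sketch of the kind of `stan_glmer` call that corresponds to this description, using the stacked data from above. The exact call and priors that the package uses may differ; this is only an illustration:

library(rstanarm)

# Roughly the hierarchical model described above: a fixed effect for the
# model type and a random intercept for each resample, with Gaussian errors.
roc_sketch <- stan_glmer(
  statistic ~ model + (1 | id),
  data = rocs_stacked,
  family = gaussian,
  seed = 2824
)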
To fit the model, `perf_mod` will be used; under the hood, it calls the `stan_glmer` function from the `rstanarm` package:
roc_model <- perf_mod(rocs, seed = 2824)
# Two objects are saved to be used in the man files
# `build_site` runs these in `tidyposterior/docs/articles`
try(
  save(roc_model, file = "../../inst/examples/roc_model.RData", compress = TRUE),
  silent = TRUE
)
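Since `perf_mod` forwards additional arguments to the underlying fitting function, the Stan sampling controls can be adjusted when the defaults are not enough. This is a sketch with illustrative values, not a recommendation:

# Hypothetical refit with explicit chain and iteration settings;
# these arguments are passed along to rstanarm::stan_glmer()
roc_model_long <- perf_mod(rocs, seed = 2824, chains = 4, iter = 4000)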
The `stan_glmer` model is contained in the `stan` element of this object.
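For example, the embedded fit can be pulled out and examined with the usual `rstanarm` methods (assuming, as noted above, that it lives in the `stan` element):

# Print the underlying stanreg fit
print(roc_model$stan, digits = 3)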
To evaluate the validity of this fit, the `shinystan` package can be used to generate an interactive assessment of the model results.
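A minimal sketch of launching that interactive assessment (not run here because it opens a Shiny application):

# Not run: opens an interactive Shiny app with MCMC diagnostics
if (interactive()) {
  library(shinystan)
  launch_shinystan(roc_model$stan)
}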
One other thing that we can do is to examine the posterior distributions to see if they make sense in terms of the range of values. The `tidy` function can be used to extract the distributions into a simple data frame:
roc_post <- tidy(roc_model)
head(roc_post)
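To check that the posteriors sit in a plausible range, we can also summarize them per model. This sketch assumes that the tidied samples are stored in a column named `posterior` alongside `model`:

# Posterior mean and a 90% credible interval for each model
roc_post %>%
  group_by(model) %>%
  summarise(
    post_mean = mean(posterior),
    lower = quantile(posterior, 0.05),
    upper = quantile(posterior, 0.95)
  )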
There is a basic `ggplot` method for this object, and we can overlay the observed statistics for each model:
ggplot(roc_post) +
  # Add the observed data to check for consistency
  geom_point(
    data = rocs_stacked,
    aes(x = model, y = statistic),
    alpha = .5, col = "blue"
  )
These results look fairly reasonable given that we estimated a common variance for each of the models.
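If the spread had looked very different across models, `perf_mod` has a `hetero_var` option to estimate a separate variance for each model. A sketch (this makes the model more complex and can be harder to fit):

# Hypothetical alternative fit with model-specific variances
roc_hetero <- perf_mod(rocs, seed = 2824, hetero_var = TRUE)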
We'll compare the generalized linear model with the neural network. Before doing so, it helps to specify what a real difference between models would be. Suppose that a 2% increase in the area under the ROC curve was considered to be a substantive result. We can add this into the analysis.
First, we can compute the posterior for the difference in the area under the ROC curve for the two models (parameterized as `nnet - glm`):
glm_v_nnet <- contrast_models(roc_model, "nnet", "glm")
head(glm_v_nnet)
try(
  save(glm_v_nnet, file = "../../inst/examples/glm_v_nnet.RData", compress = TRUE),
  silent = TRUE
)
The `summary` function can be used to quantify this difference. It has an argument called `size` where we can add our belief about the size of a true difference.
summary(glm_v_nnet, size = 0.02)
The `probability` column indicates the proportion of the posterior distribution that is greater than zero. This result indicates that the entire distribution is above zero. The credible intervals reflect the large difference in the area under the ROC curves for these models.
Before discussing the ROPE estimates, let's plot the posterior distribution of the differences:
ggplot(glm_v_nnet, size = 0.02)
The `pract_neg` column reflects the area where the posterior distribution is less than -2% (i.e., practically negative). Similarly, the `pract_pos` column shows that most of the area is greater than 2%, which leads us to believe that this is truly a substantial difference in performance. The `pract_equiv` column reflects how much of the posterior is between [-2%, 2%]. If this were near one, it might indicate that the models are not practically different (based on the yardstick of 2%).
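These ROPE summaries can also be reproduced directly from the posterior of the contrast; the sketch below assumes the sampled differences are stored in a column named `difference` in `glm_v_nnet`:

# Proportion of the posterior in each region, using the 2% practical effect size
size <- 0.02
with(
  glm_v_nnet,
  c(
    pract_neg   = mean(difference < -size),
    pract_equiv = mean(abs(difference) <= size),
    pract_pos   = mean(difference > size)
  )
)

If these proportions roughly match the `summary` output above, the size-based interpretation is consistent.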