knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
# pacman::p_load(relevance)
# library(ReplicationRelevance)
# devtools::document()
devtools::load_all()
library(relevance)
In this document, I present the analysis for the master's thesis project step by step. The data have already been cleaned and prepared for analysis.
This study appears to have been artificially dichotomised. The original effect size is reconstructed from the contingency table reported in the original study.
scales.original <- data.frame(
  tv = as.factor(c(rep(0, 57), rep(1, 11), rep(0, 40), rep(1, 24))),
  category = as.factor(c(rep(0, 68), rep(1, 64)))
)
# table(scales.original) looks correct (like in the original paper)
scales.original.glm <- glm(formula = tv ~ category, data = scales.original,
                           family = "binomial")
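As a quick sanity check (my own addition, not part of the package workflow), the original odds ratio can be recovered from the fitted logistic regression:

# Odds ratio implied by the reconstructed table:
# (24/40) / (11/57) = 1368/440, roughly 3.11.
exp(coef(scales.original.glm)["category1"])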
Here I load the data from the package.
data(scales, package = "ReplicationRelevance")
Now I manually create a study object, as defined in the file study-obj.R.
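For orientation, here is a minimal sketch of what the S4 class in study-obj.R might look like. The slot names are inferred from how the object is used in this document, and the types are my assumptions; the actual definition in the package may differ:

# Hypothetical sketch of the "study" class; see study-obj.R for the
# actual definition. Slot names inferred from usage, types assumed.
setClass("study", slots = c(
  name = "character",              # short study identifier
  manyLabs = "character",          # replication project ("ml1", "ml5", ...)
  data.replication = "data.frame",
  data.revised = "data.frame",     # only used for ML5 studies
  original = "ANY",                # summary of the original study
  variables = "list",              # dv, iv, and effect size measure
  var.of.interest = "character",
  relevance.threshold = "numeric",
  family = "character",            # glm family
  mixedModels = "ANY",             # filled in by MixedModels.study()
  table = "ANY",                   # filled in by relevance_table.study()
  difference.table = "ANY"         # filled in by effect_differences.study()
))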
scales.study <- new("study", name = "scales", # name of the study manyLabs = "ml1", # which replication project it belongs to data.replication = as.data.frame(scales), # data of the replication original = c(inference(scales.original.glm)["category1",], "n.total" = nobs(scales.original.glm)), # summary of the original study variables = list(dv = "scales", iv = "scalesgroup", measure = "OR" # certain information to build the models ), var.of.interest = "scalesgroup", # variable of interest relevance.threshold = .1, # study specific effect size threshold family = "binomial" # distribution family of the model )
We can summarise the data:
summary(scales.study)
It looks like some categories are heavily under-represented. Not good.
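To see the imbalance directly, we can cross-tabulate the outcome and the condition (column names assumed from the variables list above):

# Cross-tabulate outcome and condition to show the sparse cells
# (column names taken from the variables list; an assumption on my part).
table(scales$scales, scales$scalesgroup)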
Let's fit the MEMo (mixed-effects model) and look at the diagnostic plot.
scales.study@mixedModels <- MixedModels.study(scales.study)
# diagnosticPlot.study(scales.study)
OK, let's calculate some logits, plot them, and plot their difference from the original logit.
scales.study@table <- relevance_table.study(scales.study)
scales.study@difference.table <- effect_differences.study(scales.study)
plot_estimates.study(scales.study, cutoff = 5)
plot_difference.study(scales.study, cutoff = 5)
plot_both.study(scales.study, cutoff.diff = 5, cutoff.est = 5)
create_pub_tables.study(scales.study)
Next, the ML5 replication of Albarracin et al., Study 5 [@albarracin2008], by Chartier, Brumbaugh, Garinther, and colleagues: https://osf.io/a9hk2/
First, we load the data:
data(alb5_rep, package = "ReplicationRelevance")
data(alb5_rev, package = "ReplicationRelevance")
Create study object for Albarracin Study 5:
alb5 <- new("study", name = "alb5", manyLabs = "ml5", data.revised = as.data.frame(alb5_rev), data.replication = as.data.frame(alb5_rep), original = list(m.1 = 12.83, m.2 = 10.78, sd.1 = 1.86, sd.2 = 3.15, n.1 = 18, n.2 = 18), variables = list("dv" = "SATTotal", "iv" = "Condition", "measure" = "SMD"), var.of.interest = "Condition", relevance.threshold = .1, family = "gaussian" ) summary(alb5)
alb5@mixedModels <- MixedModels.study(alb5)
# diagnosticPlot.study(alb5)
alb5@table <- relevance_table.study(alb5)
alb5@difference.table <- effect_differences.study(alb5, standardise = TRUE)
plot_estimates.study(alb5)
plot_difference.study(alb5, ci.type = "newcombe")
plot_difference.study(alb5, ci.type = "wald")
plot_both.study(alb5)
create_pub_tables.study(alb5)
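For context: the Wald interval for the difference pools the two standard errors, while a Newcombe-style (square-and-add, MOVER) interval combines the two marginal CIs. A generic sketch of the latter, with illustrative numbers rather than values from this study (the package's exact implementation may differ):

# Square-and-add (MOVER) CI for a difference d = est1 - est2,
# built from the two marginal CIs; numbers are illustrative only.
est1 <- 0.79; l1 <- 0.12; u1 <- 1.46   # original estimate and CI
est2 <- 0.15; l2 <- -0.05; u2 <- 0.35  # replication estimate and CI
d <- est1 - est2
c(lower = d - sqrt((est1 - l1)^2 + (u2 - est2)^2),
  upper = d + sqrt((u1 - est1)^2 + (est2 - l2)^2))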
data(lobue, package = "ReplicationRelevance")
lobue.study <- new("study", name = "lobue", manyLabs = "ml5", data.revised = as.data.frame(subset(lobue, protocol == "RP")), data.replication = as.data.frame(subset(lobue, protocol == "NP")), original = data.frame("estimate" = .23, "ciLow" = 0.07, "ciUp" = 0.49, "n.total" = 48), variables = list(dv = "RT.correct", iv = "target_stimulus*child_parent*snake_experience", measure = "drop"), var.of.interest = "child_parent*snake_experience", relevance.threshold = .1, family = "gaussian" )
> LoBue and DeLoache (2008) reported a significant main effect for target stimulus with a sizeable effect size (η² = .23), meaning 23% of the variance in reaction time could be explained as a function of which condition participants were in. Our effect size for the main effect of condition was η² = .032, or only 3.2% of the variance in reaction time could be explained as a function of participant condition.
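As a side note (my addition), the two η² values can be put on Cohen's f² scale, which is sometimes easier to compare against conventional benchmarks:

# Convert eta-squared to Cohen's f^2: f2 = eta2 / (1 - eta2).
eta2 <- c(original = 0.23, replication = 0.032)
round(eta2 / (1 - eta2), 3)  # ~0.299 vs ~0.033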
lobue.study@table <- relevance_table.study(lobue.study)
plot_estimates.study(lobue.study)
For the drop effect, we need to supply the full model's predictors and the comparison model's predictors; we then compare the R^2 of the two models.
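A minimal sketch of the idea (plain lm fits rather than the package's mixed-model internals; variable names taken from the study object above):

# Sketch of the "drop" effect: fit the full model and a reduced model
# without the terms of interest, then take the difference in R^2.
full    <- lm(RT.correct ~ target_stimulus * child_parent * snake_experience,
              data = lobue)
reduced <- lm(RT.correct ~ target_stimulus, data = lobue)
summary(full)$r.squared - summary(reduced)$r.squared  # drop in R^2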
I used a bootstrapped normal CI around the bias-corrected R^2 estimate, while plotting the observed R^2. Hence, the CI does not look symmetric around the observed R^2.
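A sketch of that CI construction with the boot package (illustrative only; the actual computation happens inside the package):

# Bootstrap the R^2 drop; boot.ci(type = "norm") centres the normal CI
# on the bias-corrected estimate, hence the asymmetry around the
# observed value.
library(boot)
drop.r2 <- function(data, idx) {
  d <- data[idx, ]
  full    <- lm(RT.correct ~ target_stimulus * child_parent * snake_experience,
                data = d)
  reduced <- lm(RT.correct ~ target_stimulus, data = d)
  summary(full)$r.squared - summary(reduced)$r.squared
}
b <- boot(lobue, drop.r2, R = 1999)
boot.ci(b, type = "norm")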
create_pub_tables.study(lobue.study)