```r
knitr::opts_chunk$set(
  message = FALSE,
  warning = FALSE,
  collapse = TRUE,
  comment = "#>"
)
```
```r
library(dplyr)
library(knitr)
library(tntpmetrics)
```
TNTP maintains two versions of the leader and teacher high expectations questions. The current version contains four questions, all of which are positively coded. The old version contains six questions, only two of which are positively coded. Each version has its own `metric` name:

- Teacher or Leader Expectations, CURRENT version (4 questions): `metric = 'expectations'`, with required columns `exp_fairtomaster`, `exp_oneyearenough`, `exp_allstudents`, and `exp_appropriate`.
- Teacher or Leader Expectations, OLD version (6 questions): `metric = 'expectations_old'`.
This vignette focuses on the current version, but the process works the same for the old version.
Let's create a fake dataset to work with.
```r
# Set a seed so the simulated data is reproducible
set.seed(42)

n <- 300

teacher_expectations <- data.frame(
  id = 1:n,
  exp_fairtomaster  = sample(0:5, size = n, replace = TRUE,
                             prob = c(.10, .15, .20, .20, .20, .15)),
  exp_oneyearenough = sample(0:5, size = n, replace = TRUE,
                             prob = rev(c(.10, .15, .20, .20, .20, .15))),
  exp_allstudents   = sample(0:5, size = n, replace = TRUE),
  exp_appropriate   = sample(0:5, size = n, replace = TRUE,
                             prob = c(.05, .10, .25, .30, .20, .10)),
  # grouping column, used later for group-level means and comparisons
  teacher_group = sample(c('A', 'B', 'C'), size = n, replace = TRUE)
)
```
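Before scoring, it can be useful to confirm the data includes all four required columns. This is just a quick base-R check (`required_cols` is a local helper name, not part of the package):

```r
# Columns required by the current expectations version
required_cols <- c("exp_fairtomaster", "exp_oneyearenough",
                   "exp_allstudents", "exp_appropriate")

# TRUE when every required column is present
all(required_cols %in% names(teacher_expectations))
```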
We'll first calculate whether each teacher has high expectations of students.
```r
teacher_expectations %>%
  make_metric(metric = "expectations") %>%
  head() %>%
  kable()
```
The column `cm_expectations` is the teacher's expectations score: the sum of all the expectations columns. `cm_binary_expectations` is a boolean that is `TRUE` if the teacher has high expectations and `FALSE` otherwise. Teachers have high expectations if their expectations score exceeds the cutoff, which is 11 for the current expectations version: teachers with scores of 11 or below do not have high expectations, while those with scores above 11 do.
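To make the scoring concrete, here is a minimal sketch of that logic in plain dplyr. It mirrors what `make_metric` reports but is not the package's internal code; `manual_score` and `manual_binary` are illustrative names:

```r
# Recreate the scoring by hand: sum the four items, then apply the cutoff
teacher_expectations %>%
  mutate(
    manual_score  = exp_fairtomaster + exp_oneyearenough +
      exp_allstudents + exp_appropriate,
    manual_binary = manual_score > 11  # TRUE when the score exceeds the cutoff
  ) %>%
  head()
```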
We can use `metric_mean` to calculate the percentage of teachers with high expectations, with standard errors included. Note that `use_binary` is set to `TRUE` so that we get the percentage of teachers with high expectations; if we simply wanted the average expectations score, we would set this parameter to `FALSE`.
```r
expectations_mean <- metric_mean(teacher_expectations, metric = "expectations", use_binary = TRUE)

expectations_mean
```
The code below saves the mean value as an R object.
```r
expectations_mean_value <- summary(expectations_mean[['Overall mean']])$emmean

round(expectations_mean_value, 2)
```
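Because `metric_mean` reports standard errors as well, the same `summary()` call can pull those out. This assumes the summary behaves like a typical emmeans summary with an `SE` column, which the `$emmean` extraction above suggests:

```r
# Extract the standard error alongside the point estimate
overall_summary <- summary(expectations_mean[['Overall mean']])
overall_summary$SE
```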
`metric_mean` can also be used to calculate percentages by group, along with standard errors and group comparisons.
```r
group_expectations_mean <- metric_mean(
  teacher_expectations,
  metric = "expectations",
  equity_group = "teacher_group",
  by_class = FALSE,
  use_binary = TRUE
)

group_expectations_mean
```
Now, let's tidy up these results by placing them in a data frame.
```r
summary(group_expectations_mean[['Group means']]) %>%
  as_tibble() %>%
  kable()
```
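As a rough cross-check of these model-based group means, we can compute the raw share of high-expectations teachers per group with plain dplyr. This assumes, as the scored output above suggests, that `make_metric` returns the original columns (including `teacher_group`) with the `cm_` columns appended:

```r
# Raw (unadjusted) share of high-expectations teachers in each group
teacher_expectations %>%
  make_metric(metric = "expectations") %>%
  group_by(teacher_group) %>%
  summarize(pct_high_expectations = mean(cm_binary_expectations)) %>%
  kable()
```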
And let's tidy the group comparisons in the same way.
```r
summary(group_expectations_mean[['Difference(s) between groups']]) %>%
  as_tibble() %>%
  kable()
```
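From here the comparisons can be manipulated like any other data frame. For example, assuming the tidied output includes a `p.value` column, as emmeans contrast summaries typically do, we could keep only the differences that are significant at the 5% level:

```r
# Keep only group differences significant at the 5% level
summary(group_expectations_mean[['Difference(s) between groups']]) %>%
  as_tibble() %>%
  filter(p.value < 0.05) %>%
  kable()
```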