metric_analyses: Compute single common metric mean, difference between groups,...


Description

metric_mean computes the mean common metric score at a single point in time. metric_growth computes the mean change in the common metric score between two points in time. Both functions can disaggregate results based on a group characteristic used for equity comparisons. They can also account for metrics where multiple data points come from the same classroom, like those based on student surveys or assignments.

Usage

metric_mean(
  data,
  metric,
  use_binary = F,
  equity_group = NULL,
  by_class = F,
  scaleusewarning = T
)

metric_growth(
  data1,
  data2,
  metric,
  use_binary = F,
  equity_group = NULL,
  by_class = F,
  scaleusewarning = T
)

Arguments

data

Data from a single timepoint. Used in metric_mean.

metric

Quoted name of the common metric. Options are "engagement", "belonging", "relevance", "expectations", "expectations_old", "assignments", "tntpcore", or "ipg".

use_binary

A logical (T/F) option to use the binary version of the metric. The default is FALSE, so that the mean and growth calculations are based on the overall metric value. If you want these calculations done on the binary version (e.g., looking at the percent of teachers with 'high expectations' rather than the average expectations score), set this option to TRUE. Note that the metric tntpcore has no binary version.
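For instance, the difference between the two settings looks like this (a sketch, assuming the ss_data_final example data used in the Examples below):

```r
# Mean engagement score across the project (the default, use_binary = FALSE)
metric_mean(ss_data_final, metric = "engagement", by_class = TRUE)

# Percent of surveys meeting the binary 'engaged' threshold instead
metric_mean(ss_data_final, metric = "engagement", use_binary = TRUE, by_class = TRUE)
```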

equity_group

Optional quoted name of the categorical column/variable in data that contains the equity group designation. For example, if data has an indicator variable called class_frl that is either "Under 50% FRL" or "Over 50% FRL", then the analyst could set equity_group = "class_frl" to get differences in the metric between these two groups. Default is no equity comparison.

by_class

A logical (T/F) option indicating if multiple rows of data come from the same class. When by_class = T, the analysis will automatically account for different sample sizes between classes and adjust the standard errors to account for the lack of independence between data deriving from the same class. If set to TRUE, data must have a variable titled class_id. Default is FALSE.

scaleusewarning

A logical (T/F) indicating whether the function should generate a warning when not all values of a scale are used. For example, student survey data that only contains values of 1s and 2s could mean that data is on a 1-4 scale, when it should be on a 0-3 scale. When scaleusewarning = T, the function will warn you of this. This warning does not mean your data is wrong. For example, the Academic Ownership domain from TNTP CORE has 5 potential values: 1, 2, 3, 4, or 5. It's not uncommon to have data where teachers were never rated above a 4 on this domain. In this case, the printed warning can be ignored. Default is TRUE. If you are confident your data is on the right scale, you can suppress the warning by setting this to FALSE.
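Suppressing the warning is a one-argument change (a sketch, assuming the ss_data_final example data used in the Examples below):

```r
# Skip the scale-use check when you are confident the data are on the
# correct scale
metric_mean(ss_data_final, metric = "engagement", by_class = TRUE,
            scaleusewarning = FALSE)
```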

data1

Data from the initial timepoint. Used in metric_growth.

data2

Data from the final timepoint. Used in metric_growth.

Value

A list of results including the overall mean or mean by equity group (for metric_mean), or the mean change over time or mean change for each group (for metric_growth). Means are accompanied by standard errors and 95% confidence intervals. Also included are list elements for the number of data points used in the analysis.

Data and Variable Format

metric_mean and metric_growth should be used with the raw metric data. Each row of data should represent a single rated outcome. For example, each row of data will be a single completed survey, a single rated assignment, a single classroom observation, etc. The data should not have the metric already calculated but instead have the components needed to make this calculation. For example, data on student engagement should not have a column or variable titled engagement, but should have variables corresponding to the four survey questions used to calculate engagement. Leave all items in their raw form - the functions automatically account for items that need to be reverse coded. The only requirement is that the data contains the needed variables and that the variables are numeric (i.e., data values should be 0s and 1s, not 'No' and 'Yes'). This ensures that the common metrics are calculated correctly and consistently across projects. Each metric has its own set of needed variables that must be spelled exactly as shown below. They are:

engagement:

eng_like, eng_losttrack, eng_interest, eng_moreabout

belonging:

tch_problem, bel_ideas, bel_fitin, tch_interestedideas

relevance:

rel_asmuch, rel_future, rel_outside, rel_rightnow

expectations:

exp_fairtomaster, exp_oneyearenough, exp_allstudents, exp_appropriate

expectations_old:

exp_allstudents, exp_toochallenging, exp_oneyear, exp_different, exp_overburden, exp_began

tntpcore:

ec, ao, dl, cl

ipg:

form, grade_level, ca1_a, ca1_b, ca1_c, ca2_overall, ca3_overall, col. K-5 Literacy observations must also have rfs_overall. Science observations must also have ca1_d, ca1_e, ca1_f, and science_filter.

assignments:

content, relevance, practice

Note that these are the NAMES of the variables needed in your data. It is okay if some of these variables have NA values for specific rows. For example, K-5 Literacy observations on the IPG require either all of the Core Actions (ca1_a, ca1_b, ca1_c, ca2_overall, ca3_overall) and/or rfs_overall. If an observation has all the Core Actions, it still needs a variable called rfs_overall, but the value can just be NA. See vignette("analyzing_metrics") for more details.

Note on Expectations: The items used to measure expectations shifted from a collection of six, mostly reverse-worded items to four positively worded items. Both expectations metrics are available, with the current 4-item metric known as "expectations" and the older 6-item metric known as "expectations_old". See vignette("analyzing_metrics") for more details.
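To illustrate the expected input shape, a hypothetical engagement data set might look like the following (column names are the required engagement variables above plus class_id; the values are purely illustrative):

```r
# Hypothetical raw engagement data: one row per completed student survey,
# items left in raw numeric form (the functions handle reverse coding)
ss_data <- data.frame(
  class_id      = c("A", "A", "B"),
  eng_like      = c(2, 3, 1),
  eng_losttrack = c(1, 2, 0),
  eng_interest  = c(3, 3, 2),
  eng_moreabout = c(2, 1, 3)
)

# metric_mean(ss_data, metric = "engagement", by_class = TRUE)
```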

Examples

# Compute the mean engagement score for an entire project at a single time point. Setting
# by_class = TRUE because multiple surveys come from the same class.
metric_mean(ss_data_final, metric = "engagement", by_class = TRUE)

# Do the same, but now compare results by a class's FRL population
metric_mean(ss_data_final, metric = "engagement", equity_group = "class_frl_cat", by_class = TRUE)

# Look at change in engagement over time, then look at how differences in engagement between a
# class's FRL population change over time
metric_growth(
  ss_data_initial,
  ss_data_final,
  metric = "engagement",
  by_class = TRUE
)

metric_growth(
  ss_data_initial,
  ss_data_final,
  metric = "engagement",
  equity_group = "class_frl_cat",
  by_class = TRUE
)

adamMaier/tntpmetrics documentation built on Feb. 1, 2022, 1:03 p.m.