metric_mean computes the mean common metric score at a single point in time. metric_growth
computes the mean change in the common metric score between two points in time. Both functions
can disaggregate results by a group characteristic used for equity comparisons. They can also
account for metrics where multiple data points come from the same classroom, such as those based
on student surveys or assignments.
metric_mean(
  data,
  metric,
  use_binary = FALSE,
  equity_group = NULL,
  by_class = FALSE,
  scaleusewarning = TRUE
)

metric_growth(
  data1,
  data2,
  metric,
  use_binary = FALSE,
  equity_group = NULL,
  by_class = FALSE,
  scaleusewarning = TRUE
)
data
  Data from a single timepoint. Used in metric_mean().

metric
  Quoted name of the common metric. Options are "engagement", "belonging", "relevance",
  "expectations", "expectations_old", "assignments", "tntpcore", or "ipg".

use_binary
  A logical (T/F) option to use the binary version of the metric. The default is FALSE so that
  the mean and growth calculations are based on the overall metric value. If you want these
  calculations done on the binary version (e.g., looking at the percent of teachers with high
  expectations rather than the average expectations score), set this option to TRUE. Note that
  the metric tntpcore has no binary version. A short sketch using this option appears after the
  argument list.

equity_group
  Optional quoted name of the categorical column/variable in data that contains the equity group
  designation, for example an indicator of a class's FRL population such as the class_frl_cat
  variable used in the Examples.

by_class
  A logical (T/F) option indicating whether multiple rows of data come from the same class. When
  by_class = TRUE, results account for multiple data points (e.g., surveys or assignments) coming
  from the same classroom.

scaleusewarning
  A logical (T/F) indicating whether the function should generate a warning when not all values
  of a scale are used. For example, student survey data that only contains values of 1s and 2s
  could mean the data is on a 1-4 scale when it should be on a 0-3 scale. When
  scaleusewarning = FALSE, this warning is suppressed.

data1
  Data from the initial timepoint. Used in metric_growth().

data2
  Data from the final timepoint. Used in metric_growth().
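A minimal sketch combining these options, reusing the ss_data_final example data and the
"engagement" metric from the Examples section; all names come from this page, so treat the call
as illustrative rather than a prescribed workflow:

metric_mean(
  ss_data_final,
  metric = "engagement",
  use_binary = TRUE,        # use the binary version of the metric instead of the overall score
  by_class = TRUE,          # multiple surveys come from the same class
  scaleusewarning = FALSE   # suppress the warning about unused scale values
)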
A list of results including the overall mean or the mean by equity group (for metric_mean), or
the mean change over time or the mean change for each group (for metric_growth). Means are
accompanied by standard errors and 95% confidence intervals. Also included are list elements for
the number of data points used in the analysis.
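Because the exact element names of the returned list are not spelled out on this page, one way to
explore a result is with base R (a sketch, reusing the ss_data_final example data):

results <- metric_mean(ss_data_final, metric = "engagement", by_class = TRUE)
names(results)   # element names (means, standard errors, counts, etc.)
str(results)     # full structure of the returned list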
metric_mean and metric_growth should be used with the raw metric data. Each row of data should
represent a single rated outcome. For example, each row of data will be a single completed
survey, a single rated assignment, a single classroom observation, etc. The data should not have
the metric already calculated but should instead have the components needed to make this
calculation. For example, data on student engagement should not have a column or variable titled
engagement, but should have variables corresponding to the four survey questions used to
calculate engagement. Leave all items in their raw form - the functions automatically account for
items that need to be reverse coded. The only requirement is that the data contains the needed
variables and that the variables are numeric (i.e., data values should be 0s and 1s, not 'No' and
'Yes'). This ensures that the common metrics are calculated correctly and consistently across
projects. Each metric has its own set of needed variables that must be spelled exactly as shown
below; a small sketch of raw engagement data follows the list. They are:
engagement: eng_like, eng_losttrack, eng_interest, eng_moreabout
belonging: tch_problem, bel_ideas, bel_fitin, tch_interestedideas
relevance: rel_asmuch, rel_future, rel_outside, rel_rightnow
expectations: exp_fairtomaster, exp_oneyearenough, exp_allstudents, exp_appropriate
expectations_old: exp_allstudents, exp_toochallenging, exp_oneyear, exp_different, exp_overburden, exp_began
tntpcore: ec, ao, dl, cl
ipg: form, grade_level, ca1_a, ca1_b, ca1_c, ca2_overall, ca3_overall, col. K-5 Literacy observations must also have rfs_overall. Science observations must also have ca1_d, ca1_e, ca1_f, and science_filter.
assignments: content, relevance, practice
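For instance, a minimal sketch of raw engagement data in the expected format. The 0-3 response
values mirror the student survey scale mentioned under scaleusewarning, and class_id is a
hypothetical identifier added only for illustration, not one of the required metric items:

raw_engagement <- data.frame(
  class_id      = c("A", "A", "B"),   # hypothetical class identifier, for illustration only
  eng_like      = c(2, 3, 1),
  eng_losttrack = c(1, 2, 0),
  eng_interest  = c(3, 3, 2),
  eng_moreabout = c(2, 1, 3)
)
# Each row is one completed survey; there is no pre-computed 'engagement' column.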
Note that the items listed above are the NAMES of the variables needed in your data. It is okay
if some of these variables have NA values for specific rows. For example, K-5 Literacy
observations on the IPG require either all of the Core Actions (ca1_a, ca1_b, ca1_c, ca2_overall,
ca3_overall) and/or rfs_overall. If an observation has all the Core Actions, it still needs a
variable called rfs_overall, but the value can just be NA. See vignette("analyzing_metrics")
for more details.
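To illustrate, a hedged sketch of a single K-5 Literacy IPG observation where all Core Actions
are rated, so the rfs_overall column exists but is NA. The specific codings of form, grade_level,
and the ratings below are placeholders, not values prescribed by this page:

ipg_obs <- data.frame(
  form        = "K-5 Literacy",   # placeholder; use whatever coding your project's IPG data uses
  grade_level = 3,
  ca1_a = 1, ca1_b = 1, ca1_c = 0,
  ca2_overall = 2, ca3_overall = 2,
  col = 3,
  rfs_overall = NA                # the column must exist even when only the Core Actions are rated
)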
Note on Expectations. The items used to measure expectations shifted from a collection of six,
mostly reverse-coded items to four positively worded items. Both expectations metrics are
available, with the current 4-item expectations metric known as "expectations" and the older
6-item expectations metric known as "expectations_old". See vignette("analyzing_metrics")
for more details.
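As a quick sketch of the two options, where tch_data stands in for a hypothetical teacher survey
data frame containing the expectations items (it is not shipped with this documentation):

metric_mean(tch_data, metric = "expectations")       # current 4-item metric
metric_mean(tch_data, metric = "expectations_old")   # older, mostly reverse-coded 6-item metric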
# Compute the mean engagement score for an entire project at a single time point. Setting
# by_class = TRUE because multiple surveys come from the same class.
metric_mean(ss_data_final, metric = "engagement", by_class = TRUE)
# Do the same, but now compare results by a class's FRL population
metric_mean(ss_data_final, metric = "engagement", equity_group = "class_frl_cat", by_class = TRUE)
# Look at change in engagement over time, then look at how differences in engagement by a
# class's FRL population change over time
metric_growth(
ss_data_initial,
ss_data_final,
metric = "engagement",
by_class = TRUE
)
metric_growth(
ss_data_initial,
ss_data_final,
metric = "engagement",
equity_group = "class_frl_cat",
by_class = TRUE
)