EISEquate: Apply equating based on equivalent expected score on an anchor test

View source: R/EISequating.R


Apply equating based on equivalent expected score on an anchor test

Description

At heart, this is the same method as the similar items method suggested by Bramley (2018). However, the name has been changed to reflect the fact that it can be used with an actual "anchor" test rather than only when we have merely "similar" items. Expected scores on the anchor test for each raw total test score are derived from a loglinear model with a single interaction. Linear interpolation is used to identify expected anchor test scores for non-integer raw scores. Equated scores on each test form are the raw scores that give the same expected anchor test score. For this function to work, THE ANCHOR TEST MUST BE INTERNAL TO THE TESTS BEING EQUATED.
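
As a rough illustration of the idea (a minimal sketch only, not the code used by EISEquate: it uses invented toy data and raw conditional means of the anchor score in place of the loglinear-smoothed expected scores):

# toy data in the same format as the dx and dy arguments described below
set.seed(1)
dx <- data.frame(x = rbinom(200, 10, 0.6))        # form X totals, 0 to 10
dx$a <- rbinom(200, 4, plogis(dx$x - 6))          # anchor scores, 0 to 4
dy <- data.frame(y = rbinom(200, 10, 0.5))        # form Y totals
dy$a <- rbinom(200, 4, plogis(dy$y - 5))
# raw expected anchor score at each total score
eAx <- tapply(dx$a, factor(dx$x, levels = 0:10), mean)   # E[A | X = x]
eAy <- tapply(dy$a, factor(dy$y, levels = 0:10), mean)   # E[A | Y = y]
# form Y score with the same expected anchor score as each form X score,
# found by linear interpolation (NA where a form X score was unobserved or
# its expected anchor score falls outside the form Y range)
approx(x = eAy, y = 0:10, xout = eAx)$y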

Usage

EISEquate(dx, dy, maxX, maxY, maxA)

Arguments

dx

Data frame with variables "x" and "a" representing scores for individual candidates on form X and on the anchor test.

dy

Data frame with variables "y" and "a" representing scores for individual candidates on form Y and on the anchor test.

maxX

Maximum score on form X.

maxY

Maximum score on form Y.

maxA

Maximum score on anchor.

Details

This function assumes that only integer scores are available on each test form.

The loglinear models behind this approach are specified so that they reproduce the marginal distributions of both the raw total test scores and the anchor test scores exactly. The fitted models also reproduce the correlation between raw total and anchor test scores.
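
One loglinear specification with these two properties is sketched below for illustration (an assumption about the kind of model meant here, not necessarily the exact specification used by the package): treating each observed score point as a factor level forces the fitted marginal counts to match the observed ones, and a single numeric interaction term additionally matches the x-by-a cross moment, and hence the correlation.

# same toy form X data as in the sketch in the Description
set.seed(1)
dx <- data.frame(x = rbinom(200, 10, 0.6))
dx$a <- rbinom(200, 4, plogis(dx$x - 6))
# cross-classified counts of total score by anchor score
tab <- as.data.frame(table(x = dx$x, a = dx$a))
tab$xn <- as.numeric(as.character(tab$x))   # numeric copies of the scores
tab$an <- as.numeric(as.character(tab$a))
fit <- glm(Freq ~ x + a + xn:an, family = poisson, data = tab)
# fitted marginal counts for each total score match the observed marginals
all.equal(as.vector(tapply(fitted(fit), tab$xn, sum)),
          as.vector(table(dx$x)), tolerance = 1e-6)
# smoothed expected anchor score at each observed total score, E[A | X = x]
tapply(fitted(fit) * tab$an, tab$xn, sum) / tapply(fitted(fit), tab$xn, sum)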

Value

The function returns a list with the following elements:

EqTable

A data frame showing the equivalent score on form Y for every integer score between 0 and maxX on form X.

References

Bramley, T. (2018, November). Evaluating the ‘similar items method’ for standard maintaining. Paper presented at the 19th annual conference of the Association for Educational Assessment in Europe, Arnhem-Nijmegen, The Netherlands.

Examples

# demonstrate the method on a 30-item test with an internal 5-item anchor
# (items 26 to 30 form the anchor)
# define 30 Rasch item difficulties, equally spread between -2 and 2
itedifs <- rep(seq(-2, 2, length = 5), 6)

# simulate population one item scores (and then form and anchor scores)
n1 <- 300
t1 <- rnorm(n1, 0.5, 1)
ites1 <- 0 + (plogis(t1 %*% t(rep(1, 30)) - rep(1, n1) %*% t(itedifs))
              > matrix(runif(n1 * 30), nrow = n1))
scoresX1 <- rowSums(ites1[, 1:30])
scoresA1 <- rowSums(ites1[, 26:30])

# simulate a parallel test form in population two
n2 <- 3000
t2 <- rnorm(n2, 0, 1)
ites2 <- 0 + (plogis(t2 %*% t(rep(1, 30)) - rep(1, n2) %*% t(itedifs))
              > matrix(runif(n2 * 30), nrow = n2))
scoresY2 <- rowSums(ites2[, 1:30])
scoresA2 <- rowSums(ites2[, 26:30])

eis_eq <- EISEquate(data.frame(x = scoresX1, a = scoresA1),
                    data.frame(y = scoresY2, a = scoresA2),
                    maxX = 30, maxY = 30, maxA = 5)
eis_eq
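
# the equated scores are returned in the list element EqTable; its exact
# column layout is not documented above, so inspect it directly
head(eis_eq$EqTable)
# a quick plot of the equating function (assuming the first two columns
# hold the form X score and its form Y equivalent)
plot(eis_eq$EqTable[, 1:2], type = "l")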

