# index_fun: Compute optimal scores In TestGardener: Information Analysis for Test and Rating Scale Data


## Compute optimal scores

### Description

The percentile score index values are estimated for each person. The estimates minimize the negative log likelihood, which is a type of surprisal. The main optimization method is a safeguarded Newton-Raphson method.

On each iteration the method updates only those scores that are either in the interior of the interval [0,100], or at a boundary with a first derivative that would step into the interior, and that have second derivative values exceeding the value of argument `crit`. Consequently the number of values being optimized decreases on each iteration, and iterations cease either when all values meet the convergence criterion or are optimized on a boundary, or when the number of iterations reaches `itermax`. At that point, if any interior scores remain with non-positive second derivatives or with first derivatives that exceed `crit`, the minimizing value along a fine mesh is used instead.
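The safeguarding described above can be sketched in a few lines of R. This is a minimal illustration of the strategy only, not TestGardener's internal code: `f`, `df`, and `d2f` stand for a hypothetical fitting criterion and its first two derivatives for a single score index.

```r
# Sketch of a safeguarded Newton-Raphson step on [0, 100] with a
# fine-mesh fallback; f, df, d2f are assumed criterion functions.
safeguarded_newton <- function(f, df, d2f, x, crit = 0.001,
                               itermax = 20, lower = 0, upper = 100) {
  for (iter in seq_len(itermax)) {
    g <- df(x)
    h <- d2f(x)
    # stop at a boundary unless the derivative points into the interior
    if ((x <= lower && g >= 0) || (x >= upper && g <= 0)) break
    if (abs(g) < crit) break                 # converged in the interior
    step <- if (h > 0) -g / h else -sign(g)  # Newton step only when curvature is positive
    x <- min(max(x + step, lower), upper)    # project back onto [lower, upper]
  }
  # fallback: if curvature is still non-positive, minimize over a fine mesh
  if (d2f(x) <= 0) {
    mesh <- seq(lower, upper, length.out = 1001)
    x <- mesh[which.min(vapply(mesh, f, numeric(1)))]
  }
  x
}
```

For a smooth convex criterion the loop behaves as an ordinary Newton iteration; the mesh search only engages when the curvature information is unusable.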

If `itdisp` is TRUE, the number of values remaining to be estimated is printed for each iteration.

### Usage

```
index_fun(index, SfdList, chcemat, itermax = 20, crit = 0.001,
          itdisp = FALSE)
```

### Arguments

- `index`: A vector of length `N` containing initial values for the score indices, each in the interval [0,100].
- `SfdList`: A list vector of length equal to the number of questions. Each member contains eight results for the surprisal curves associated with a question.
- `chcemat`: A matrix with number of rows equal to the number of examinees or respondents and number of columns equal to the number of items. Each entry is the index of the choice made by a respondent to a question.
- `itermax`: Maximum number of iterations for computing the optimal index values. Default is 20.
- `crit`: Criterion for convergence of the optimization. Default is 0.001.
- `itdisp`: If TRUE, results are displayed for each iteration. Default is FALSE.
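The dimensional relationships among these arguments can be checked before calling the function. The names and sizes below are illustrative assumptions (5 respondents, 24 four-choice items), not package data:

```r
# Hypothetical inputs: N respondents, n items, choices coded 1..4
N <- 5
n <- 24
index   <- runif(N, 0, 100)  # initial score indices in [0, 100]
chcemat <- matrix(sample(1:4, N * n, replace = TRUE), nrow = N, ncol = n)

# index must supply one starting value per row of chcemat
stopifnot(length(index) == nrow(chcemat))
stopifnot(all(index >= 0 & index <= 100))
```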

### Value

A named list with these members:

- `index_out`: A vector of optimized score index values.
- `Fval`: The negative log likelihood criterion values.
- `DFval`: The first derivatives of the negative log likelihood.
- `D2Fval`: The second derivatives of the negative log likelihood.
- `iter`: The number of iterations used.

### Author(s)

Juan Li and James Ramsay

### References

Ramsay, J. O., Li J. and Wiberg, M. (2020) Full information optimal scoring. Journal of Educational and Behavioral Statistics, 45, 297-315.

Ramsay, J. O., Li J. and Wiberg, M. (2020) Better rating scale scores with information-based psychometrics. Psych, 2, 347-360.

### See Also

`index_distn`, `Ffun`, `DFfun`, `index2info`, `scoreDensity`

### Examples

```
#  Optimize the indices defining the data fits for the first five examinees
#  input the choice indices in the 1000 by 24 choice index matrix
chcemat   <- Quant_13B_problem_chcemat
#  First set up the list object for surprisal curves computed from
#  initial index estimates.
SfdList   <- Quant_13B_problem_dataList$SfdList
#  Their initial values are the percent rank values ranging over [0,100]
index_in  <- Quant_13B_problem_dataList$percntrnk[1:5]
#  set up choice indices for the first five examinees
chcemat_in <- chcemat[1:5,]
#  optimize the initial indices
indexfunList <- index_fun(index_in, SfdList, chcemat_in)
#  optimal index values
index_out    <- indexfunList$index_out
#  The surprisal data fit values
Fval_out     <- indexfunList$Fval
#  The surprisal data fit first derivative values
DFval_out    <- indexfunList$DFval
#  The surprisal data fit second derivative values
D2Fval_out   <- indexfunList$D2Fval
#  The number of index values that have not reached the convergence criterion
active_out   <- indexfunList$active
```

TestGardener documentation built on May 29, 2024, 3:31 a.m.