pistar.ll | R Documentation |
pistar.ll finds the value of the pi* mixture index of fit for any log-linear model estimated with loglin. pi* is estimated using the algorithm of Rudas, Clogg, and Lindsay (1994). Standard errors for pi* and for any other estimates of parameters of interest can be obtained by the jackknife, as proposed by Dayton (2003).
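For reference, pi* is defined through a two-point mixture representation of the population distribution P (Rudas, Clogg, and Lindsay, 1994); the notation below is added here for exposition and is not taken from the package:

P = (1 - pi) * Phi + pi * Psi,   with Phi in the model M and Psi unrestricted,

and pi* is the smallest pi for which such a decomposition exists, i.e. the smallest fraction of the population that must be left outside the model.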
pistar.ll(data,
          margin = list(1, 2),
          start = rep(1, length(data)),
          eps = 0.1,
          iter = 1e3,
          param = TRUE,
          print = FALSE,
          from = .Machine$double.neg.eps^0.25,
          to = 1 - .Machine$double.neg.eps^0.25,
          jack = FALSE,
          lr_eps = .Machine$double.neg.eps^0.25,
          max_dif = .Machine$double.neg.eps,
          chi_stat = 0,
          u_iter = 1e3,
          tol = .Machine$double.eps^0.25,
          verbose = TRUE)
data |
a contingency table. |
margin |
passed to loglin: a list of vectors specifying the marginal totals to be fitted. |
start |
starting values for the fitted table, passed to loglin. |
eps |
passed to loglin: maximum deviation allowed between observed and fitted margins. |
iter |
maximum number of iterations for loglin. |
param |
logical: return parameter estimates? |
print |
logical: should loglin print the final deviance and number of iterations? |
from |
numeric: lower bound of the interval of out-of-model proportions to be explored. |
to |
numeric: upper bound of the interval of out-of-model proportions to be explored. |
jack |
logical: perform jackknife? |
lr_eps |
penalty for finding pi*: the largest positive number that can still be considered practically indistinguishable from 0. |
max_dif |
largest acceptable difference. |
chi_stat |
chi-squared statistic penalty; the default is 0. Supply a different value, e.g. if you want to find the lower endpoint of a one-sided confidence interval for pi* (see the sketch below the argument list). |
u_iter |
maximum number of iterations for the numerical search for pi*. |
tol |
numerical tolerance for the search for pi*. |
verbose |
logical: print during estimation? |
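The role of chi_stat can be illustrated with a minimal sketch; it assumes the supplied value acts as a chi-squared critical value in the search for the confidence limit, and the 0.90 level with df = 1 (independence in a 2-by-2 table) are illustrative choices, not package defaults.

# hedged sketch: lower endpoint of a one-sided 90% confidence interval for pi*
# qchisq(0.90, df = 1) is an assumed critical value, for illustration only
H <- matrix((1:4)*1e1, byrow=TRUE, ncol=2)
ci_low <- pistar(proc="ll", data=H, margin=list(1, 2),
                 chi_stat=qchisq(0.90, df=1))
ci_low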
This is a speed-optimized version of the algorithm implemented in pistar.ct, specialized to log-linear models.
pistar.ll returns an object of classes "Pistar", "PistarCT", "PistarRCL", and "PistarLL" with the following slots:
call |
the matched call. |
pistar |
a list of estimated values of the mixture index of fit. |
pred |
a list of predicted values (three items). |
data |
the supplied data. |
param |
a list of the requested estimates of the parameters of interest, for the model fitted to the unscaled model density, i.e. to M rather than to (1 - pi*) M. |
llrs |
a list of values of log-likelihood ratio statistics. |
iter |
a list of the numbers of iterations used. |
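A minimal sketch of inspecting the returned object, assuming the slots listed above are accessed with the standard S4 operator @ (any accessor functions the package may provide are not shown):

# hedged sketch: slot access on the fitted object; slot names taken from the list above
H <- matrix((1:4)*1e1, byrow=TRUE, ncol=2)
h <- pistar(proc="ll", data=H, margin=list(1, 2), param=TRUE)
h@pistar   # estimated values of the mixture index of fit
h@param    # parameter estimates for the unscaled model density M
h@iter     # numbers of iterations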
Juraj Medzihorsky
Developed from J. M. Grego's clr and clr.root functions.
Dayton, C. M. (2003) Applications and computational strategies for the two-point mixture index of fit. British Journal of Mathematical & Statistical Psychology, 56, 1-13.
Grego, J. M. clr and clr.root functions, available at http://www.stat.sc.edu/~grego/courses/stat770/CLR.txt
Rudas, T., Clogg, C. C., and Lindsay, B. G. (1994) A New Index of Fit Based on Mixture Methods for the Analysis of Contingency Tables. Journal of the Royal Statistical Society, Series B (Methodological), 56(4), 623-639.
Rudas, T. (2002) 'A Latent Class Approach to Measuring the Fit of a Statistical Model' in Hagenaars, J. A. and McCutcheon, A. L. (eds.) Applied Latent Class Analysis. Cambridge University Press. 345-365.
pistar.ct
loglin
data(HairEyeColor)
# check if the data is an "array"
is(HairEyeColor, "array")
# it is not, so it first needs to be converted:
HEC <- array(HairEyeColor, dim=dim(HairEyeColor), dimnames=dimnames(HairEyeColor))
# find pi* for independence in a 3-way table
p <- pistar(proc="ll", data=HEC, margin=list(1, 2, 3), jack=FALSE)
p
summary(p)
# plot does not work for n-way tables if n > 2
# plot(p)
# create data
H <- matrix((1:4)*1e1, byrow=TRUE, ncol=2)
# find pi* and model parameter estimates and perform jackknife
h <- pistar(proc="ll", data=H, margin=list(1, 2), param=TRUE, jack=TRUE)
h
summary(h)