pistar.ct | R Documentation |
Description

pistar.ct is used to find the value of pi* for any user-supplied model fit to a contingency table. The only requirements are (1) that the model takes as input only a contingency table with non-negative continuous cell values, and (2) that it outputs a named list in which the predicted values are a contingency table named "fit". Optionally, parameter estimates of interest can be output as a vector named "param" in the output list.
pi* for the model of interest is estimated using the algorithm of Rudas, Clogg, and Lindsay (1994). Standard errors for pi* and any other parameter estimates of interest can be obtained by jackknife as proposed by Dayton (2003).
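As an illustration of the required interface, the following sketch wraps stats::loglin, whose fit = TRUE option already returns the fitted table in a component named "fit". The independence model and the toy table are assumptions made only for this example; any model meeting the contract will do.

```r
# A function usable as `fn`: takes a contingency table, returns a
# list whose "fit" component holds the predicted contingency table.
# The independence model here is illustrative only.
independence_fn <- function(x) {
    out <- loglin(table = x, margin = list(1, 2), fit = TRUE, print = FALSE)
    list(fit = out$fit)  # predicted cell values
}

# check the contract on a toy 2x2 table
tab <- matrix(c(10, 20, 30, 40), nrow = 2)
fitted <- independence_fn(tab)$fit  # same dimensions and total as `tab`
```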
Usage

pistar.ct(data, fn, from = .Machine$double.neg.eps^0.25,
          to = 1 - .Machine$double.neg.eps^0.25, jack = FALSE,
          method = "uniroot", u_iter = 1e3, zeta = 1,
          lr_eps = .Machine$double.neg.eps^0.25,
          max_dif = .Machine$double.neg.eps, chi_stat = 0,
          verbose = TRUE)
Arguments

data
    a contingency table.

fn
    a user-supplied function that estimates the model of interest. Must take as input only the observed values as a contingency table containing non-negative continuous cell values. Must output the predicted values as a contingency table in a list item named "fit".

from
    numeric: lower bound of the interval of out-of-model proportions to be explored.

to
    numeric: upper bound of the interval of out-of-model proportions to be explored.

jack
    logical: perform the jackknife?

method
    character: method with which to search for pi*.

u_iter
    maximum number of iterations for method "uniroot".

zeta
    weighting constant; default is 1. The EM algorithm might crash due to very low cell values, in which case increasing zeta might help.

lr_eps
    penalty for finding pi*: the largest small positive number that can still be considered practically indistinguishable from 0.

max_dif
    largest acceptable difference, used as the convergence criterion.

chi_stat
    chi-squared statistic penalty; default 0. Supply a different value, e.g., if you want to find the lower endpoint of a one-sided confidence interval for pi*.

verbose
    logical: print progress during estimation?
Details

The EM algorithm implemented here was proposed by Rudas, Clogg, and Lindsay (1994). The jackknife procedure was proposed by Dayton (2003). The function is developed from J. M. Grego's clr and clr.root functions.
Value

An object of classes "Pistar", "PistarCT", and "PistarRCL" with the following slots:

call
    the matched call.

pistar
    a list of estimated values of the mixture index of fit.

pred
    a list of predicted values with three items.

data
    the input data.

param
    a list of requested estimates of the parameters of interest from the model fit to the unscaled model density, i.e. to M and not to (1-pi) x M.

llrs
    a list of values of log-likelihood ratio statistics.

iter
    a list of the numbers of iterations used.
Author(s)

Juraj Medzihorsky

Developed from J. M. Grego's clr and clr.root functions.
References

Dayton, C. M. (2003) Applications and computational strategies for the two-point mixture index of fit. British Journal of Mathematical and Statistical Psychology, 56, 1-13.

Grego, J. M. clr and clr.root functions, available at http://www.stat.sc.edu/~grego/courses/stat770/CLR.txt

Rudas, T., Clogg, C. C., and Lindsay, B. G. (1994) A New Index of Fit Based on Mixture Methods for the Analysis of Contingency Tables. Journal of the Royal Statistical Society, Series B (Methodological), 56(4), 623-639.

Rudas, T. (2002) 'A Latent Class Approach to Measuring the Fit of a Statistical Model' in Hagenaars, J. A. and McCutcheon, A. L. (eds.) Applied Latent Class Analysis. Cambridge University Press, 345-365.
See Also

piplot.ct, rcl.em, rcl.s
Examples

# load data
data(Fienberg1980a)

# define a function: log-linear model of independence in a
# 2-way table
mf <- function(x){
    loglin(table=x, margin=list(1,2), fit=TRUE, print=FALSE)
}

# find pi*
p <- pistar(proc="ct", data=Fienberg1980a, fn=mf, jack=FALSE)

p
summary(p)
plot(p)
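Jackknife standard errors (Dayton 2003) can be requested by rerunning the same call with jack = TRUE. This is a sketch assuming the data and model function from the example above; note that the model is refit repeatedly, so it can be slow on large tables.

```r
# jackknife standard errors for pi*; reuses `Fienberg1980a` and `mf`
pj <- pistar(proc = "ct", data = Fienberg1980a, fn = mf, jack = TRUE)
summary(pj)  # the summary should now report the jackknife results
```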