smooth.surp    R Documentation
Description

Surprisal is -log(probability), where the logarithm is taken to the base M, the dimension of the multinomial observation vector. The surprisal curves for each question are estimated by fitting the surprisal values of binned data with curves whose values lie within the (M-1)-dimensional surprisal subspace contained in the space of non-negative M-dimensional vectors.
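As a small numeric illustration of this definition (the probability vector below is arbitrary and chosen only for the sketch), surprisal values are obtained by taking minus the base-M logarithm of the probabilities, and the transform is inverted by raising M to the negative surprisal:

# illustrative only: a single M-category probability vector
M <- 3
p <- c(0.2, 0.5, 0.3)          # probabilities summing to one
s <- -log(p, base = M)         # surprisal values in base-M units
M^(-s)                         # recovers p
sum(M^(-s))                    # equals 1, so s lies in the surprisal subspace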
Usage

smooth.surp(argvals, y, Bmat0, WfdPar, wtvec=NULL, conv=1e-4,
            iterlim=50, dbglev=0)
Arguments

argvals

An argument value array of length N, where N is the number of observed curve values for each curve. It is assumed that these argument values are common to all observed curves. If this is not the case, you will need to run this function inside one or more loops, smoothing each curve separately.
y

An N by M matrix of surprisal values to be fit, one column per category of the multinomial observations.
Bmat0

A K by M-1 matrix of starting coefficient values, where K is the number of basis functions in WfdPar. These coefficients define the initial surprisal curves for the iterative minimization.
WfdPar

A functional parameter or fdPar object. This object contains the specifications for the functional data object to be estimated by smoothing the data. See comment lines in function fdPar for details. The functional data object within WfdPar is used to initialize the optimization process; its coefficient array contains the starting values for the iterative minimization of mean squared error.
wtvec

A vector of weights to be used in the smoothing.

conv

A convergence criterion.
iterlim

The maximum number of iterations allowed in the minimization of the error sum of squares.

dbglev

Either 0, 1, or 2. This controls the amount of information printed out on each iteration, with 0 implying no output, 1 an intermediate level of output, and 2 full output. If level 1 or 2 is specified, it can be helpful to turn off output buffering.
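As a rough sketch of how these arguments are typically assembled (the range, basis size, and coefficient matrix below are placeholders; the Examples section gives a complete run):

# placeholders: a B-spline basis over the argument range and zero starting coefficients
nbasis <- 7
sbasis <- create.bspline.basis(c(-2, 2), nbasis)   # basis for the surprisal curves
Bmat0  <- matrix(0, nbasis, 1)                     # K by M-1 starting coefficients (M = 2 here)
WfdPar <- fdPar(fd(Bmat0, sbasis), 2, 0)           # second-order roughness penalty, lambda = 0
# result <- smooth.surp(argvals, y, Bmat0, WfdPar) # argvals, y as described above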
Value

A named list surpFd with these members:
Wfdobj

A functional data object defining the function W(x) that optimizes the fit to the data of the positive function that it defines.
Flist

A named list containing three results for the final converged solution: (1) f: the optimal function value being minimized, (2) grad: the gradient vector at the optimal solution, and (3) norm: the norm of the gradient vector at the optimal solution.
argvals

The corresponding input argument.

y

The corresponding input argument.
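A brief sketch of inspecting the returned list, assuming a call has stored it in a variable named result (member names as documented above; note that the Examples below access the fitted functional data object as result$Wfd):

# assuming 'result' holds the list returned by smooth.surp
result$Flist$f       # final value of the criterion being minimized
result$Flist$norm    # gradient norm; small values indicate convergence
# evaluate the fitted surprisal curves at the original argument values
shat <- eval.surp(result$argvals, result$Wfd)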
Author(s)

Juan Li and James Ramsay
References

Ramsay, James O., Hooker, Giles, and Graves, Spencer (2009), Functional Data Analysis with R and Matlab, Springer, New York.

Ramsay, James O., and Silverman, Bernard W. (2005), Functional Data Analysis, 2nd ed., Springer, New York.

Ramsay, James O., and Silverman, Bernard W. (2002), Applied Functional Data Analysis, Springer, New York.
See Also

eval.surp
Examples

# the functions used below are provided by the fda package
library(fda)

oldpar <- par(no.readonly=TRUE)
# evaluation points
x = seq(-2,2,len=11)
# evaluate a standard normal distribution function
p = pnorm(x)
# combine with 1-p
mnormp = cbind(p,1-p)
# convert to surprisal values
mnorms = -log2(mnormp)
# plot the surprisal values
matplot(x, mnorms, type="l", lty=c(1,1), col=c(1,1),
ylab="Surprisal (2-bits)")
# add some log-normal error
mnormdata = exp(log(mnorms) + rnorm(11)*0.1)
# set up a b-spline basis object
nbasis = 7
sbasis = create.bspline.basis(c(-2,2),nbasis)
# define an initial coefficient matrix (nbasis by M-1, here 7 by 1)
cmat = matrix(0, nbasis, 1)
# set up a fdPar object for surprisal smoothing
sfd = fd(cmat, sbasis)
sfdPar = fdPar(sfd, Lfd=2, lambda=0)
# smooth the noisy data
result = smooth.surp(x, mnormdata, cmat, sfdPar)
# plot the data and the fits of the two surprisal curves
xfine = seq(-2,2,len=51)
sfine = eval.surp(xfine, result$Wfd)
matplot(xfine, sfine, type="l", lty=c(1,1), col=c(1,1))
points(x, mnormdata[,1])
points(x, mnormdata[,2])
# convert the surprisal fit values to probabilities
pfine = 2^(-sfine)
# check that they sum to one
apply(pfine,1,sum)
par(oldpar)