Description

s is used in the definition of (vector) smooth terms within
vgam formulas.
This corresponds to 1st-generation VGAMs that use backfitting
for their estimation.
The effective degrees of freedom is prespecified.
Usage

s(x, df = 4, spar = 0, ...)
Arguments

x
    covariate (abscissae) to be smoothed.

df
    numerical vector of length r.
    Effective degrees of freedom: must lie between 1 (linear fit)
    and n (interpolation).
    Thus one could say that df - 1 is the effective nonlinear
    degrees of freedom (ENDF) of the smooth.

spar
    numerical vector of length r.
    Positive smoothing parameters (after scaling).
    Larger values mean more smoothing, so that the solution approaches
    a linear fit for that component function.
    A zero value means that df is used.

...
    Ignored for now.
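The interplay between df and spar can be illustrated with base R's
stats::smooth.spline, whose df and spar arguments play analogous
roles. This is an analogy only, not a call to VGAM's s:

```r
# Analogy using stats::smooth.spline (not VGAM's s itself):
# df prespecifies the effective degrees of freedom directly, while a
# larger spar means more smoothing, pushing the fit towards a line.
set.seed(1)
x <- seq(0, 1, length.out = 50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.2)

fit_df   <- smooth.spline(x, y, df = 4)    # prespecified effective df
fit_spar <- smooth.spline(x, y, spar = 1)  # heavy smoothing

fit_df$df    # close to the requested 4
fit_spar$df  # smaller: the solution is close to a linear fit
```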
Details

In this help file M is the number of additive predictors
and r is the number of component functions to be
estimated (so that r is an element from the set
{1, 2, ..., M}).
Also, if n is the number of distinct abscissae, then
s will fail if n < 7.
s, which is symbolic and does not perform any smoothing itself,
only handles a single covariate.
s works in vgam only.
It has no effect in vglm
(actually, it is similar to the identity function, so that
s(x2) is the same as
x2 in the LM model matrix).
It differs from the
s() of the gam package and
the s() of the mgcv package;
they should not be mixed together.
Also, terms involving
s should be simple additive terms, and should not
involve interactions, nesting, etc.
For example, myfactor:s(x2) is not a good idea.
Value

A vector with attributes that are (only) used by vgam.
The vector cubic smoothing spline which
s() represents is
computationally demanding for large M.
The cost is approximately O(n M^3) where n is the
number of unique abscissae.
Warning

Currently a bug relating to the use of
s() is that
only constraint matrices whose columns are orthogonal are handled
correctly. If any
s() term has a constraint matrix that
does not satisfy this condition then a warning is issued.
See is.buggy for more information.
A more modern alternative to using
s with vgam is to use
sm.os or sm.ps.
This does not require backfitting
and allows automatic smoothing parameter selection.
However, this alternative should only be used when the
sample size is reasonably large (> 500, say).
These are called Generation-2 VGAMs.
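A minimal sketch of this Generation-2 approach; it assumes the VGAM
package is installed and provides the sm.os() smooth term, and reuses
the hunua data set shipped with VGAM:

```r
# Sketch only: assumes the VGAM package is installed and that it
# provides sm.os(); hunua is a data set shipped with VGAM.
library(VGAM)

# O-spline smooth with automatic smoothing parameter selection,
# fitted without backfitting:
fit <- vgam(agaaus ~ sm.os(altitude), binomialff,
            data = hunua, trace = TRUE)
summary(fit)
```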
Another alternative to using
s with vgam is to use
bs and/or ns with vglm.
The latter implements half-stepping, which is helpful if
convergence is difficult.
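A hedged sketch of this alternative; it assumes the VGAM package is
installed, and reuses the pneumo data set and the propodds family
shipped with VGAM:

```r
# Sketch only: assumes the VGAM package is installed; pneumo and
# propodds ship with VGAM.  A regression spline via splines::bs()
# inside vglm() avoids backfitting entirely, and vglm() half-steps
# when convergence is difficult.
library(VGAM)
library(splines)

fit <- vglm(cbind(normal, mild, severe) ~ bs(let, df = 3),
            propodds, data = pneumo, trace = TRUE)
coef(fit, matrix = TRUE)
```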
Author(s)

Thomas W. Yee
References

Yee, T. W. and Wild, C. J. (1996). Vector generalized additive
models. Journal of the Royal Statistical Society, Series B,
Methodological, 58, 481-493.
Examples

# Nonparametric logistic regression
fit1 <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
## Not run: plot(fit1, se = TRUE)

# Bivariate logistic model with artificial data
nn <- 300
bdata <- data.frame(x1 = runif(nn), x2 = runif(nn))
bdata <- transform(bdata,
    y1 = rbinom(nn, size = 1,
                prob = logitlink(sin(2 * x2), inverse = TRUE)),
    y2 = rbinom(nn, size = 1,
                prob = logitlink(sin(2 * x2), inverse = TRUE)))
fit2 <- vgam(cbind(y1, y2) ~ x1 + s(x2, 3), trace = TRUE,
             binom2.or(exchangeable = TRUE), data = bdata)
coef(fit2, matrix = TRUE)  # Hard to interpret
## Not run: plot(fit2, se = TRUE, which.term = 2, scol = "blue")