Description:
The function fits the same models with the same approximation as gvcm.cat, but the choice of the tuning parameter lambda for the penalty differs: instead of weighting the penalty terms and choosing one global tuning parameter based on (generalized) cross-validation methods that again rely on the converged model, gvcm.cat.flex estimates several penalty parameters lambda_i by linking the local quadratic approximation of gvcm.cat with the methods implemented in the package mgcv. This is why the arguments of gvcm.cat and gvcm.cat.flex differ. gvcm.cat.flex is not as well-developed as gvcm.cat.
Arguments:

whichCoefs: vector with covariates (as characters)

intercept: logical

data: a data frame, with named and coded covariates

family: a family object

method: see function gam of package mgcv

tuning: for function gam of package mgcv

indexNrCoefs: vector with the number of coefficients per covariate

indexPenNorm: vector with the norm of the employed penalty (as character)

indexPenA: list with the penalty matrices A_j for each covariate j

indexPenWeight: list of possible weights for the penalty terms (each entry is a vector)

control: a list of parameters for controlling the fitting process; must be the output of cat_control
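As a rough illustration of the penalty-matrix argument (a sketch, not taken from the package's code): for a fusion-type penalty on k dummy coefficients, the matrix supplied in indexPenA can be a first-difference matrix, so that the L1 norm of A %*% beta penalizes differences of adjacent coefficients and shrinks them towards equality.

```r
# Sketch: a first-difference penalty matrix for k = 4 dummy coefficients,
# as could be supplied in indexPenA for a fusion-type penalty.
k <- 4
A <- diff(diag(k))   # rows: (-1,1,0,0), (0,-1,1,0), (0,0,-1,1)
beta <- c(1, 1, 2, 2)
drop(A %*% beta)     # differences of adjacent coefficients: 0 1 0
```

Penalizing these differences fuses adjacent coefficients that do not differ significantly, which is what the second example below demonstrates.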
Details:

The local quadratic approximation of gvcm.cat is linked to the methods of mgcv by alternating the update of the penalty with the update of the PIRLS algorithm, in which the tuning parameters lambda_i are estimated via mgcv. Therefore, gvcm.cat.flex can be slow, but it is usually faster than gvcm.cat.
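The alternation described above can be sketched as follows (an illustrative toy, not the package's actual code): the L1 penalty lambda*|b_j| is locally approximated by the quadratic penalty lambda * b_j^2 / |b_j_old|, so each step reduces to a weighted ridge fit, after which the weights are recomputed from the new estimate.

```r
# Toy sketch of the alternation: an L1 penalty is iteratively replaced by
# a quadratic penalty with weights 1/|b_old|, i.e. a reweighted ridge fit.
set.seed(1)
n <- 50; p <- 3
X <- cbind(1, matrix(rnorm(n * (p - 1)), ncol = p - 1))
beta_true <- c(2, 1.5, 0)               # last coefficient is truly zero
y <- X %*% beta_true + rnorm(n)
lambda <- 5
b <- rep(1, p)                          # start value
for (it in 1:50) {
  w <- c(0, 1 / pmax(abs(b[-1]), 1e-8)) # no penalty on the intercept
  b <- solve(crossprod(X) + lambda * diag(w), crossprod(X, y))
}
round(drop(b), 2)                       # third coefficient is shrunk towards zero
```

In gvcm.cat.flex the quadratic step is handed to mgcv, which also estimates the lambda_i, instead of using a fixed lambda as in this sketch.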
Value:

A gamObject.
See Also:

Function gvcm.cat.
Examples:

## Not run:
# compare gvcm.cat.flex and gvcm.cat for Lasso-type penalties:
n <- 100
ncov <- 7
set.seed(123)
X <- matrix(rnorm(n*ncov, sd=5), ncol=ncov)
coefs <- rpois(ncov + 1, 2)
y <- cbind(1, X) %*% coefs + rnorm(n)
data <- as.data.frame(cbind(y, X))
names(data) <- c("y", paste("x", 1:ncov, sep=""))
m1 <- gvcm.cat.flex(
whichCoefs = paste("x", 1:ncov, sep=""),
data=data,
indexNrCoefs=rep(1, ncov),
indexPenNorm=rep("L1", ncov),
indexPenA=list(1,1,1,1,1,1,1),
indexPenWeight=list(1,1,1,1,1,1,1)
)
m2 <- gvcm.cat(y ~ 1 + p(x1) + p(x2) + p(x3) + p(x4) + p(x5) + p(x6) + p(x7),
data=data, tuning=list(lambda=m1$sp, specific=TRUE), start=rep(1, 8))
rbind(m1$coefficients, m2$coefficients)
# Lasso-type fusion penalty with gvcm.cat.flex
n <- 100
ncat <- 8
set.seed(567)
X <- t(rmultinom(n, 1, rep(1/ncat, ncat)))[, -1]
coefs <- c(rpois(1, 2), sort(rpois(ncat-1, 1)))
y <- cbind(1, X) %*% coefs + rnorm(n)
data <- as.data.frame(y)
data$x1 <- X
names(data) <- c("y", "x1")
A <- a(1:(ncat-1), ncat-2)
m3 <- gvcm.cat.flex(
whichCoefs = c("x1"),
data = data,
indexNrCoefs = c(ncat-1),
indexPenNorm = c("L1"),
indexPenA = list(A),
indexPenWeight = list(rep(1, ncol(A))),
tuning = 100 # fixed and large - in order to demonstrate the fusion of the coefficients
)
m3$coefficients
## End(Not run)