View source: R/ridgePrepAndCo.R
optPenaltyPrep.kCVauto | R Documentation
Description

Function that performs an automatic search for the optimal penalty parameters of the ridgePrep call, employing either the Nelder-Mead or quasi-Newton method to minimize the cross-validated (negative) log-likelihood score.
Usage

optPenaltyPrep.kCVauto(Y, ids, lambdaInit,
                       fold=nrow(Y), CVcrit,
                       splitting="stratified",
                       targetZ=matrix(0, ncol(Y), ncol(Y)),
                       targetE=matrix(0, ncol(Y), ncol(Y)),
                       nInit=100, minSuccDiff=10^(-10))
Arguments

Y |
Data matrix with samples (including the repetitions) as rows and variates as columns. |
ids |
A numeric indicating which rows of Y belong to the same individual. |
lambdaInit |
A numeric giving the initial (starting) values of the two penalty parameters. |
fold |
A numeric or integer specifying the number of folds to apply in the cross-validation. |
CVcrit |
A character with the cross-validation criterion to be applied, e.g. "LL" (the log-likelihood). |
splitting |
A character specifying how the cross-validation folds are formed: "replications", "samples", or "stratified". |
targetZ |
A semi-positive definite target matrix towards which the signal precision matrix estimate is shrunken. |
targetE |
A semi-positive definite target matrix towards which the error precision matrix estimate is shrunken. |
nInit |
A numeric specifying the maximum number of iterations. |
minSuccDiff |
A numeric: the minimum successive difference in the penalized log-likelihood between two iterations, used as the convergence criterion. |
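As a minimal sketch of non-null targets (an assumption for illustration; any semi-positive definite p x p matrices are admissible, and scaled identity matrices are a common ridge choice), the target arguments could be constructed as:

```r
# Hypothetical target construction: scaled identity matrices for the
# signal (targetZ) and error (targetE) precision matrices.
p       <- 10
targetZ <- diag(0.1, p)   # p x p scaled identity, semi-positive definite
targetE <- diag(0.1, p)   # idem, for the error precision matrix
```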
Value

The function returns an all-positive numeric of length two: the cross-validated optimal penalty parameters.
Author(s)

W.N. van Wieringen.
References

van Wieringen, W.N., Chen, Y. (2021), "Penalized estimation of the Gaussian graphical model from data with replicates", Statistics in Medicine, 40(19), 4279-4293.
See Also

ridgePrep
Examples

# set parameters
p <- 10
Se <- diag(runif(p))
Sz <- matrix(3, p, p)
diag(Sz) <- 4
# draw data
n <- 100
ids <- numeric()
Y <- numeric()
for (i in 1:n){
Ki <- sample(2:5, 1)
Zi <- mvtnorm::rmvnorm(1, sigma=Sz)
for (k in 1:Ki){
Y <- rbind(Y, Zi + mvtnorm::rmvnorm(1, sigma=Se))
ids <- c(ids, i)
}
}
# find optimal penalty parameters
### optLambdas <- optPenaltyPrep.kCVauto(Y, ids,
### lambdaInit=c(1,1),
### fold=nrow(Y),
### CVcrit="LL")
# estimate the precision matrices
### Ps <- ridgePrep(Y, ids, optLambdas[1], optLambdas[2])