optPenaltyPrep.kCVauto: Automatic search for optimal penalty parameters (for precision estimation of data with replicates)

View source: R/ridgePrepAndCo.R

optPenaltyPrep.kCVauto    R Documentation

Automatic search for optimal penalty parameters (for precision estimation of data with replicates).

Description

Function that performs an automatic search for the optimal penalty parameters of the ridgePrep call, employing either the Nelder-Mead or quasi-Newton method to minimize the cross-validated (negative) log-likelihood score.

Usage

optPenaltyPrep.kCVauto(Y, ids, lambdaInit, 
                       fold=nrow(Y), CVcrit, 
                       splitting="stratified",
                       targetZ=matrix(0, ncol(Y), ncol(Y)),
                       targetE=matrix(0, ncol(Y), ncol(Y)),
                       nInit=100, minSuccDiff=10^(-10))

Arguments

Y

Data matrix with samples (including the repetitions) as rows and variates as columns.

ids

A numeric vector indicating which rows of Y belong to the same individual.

lambdaInit

A numeric vector of length two giving the initial (starting) values for the two penalty parameters.

fold

A numeric or integer specifying the number of folds to apply in the cross-validation.

CVcrit

A character with the cross-validation criterion to be applied: either CVcrit="LL" (the log-likelihood) or CVcrit="Qloss" (the quadratic loss).

splitting

A character, either splitting="replications", splitting="samples", or splitting="stratified", specifying how the folds are formed. With the first two options, replications or samples, respectively, are randomly divided over the fold splits. With splitting="stratified", samples are randomly divided over the fold splits in a stratified manner, such that the total number of replicates in each fold is roughly comparable.

targetZ

A positive semi-definite target matrix towards which the signal precision matrix estimate is shrunken.

targetE

A positive semi-definite target matrix towards which the error precision matrix estimate is shrunken.

nInit

A numeric specifying the maximum number of iterations.

minSuccDiff

A numeric giving the minimum successive difference (in terms of the relative change in the absolute difference of the penalized log-likelihood) between two successive estimates required for convergence.

Value

The function returns an all-positive numeric vector: the cross-validated optimal penalty parameters.

Author(s)

W.N. van Wieringen.

References

van Wieringen, W.N., Chen, Y. (2021), "Penalized estimation of the Gaussian graphical model from data with replicates", Statistics in Medicine, 40(19), 4279-4293.

See Also

ridgePrep

Examples

# set parameters
p        <- 10
Se       <- diag(runif(p))
Sz       <- matrix(3, p, p)
diag(Sz) <- 4

# draw data
n <- 100
ids <- numeric()
Y   <- numeric()
for (i in 1:n){
     Ki <- sample(2:5, 1)
     Zi <- mvtnorm::rmvnorm(1, sigma=Sz)
     for (k in 1:Ki){
          Y   <- rbind(Y, Zi + mvtnorm::rmvnorm(1, sigma=Se))
          ids <- c(ids, i)
     }
}

# find optimal penalty parameters
### optLambdas <- optPenaltyPrep.kCVauto(Y, ids,             
###                                      lambdaInit=c(1,1),  
###                                      fold=nrow(Y),       
###                                      CVcrit="LL")        
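# For reference, the same search can be run with the quadratic loss as
# cross-validation criterion and stratified 5-fold splits (these argument
# values are documented above); like the call above it is commented out
# because it may be computationally demanding
### optLambdasQ <- optPenaltyPrep.kCVauto(Y, ids,
###                                       lambdaInit=c(1,1),
###                                       fold=5,
###                                       CVcrit="Qloss",
###                                       splitting="stratified")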

# estimate the precision matrices
### Ps <- ridgePrep(Y, ids, optLambdas[1], optLambdas[2])    
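# The sketch below illustrates, in base R only, the idea behind the
# "stratified" splitting option: individuals are assigned to folds such
# that the total number of replicates per fold is roughly balanced.
# The helper name 'strataFolds' is hypothetical and not part of porridge.
strataFolds <- function(ids, fold){
     reps  <- table(ids)                    # replicates per individual
     ord   <- order(reps, decreasing=TRUE)  # place largest groups first
     load  <- numeric(fold)                 # replicates per fold so far
     folds <- integer(length(reps))
     for (j in ord){
          f        <- which.min(load)       # emptiest fold gets the group
          folds[j] <- f
          load[f]  <- load[f] + reps[j]
     }
     folds[match(ids, names(reps))]         # fold label per row of Y
}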

porridge documentation built on May 29, 2024, 1:37 a.m.