## Description

Function that selects the optimal penalty parameter for the `ridgeP` call by way of *K*-fold cross-validation. Its output includes (among other things) the precision matrix under the optimal value of the penalty parameter.

## Usage

```r
optPenalty.kCV(Y, lambdaMin, lambdaMax, step, fold = nrow(Y),
               cor = FALSE, target = default.target(covML(Y)),
               type = "Alt", output = "light", graph = TRUE,
               verbose = TRUE)
```

## Arguments

| Argument | Description |
|---|---|
| `Y` | Data matrix. Variables are assumed to be represented by the columns. |
| `lambdaMin` | A numeric giving the minimum value of the penalty parameter. |
| `lambdaMax` | A numeric giving the maximum value of the penalty parameter. |
| `step` | An integer determining the number of steps in moving through the grid [`lambdaMin`, `lambdaMax`]. |
| `fold` | A numeric or integer specifying the number of folds to apply in the cross-validation. |
| `cor` | A logical indicating if the evaluation of the cross-validated negative log-likelihood score should be performed on the correlation scale. |
| `target` | A target matrix (in precision terms) for the ridge estimator. |
| `type` | A character indicating the type of ridge estimator to be used. Must be one of: "Alt", "ArchI", "ArchII". |
| `output` | A character indicating if the output is either heavy or light. Must be one of: "all", "light". |
| `graph` | A logical indicating if the cross-validated negative log-likelihood scores should be visualized. |
| `verbose` | A logical indicating if information on progress should be printed on screen. |

## Details

The function calculates a cross-validated negative log-likelihood score (using a regularized ridge estimator for the precision matrix) for each value of the penalty parameter contained in the search grid, by way of *K*-fold cross-validation. The value of the penalty parameter that achieves the lowest cross-validated negative log-likelihood score is deemed optimal. The penalty parameter must be positive, so `lambdaMin` must be a positive scalar. The maximum allowable value of `lambdaMax` depends on the type of ridge estimator employed. For details on the types of ridge estimator one may use (one of: "Alt", "ArchI", "ArchII") see `ridgeP`.

The output consists of an object of class list (see below). When `output = "light"` (default) only the `optLambda` and `optPrec` elements of the list are given.
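The grid search described above can be sketched in base R. This is a minimal illustration only: it uses the simple ridge estimator `solve(S + lambda * I)` as a stand-in for `ridgeP` (the package's "Alt", "ArchI", and "ArchII" estimators differ), and scores each fold by the Gaussian negative log-likelihood up to constants, `tr(S_test %*% P) - log det(P)`.

```r
## Minimal sketch of K-fold CV for a ridge precision estimator.
## NOTE: solve(S + lambda * diag(p)) is a stand-in, not the ridgeP estimator.
set.seed(1)
n <- 20; p <- 5
Y <- matrix(rnorm(n * p), n, p)
lambdas <- seq(0.5, 30, length.out = 50)  # search grid
K <- 5
folds <- split(sample(seq_len(n)), rep(seq_len(K), length.out = n))

cvScore <- function(lambda) {
  score <- 0
  for (idx in folds) {
    m <- n - length(idx)
    ## ML covariance of the training fold (cov() uses the n - 1 denominator)
    Strain <- cov(Y[-idx, , drop = FALSE]) * (m - 1) / m
    ## ML covariance of the held-out fold
    Stest <- crossprod(scale(Y[idx, , drop = FALSE], scale = FALSE)) / length(idx)
    ## Stand-in ridge precision estimate from the training data
    P <- solve(Strain + lambda * diag(p))
    ## Negative log-likelihood up to constants: tr(Stest P) - log det(P)
    score <- score + sum(Stest * P) - as.numeric(determinant(P)$modulus)
  }
  score / K
}

scores <- vapply(lambdas, cvScore, numeric(1))
optLambda <- lambdas[which.min(scores)]  # grid value with lowest CV score
```
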

## Value

An object of class list:

| Element | Description |
|---|---|
| `optLambda` | A numeric giving the optimal value of the penalty parameter. |
| `optPrec` | A matrix representing the regularized precision matrix under the optimal value of the penalty parameter. |
| `lambdas` | A numeric vector representing all values of the penalty parameter for which cross-validation was performed; only given when `output = "all"`. |
| `LLs` | A numeric vector representing the cross-validated negative log-likelihood score for each value of the penalty parameter in `lambdas`; only given when `output = "all"`. |

## Note

When `cor = TRUE`, correlation matrices are used in the computation of the (cross-validated) negative log-likelihood score, i.e., the *K*-fold sample covariance matrix is a matrix on the correlation scale. When performing evaluation on the correlation scale the data are assumed to be standardized. If `cor = TRUE` and one wishes to use the default target specification, one may consider using `target = default.target(covML(Y, cor = TRUE))`. This gives a default target under the assumption of standardized data.

Under the default setting of the fold argument, `fold = nrow(Y)`, one performs leave-one-out cross-validation.
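The link between the correlation scale and standardized data can be verified in base R. This sketch mimics, with base functions only, what an ML covariance estimator such as `covML` computes on standardized data; the `n/(n - 1)` factor below reflects that base `cor()` uses the unbiased denominator while an ML estimator divides by `n`.

```r
## Sketch: evaluating on the correlation scale presumes standardized data.
set.seed(2)
Y <- matrix(rnorm(30), 10, 3)
R <- cor(Y)        # correlation matrix of the raw data
Z <- scale(Y)      # standardized data (centered, unit sample sd)

## ML covariance of the standardized data: divide crossprod by n, not n - 1
Sml <- crossprod(Z) / nrow(Z)

## Up to the n/(n - 1) denominator factor, this is the correlation matrix
maxDiff <- max(abs(Sml * nrow(Z) / (nrow(Z) - 1) - R))
```
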

## Author(s)

Carel F.W. Peeters <[email protected]>, Wessel N. van Wieringen

## See Also

`ridgeP`, `optPenalty.kCVauto`, `optPenalty.aLOOCV`, `default.target`, `covML`

## Examples

```r
## Obtain some (high-dimensional) data
p <- 25
n <- 10
set.seed(333)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
colnames(X) <- letters[1:p]

## Obtain regularized precision under optimal penalty using K = n (LOOCV)
OPT <- optPenalty.kCV(X, lambdaMin = .5, lambdaMax = 30, step = 100); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec   # Regularized precision under optimal penalty

## Another example with standardized data
X <- scale(X, center = TRUE, scale = TRUE)
OPT <- optPenalty.kCV(X, lambdaMin = .5, lambdaMax = 30, step = 100, cor = TRUE,
                      target = default.target(covML(X, cor = TRUE))); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec   # Regularized precision under optimal penalty

## Another example using K = 5
OPT <- optPenalty.kCV(X, lambdaMin = .5, lambdaMax = 30, step = 100, fold = 5); OPT
OPT$optLambda # Optimal penalty
OPT$optPrec   # Regularized precision under optimal penalty
```
