tuning: Calculating Tuning Parameters

Description Usage Arguments Details Value Author(s) References

View source: R/tuning.R

Description

Calculate tuning parameters based on given criteria.

Usage

tuning(Y, X, K_mat, mode, lambda)

Arguments

Y

(matrix, n*1) The vector of the response variable.

X

(matrix, n*d_fix) The fixed effect matrix.

K_mat

(list of matrices) A nested list of kernel term matrices, one for each kernel term specified in the formula under each base kernel function in kern_func_list.

mode

(character) A character string indicating which tuning parameter selection criterion is to be used.

lambda

(numeric) A numeric vector specifying the candidate values of the tuning parameter. The lower limit of lambda must be above 0.

Details

Seven criteria are available for selecting the tuning parameter:

Leave-One-Out Cross-Validation

\lambda_{n\text{-CV}} = \operatorname*{argmin}_{\lambda \in \Lambda} \Big\{ \log\, y^{\star T} \big[I - \mathrm{diag}(A_\lambda) - \tfrac{1}{n} I\big]^{-1} (I - A_\lambda)^2 \big[I - \mathrm{diag}(A_\lambda) - \tfrac{1}{n} I\big]^{-1} y^\star \Big\}
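To make the quadratic form concrete, here is a minimal Python sketch (independent of the CVEK package's own R implementation, and assuming the smoother matrix A_λ is symmetric) that evaluates the loo-CV criterion for a given A_λ, exploiting the fact that I - diag(A_λ) - (1/n)I is diagonal:

```python
import numpy as np

def loocv_criterion(y_star, A):
    """log( y*' B^{-1} (I - A)^2 B^{-1} y* ) with B = I - diag(A) - (1/n) I.

    B is diagonal, so applying B^{-1} is an elementwise division; assuming
    A is symmetric, the quadratic form is the squared norm of (I - A) B^{-1} y*.
    """
    n = len(y_star)
    b = 1.0 - np.diag(A) - 1.0 / n          # diagonal entries of B
    r = (np.eye(n) - A) @ (y_star / b)      # (I - A) B^{-1} y*
    return float(np.log(r @ r))
```

Smaller values indicate a better λ; the criterion is evaluated at each candidate in the grid and the minimizer is returned.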

Akaike Information Criterion

\lambda_{\mathrm{AIC}} = \operatorname*{argmin}_{\lambda \in \Lambda} \Big\{ \log\, y^{\star T} (I - A_\lambda)^2 y^\star + \frac{2\,[\mathrm{tr}(A_\lambda) + 2]}{n} \Big\}
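As an illustrative Python sketch (again not the package's actual code), the AIC criterion reduces to a log residual sum of squares plus a degrees-of-freedom penalty driven by tr(A_λ):

```python
import numpy as np

def aic_criterion(y_star, A):
    """AIC criterion: log( y*' (I - A)^2 y* ) + 2 [tr(A) + 2] / n."""
    n = len(y_star)
    r = (np.eye(n) - A) @ y_star        # residual vector (I - A) y*
    rss = float(r @ r)                  # y*' (I - A)^2 y*, for symmetric A
    return np.log(rss) + 2.0 * (np.trace(A) + 2.0) / n
```

The AICc and BIC criteria below differ only in the penalty term, so the same skeleton applies with the penalty swapped out.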

Akaike Information Criterion (small-sample variant)

\lambda_{\mathrm{AICc}} = \operatorname*{argmin}_{\lambda \in \Lambda} \Big\{ \log\, y^{\star T} (I - A_\lambda)^2 y^\star + \frac{2\,[\mathrm{tr}(A_\lambda) + 2]}{n - \mathrm{tr}(A_\lambda) - 3} \Big\}

Bayesian Information Criterion

\lambda_{\mathrm{BIC}} = \operatorname*{argmin}_{\lambda \in \Lambda} \Big\{ \log\, y^{\star T} (I - A_\lambda)^2 y^\star + \frac{\log(n)\,[\mathrm{tr}(A_\lambda) + 2]}{n} \Big\}

Generalized Cross-Validation

\lambda_{\mathrm{GCV}} = \operatorname*{argmin}_{\lambda \in \Lambda} \Big\{ \log\, y^{\star T} (I - A_\lambda)^2 y^\star - 2 \log\Big[1 - \frac{\mathrm{tr}(A_\lambda)}{n} - \frac{1}{n}\Big]_+ \Big\}

Generalized Cross-Validation (small-sample variant)

\lambda_{\mathrm{GCVc}} = \operatorname*{argmin}_{\lambda \in \Lambda} \Big\{ \log\, y^{\star T} (I - A_\lambda)^2 y^\star - 2 \log\Big[1 - \frac{\mathrm{tr}(A_\lambda)}{n} - \frac{2}{n}\Big]_+ \Big\}

Generalized Maximum Profile Marginal Likelihood

\lambda_{\mathrm{GMPML}} = \operatorname*{argmin}_{\lambda \in \Lambda} \Big\{ \log\, y^{\star T} (I - A_\lambda) y^\star - \frac{1}{n-1} \log \lvert I - A_\lambda \rvert \Big\}
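Putting one of these criteria to work, the Python sketch below scans a λ grid and returns the GCV minimizer. It assumes a kernel ridge smoother of the form A_λ = K(K + nλI)^{-1}, which is a common choice but is not verified against the CVEK source; the [.]_+ truncation is handled by rejecting any λ for which the argument of the logarithm is non-positive.

```python
import numpy as np

def gcv_criterion(y_star, A):
    """GCV: log( y*' (I - A)^2 y* ) - 2 log([1 - tr(A)/n - 1/n]_+)."""
    n = len(y_star)
    r = (np.eye(n) - A) @ y_star
    denom = 1.0 - np.trace(A) / n - 1.0 / n
    if denom <= 0.0:                     # [.]_+ truncation: reject this lambda
        return np.inf
    return np.log(float(r @ r)) - 2.0 * np.log(denom)

def select_lambda(y_star, K, lambdas):
    """Return the lambda in the grid minimizing GCV, for A = K (K + n*lam*I)^{-1}."""
    n = len(y_star)
    scores = [gcv_criterion(y_star, K @ np.linalg.inv(K + n * lam * np.eye(n)))
              for lam in lambdas]
    return lambdas[int(np.argmin(scores))]
```

Any of the other criteria above can be substituted for gcv_criterion in the same grid search; this mirrors the role of the mode argument.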

Value

lambda0

(numeric) The selected tuning parameter.

Author(s)

Wenying Deng

References

Philip S. Boonstra, Bhramar Mukherjee, and Jeremy M. G. Taylor. A Small-Sample Choice of the Tuning Parameter in Ridge Regression. July 2015.

Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer-Verlag, New York, 2nd edition, 2009.

Hirotogu Akaike. Information Theory and an Extension of the Maximum Likelihood Principle. In Selected Papers of Hirotugu Akaike, Springer Series in Statistics, pages 199–213. Springer, New York, NY, 1998.

Clifford M. Hurvich and Chih-Ling Tsai. Regression and time series model selection in small samples. June 1989.

Clifford M. Hurvich, Jeffrey S. Simonoff, and Chih-Ling Tsai. Smoothing parameter selection in nonparametric regression using an improved Akaike information criterion. January 2002.


CVEK documentation built on Jan. 8, 2021, 5:42 p.m.