Sparse Selection Index (SSI) | R Documentation
Description

Computes the entire Elastic-Net solution path for the regression coefficients of a Selection Index over a grid of values of the penalization parameter. An optimal penalization can be chosen using cross-validation (CV) within a specific training set.

Usage
SSI(y, X = NULL, b = NULL, Z = NULL, K, trn_tst = NULL,
varU = NULL, varE = NULL, intercept = TRUE,
alpha = 1, lambda = NULL, nlambda = 100,
lambda.min = .Machine$double.eps^0.5,
common.lambda = TRUE, subset = NULL, tol = 1E-4,
maxiter = 500, method = c("REML","ML"), name = NULL,
save.at = NULL, mc.cores = 1L,
precision.format = c("single","double"),
verbose = TRUE)
SSI.CV(y, X = NULL, b = NULL, Z = NULL, K, trn_tst = NULL,
varU = NULL, varE = NULL, intercept = TRUE,
alpha = 1, lambda = NULL, nlambda = 100,
lambda.min = .Machine$double.eps^0.5,
common.lambda = TRUE, nCV = 1L, nfolds = 5, seed = NULL,
tol = 1E-4, maxiter = 500, method = c("REML","ML"),
name = NULL, mc.cores = 1L, verbose = TRUE)
Arguments

y: (numeric matrix) Response variable. It can contain more than one column for multi-trait analysis.

X: (numeric matrix) Design matrix for fixed effects. When X = NULL, no fixed-effect covariates other than the intercept are used.

b: (numeric vector) Fixed effects. When b = NULL, fixed effects are computed internally from the training data.

K: (numeric matrix) Kinship relationship matrix.

Z: (numeric matrix) Design matrix for random effects. When Z = NULL, an identity matrix is assumed (un-replicated case).

varU, varE: (numeric) Genetic and residual variances. When either is NULL, variance components are computed internally from the training data.

intercept: (logical) Whether to fit an intercept. Default is intercept = TRUE.

trn_tst: (integer vector or matrix) Indicates which elements of y belong to the testing set (0) and which to the training set (1). For multi-trait analysis it is a matrix with one column per trait.

subset: (integer vector).

alpha: (numeric) Value between 0 and 1 giving the weights of the L1- and L2-penalties.

lambda: (numeric vector) Penalization parameter sequence. Default is a sequence from max(abs(G[trn,tst])/alpha) down to a minimum equal to zero. If lambda = NULL, a grid of nlambda values is generated internally.

nlambda: (integer) Number of lambdas generated when lambda = NULL.

lambda.min: (numeric) Minimum value of lambda in the generated grid when lambda = NULL.

nfolds: (integer/character) Either 2, 3, 5, 10, or 'n', indicating the number of non-overlapping folds into which the data is split for cross-validation. When nfolds = 'n', leave-one-out CV is performed.

seed: (numeric vector) Seed to fix randomization when creating folds for cross-validation. If it is a vector, as many CV repetitions as its length are performed.

nCV: (integer) Number of CV repetitions to be performed. Default is nCV = 1.

common.lambda: (logical) Whether to use a common sequence of lambda values for all individuals in the testing set. Default is common.lambda = TRUE.

mc.cores: (integer) Number of cores used. When mc.cores > 1, computations are distributed across cores.

tol: (numeric) Maximum error between two consecutive solutions of the CD algorithm to declare convergence.

maxiter: (integer) Maximum number of iterations of the CD algorithm at each lambda step before convergence is declared.

save.at: (character) Path where files (regression coefficients and output object) are to be saved; it may include a prefix to be added to the file names. Default save.at = NULL keeps results in memory without saving files.

precision.format: (character) Either 'single' or 'double' for the numeric precision and memory footprint (4 or 8 bytes, respectively) of the saved regression coefficients. This is only used when save.at is provided.

method: (character) Either 'REML' (Restricted Maximum Likelihood) or 'ML' (Maximum Likelihood) to calculate variance components, as per the function 'fitBLUP'.

name: (character) Name given to the output for tagging purposes. Default is name = NULL.

verbose: (logical) Whether progress should be printed. Default is verbose = TRUE.
Details

The basic linear mixed model relating phenotypes to genetic values is of the form

y = X b + Z g + e

where y is the vector of responses, b is the vector of fixed effects, g is the vector of genetic values of the genotypes, e is the vector of environmental residuals, and X and Z are design matrices connecting the fixed and genetic effects with the observations. Genetic values are assumed to follow a Normal distribution g ~ N(0, σ²u K), and environmental terms are assumed e ~ N(0, σ²e I).

The resulting vector of genetic values u = Z g therefore follows u ~ N(0, σ²u G), where G = Z K Z'. In the un-replicated case, Z = I is an identity matrix, so u = g and G = K.
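As a toy illustration of the identity G = Z K Z' (hypothetical matrices, not SFSI code): with three records of two genotypes, the covariance among records is obtained by expanding the kinship matrix through the incidence matrix Z.

```r
# Toy example of G = Z K Z' with replicated records (illustrative only)
K <- matrix(c(1.0, 0.5,
              0.5, 1.0), nrow = 2)            # kinship between 2 genotypes
Z <- matrix(c(1, 0,
              1, 0,
              0, 1), nrow = 3, byrow = TRUE)  # 3 records: two of genotype 1
G <- Z %*% K %*% t(Z)                          # covariance among records

# In the un-replicated case Z = I, so G reduces to K
stopifnot(isTRUE(all.equal(diag(2) %*% K %*% diag(2), K)))
```

Records 1 and 2 belong to the same genotype, so G[1, 2] equals K[1, 1] = 1, while G[1, 3] equals the kinship K[1, 2] = 0.5.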
The values u_tst = (u_i), i = 1, 2, ..., n_tst, for a testing set are estimated individual-wise, using as predictors all available observations in a training set, as

u_i = β'_i (y_trn − X_trn b)

where β_i is a vector of weights found separately for each individual in the testing set by minimizing the penalized mean squared error function

−σ²u G'_(trn,tst(i)) β_i + (1/2) β'_i (σ²u G_trn + σ²e I) β_i + λ J(β_i)

where G_(trn,tst(i)) is the i-th column of the sub-matrix of G whose rows correspond to the training set and columns to the testing set; G_trn is the sub-matrix corresponding to the training set; λ is the penalization parameter; and J(β_i) is a penalty function given by

(1/2)(1 − α) ||β_i||²₂ + α ||β_i||₁

where 0 ≤ α ≤ 1, and ||β_i||₁ = ∑_j |β_ij| and ||β_i||²₂ = ∑_j β²_ij are the L1- and (squared) L2-norms, respectively.
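The penalty J(β) interpolates between ridge and LASSO penalties as α moves from 0 to 1. A minimal sketch in plain R (not SFSI code):

```r
# Elastic-net penalty J(beta) as defined above (illustrative sketch)
J <- function(beta, alpha) {
  0.5 * (1 - alpha) * sum(beta^2) +  # (squared) L2 part, weight (1 - alpha)
  alpha * sum(abs(beta))             # L1 part, weight alpha
}

beta <- c(0.3, -0.2, 0)
J(beta, alpha = 1)   # pure L1 (LASSO): sum(|beta|) = 0.5
J(beta, alpha = 0)   # pure L2 (ridge): 0.5 * sum(beta^2) = 0.065
```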
The function 'SSI' calculates each individual solution using the function 'solveEN' (via the Coordinate Descent algorithm; see help(solveEN)), setting the argument Sigma equal to σ²u G_trn + σ²e I and the argument Gamma equal to σ²u G_(trn,tst(i)).
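To make the connection concrete, the quadratic problem above (minimize −Gamma'β + (1/2) β'Sigma β + λ J(β)) can be solved with a few lines of coordinate descent. The following is an illustrative re-implementation of the update rule (soft-thresholding of the partial residual), not the package's own 'solveEN' code:

```r
# Soft-thresholding operator used by coordinate descent
soft <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

# Illustrative coordinate-descent sketch (not the SFSI implementation):
# minimizes  -Gamma'b + (1/2) b' Sigma b + lambda * J(b)
solveEN_sketch <- function(Sigma, Gamma, lambda, alpha = 1,
                           tol = 1e-6, maxiter = 500) {
  b <- rep(0, length(Gamma))
  for (iter in 1:maxiter) {
    b_old <- b
    for (j in seq_along(b)) {
      r_j  <- Gamma[j] - sum(Sigma[j, -j] * b[-j])   # partial residual
      b[j] <- soft(r_j, lambda * alpha) /
              (Sigma[j, j] + lambda * (1 - alpha))   # elastic-net update
    }
    if (max(abs(b - b_old)) < tol) break             # convergence check
  }
  b
}

# Sanity check: with lambda = 0 the solution is just Sigma^{-1} Gamma
Sigma <- diag(3) + 0.1
Gamma <- c(1, 0.5, -0.2)
b0 <- solveEN_sketch(Sigma, Gamma, lambda = 0)
stopifnot(isTRUE(all.equal(b0, drop(solve(Sigma, Gamma)), tolerance = 1e-3)))
```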
The function 'SSI.CV' performs cross-validation within the training data specified via the argument trn_tst. The training data is divided into k folds, and the SSI is sequentially calculated for all individuals in one fold (taken as testing set) using information from the remaining folds (taken as training set).
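A sketch of how such k-fold splits can be formed in plain R (the package does this internally via the 'nfolds' and 'seed' arguments; names here are hypothetical):

```r
# Assign each of n_trn training records to one of nfolds random folds
set.seed(123)
n_trn  <- 20
nfolds <- 5
folds  <- sample(rep_len(1:nfolds, n_trn))   # shuffled, balanced fold labels

# Each fold serves once as the internal testing set:
for (k in 1:nfolds) {
  tst_k <- which(folds == k)    # internal testing indices
  trn_k <- which(folds != k)    # internal training indices
}
```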
Value

The function 'SSI' returns a list object of class 'SSI' for which methods coef, fitted, plot, and summary exist. Functions 'net.plot' and 'path.plot' can also be used. It contains the elements:

b: (vector) fixed-effect solutions (including the intercept).

Xb: (vector) product of the design matrix 'X' by the fixed-effect solutions.

u: (matrix) genetic values for testing individuals (in rows) associated with each value of lambda (in columns).

varU, varE, h2: variance component solutions.

alpha: value of the elastic-net weight used.

lambda: (matrix) sequence of lambda values used (in columns) for each testing individual (in rows).

nsup: (matrix) number of non-zero predictors at each lambda solution for each testing individual (in rows).

file_beta: path where regression coefficients are saved.
The function 'SSI.CV' returns a list object of length nCV of class 'SSI.CV' for which methods plot and summary exist. Optimal cross-validated penalization values can be obtained using the summary method.
References

Efron B, Hastie T, Johnstone I, Tibshirani R (2004). Least angle regression. The Annals of Statistics, 32(2), 407–499.

Friedman J, Hastie T, Höfling H, Tibshirani R (2007). Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2), 302–332.

Hoerl AE, Kennard RW (1970). Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12(1), 55–67.

Lush JL (1947). Family merit and individual merit as bases for selection. Part I. The American Naturalist, 81(799), 241–261.

Tibshirani R (1996). Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society B, 58(1), 267–288.

VanRaden PM (2008). Efficient methods to compute genomic predictions. Journal of Dairy Science, 91(11), 4414–4423.

Zou H, Hastie T (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society B, 67(2), 301–320.
Examples

require(SFSI)
data(wheatHTP)
index = which(Y$trial %in% 1:8) # Use only a subset of data
Y = Y[index,]
M = scale(M[index,])/sqrt(ncol(M)) # Subset and scale markers
G = tcrossprod(M) # Genomic relationship matrix
y = as.vector(scale(Y[,"E1"])) # Scale response variable
# Sets (testing=0, training=1)
trn_tst = ifelse(Y$trial %in% 1:2, 0, 1)
# Sparse selection index
# Fixed effects b and variances varU and varE are
# computed internally from training data
fm1 = SSI(y,K=G,trn_tst=trn_tst)
summary(fm1)$optCOR
varU = fm1$varU
varE = fm1$varE
b = fm1$b
#---------------------------------------------------
# Predicting a testing set using a value of lambda
# obtained from cross-validation in a training set
#---------------------------------------------------
# Run a cross-validation in the training set
fm2 = SSI.CV(y,K=G,varU=varU,varE=varE,b=b,trn_tst=trn_tst,nfolds=5,name="1 5CV")
lambda = summary(fm2)$optCOR["lambda"]
# Fit the index with the obtained lambda
fm3 = SSI(y,K=G,varU=varU,varE=varE,b=b,trn_tst=trn_tst,lambda=lambda)
summary(fm3)$accuracy # Testing set accuracy
# Compare the accuracy with that of the non-sparse index (G-BLUP)
summary(fm1)$accuracy[fm1$nlambda,1] # we take the last one
# Obtain an 'optimal' lambda by repeating the CV several times
fm22 = SSI.CV(y,K=G,varU=varU,varE=varE,b=b,trn_tst=trn_tst,nCV=5,name="5 5CV")
plot(fm22,fm2)
#---------------------------------------------------
# Multi-trait SSI
#---------------------------------------------------
y = scale(Y[,4:6]) # Response variable
# Sets (testing=0, training=1)
trn_tst = matrix(NA,ncol=ncol(y),nrow=nrow(y))
trn_tst[,1] = ifelse(Y$trial %in% 1:2, 0, 1)
trn_tst[,2] = ifelse(Y$trial %in% 1, 0, 1)
trn_tst[,3] = ifelse(Y$trial %in% 2, 0, 1)
fm = SSI(y, K=G, trn_tst=trn_tst, mc.cores=1)