SSI: Sparse Selection Index

View source: R/SSI.R

Sparse Selection Index

Description

Computes the entire Elastic-Net solution for the regression coefficients of a Selection Index for a grid of values of the penalization parameter.

An optimal penalization can be chosen using cross-validation (CV) within a specific training set.

Usage

SSI(y, X = NULL, b = NULL, Z = NULL, K, trn_tst = NULL, 
    varU = NULL, varE = NULL, intercept = TRUE, 
    alpha = 1, lambda = NULL, nlambda = 100,
    lambda.min = .Machine$double.eps^0.5,
    common.lambda = TRUE, subset = NULL, tol = 1E-4, 
    maxiter = 500, method = c("REML","ML"), name = NULL, 
    save.at = NULL, mc.cores = 1L,
    precision.format = c("single","double"),
    verbose = TRUE)
    
SSI.CV(y, X = NULL, b = NULL, Z = NULL, K, trn_tst = NULL,
       varU = NULL, varE = NULL, intercept = TRUE, 
       alpha = 1, lambda = NULL, nlambda = 100, 
       lambda.min = .Machine$double.eps^0.5,
       common.lambda = TRUE, nCV = 1L, nfolds = 5, seed = NULL,
       tol = 1E-4, maxiter = 500, method = c("REML","ML"),
       name = NULL, mc.cores = 1L, verbose = TRUE)

Arguments

y

(numeric matrix) Response variable. It can contain more than one column for multi-trait analysis

X

(numeric matrix) Design matrix for fixed effects. When X=NULL, a vector of ones is constructed only for the intercept (default)

b

(numeric vector) Fixed effects. When b=NULL, only the intercept is estimated from training data using generalized least squares (default)

K

(numeric matrix) Kinship relationship matrix

Z

(numeric matrix) Design matrix for random effects. When Z=NULL an identity matrix is considered (default) thus G = K; otherwise G = Z K Z' is used

varU, varE

(numeric) Genetic and residual variances. When either varU=NULL or varE=NULL (default), they are estimated from training data using the function 'fitBLUP' (see help(fitBLUP)). When y has more than one column, varU and varE are matrices estimated pairwise (see help(getGenCov))

intercept

TRUE or FALSE indicating whether to fit an intercept. When FALSE, the model assumes a null intercept

trn_tst

(integer vector) Which elements from matrix y are in the training/testing set (0: testing, 1: training; NA entries are ignored). Default trn_tst=NULL considers all individuals as training

subset

(integer vector) c(m,M) to fit the model only for the mth subset out of M subsets into which the testing set will be divided. Results can be automatically saved when the save.at argument is provided, and can be retrieved later using the function 'collect' (see help(collect)). Default is subset=NULL for no subsetting; the model is then fitted for all testing data

alpha

(numeric) Value between 0 and 1 for the weights given to the L1 and L2-penalties

lambda

(numeric vector) Penalization parameter sequence. Default is lambda=NULL, in which case a decreasing grid of nlambda lambdas is generated starting from a maximum equal to

max(abs(G[trn,tst])/alpha)

down to a minimum given by lambda.min (near zero by default). If alpha=0, the grid is generated starting from a maximum equal to 5

nlambda

(integer) Number of lambdas generated when lambda=NULL

lambda.min

(numeric) Minimum value of lambda in the generated grid when lambda=NULL

nfolds

(integer/character) Either 2, 3, 5, 10, or 'n', indicating the number of non-overlapping folds into which the data is split for cross-validation. When nfolds='n', leave-one-out CV is performed

seed

(numeric vector) Seed to fix randomization when creating folds for cross-validation. If a vector, as many CV repetitions as its length are performed

nCV

(integer) Number of CV repetitions to be performed. Default is nCV=1

common.lambda

TRUE or FALSE indicating whether to compute the coefficients for a grid of lambdas common to all individuals in the testing set (TRUE) or for a grid specific to each testing individual (FALSE). Default is common.lambda=TRUE

mc.cores

(integer) Number of cores used. When mc.cores > 1, the analysis is run in parallel for each testing set individual. Default is mc.cores=1

tol

(numeric) Maximum error between two consecutive solutions of the CD algorithm to declare convergence

maxiter

(integer) Maximum number of iterations of the CD algorithm to run at each lambda step; iteration stops earlier if convergence is reached

save.at

(character) Path where files (regression coefficients and output object) are to be saved (this may include a prefix added to the files). Default save.at=NULL does not save any results; they are returned in the output object

precision.format

(character) Either 'single' or 'double' for numeric precision and memory occupancy (4 or 8 bytes, respectively) of the regression coefficients. This is only used when save.at is not NULL

method

(character) Either 'REML' (Restricted Maximum Likelihood) or 'ML' (Maximum Likelihood) to calculate variance components as per the function 'fitBLUP'

name

(character) Name given to the output for tagging purposes. Default name=NULL will give the name of the method used

verbose

TRUE or FALSE indicating whether to print progress at each step
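For illustration, the default lambda grid described above (arguments lambda, nlambda, and lambda.min) can be mimicked in base R. This is a sketch only, not the package's internal code: the helper name lambda_grid is hypothetical, and the log-spacing of the grid is an assumption.

```r
# Sketch of a decreasing penalty grid, as described for the 'lambda',
# 'nlambda', and 'lambda.min' arguments. Log-spacing is an assumption;
# the exact spacing used internally by SFSI may differ.
lambda_grid <- function(G_trn_tst, alpha, nlambda = 100,
                        lambda.min = .Machine$double.eps^0.5) {
  # Maximum: max(abs(G[trn,tst])/alpha); when alpha = 0, a maximum of 5 is used
  lambda.max <- if (alpha > 0) max(abs(G_trn_tst))/alpha else 5
  exp(seq(log(lambda.max), log(lambda.min), length.out = nlambda))
}

set.seed(1)
G <- tcrossprod(matrix(rnorm(20), 5, 4))   # toy relationship matrix
lam <- lambda_grid(G[1:3, 4:5], alpha = 1, nlambda = 10)
```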

Details

The basic linear mixed model that relates phenotypes with genetic values is of the form

y = X b + Z g + e

where y is a vector with the response, b is the vector of fixed effects, g is the vector of genetic values of the genotypes, e is the vector of environmental residuals, and X and Z are design matrices connecting the fixed and genetic effects with replicates. Genetic values are assumed to follow a Normal distribution as g ~ N(0, σ2u K), and environmental terms are assumed e ~ N(0, σ2e I).

The resulting vector of genetic values u = Z g will therefore follow u ~ N(0, σ2u G) where G = Z K Z'. In the un-replicated case, Z = I is an identity matrix, and hence u = g and G = K.
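The relationship G = Z K Z' can be checked with toy matrices in base R; a minimal sketch with made-up dimensions (3 genotypes observed in 5 records):

```r
# Toy example: 3 genotypes observed in 5 phenotypic records (replicates)
K <- diag(3)                        # kinship among genotypes
Z <- matrix(0, nrow = 5, ncol = 3)  # incidence of records on genotypes
Z[cbind(1:5, c(1, 1, 2, 3, 3))] <- 1
G <- Z %*% K %*% t(Z)               # covariance among records: G = Z K Z'

# Un-replicated case: Z = I recovers G = K
stopifnot(all.equal(diag(3) %*% K %*% t(diag(3)), K))
```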

The values utst = (ui), i = 1,2,...,ntst, for a testing set are estimated individual-wise using (as predictors) all available observations in a training set as

ui = β'i (ytrn - Xtrnb)

where βi is a vector of weights that are found separately for each individual in the testing set, by minimizing the penalized mean squared error function

-σ2u G'trn,tst(i) βi + 1/2 β'i (σ2u Gtrn + σ2e I) βi + λ J(βi)

where Gtrn,tst(i) is the ith column of the sub-matrix of G whose rows correspond to the training set and columns to the testing set; Gtrn is the sub-matrix corresponding to the training set; λ is the penalization parameter; and J(βi) is a penalty function given by

1/2(1-α)||βi||22 + α||βi||1

where 0 ≤ α ≤ 1, and ||βi||1 = ∑j |βij| and ||βi||22 = ∑j βij2 are the L1 and (squared) L2-norms, respectively.
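The penalty above is simple enough to write down directly; a minimal base-R sketch (the function name J is just the symbol used in the text):

```r
# Elastic-net penalty: J(beta) = (1-alpha)/2 * ||beta||_2^2 + alpha * ||beta||_1
J <- function(beta, alpha) (1 - alpha)/2 * sum(beta^2) + alpha * sum(abs(beta))

beta <- c(0.5, -0.2, 0)
J(beta, alpha = 1)   # pure L1 (LASSO) penalty: 0.7
J(beta, alpha = 0)   # pure L2 (ridge) penalty: 0.145
```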

Function 'SSI' calculates each individual solution using the function 'solveEN' (via the Coordinate Descent algorithm, see help(solveEN)) by setting the argument Sigma equal to σ2u Gtrn + σ2e I and Gamma equal to σ2u Gtrn,tst(i).
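To make the Sigma/Gamma formulation concrete, a bare-bones coordinate descent for the objective -Gamma'β + 1/2 β'Sigma β + λJ(β) can be sketched in base R. This is an illustrative re-implementation under that stated objective, not the optimized 'solveEN' routine; the names cd_en and soft are hypothetical.

```r
# Soft-thresholding operator used in each coordinate update
soft <- function(z, g) sign(z) * pmax(abs(z) - g, 0)

# Coordinate descent for: -Gamma'beta + 0.5 beta' Sigma beta + lambda*J(beta)
cd_en <- function(Sigma, Gamma, lambda, alpha = 1, tol = 1e-4, maxiter = 500) {
  p <- length(Gamma)
  beta <- rep(0, p)
  for (it in seq_len(maxiter)) {
    beta_old <- beta
    for (j in seq_len(p)) {
      r_j <- Gamma[j] - sum(Sigma[j, -j] * beta[-j])  # cross-term removed
      beta[j] <- soft(r_j, lambda * alpha) /
                 (Sigma[j, j] + lambda * (1 - alpha))
    }
    if (max(abs(beta - beta_old)) < tol) break  # convergence check ('tol')
  }
  beta
}

# With an identity Sigma, the solution is just a soft-thresholded Gamma
cd_en(diag(2), Gamma = c(1, 0.1), lambda = 0.2)   # c(0.8, 0)
```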

Function 'SSI.CV' performs cross-validation within the training data specified in argument trn_tst. The training data are divided into k folds, and the SSI is sequentially calculated for (all individuals in) one fold (as testing set) using information from the remaining folds (as training set).
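The k-fold split can be mimicked with base R; a sketch only (the helper make_folds is hypothetical, not the package's internal fold assignment):

```r
# Assign n training records to k roughly equal, non-overlapping folds
make_folds <- function(n, k, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)   # mirrors the role of the 'seed' argument
  sample(rep_len(1:k, n))              # randomly permuted fold labels
}

folds <- make_folds(n = 23, k = 5, seed = 123)
table(folds)   # fold sizes differ by at most one record
```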

Value

Function 'SSI' returns a list object of the class 'SSI' for which methods coef, fitted, plot, and summary exist. Functions 'net.plot' and 'path.plot' can also be used. It contains the elements:

  • b: (vector) fixed effects solutions (including the intercept).

  • Xb: (vector) product of the design matrix 'X' times the fixed effects solutions.

  • u: (matrix) genetic values for testing individuals (in rows) associated to each value of lambda (in columns).

  • varU, varE, h2: variance components solutions.

  • alpha: value for the elastic-net weights used.

  • lambda: (matrix) sequence of values of lambda used (in columns) for each testing individual (in rows).

  • nsup: (matrix) number of non-zero predictors at each solution given by lambda for each testing individual (in rows).

  • file_beta: path where regression coefficients are saved.

Function 'SSI.CV' returns a list object of length nCV of the class 'SSI.CV' for which plot and summary methods exist. Optimal cross-validated penalization values can be obtained using the summary method.

References

Efron B, Hastie T, Johnstone I, Tibshirani R (2004). Least angle regression. The Annals of Statistics, 32(2), 407–499.

Friedman J, Hastie T, Höfling H, Tibshirani R (2007). Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2), 302–332.

Hoerl AE, Kennard RW (1970). Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12(1), 55–67.

Lush JL (1947). Family merit and individual merit as bases for selection. Part I. The American Naturalist, 81(799), 241–261.

Tibshirani R (1996). Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society B, 58(1), 267–288.

VanRaden PM (2008). Efficient methods to compute genomic predictions. Journal of Dairy Science, 91(11), 4414–4423.

Zou H, Hastie T (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society B, 67(2), 301–320.

Examples

  require(SFSI)
  data(wheatHTP)
  
  index = which(Y$trial %in% 1:8)     # Use only a subset of data
  Y = Y[index,]
  M = scale(M[index,])/sqrt(ncol(M))  # Subset and scale markers
  G = tcrossprod(M)                   # Genomic relationship matrix
  y = as.vector(scale(Y[,"E1"]))      # Scale response variable
  
  # Sets (testing=0, training=1)
  trn_tst = ifelse(Y$trial %in% 1:2, 0, 1)
  
  # Sparse selection index
  # Fixed effects b and variances varU and varE are 
  # computed internally from training data
  fm1 = SSI(y,K=G,trn_tst=trn_tst)
  summary(fm1)$optCOR
  varU = fm1$varU
  varE = fm1$varE
  b = fm1$b
  
  #---------------------------------------------------
  # Predicting a testing set using a value of lambda
  # obtained from cross-validation in a training set
  #---------------------------------------------------
  # Run a cross validation in training set
  fm2 = SSI.CV(y,K=G,varU=varU,varE=varE,b=b,trn_tst=trn_tst,nfolds=5,name="1 5CV")
  lambda = summary(fm2)$optCOR["lambda"]

  # Fit the index with the obtained lambda
  fm3 = SSI(y,K=G,varU=varU,varE=varE,b=b,trn_tst=trn_tst,lambda=lambda)
  summary(fm3)$accuracy        # Testing set accuracy

  # Compare the accuracy with that of the non-sparse index (G-BLUP)
  summary(fm1)$accuracy[fm1$nlambda,1] # we take the last one
  
  # Obtain an 'optimal' lambda by repeating the CV several times
  fm22 = SSI.CV(y,K=G,varU=varU,varE=varE,b=b,trn_tst=trn_tst,nCV=5,name="5 5CV")
  plot(fm22,fm2)
  
  #---------------------------------------------------
  # Multi-trait SSI
  #---------------------------------------------------
  y = scale(Y[,4:6])    # Response variable
  
  # Sets (testing=0, training=1)
  trn_tst = matrix(NA,ncol=ncol(y),nrow=nrow(y))
  trn_tst[,1] = ifelse(Y$trial %in% 1:2, 0, 1)
  trn_tst[,2] = ifelse(Y$trial %in% 1, 0, 1)
  trn_tst[,3] = ifelse(Y$trial %in% 2, 0, 1)
  
  fm = SSI(y, K=G, trn_tst=trn_tst, mc.cores=1)
  

SFSI documentation built on Nov. 18, 2023, 9:06 a.m.