Vignette of the oosse package

Introduction

This vignette demonstrates the use of the oosse package for estimating the out-of-sample R² and its standard error through resampling algorithms, as described in "Out-of-sample R²: estimation and inference" by Hawinkel et al. (2023).


Installation instructions

install.packages("oosse")
library(oosse)

Illustrative examples

The R2oosse function works with any pair of fitting and prediction functions. Here we illustrate a number of such pairs, but any prediction function implemented in R can be used. The built-in dataset Brassica is used, which contains rlog-transformed gene expression measurements for the 1,000 most expressed genes in the Expr slot, as well as 5 outcome phenotypes in the Pheno slot.

data(Brassica)
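
To get a feel for the data before modelling, we can inspect the two slots (a quick look only; the exact dimensions depend on the data shipped with the package):

# Expression matrix: observations in rows, genes in columns
dim(Brassica$Expr)
# Phenotype data frame containing the outcome variables
str(Brassica$Pheno)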

Linear model

The fitting model must accept at least an outcome vector y and a regressor matrix x:

fitFunLM = function(y, x){lm.fit(y = y, x = cbind(1, x))}

The prediction function must accept arguments mod (the fitted model) and x, the regressor matrix for a new set of observations:

predFunLM = function(mod, x) {cbind(1,x) %*% mod$coef}
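
Before handing these two functions to R2oosse, they can be checked manually on an illustrative training/test split (the seed and 70/30 split below are arbitrary and only meant to show how the pair interacts):

# Illustrative check of the fit/predict pair on a random training/test split
set.seed(1)
trainId = sample(nrow(Brassica$Expr), round(0.7 * nrow(Brassica$Expr)))
modLM = fitFunLM(y = Brassica$Pheno$Leaf_8_width[trainId], x = Brassica$Expr[trainId, 1:5])
head(predFunLM(modLM, Brassica$Expr[-trainId, 1:5]))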

Now that these functions have been defined, we apply the prediction model for Leaf_8_width using the first five genes. Multithreading is used automatically through the BiocParallel package; change the following setup depending on your system.

nCores = 2 # For CRAN build max 2
library(BiocParallel)
if(.Platform$OS.type == "unix"){
    #On unix-based systems, use MulticoreParam
    register(MulticoreParam(nCores))
} else {
    #On windows, use makeCluster
    library(doParallel)
    Clus = makeCluster(nCores)
    registerDoParallel(Clus)
    register(DoparParam(), default = TRUE)
}
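
The backend that BiocParallel will use can be verified as follows:

# Show the currently registered parallel backend
bpparam()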

Now we estimate the out-of-sample $R^2$; a rough estimate of the computation time is also given. Remember to register the parallel backend, as done above, for multithreading.

R2lm = R2oosse(y = Brassica$Pheno$Leaf_8_width, x = Brassica$Expr[, 1:5], 
               fitFun = fitFunLM, predFun = predFunLM)

Estimates and standard error of the different components are now available.

# R2
R2lm$R2
# MSE
R2lm$MSE
# MST
R2lm$MST
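
By definition the out-of-sample $R^2$ equals $1 - MSE/MST$, which can be verified from the returned components (the [1] indexing assumes the first element of each component is the point estimate):

# Verify the relation R2 = 1 - MSE/MST (first element assumed to be the point estimate)
1 - R2lm$MSE[1]/R2lm$MST[1]
R2lm$R2[1]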

Confidence intervals can also be constructed:

# R2
buildConfInt(R2lm)
# MSE, 90% confidence interval
buildConfInt(R2lm, what = "MSE", conf = 0.9)
# MST, based on the chi-square distribution
buildConfInt(R2lm, what = "MST")

By default, cross-validation (CV) is used to estimate the MSE, and nonparametric bootstrapping is used to estimate the correlation between the MSE and MST estimators. Other choices can be supplied, e.g. the .632 bootstrap for estimating the MSE and the jackknife for estimating the correlation:

R2lm632jn = R2oosse(y = Brassica$Pheno$Leaf_8_width, x = Brassica$Expr[, 1:5], 
                    fitFun = fitFunLM, predFun = predFunLM, methodMSE = "bootstrap",
                    methodCor = "jackknife")
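
The resulting estimate and standard error can then be compared with the cross-validation based ones:

# Compare the CV-based and .632 bootstrap-based estimates
R2lm$R2
R2lm632jn$R2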

Supplying a data frame with predictor variables is not allowed; the user is asked to build the design matrix themselves with model.matrix prior to calling R2oosse:

# We construct a fake data frame that also contains a genotype factor
fakeDf = data.frame(Brassica$Expr[, 1:5],
                    "genotype" = sample(c("Genotype1", "Genotype2", "Genotype3"),
                                        size = nrow(Brassica$Expr), replace = TRUE))
# Build the design matrix; model.matrix automatically constructs dummy variables for factors
designMatrix = model.matrix(~ ., data = fakeDf)[, -1] # Drop the intercept column, as the fitting function already adds one
#Now run oosse
R2modMat = R2oosse(y = Brassica$Pheno$Leaf_8_width, x = designMatrix, 
                    fitFun = fitFunLM, predFun = predFunLM)
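
Inspecting the first rows of the design matrix shows the dummy variables that model.matrix created for the genotype factor:

# The genotype factor has been expanded into dummy (0/1) columns
head(designMatrix)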

Regularised linear model

For high-dimensional problems, such as the Brassica dataset, a regularised linear model is better suited to incorporate information from all genes. We use the cv.glmnet function from the glmnet package, which includes internal cross-validation for tuning the penalty parameter. The following custom function definitions are needed to conform to the argument naming convention of the oosse package.

fitFunReg = function(y, x, ...) {cv.glmnet(y = y, x = x, ...)}
predFunReg = function(mod, x, ...){predict(mod, newx = x, ...)}
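
These wrappers can also be tried directly on an illustrative split before passing them to R2oosse (assuming the glmnet package is available; the alpha and s arguments below are simply forwarded through the ... arguments):

if(require(glmnet)){
    # Illustrative direct use of the wrappers; alpha and s are forwarded via ...
    set.seed(1)
    trainId = sample(nrow(Brassica$Expr), round(0.7 * nrow(Brassica$Expr)))
    penMod = fitFunReg(y = Brassica$Pheno$Leaf_8_width[trainId],
                       x = Brassica$Expr[trainId, 1:25], alpha = 1)
    head(predFunReg(penMod, Brassica$Expr[-trainId, 1:25], s = "lambda.min"))
}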

We adapt the parameter settings a bit to reduce the computation time of the vignette; in practice it is recommended to use 10-fold cross-validation, at least 200 repeats of the cross-validation splits, and 50 bootstrap replicates.

nFolds = 5; cvReps = 1e2; nBoots = 4e1; numFeat = 25
if(require(glmnet)){
    if(onWindows <- (.Platform$OS.type == "windows")){
        clusterEvalQ(Clus, require(glmnet))
    }
    R2pen = R2oosse(y = Brassica$Pheno$Leaf_8_width, x = Brassica$Expr[, seq_len(numFeat)], # Subset genes for speed
                    nFolds = nFolds, cvReps = cvReps, nBootstrapsCor = nBoots,
                    fitFun = fitFunReg, predFun = predFunReg, alpha = 1) # Lasso model
    R2pen$R2
}

Random forest

As a final example, we use a random forest as the prediction model, with the implementation from the randomForest package.

if(require(randomForest)){
    if(onWindows){
        clusterEvalQ(Clus, require(randomForest))
    }
    fitFunrf = function(y, x, ...){randomForest(y = y, x = x, ...)}
    predFunrf = function(mod, x, ...){predict(mod, x, ...)}
    R2rf = R2oosse(y = Brassica$Pheno$Leaf_8_width, x = Brassica$Expr[, seq_len(numFeat)],
                   nFolds = nFolds, cvReps = cvReps, nBootstrapsCor = nBoots,
                   fitFun = fitFunrf, predFun = predFunrf)
    R2rf$R2
}
if(onWindows){
    stopCluster(Clus)
}

The $R^2$ estimate is comparable to that of the penalised regression model.
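
If both conditional code blocks above were executed, the two point estimates can be printed side by side (a purely illustrative comparison; the [1] indexing again assumes the first element is the point estimate):

# Side-by-side comparison of the penalised regression and random forest estimates
if(exists("R2pen") && exists("R2rf")){
    c("penalised" = R2pen$R2[1], "random forest" = R2rf$R2[1])
}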

Session info

sessionInfo()



