Description

An observation is removed, the model is fitted to the remaining data, and the fitted model is used to predict the value of the deleted observation. This is repeated n times, once for each of the n observations, and the mean square error of these predictions is computed.
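For concreteness, the procedure can be written as a short loop. The following is a minimal sketch for a linear model fitted with lm(), assuming X is a data frame of predictors and y a numeric response; it illustrates the idea and is not the package's internal implementation.

loocv_naive <- function(X, y) {
  d <- data.frame(X, y = y)
  n <- length(y)
  sqerr <- numeric(n)
  for (i in seq_len(n)) {
    fit <- lm(y ~ ., data = d[-i, , drop = FALSE])       # fit without observation i
    pred <- predict(fit, newdata = d[i, , drop = FALSE])  # predict the held-out case
    sqerr[i] <- (y[i] - pred)^2                           # squared prediction error
  }
  mean(sqerr)                                             # cross-validation MSE
}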
Usage

LOOCV(X, y)
Arguments

X    training inputs
y    training output
Details

LOOCV for linear regression is exactly equivalent to the PRESS criterion suggested by Allen (1971), who also provided an efficient algorithm for computing it.
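The shortcut rests on a standard identity for least squares: the deleted residual for observation i equals e_i / (1 - h_ii), where e_i is the ordinary residual and h_ii the leverage, so the LOOCV mean square error is the PRESS statistic divided by n. A minimal sketch of this computation (assuming, as above, X is a data frame of predictors and y the response; not the package's own code):

press_loocv <- function(X, y) {
  fit <- lm(y ~ ., data = data.frame(X, y = y))
  h <- hatvalues(fit)              # leverages h_ii
  e <- residuals(fit)              # ordinary residuals
  mean((e / (1 - h))^2)            # PRESS / n = LOOCV MSE
}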
Value

A vector of two components: the cross-validation MSE and its standard deviation, based on the MSE in each validation sample.
Author(s)

A.I. McLeod and C. Xu
References

Hastie, T., Tibshirani, R. and Friedman, J. (2009). The Elements of Statistical Learning, 2nd Ed.

Allen, D.M. (1971). Mean Square Error of Prediction as a Criterion for Selecting Variables. Technometrics, 13, 469-475.
Examples

#Example. Compare LOO CV with K-fold CV.
#Find CV MSE's for LOOCV and compare with K=5, 10, 20, 40, 50, 60
#Takes about 30 sec
## Not run:
data(zprostate)
# keep the training cases and drop the train/test indicator (column 10)
train <- (zprostate[zprostate[, 10], ])[, -10]
X <- train[, 1:2]
y <- train[, 9]
# leave-one-out CV
CVLOO <- LOOCV(X, y)
# K-fold CV, via CVDH(), for several choices of K
KS <- c(5, 10, 20, 40, 50, 60)
nKS <- length(KS)
cvs <- numeric(nKS)
set.seed(1233211231)
for (iK in 1:nKS)
  cvs[iK] <- CVDH(X, y, K = KS[iK], REP = 10)[1]
boxplot(cvs)
abline(h = CVLOO, lwd = 3, col = "red")
title(sub = "Boxplot of CV's with K=5,10,20,40,50,60 and LOO CV in red")
## End(Not run)