leaps | R Documentation
Description

leaps() performs an exhaustive search for the best subsets of the variables in x for predicting y in linear regression, using an efficient branch-and-bound algorithm. It is a compatibility wrapper for regsubsets, which does the same thing better.
Since the algorithm returns a best model of each size, the results do not depend on a penalty model for model size: it doesn't make any difference whether you want to use AIC, BIC, CIC, DIC, ...
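For illustration (a sketch, not part of the original help page): with nbest = 1 the selected subsets of each size coincide whichever criterion is requested; only the statistic reported for them changes.

library(leaps)
set.seed(1)
x <- matrix(rnorm(100), ncol = 4)
y <- rnorm(25)
out_cp  <- leaps(x, y, method = "Cp",    nbest = 1)   # best model of each size, Cp reported
out_ar2 <- leaps(x, y, method = "adjr2", nbest = 1)   # same search, adjusted R-squared reported
out_cp$which                                          # selected subsets per size...
out_ar2$which                                         # ...should list the same subsets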
Usage

leaps(x=, y=, wt=rep(1, NROW(x)), int=TRUE, method=c("Cp", "adjr2", "r2"),
      nbest=10, names=NULL, df=NROW(x), strictly.compatible=TRUE)
Arguments

x:                    A matrix of predictors
y:                    A response vector
wt:                   Optional weight vector
int:                  Add an intercept to the model
method:               Calculate Cp, adjusted R-squared, or R-squared
nbest:                Number of subsets of each size to report
names:                Vector of names for the columns of x
df:                   Total degrees of freedom to use instead of NROW(x)
strictly.compatible:  Implement misfeatures of leaps() in S
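A hedged sketch of a call exercising several of these arguments; the data and the predictor labels ("x1" to "x4") are made up for illustration.

library(leaps)
set.seed(2)
xm <- matrix(rnorm(120), ncol = 4)
yv <- rnorm(30)
out <- leaps(xm, yv,
             wt     = rep(1, 30),                 # unit weights (the default)
             int    = TRUE,                       # fit an intercept
             method = "adjr2",                    # report adjusted R-squared
             nbest  = 2,                          # two best subsets of each size
             names  = c("x1", "x2", "x3", "x4"))  # labels for the columns of x
out$label                                         # labels used in the output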
Value

A list with components:
which:  A logical matrix. Each row can be used to select the columns of x in the corresponding model
size:   Number of variables, including intercept if any, in the model
cp:     Or adjr2 or r2, depending on method; the value of the selection criterion for each model
label:  Vector of names for the columns of x
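A sketch (not from the help page) of how these components can be used together: pick the subset with the smallest Cp and refit it with lm(). The criterion component is assumed here to be named Cp when method = "Cp"; check names(out) if in doubt.

library(leaps)
set.seed(3)
xm <- matrix(rnorm(200), ncol = 5)
yv <- xm[, 2] + 0.5 * xm[, 4] + rnorm(40)
out <- leaps(xm, yv, method = "Cp", nbest = 1)
names(out)                               # check the component names returned
best <- which.min(out$Cp)                # smallest Cp (component name assumed)
out$which[best, ]                        # logical row selecting the columns of x
fit <- lm(yv ~ xm[, out$which[best, ], drop = FALSE])   # refit the chosen subset
summary(fit)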
Note

With strictly.compatible=TRUE the function will stop with an error if x is not of full rank or if it has more than 31 columns. It will ignore the column names of x even if names==NULL, and will replace them with "0" to "9", "A" to "Z".
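For example (an illustration of the behaviour described above, not taken from the help page):

library(leaps)
xa <- matrix(rnorm(32 * 40), ncol = 32)   # 32 columns: one more than leaps() accepts
ya <- rnorm(40)
try(leaps(xa, ya))                        # stops with an error under strictly.compatible=TRUE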
References

Alan Miller, "Subset Selection in Regression", Chapman & Hall.
See Also

regsubsets, regsubsets.formula, regsubsets.default
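A hedged sketch of the regsubsets() call covering the same ground as leaps() above (arguments per the regsubsets default method; see its help page for the full interface):

library(leaps)
x <- matrix(rnorm(100), ncol = 4)
y <- rnorm(25)
old <- leaps(x, y, nbest = 3)                              # compatibility interface
new <- regsubsets(x, y, nbest = 3, method = "exhaustive")  # recommended interface
summary(new)$which                                         # selected subsets of each size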
Examples

x <- matrix(rnorm(100), ncol = 4)
y <- rnorm(25)
leaps(x, y)
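A possible follow-up (not part of the original example): plot the criterion against model size to compare the best subsets of each size. The Cp component name is assumed from the default method.

out <- leaps(x, y, method = "Cp", nbest = 5)    # x and y as defined above
plot(out$size, out$Cp,                          # Cp component name assumed for method = "Cp"
     xlab = "Number of terms in model", ylab = "Mallows' Cp")
abline(0, 1)                                    # reference line: good models have Cp near size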