tuneParamCL2: Tune parameters w and lamda using the CL2 penalty


View source: R/add.r

Description

Performs k-fold cross-validation with the function optimPenalLikL2 and returns the values of lamda and w that maximize the area under the ROC curve (AUC).

Usage

tuneParamCL2(Data, nfolds = nfolds, grid, algorithm = c("QN"))

Arguments

Data

a data frame; the first column should contain the response variable y and the remaining columns the predictors

nfolds

the number of folds used for cross-validation; nfolds must be at least 2

grid

a grid (data frame) of lamda and w values over which the model is tuned. It can be created with expand.grid; see the sketch below

algorithm

choose between BFGS ("QN") and Hooke-Jeeves derivative-free optimization ("hjk") to be used for optimization
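
A minimal sketch of building a tuning grid with expand.grid and calling tuneParamCL2. The grid column names lamda and w, the simulated data, and the chosen grid values are illustrative assumptions and are not taken from the package examples.

library(stepPenal)

set.seed(14)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
## binary response generated from a logistic model (illustrative assumption)
y  <- rbinom(n, 1, plogis(0.5 * x1 - 0.8 * x2))
Data <- data.frame(y = y, x1 = x1, x2 = x2, x3 = x3)  # y must be the first column

## candidate values of lamda and w; expand.grid builds all combinations
grid <- expand.grid(lamda = c(0.5, 1, 1.5), w = seq(0.1, 0.9, by = 0.2))

## 5-fold cross-validation with the BFGS ("QN") optimizer
cvFit <- tuneParamCL2(Data, nfolds = 5, grid = grid, algorithm = c("QN"))
cvFit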

Details

It supports the BFGS optimization method ('QN') from the optim function in the stats package and the Hooke-Jeeves derivative-free minimization algorithm ('hjk'). The values of lamda and w that yield the maximum AUC on the cross-validation data are selected. If more than one combination of lamda and w yields the same AUC, the largest values of lamda and w are chosen.

Value

A matrix containing the average (over folds) cross-validated AUC, the total number of variables (totalVariables) selected on the training set, and the standard deviation of the AUC across the nfolds folds.

