Description
Function that performs a bootstrap experiment of a learning system on a given data set. The function is completely generic: the system to evaluate is supplied as a user-defined function that takes care of the learning, the testing, and the calculation of the statistics that the user wants to estimate by the bootstrap method.
Usage

bootstrap(sys, ds, sets, itsInfo = F, verbose = T)
Arguments

sys
An object of class learner representing the learning system to be evaluated (see the Examples section).

ds
An object of class dataset representing the data set on which the experiment is run.

sets
An object of class bootSettings representing the experimental settings of the bootstrap experiment (the random number generator seed and the number of repetitions, as in the Examples section).

itsInfo
Boolean value determining whether the object returned by the function should include as an attribute a list with as many components as there are iterations in the experimental process, with each component containing information that the user-defined function decides to return on top of the standard error statistics. See the Details section for more information.

verbose
A boolean value controlling the level of output of the function execution, defaulting to TRUE.
Details

The idea of this function is to carry out a bootstrap experiment with a given learning system on a given data set. The goal of this experiment is to estimate the value of a set of evaluation statistics by means of the bootstrap method. Bootstrap estimates are obtained by averaging over a set of k scores, each obtained as follows: i) draw a random sample with replacement of the same size as the original data set; ii) obtain a model with this sample; iii) test the model on the observations of the original data set that were not used in the sample drawn in step i) and obtain the scores for this repetition. This process is repeated k times, and the averages of the scores are the bootstrap estimates.
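To make this resampling scheme concrete, the following is a minimal sketch of the loop in plain R (an illustration only, not the implementation used by bootstrap(); 'score' stands for any hypothetical function(train, test) returning a named vector of evaluation statistics):

boot.sketch <- function(data, score, k = 100) {
  n <- nrow(data)
  reps <- lapply(1:k, function(i) {
    idx <- sample(n, n, replace = TRUE)   # i) sample with replacement
    oob <- setdiff(1:n, idx)              # observations left out of the sample
    score(data[idx, ], data[oob, ])       # ii) + iii) learn, then test on them
  })
  rowMeans(do.call(cbind, reps))          # the bootstrap estimates
}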
It is the user's responsibility to decide which statistics are to be evaluated on each iteration and how they are calculated. This is done by writing a function that the user knows will be called by this bootstrap routine at each repetition of the learn+test process. This user-defined function must assume that it will receive in its first three arguments a formula, a training set and a testing set, respectively. It should also assume that it may receive any other set of parameters, which should be passed on to the learning algorithm. The result of this user-defined function should be a named vector with the values of the statistics to be estimated, obtained by the learner when trained with the given training set and tested on the given test set. See the Examples section below for an example of these functions, and the skeleton that follows.
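As a quick illustration of this contract, here is a minimal skeleton (some.modelling.function is a hypothetical placeholder for any learner; resp() is the helper used in the Examples below to extract the target values of a formula from a data set):

my.learner <- function(form, train, test, ...) {
  model <- some.modelling.function(form, train, ...)  # learning stage
  preds <- predict(model, test)                       # testing stage
  c(mse = mean((resp(form, test) - preds)^2))         # named statistics vector
}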
If the itsInfo parameter is set to TRUE then the bootRun object that results from the function will have an attribute named itsInfo containing extra information from the individual repetitions of the bootstrap process. This information can be accessed by the user with the function attr(), e.g. attr(returnedObject, 'itsInfo'). For this information to be collected in this attribute the user needs to code the user-defined function in such a way that it returns the vector of evaluation statistics with an associated attribute named itInfo (note that it is "itInfo" and not "itsInfo" as above), which should be a list containing whatever information the user wants to collect on each repetition. This apparently complex infrastructure allows you to pass whatever information you wish from each iteration of the experimental process. A typical example is the case where you want to check the individual predictions of the model on each test case of each repetition. You could pass this vector of predictions as a component of the list forming the attribute itInfo of the statistics returned by your user-defined function. At the end of the experimental process you will be able to inspect/use these predictions through the attribute itsInfo of the bootRun object returned by the bootstrap() function. See the Examples section of the help page of the function holdOut() for an illustration of this possibility.
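As a concrete sketch of this mechanism (the function name my.rpart and the list component name preds are illustrative, not part of the package), a user-defined function could attach the test set predictions of each repetition as follows:

my.rpart <- function(form, train, test, ...) {
  require(rpart)
  model <- rpart(form, train, ...)
  preds <- predict(model, test)
  res <- regr.eval(resp(form, test), preds, stats = 'mse')
  attr(res, 'itInfo') <- list(preds = preds)  # collected when itsInfo = TRUE
  res
}

Running bootstrap() with itsInfo = TRUE and this function, the per-repetition predictions would afterwards be available through attr() on the returned bootRun object.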
Value

The result of the function is an object of class bootRun.
Author(s)

Luis Torgo ltorgo@dcc.fc.up.pt
References

Torgo, L. (2010) Data Mining using R: Learning with Case Studies. CRC Press (ISBN: 9781439810187). http://www.dcc.fc.up.pt/~ltorgo/DataMiningWithR
See Also

experimentalComparison, bootRun, bootSettings, monteCarlo, holdOut, loocv, crossValidation
Examples

## Estimating the mean absolute error and the normalized mean squared
## error of rpart on the swiss data, using 10 repetitions of the
## bootstrap method
library(DMwR)  # provides bootstrap(), learner(), dataset(), bootSettings(), regr.eval(), resp()
data(swiss)
## First the user defined function (note: can have any name)
user.rpart <- function(form, train, test, ...) {
require(rpart)
model <- rpart(form, train, ...)
preds <- predict(model, test)
regr.eval(resp(form, test), preds,
stats=c('mae','nmse'), train.y=resp(form, train))
}
## Now the evaluation
eval.res <- bootstrap(learner('user.rpart',pars=list()),
dataset(Infant.Mortality ~ ., swiss),
bootSettings(1234,10)) # bootstrap with 10 repetitions
## Check a summary of the results
summary(eval.res)
## Plot them
## Not run:
plot(eval.res)
## End(Not run)