View source: R/apollo_outOfSample.R
apollo_outOfSample (R Documentation)
Randomly generates estimation and validation samples, estimates the model on the first and calculates the likelihood for the second, then repeats.
apollo_outOfSample(
  apollo_beta,
  apollo_fixed,
  apollo_probabilities,
  apollo_inputs,
  estimate_settings = list(estimationRoutine = "bgw", maxIterations = 200,
    writeIter = FALSE, hessianRoutine = "none", printLevel = 3L, silent = TRUE),
  outOfSample_settings = list(nRep = 10, validationSize = 0.1, samples = NA,
    rmse = NULL)
)
apollo_beta
  Named numeric vector. Names and values for parameters.
apollo_fixed
  Character vector. Names (as defined in apollo_beta) of parameters whose value should not change during estimation.
apollo_probabilities
  Function. Returns probabilities of the model to be estimated. Must receive three arguments: apollo_beta, apollo_inputs and functionality.
apollo_inputs
  List grouping most common inputs. Created by function apollo_validateInputs.
estimate_settings
  List. Options controlling the estimation process. See apollo_estimate.
outOfSample_settings
  List. Contains settings for this function. User input is required for all settings except those with a default or marked as optional.
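A call to this function might look as follows. This is a minimal sketch: the settings values are illustrative, and `apollo_beta`, `apollo_fixed`, `apollo_probabilities` and `apollo_inputs` are assumed to be defined beforehand in the usual way.

```r
# Illustrative settings; names follow the usage block above,
# values are examples only.
estimate_settings <- list(
  estimationRoutine = "bgw",   # same default as in the usage block
  maxIterations     = 200,
  writeIter         = FALSE,
  hessianRoutine    = "none",  # no standard errors needed per repetition
  printLevel        = 3L,
  silent            = TRUE
)

outOfSample_settings <- list(
  nRep           = 30,   # more repetitions than the default of 10
  validationSize = 0.1   # hold out roughly 10% of the sample each time
)

# With the model components already set up (see apollo_validateInputs),
# the call would be:
# LLmatrix <- apollo_outOfSample(apollo_beta, apollo_fixed,
#                                apollo_probabilities, apollo_inputs,
#                                estimate_settings, outOfSample_settings)
```

Settings omitted from `outOfSample_settings` (here `samples` and `rmse`) keep their defaults.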
A common way to test for overfitting of a model is to measure its fit on a sample not used during estimation; that is, to measure its out-of-sample fit. A simple way to do this is to split the complete available dataset into two parts: an estimation sample and a validation sample. The model of interest is estimated using only the estimation sample, and those estimated parameters are then used to measure the fit of the model (e.g. its log-likelihood) on the validation sample. Doing this with only one validation sample, however, may lead to biased results, as a particular validation sample need not be representative of the population. One way to minimise this issue is to randomly draw several pairs of estimation and validation samples from the complete dataset, and apply the procedure to each pair.
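The repeated estimation/validation idea can be illustrated with a plain linear model on the built-in mtcars data (this is not apollo-specific; it only mimics the procedure the function automates):

```r
# Repeatedly split the data, fit on the estimation part, and score the
# held-out validation part by average log-likelihood per observation.
set.seed(42)
nRep <- 5
validationSize <- 0.2
n <- nrow(mtcars)

ll <- matrix(NA_real_, nrow = nRep, ncol = 2,
             dimnames = list(NULL, c("inSample", "outOfSample")))
for (r in 1:nRep) {
  valRows <- sample(n, size = round(validationSize * n))
  est <- mtcars[-valRows, ]
  val <- mtcars[ valRows, ]
  fit <- lm(mpg ~ wt + hp, data = est)
  s   <- summary(fit)$sigma
  # average log-likelihood per observation under a normal error model
  llObs <- function(d) mean(dnorm(d$mpg, predict(fit, d), s, log = TRUE))
  ll[r, ] <- c(llObs(est), llObs(val))
}
colMeans(ll)  # validation fit is typically worse than estimation fit
```

A large gap between the two columns, growing with model complexity, is the usual symptom of overfitting.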
The splitting of the database into estimation and validation samples is done at the individual level, not at the observation level. If the sampling is to be done at the observation level instead (not recommended for panel data), then the optional outOfSample_settings$samples argument should be provided.
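A user-supplied samples matrix could be built as sketched below. The assumed layout (one row per observation in the database, one column per repetition, with 1 marking validation observations and 0 estimation observations) should be checked against the package manual before use:

```r
# Sketch of a hand-built outOfSample_settings$samples matrix.
# Assumed semantics: 1 = validation sample, 0 = estimation sample.
set.seed(7)
nObs <- 100; nRep <- 10; validationSize <- 0.1
samples <- sapply(1:nRep, function(r) {
  v <- rep(0L, nObs)
  v[sample(nObs, round(validationSize * nObs))] <- 1L
  v
})
dim(samples)      # one row per observation, one column per repetition
colSums(samples)  # number of validation observations in each repetition
```

This would then be passed as outOfSample_settings = list(samples = samples).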
This function writes two different files to the working/output directory:
modelName_outOfSample_params.csv
: Records the estimated parameters, final log-likelihood, and number of observations for each repetition.
modelName_outOfSample_samples.csv
: Records the sample composition of each repetition.
The first file is updated throughout the run of this function, while the second one is only written once the function finishes.
When run, this function will look for the two files above in the working/output directory. If they are found, the function will attempt to pick up the re-sampling from where those files left off. This is useful in cases where the original run was interrupted, or when additional re-sampling is to be performed.
A matrix with the average log-likelihood per observation for both the estimation and validation samples, for each repetition. Two additional files with further details are written to the working/output directory.