Experiment    R Documentation

R6 class representing a simulation experiment

A simulation experiment with any number of DGP, Method, Evaluator, and Visualizer objects.
Generally speaking, users won't interact with the Experiment R6 class directly, but rather indirectly through create_experiment() and the tidy Experiment helpers listed below in the See also section.
When run, an Experiment seamlessly combines DGPs and
Methods, computing results in parallel. Those results can then be
evaluated using Evaluators and visualized using Visualizers.
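A minimal end-to-end sketch of this workflow using the tidy helpers rather than the R6 methods directly. The package name in library() and the create_dgp()/create_method() constructors are assumptions (they are not documented on this page), as is the convention that a Method function receives the DGP output as named arguments.

library(simChef)  # assumed: the package providing Experiment and its tidy helpers

# A toy data-generating process and a toy method (user-defined functions).
dgp_fun <- function(n = 100) {
  x <- rnorm(n)
  y <- 2 * x + rnorm(n)
  list(x = x, y = y)
}
ols_fun <- function(x, y) {
  fit <- lm(y ~ x)
  list(slope = coef(fit)[["x"]])
}

dgp <- create_dgp(.dgp_fun = dgp_fun, .name = "Linear DGP")     # constructor assumed
method <- create_method(.method_fun = ols_fun, .name = "OLS")   # constructor assumed

experiment <- create_experiment(name = "demo-experiment") |>
  add_dgp(dgp) |>
  add_method(method)

results <- run_experiment(experiment, n_reps = 10)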
Public fields

name: The name of the Experiment.
Method new(): Initialize a new Experiment object.

Experiment$new(
  name = "experiment",
  dgp_list = list(),
  method_list = list(),
  evaluator_list = list(),
  visualizer_list = list(),
  future.globals = TRUE,
  future.packages = NULL,
  clone_from = NULL,
  save_dir = NULL,
  save_in_bulk = TRUE,
  ...
)
name: The name of the Experiment.

dgp_list: An optional list of DGP objects.

method_list: An optional list of Method objects.

evaluator_list: An optional list of Evaluator objects.

visualizer_list: An optional list of Visualizer objects.
future.globals: Character vector of names in the global environment to pass to parallel workers. Passed as the argument of the same name to future.apply::future_lapply() and related functions. To set for a specific run of the experiment, use the same argument in run_experiment().

future.packages: Character vector of packages required by parallel workers. Passed as the argument of the same name to future.apply::future_lapply() and related functions. To set for a specific run of the experiment, use the same argument in run_experiment().
clone_from: An optional Experiment object to use as a base for this one.

save_dir: An optional directory in which to save the experiment's results. If NULL, results are saved in the current working directory in a directory called "results" with a sub-directory named after Experiment$name when using run_experiment() or fit_experiment() with save = TRUE.
save_in_bulk: A logical, indicating whether or not to save the fit, evaluator, and visualizer outputs, each as a single bulk .rds file (i.e., as fit_results.rds, eval_results.rds, viz_results.rds). Default is TRUE. If FALSE, each fit replicate is saved as a separate .rds file, and each evaluator/visualizer output is likewise saved as a separate .rds file. One can alternatively specify a character vector with some subset of "fit", "eval", and/or "viz", indicating the elements to save in bulk to disk.

...: Not used.

Returns: A new instance of Experiment.
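For completeness, a sketch of direct construction with Experiment$new(), reusing the dgp and method objects from the example above; in practice create_experiment() is the recommended entry point. Using names on the input lists to label the objects is an assumption.

experiment <- Experiment$new(
  name = "demo-experiment",
  dgp_list = list("Linear DGP" = dgp),     # list names assumed to label the objects
  method_list = list("OLS" = method),
  save_dir = "results/demo-experiment",
  save_in_bulk = TRUE
)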
Method run(): Run the full Experiment pipeline (fitting, evaluating, and visualizing).

Experiment$run(
  n_reps = 1,
  parallel_strategy = "reps",
  future.globals = NULL,
  future.packages = NULL,
  future.seed = TRUE,
  use_cached = FALSE,
  return_all_cached_reps = FALSE,
  save = FALSE,
  record_time = FALSE,
  checkpoint_n_reps = 0,
  verbose = 1,
  ...
)
n_reps: The number of replicates of the Experiment for this run.

parallel_strategy: A vector with some combination of "reps", "dgps", or "methods". Determines how computation will be distributed across available resources. Currently only the default, "reps", is supported.

future.globals: Character vector of names in the global environment to pass to parallel workers. Passed as the argument of the same name to future.apply::future_lapply() and related functions. To set for all runs of the experiment, use the same argument during initialization.

future.packages: Character vector of packages required by parallel workers. Passed as the argument of the same name to future.apply::future_lapply() and related functions. To set for all runs of the experiment, use the same argument during initialization.

future.seed: Passed as the argument of the same name in future.apply::future_apply.
use_cached: Logical. If TRUE, find and return previously saved results. If cached results cannot be found, continue as if use_cached was FALSE.

return_all_cached_reps: Logical. If FALSE (default), returns only the fit results for the requested n_reps. If TRUE, returns fit results for the requested n_reps plus any additional cached replicates from the (DGP, Method) combinations in the Experiment. Note that even if return_all_cached_reps = TRUE, only the n_reps replicates are used when evaluating and visualizing the Experiment.
save: A logical, indicating whether or not to save the fit, evaluator, and visualizer outputs to disk. Alternatively, one can specify a character vector with some subset of "fit", "eval", and/or "viz", indicating the elements to save to disk.

record_time: A logical, indicating whether or not to record the time taken to run each Method, Evaluator, and Visualizer in the Experiment. Alternatively, one can specify a character vector with some subset of "fit", "eval", and/or "viz", indicating the elements for which to record the time taken.

checkpoint_n_reps: The number of experiment replicates to compute before saving results to disk. If 0 (the default), no checkpoints are saved.

verbose: Level of verbosity. Default is 1, which prints out messages after major checkpoints in the experiment. If 2, prints additional debugging information for warnings and messages from user-defined functions (in addition to error debugging information). If 0, no messages are printed other than user-defined function error debugging information.

...: Not used.
Returns: A named list of results from the simulation experiment with the following entries:

fit_results: A tibble containing results from the fit method. In addition to results columns, has columns named '.rep', '.dgp_name', '.method_name', '.time_taken', and the vary_across parameter names if applicable.

eval_results: A list of tibbles containing results from the evaluate method, which evaluates each Evaluator in the Experiment. The length of the list equals the number of Evaluators.

viz_results: A list of tibbles containing results from the visualize method, which visualizes each Visualizer in the Experiment. The length of the list equals the number of Visualizers.
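A sketch of a parallel run, reusing the experiment from the constructor sketch above. Workers are configured through the future framework (as the future.* arguments suggest); the entry names of the returned list are assumed to mirror the bulk .rds file names noted under save_in_bulk.

library(future)
plan(multisession, workers = 4)   # distribute replicates across 4 local workers

results <- experiment$run(
  n_reps = 100,
  future.seed = TRUE,       # reproducible parallel random numbers
  save = TRUE,              # write results under get_save_dir()
  checkpoint_n_reps = 25,   # checkpoint partial results every 25 replicates
  record_time = TRUE
)

results$fit_results    # tibble of per-replicate method outputs
results$eval_results   # one tibble per Evaluator
results$viz_results    # one entry per Visualizer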
Method generate_data(): Generate sample data from all DGP objects that were added to the Experiment, including their varied params. Primarily useful for debugging. Note that results are not generated in parallel.
Experiment$generate_data(n_reps = 1, ...)
n_reps: The number of datasets to generate per DGP.

...: Not used.
Returns: A list of length equal to the number of DGPs in the Experiment. If the Experiment does not have a vary_across component, then each element in the list is a list of n_reps datasets generated by the given DGP. If the Experiment does have a vary_across component, then each element in the outermost list is a list of lists. The second layer of lists corresponds to a specific parameter setting within the vary_across scheme, and the innermost layer of lists is of length n_reps with the dataset replicates generated by the DGP.
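A small debugging sketch: generate two datasets per DGP without fitting any Method, then inspect the structure.

data_list <- experiment$generate_data(n_reps = 2)
str(data_list, max.level = 2)   # one list element per DGP, each holding the replicates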
Method fit(): Fit Methods in the Experiment across all DGPs for n_reps repetitions and return results from fits.

Experiment$fit(
  n_reps = 1,
  parallel_strategy = "reps",
  future.globals = NULL,
  future.packages = NULL,
  future.seed = TRUE,
  use_cached = FALSE,
  return_all_cached_reps = FALSE,
  save = FALSE,
  record_time = FALSE,
  checkpoint_n_reps = 0,
  verbose = 1,
  ...
)
n_reps: The number of replicates of the Experiment for this run.

parallel_strategy: A vector with some combination of "reps", "dgps", or "methods". Determines how computation will be distributed across available resources. Currently only the default, "reps", is supported.

future.globals: Character vector of names in the global environment to pass to parallel workers. Passed as the argument of the same name to future.apply::future_lapply() and related functions. To set for all runs of the experiment, use the same argument during initialization.

future.packages: Character vector of packages required by parallel workers. Passed as the argument of the same name to future.apply::future_lapply() and related functions. To set for all runs of the experiment, use the same argument during initialization.

future.seed: Passed as the argument of the same name in future.apply::future_apply.
use_cached: Logical. If TRUE, find and return previously saved results. If cached results cannot be found, continue as if use_cached was FALSE.

return_all_cached_reps: Logical. If FALSE (default), returns only the fit results for the requested n_reps. If TRUE, returns fit results for the requested n_reps plus any additional cached replicates from the (DGP, Method) combinations in the Experiment.
save: Logical. If TRUE, save outputs to disk.

record_time: Logical. If TRUE, record the amount of time taken to fit each Method per replicate.

checkpoint_n_reps: The number of experiment replicates to compute before saving results to disk. If 0 (the default), no checkpoints are saved.

verbose: Level of verbosity. Default is 1, which prints out messages after major checkpoints in the experiment. If 2, prints additional debugging information for warnings and messages from user-defined functions (in addition to error debugging information). If 0, no messages are printed other than user-defined function error debugging information.

...: Additional future.* arguments to pass to future.apply functions. See future.apply::future_lapply() and future.apply::future_mapply().
Returns: A tibble containing the results from fitting all Methods across all DGPs for n_reps repetitions. In addition to results columns, has columns named '.rep', '.dgp_name', '.method_name', '.time_taken' (if record_time = TRUE), and the vary_across parameter names if applicable.
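A sketch of fitting only, with caching: previously saved replicates are reused and only the missing ones are recomputed, assuming the experiment was created with a save_dir as above.

fit_results <- experiment$fit(
  n_reps = 100,
  use_cached = TRUE,    # reuse replicates already saved under get_save_dir()
  save = TRUE,
  record_time = TRUE    # adds the '.time_taken' column
)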
Method evaluate(): Evaluate the performance of method(s) across all Evaluator objects in the Experiment and return results.

Experiment$evaluate(fit_results, use_cached = FALSE, save = FALSE, record_time = FALSE, verbose = 1, ...)
fit_results: A tibble, as returned by fit_experiment().

use_cached: Logical. If TRUE, find and return previously saved results. If cached results cannot be found, continue as if use_cached was FALSE.

save: Logical. If TRUE, save outputs to disk.

record_time: Logical. If TRUE, record the amount of time taken to evaluate each Evaluator.

verbose: Level of verbosity. Default is 1, which prints out messages after major checkpoints in the experiment. If 2, prints additional debugging information for warnings and messages from user-defined functions (in addition to error debugging information). If 0, no messages are printed other than user-defined function error debugging information.

...: Not used.
Returns: A list of evaluation result tibbles, one for each Evaluator.
Method visualize(): Visualize the performance of methods and/or their evaluation metrics using all Visualizer objects in the Experiment and return visualization results.

Experiment$visualize(fit_results, eval_results = NULL, use_cached = FALSE, save = FALSE, record_time = FALSE, verbose = 1, ...)
fit_results: A tibble, as returned by fit_experiment().

eval_results: A list of result tibbles, as returned by evaluate_experiment().

use_cached: Logical. If TRUE, find and return previously saved results. If cached results cannot be found, continue as if use_cached was FALSE.

save: Logical. If TRUE, save outputs to disk.

record_time: Logical. If TRUE, record the amount of time taken to visualize each Visualizer.

verbose: Level of verbosity. Default is 1, which prints out messages after major checkpoints in the experiment. If 2, prints additional debugging information for warnings and messages from user-defined functions (in addition to error debugging information). If 0, no messages are printed other than user-defined function error debugging information.

...: Not used.
Returns: A list of visualizations, one for each Visualizer.
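A sketch of the remaining pipeline stages, chaining the fit results from the previous example through evaluation and visualization; the tidy helpers evaluate_experiment() and visualize_experiment() listed under See also behave equivalently.

eval_results <- experiment$evaluate(fit_results, save = TRUE)
viz_results  <- experiment$visualize(fit_results, eval_results, save = TRUE)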
Method add_dgp(): Add a DGP object to the Experiment.

Experiment$add_dgp(dgp, name = NULL, ...)

dgp: A DGP object.

name: A name to identify the DGP.

...: Not used.

Returns: The Experiment object, invisibly.
Method update_dgp(): Update a DGP object in the Experiment.

Experiment$update_dgp(dgp, name, ...)

dgp: A DGP object.

name: An existing name identifying the DGP to be updated.

...: Not used.

Returns: The Experiment object, invisibly.
Method remove_dgp(): Remove a DGP object from the Experiment.

Experiment$remove_dgp(name = NULL, ...)

name: An existing name identifying the DGP to be removed.

...: Not used.

Returns: The Experiment object, invisibly.
Method get_dgps(): Retrieve the DGP objects associated with the Experiment.

Experiment$get_dgps()

Returns: A named list of the DGP objects in the Experiment.
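A sketch of managing DGPs on an existing experiment; the second DGP object is assumed to be built with the same (undocumented on this page) constructor as in the first example. The Method, Evaluator, and Visualizer variants below behave analogously.

dgp2 <- create_dgp(.dgp_fun = dgp_fun, .name = "Linear DGP 2")   # constructor assumed

experiment$add_dgp(dgp2, name = "Linear DGP 2")
experiment$get_dgps()                                # named list of DGP objects
experiment$update_dgp(dgp2, name = "Linear DGP 2")   # replace under an existing name
experiment$remove_dgp(name = "Linear DGP 2")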
Method add_method(): Add a Method object to the Experiment.

Experiment$add_method(method, name = NULL, ...)

method: A Method object.

name: A name to identify the Method.

...: Not used.

Returns: The Experiment object, invisibly.
Method update_method(): Update a Method object in the Experiment.

Experiment$update_method(method, name, ...)

method: A Method object.

name: An existing name identifying the Method to be updated.

...: Not used.

Returns: The Experiment object, invisibly.
Method remove_method(): Remove a Method object from the Experiment.

Experiment$remove_method(name = NULL, ...)

name: An existing name identifying the Method to be removed.

...: Not used.

Returns: The Experiment object, invisibly.
Method get_methods(): Retrieve the Method objects associated with the Experiment.

Experiment$get_methods()

Returns: A named list of the Method objects in the Experiment.
Method add_evaluator(): Add an Evaluator object to the Experiment.

Experiment$add_evaluator(evaluator, name = NULL, ...)

evaluator: An Evaluator object.

name: A name to identify the Evaluator.

...: Not used.

Returns: The Experiment object, invisibly.
Method update_evaluator(): Update an Evaluator object in the Experiment.

Experiment$update_evaluator(evaluator, name, ...)

evaluator: An Evaluator object.

name: An existing name identifying the Evaluator to be updated.

...: Not used.

Returns: The Experiment object, invisibly.
Method remove_evaluator(): Remove an Evaluator object from the Experiment.

Experiment$remove_evaluator(name = NULL, ...)

name: An existing name identifying the Evaluator to be removed.

...: Not used.

Returns: The Experiment object, invisibly.
Method get_evaluators(): Retrieve the Evaluator objects associated with the Experiment.

Experiment$get_evaluators()

Returns: A named list of the Evaluator objects in the Experiment.
Method add_visualizer(): Add a Visualizer object to the Experiment.

Experiment$add_visualizer(visualizer, name = NULL, ...)

visualizer: A Visualizer object.

name: A name to identify the Visualizer.

...: Not used.

Returns: The Experiment object, invisibly.
Method update_visualizer(): Update a Visualizer object in the Experiment.

Experiment$update_visualizer(visualizer, name, ...)

visualizer: A Visualizer object.

name: An existing name identifying the Visualizer to be updated.

...: Not used.

Returns: The Experiment object, invisibly.
Method remove_visualizer(): Remove a Visualizer object from the Experiment.

Experiment$remove_visualizer(name = NULL, ...)

name: An existing name identifying the Visualizer to be removed.

...: Not used.

Returns: The Experiment object, invisibly.
Method get_visualizers(): Retrieve the Visualizer objects associated with the Experiment.

Experiment$get_visualizers()

Returns: A named list of the Visualizer objects in the Experiment.
Method add_vary_across(): Add a vary_across component to the Experiment. When a vary_across component is added and the Experiment is run, the Experiment is systematically varied across values of the specified parameter in the DGP or Method while all other parameters are held constant.

Experiment$add_vary_across(.dgp, .method, ...)
.dgp: Name of DGP to vary in the Experiment. Can also be a DGP object that matches one in the Experiment or even a vector/list of DGP names/objects, assuming they all support the target arguments provided via ....

.method: Name of Method to vary in the Experiment. Can also be a Method object that matches one in the Experiment or even a vector/list of Method names/objects, assuming they all support the target arguments provided via ....

...: Any number of named arguments where names match an argument in the user-specified DGP or Method function and values are vectors (for scalar parameters) or lists (for arbitrary parameters).

One of the .dgp or .method arguments (but not both) must be provided.

Returns: The Experiment object, invisibly.
Method update_vary_across(): Update a vary_across component in the Experiment.

Experiment$update_vary_across(.dgp, .method, ...)

.dgp: Name of DGP to vary in the Experiment. Can also be a DGP object that matches one in the Experiment or even a vector/list of DGP names/objects, assuming they all support the target arguments provided via ....

.method: Name of Method to vary in the Experiment. Can also be a Method object that matches one in the Experiment or even a vector/list of Method names/objects, assuming they all support the target arguments provided via ....

...: Any number of named arguments where names match an argument in the user-specified DGP or Method function and values are vectors (for scalar parameters) or lists (for arbitrary parameters).

One of the .dgp or .method arguments (but not both) must be provided.

Returns: The Experiment object, invisibly.
Method remove_vary_across(): Remove a vary_across component from the Experiment.

Experiment$remove_vary_across(dgp, method, param_names = NULL)

dgp: Name of DGP whose vary_across parameters should be removed. Can also be a DGP object that matches one in the Experiment or even a vector/list of DGP names/objects.

method: Name of Method whose vary_across parameters should be removed. Can also be a Method object that matches one in the Experiment or even a vector/list of Method names/objects.

param_names: A character vector of parameter names to remove. If not provided, the entire set of vary_across parameters will be removed for the specified DGP/Method.

If neither the dgp nor the method argument is provided, then all vary_across parameters are removed from the experiment.

Returns: The Experiment object, invisibly.
Method get_vary_across(): Retrieve the parameters to vary across for each DGP and Method in the Experiment.

Experiment$get_vary_across()

Returns: A nested list with entries "dgp" and "method".
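A sketch tying the vary_across methods together: vary the sample-size argument of the toy DGP function from the first example, inspect the grid, then remove it. The argument name n is assumed to exist in that user-specified DGP function.

experiment$add_vary_across(.dgp = "Linear DGP", n = c(100, 500, 1000))
experiment$get_vary_across()    # nested list with "dgp" and "method" entries
experiment$remove_vary_across(dgp = "Linear DGP", param_names = "n")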
Method clear_cache(): Clear (or delete) cached results from an Experiment to start the experiment fresh/from scratch.

Experiment$clear_cache()

Returns: The Experiment object, invisibly.
Method get_cached_results(): Read in cached results from disk from a previously saved Experiment run.
Experiment$get_cached_results(results_type, verbose = 0)
results_type: Character string indicating the type of results to read in. Must be one of "experiment", "experiment_cached_params", "fit", "eval", or "viz".

verbose: Level of verbosity. The default for this method is 0, meaning no messages are printed other than user-defined function error debugging information. If 1, prints out messages after major checkpoints in the experiment. If 2, prints additional debugging information for warnings and messages from user-defined functions (in addition to error debugging information).

Returns: The cached results, specifically the cached Experiment object if results_type = "experiment", the cached fit results if results_type = "fit", the cached evaluation results if results_type = "eval", the cached visualization results if results_type = "viz", and the experiment parameters used in the cache if results_type = "experiment_cached_params".
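A sketch of pulling saved results back from disk after an earlier run with save = TRUE.

cached_fits  <- experiment$get_cached_results("fit")
cached_evals <- experiment$get_cached_results("eval")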
Method set_doc_options(): Set R Markdown options for Evaluator or Visualizer outputs in the summary report. Some options include the height/width of plots and number of digits to show in tables.
Experiment$set_doc_options(field_name, name, show = NULL, nrows, ...)
field_name: One of "evaluator" or "visualizer".

name: Name of the Evaluator or Visualizer for which to set R Markdown options.

show: If TRUE, show output; if FALSE, hide output in the R Markdown report. The default, NULL, does not change the "doc_show" field in the Evaluator/Visualizer.

nrows: Maximum number of rows to show in the Evaluator's results table in the R Markdown report. If NULL, shows all rows. The default does not change the "doc_nrows" field in the Evaluator. This argument is ignored if field_name = "visualizer".

...: Named R Markdown options to set. If field_name = "visualizer", options are "height" and "width". If field_name = "evaluator", see options for vthemes::pretty_DT().

Returns: The Experiment object, invisibly.
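A sketch of adjusting report options; the Evaluator and Visualizer names here ("Prediction Errors", "Prediction Plot") are placeholders for objects assumed to have been added elsewhere.

experiment$set_doc_options(
  field_name = "visualizer", name = "Prediction Plot",
  height = 8, width = 10
)
experiment$set_doc_options(
  field_name = "evaluator", name = "Prediction Errors",
  nrows = 20   # cap the rows shown in the report's results table
)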
Method set_export_viz_options(): Set options to use in ggplot2::ggsave() when exporting the Visualizer's visualization with export_visualizers().

Experiment$set_export_viz_options(name, ...)

name: Name of the Visualizer for which to set ggplot2::ggsave() options.

...: Named options to set. See the arguments of ggplot2::ggsave() for possible options.

Returns: The Experiment object, invisibly.
Method get_save_dir(): Get the directory in which the Experiment's results and visualizations are saved.

Experiment$get_save_dir()

Returns: The relative path to where the Experiment's results and visualizations are saved.
Method set_save_dir(): Set the directory in which the Experiment's results and visualizations are saved.

Experiment$set_save_dir(save_dir)

save_dir: The directory in which the Experiment's results will be saved.

Returns: The Experiment object, invisibly.
Method save(): Save the Experiment object to a .rds file under the Experiment's results directory (see get_save_dir()).

Experiment$save()

Returns: The Experiment object, invisibly.
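A sketch of redirecting where results are written and persisting the Experiment object itself; the directory path is just an example.

experiment$set_save_dir("results/demo-experiment-v2")
experiment$get_save_dir()
experiment$save()   # writes the Experiment object as an .rds file under get_save_dir()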
Method get_save_in_bulk(): Get the save_in_bulk parameter for the Experiment.

Experiment$get_save_in_bulk()

Returns: Logical, indicating whether the results are saved in bulk or not.
Method export_visualizers(): Export all cached Visualizer results from an Experiment to images in the viz_results/ directory under the Experiment's results directory (see get_save_dir()).

Experiment$export_visualizers(device = "png", ...)

device: See the device argument of ggplot2::ggsave().

...: Additional arguments to pass to ggplot2::ggsave() to be used for all visualizations. If not provided, the export_options from each Visualizer will be used.

Returns: The Experiment object, invisibly.
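A sketch of exporting cached visualizations to files, assuming the experiment has been run and visualized with save = TRUE so cached Visualizer results exist; the Visualizer name is again a placeholder, and the options set here are passed through to ggplot2::ggsave().

experiment$set_export_viz_options("Prediction Plot", width = 10, height = 8, dpi = 300)
experiment$export_visualizers(device = "pdf")   # writes into viz_results/ under get_save_dir()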
Method print(): Print the Experiment in a nice format, showing the DGPs, Methods, Evaluators, Visualizers, and varied parameters involved in the Experiment.

Experiment$print()

Returns: The original Experiment object, invisibly.
Method clone(): The objects of this class are cloneable with this method.

Experiment$clone(deep = FALSE)

deep: Whether to make a deep clone.
See also

The following tidy helpers take an Experiment object as their first argument: create_experiment(), generate_data(), fit_experiment(), evaluate_experiment(), visualize_experiment(), run_experiment(), clear_cache(), get_cached_results(), get_save_dir(), set_save_dir(), save_experiment(), set_export_viz_options(), export_visualizers(), set_doc_options(), add_*(), update_*(), remove_*(), get_*(), and *_vary_across().