evaluate: Evaluate outputs of methods according to provided metrics.

View source: R/evaluate.R

evaluate {simulator}  R Documentation

Evaluate outputs of methods according to provided metrics.

Description

Given a Metric object or a list of Metric objects, this function evaluates an Output object according to these metrics. The computed values of the metrics are saved to file. The "user" time to run the method (as measured by system.time) is included as a metric by default, unless one of the passed metrics is named "time".
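A Metric is typically created with new_metric(), whose metric function receives the Model and the Output. Below is a minimal sketch; the metric name and the slots model$mu and out$fit are illustrative assumptions, not part of this function's interface.

  # A hypothetical squared-error metric; model$mu and out$fit are
  # assumed names for the true parameter and the method's fitted values.
  my_squared_error <- new_metric(name = "my_squared_error",
                                 label = "Squared error",
                                 metric = function(model, out) {
                                   mean((out$fit - model$mu)^2)
                                 })
  # A metric named "time" would replace the default timing metric
  # described above.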

Usage

evaluate(object, metrics)

Arguments

object

object of class OutputRef as produced by run_method (or a list of such objects). If object is a Simulation, the function is applied to the outputs referenced by that simulation and the same Simulation object is returned, with references added to the newly created evals.

metrics

a single Metric object or a list of Metric objects (see the sketch after the Examples).

Details

This function creates objects of class Evals and saves each to file (at dir/model_name/<out_loc>/r<index>_<method_name>_evals.Rdata). Since evaluating metrics is usually fast (at least in typical statistical methodology papers), parallel functionality has not been developed for the evaluation component.
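The saved Evals do not need to be loaded from file by hand. A minimal sketch, assuming sim is a Simulation that has already been passed through evaluate() (as in the example below):

  # Retrieve the Evals referenced by the simulation and flatten them
  # into a data frame for inspection or plotting.
  e <- evals(sim)
  df <- as.data.frame(e)
  head(df)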

See Also

generate_model, simulate_from_model, run_method

Examples

## Not run: 
 # suppose previously we had run the following:
 sim <- new_simulation(name = "normal-example",
                       label = "Normal Mean Estimation",
                       dir = tempdir()) %>%
   generate_model(make_my_example_model, n = 20) %>%
   simulate_from_model(nsim = 50, index = 1:3) %>%
   run_method(my_example_method)
 # then we could add
 sim <- evaluate(sim, my_example_loss)
 
## End(Not run)
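
Several metrics can be evaluated in one call by passing them as a list. A minimal sketch continuing the example above; my_other_metric is a hypothetical metric assumed to have been created with new_metric():

## Not run: 
 sim <- evaluate(sim, list(my_example_loss, my_other_metric))
 
## End(Not run)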
