create_evaluator: Create a new 'Evaluator'

View source: R/evaluator.R

Create a new Evaluator

Description

Create an Evaluator which can evaluate() the performance of methods in an Experiment.

Usage

create_evaluator(
  .eval_fun,
  .name = NULL,
  .doc_options = list(),
  .doc_show = TRUE,
  ...
)

Arguments

.eval_fun

The user-defined evaluation function.

.name

(Optional) The name of the Evaluator, helpful for later identification. The argument must be specified by position or typed out in whole; no partial matching is allowed for this argument.

.doc_options

(Optional) List of options to control the aesthetics of the displayed Evaluator's results table in the knitted R Markdown report. See vthemes::pretty_DT() for possible options. The argument must be specified by position or typed out in whole; no partial matching is allowed for this argument.

.doc_show

If TRUE (default), show the Evaluator's results as a table in the R Markdown report; if FALSE, hide its output in the R Markdown report.

...

User-defined default arguments to pass into .eval_fun(), as illustrated in the sketch below.
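
For instance, these optional arguments can be set together when constructing an Evaluator. A minimal sketch (reject_prob_fun is defined in the Examples below; the digits entry is an assumed vthemes::pretty_DT() styling option):

reject_prob_eval <- create_evaluator(
  .eval_fun = reject_prob_fun,
  .name = "Rejection Prob (alpha = 0.1)",
  .doc_options = list(digits = 3),  # assumed pretty_DT() option
  alpha = 0.1                       # passed through ... into .eval_fun()
)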

Details

When evaluating or running the Experiment (see evaluate_experiment() or run_experiment()), the named arguments fit_results and vary_params are automatically passed into the Evaluator function .eval_fun() and serve as placeholders for the fit_experiment() results (i.e., the results from the method fits) and the name of the varying parameter(s), respectively.

To evaluate the performance of the method fits, then, the Evaluator function .eval_fun() should almost always take in the named argument fit_results. See Experiment$fit() or fit_experiment() for details on the format of fit_results. If the Evaluator is used for Experiments with varying parameters, vary_params should be used as a stand-in for the name(s) of the varying parameter(s).
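
Concretely, a minimal sketch of an Evaluator function signature (the function name and body are illustrative; .dgp_name and .method_name are the identifier columns in fit_results, as used in the Examples below):

my_eval_fun <- function(fit_results, vary_params = NULL) {
  # fit_results holds the method fit results from fit_experiment();
  # vary_params names the varying parameter column(s), if any
  group_vars <- c(".dgp_name", ".method_name", vary_params)
  # return a summary of fit_results, e.g., replicate counts per group
  dplyr::count(fit_results, dplyr::across(dplyr::all_of(group_vars)))
}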

Value

A new Evaluator object.

Examples

library(dplyr)  # attach dplyr for the pipe (%>%) used throughout these examples

# create DGP
dgp_fun <- function(n, beta, rho, sigma) {
  cov_mat <- matrix(c(1, rho, rho, 1), byrow = TRUE, nrow = 2, ncol = 2)
  X <- MASS::mvrnorm(n = n, mu = rep(0, 2), Sigma = cov_mat)
  y <- X %*% beta + rnorm(n, sd = sigma)
  return(list(X = X, y = y))
}
dgp <- create_dgp(.dgp_fun = dgp_fun,
                  .name = "Linear Gaussian DGP",
                  n = 50, beta = c(1, 0), rho = 0, sigma = 1)

# create Method
lm_fun <- function(X, y, cols) {
  X <- X[, cols]
  lm_fit <- lm(y ~ X)
  pvals <- summary(lm_fit)$coefficients[-1, "Pr(>|t|)"] %>%
    setNames(paste(paste0("X", cols), "p-value"))
  return(pvals)
}
lm_method <- create_method(
  .method_fun = lm_fun,
  .name = "OLS",
  cols = c(1, 2)
)

# create Experiment
experiment <- create_experiment() %>%
  add_dgp(dgp) %>%
  add_method(lm_method) %>%
  add_vary_across(.dgp = dgp, rho = seq(0.91, 0.99, 0.02))

fit_results <- fit_experiment(experiment, n_reps = 10)

# create an example Evaluator function
reject_prob_fun <- function(fit_results, vary_params = NULL, alpha = 0.05) {
  # treat NA p-values (e.g., from failed fits) as 1, i.e., never rejected
  fit_results[is.na(fit_results)] <- 1
  group_vars <- c(".dgp_name", ".method_name", vary_params)
  eval_out <- fit_results %>%
    dplyr::group_by(dplyr::across(dplyr::all_of(group_vars))) %>%
    dplyr::summarise(
      n_reps = dplyr::n(),
      `X1 Reject Prob.` = mean(`X1 p-value` < alpha),
      `X2 Reject Prob.` = mean(`X2 p-value` < alpha),
      .groups = "drop"  # return an ungrouped tibble
    )
  return(eval_out)
}

reject_prob_eval <- create_evaluator(.eval_fun = reject_prob_fun,
                                     .name = "Rejection Prob (alpha = 0.05)")

reject_prob_eval$evaluate(fit_results, vary_params = "rho")
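
# In a full pipeline, the Evaluator would instead be added to the Experiment
# and run with evaluate_experiment() or run_experiment(). A brief sketch,
# assuming add_evaluator() follows the same pattern as add_dgp() and
# add_method() above and that evaluate_experiment() accepts the previously
# computed fit_results:
experiment <- experiment %>%
  add_evaluator(reject_prob_eval)
eval_results <- evaluate_experiment(experiment, fit_results)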

