ts_model_compare: Compare Two Time Series Models

View source: R/ts-model-compare.R

ts_model_compare                R Documentation

Compare Two Time Series Models

Description

This function takes two fitted models and compares their performance. It is useful after following the modeltime workflow to obtain two models you want to compare. It is an extension of calibrate_and_plot(), but it only takes two models and is best suited for use after running a base model through the ts_model_auto_tune() function, so you can see the difference in performance once the base model has been tuned.
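
For example, after tuning a base model with ts_model_auto_tune(), the tuned workflow can be compared against its untuned base. The sketch below is illustrative only: tuned_wflw, base_wflw, splits, and data_tbl are placeholder objects, not documented return values.

## Not run: 
# `tuned_wflw` is assumed to be the tuned workflow extracted from the
# ts_model_auto_tune() output; `base_wflw` is the fitted base workflow.
output <- ts_model_compare(
  .model_1    = tuned_wflw,  # the tuned model
  .model_2    = base_wflw,   # the base model it was tuned from
  .splits_obj = splits,
  .data       = data_tbl
)

## End(Not run)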

Usage

ts_model_compare(
  .model_1,
  .model_2,
  .type = "testing",
  .splits_obj,
  .data,
  .print_info = TRUE,
  .metric = "rmse"
)

Arguments

.model_1

The model being compared to the base model; this can also be a hyperparameter-tuned model.

.model_2

The base model.

.type

The data to assess. The default is "testing"; it can also be set to "training".

.splits_obj

The splits object.

.data

The original data that was passed to the splits object.

.print_info

A boolean that controls whether information is printed to the console; the default is TRUE.

.metric

This should be one of the following character strings:

  • "mae"

  • "mape"

  • "mase"

  • "smape"

  • "rmse"

  • "rsq"

Details

This function expects to take two models. You must tell it whether it will be assessing the training or testing data; the testing data is the default. You must therefore supply the splits object to this function along with the original data set. You must also tell it which default modeltime accuracy metric should be printed on the graph itself. You can also tell this function whether or not to print information to the console. A static ggplot2 plot and an interactive plotly plot are returned inside of the output list.
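
As a minimal sketch of the argument variations described above (model_one, model_two, splits, and data_tbl are placeholder objects):

## Not run: 
train_cmp <- ts_model_compare(
  .model_1    = model_one,   # placeholder fitted model
  .model_2    = model_two,   # placeholder fitted model
  .type       = "training",  # assess the training data instead of testing
  .splits_obj = splits,
  .data       = data_tbl,
  .print_info = FALSE,       # do not print information to the console
  .metric     = "mape"       # print MAPE on the graph instead of RMSE
)

train_cmp$plots$static_plot  # static ggplot2 plot from the output list

## End(Not run)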

Value

The function invisibly returns a list, which includes a static ggplot2 plot and an interactive plotly plot.

Author(s)

Steven P. Sanderson II, MPH

See Also

Other Utility: auto_stationarize(), calibrate_and_plot(), internal_ts_backward_event_tbl(), internal_ts_both_event_tbl(), internal_ts_forward_event_tbl(), model_extraction_helper(), ts_get_date_columns(), ts_info_tbl(), ts_is_date_class(), ts_lag_correlation(), ts_model_auto_tune(), ts_model_rank_tbl(), ts_model_spec_tune_template(), ts_qq_plot(), ts_scedacity_scatter_plot(), ts_to_tbl(), util_difflog_ts(), util_doublediff_ts(), util_doubledifflog_ts(), util_log_ts(), util_singlediff_ts()

Examples

## Not run: 
suppressPackageStartupMessages(library(modeltime))
suppressPackageStartupMessages(library(timetk))
suppressPackageStartupMessages(library(rsample))
suppressPackageStartupMessages(library(dplyr))

data_tbl <- ts_to_tbl(AirPassengers) %>%
  select(-index)

splits <- time_series_split(
  data       = data_tbl,
  date_var   = date_col,
  assess     = "12 months",
  cumulative = TRUE
)

rec_obj <- ts_auto_recipe(
  .data     = data_tbl,
  .date_col = date_col,
  .pred_col = value
)

wfs_mars <- ts_wfs_mars(.recipe_list = rec_obj)

wf_fits <- wfs_mars %>%
  modeltime_fit_workflowset(
    data = training(splits)
    , control = control_fit_workflowset(
         allow_par = FALSE
         , verbose = TRUE
       )
 )

calibration_tbl <- wf_fits %>%
    modeltime_calibrate(new_data = testing(splits))

base_mars <- calibration_tbl %>% pluck_modeltime_model(1)
date_mars <- calibration_tbl %>% pluck_modeltime_model(2)

ts_model_compare(
  .model_1    = base_mars,
  .model_2    = date_mars,
  .type       = "testing",
  .splits_obj = splits,
  .data       = data_tbl,
  .print_info = TRUE,
  .metric     = "rmse"
)$plots$static_plot

## End(Not run)
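
Because the list is returned invisibly, assign it to an object to inspect its contents. The sketch below continues the example above; element names other than plots$static_plot are not assumed and are instead discovered with names().

## Not run: 
comparison <- ts_model_compare(
  .model_1    = base_mars,
  .model_2    = date_mars,
  .splits_obj = splits,
  .data       = data_tbl
)

names(comparison)             # top-level elements of the invisibly returned list
names(comparison$plots)       # plot objects inside the list
comparison$plots$static_plot  # static ggplot2 plot, as shown above

## End(Not run)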

