knitr::opts_chunk$set(
    message = FALSE,
    warning = FALSE,
    fig.width = 8, 
    fig.height = 4.5,
    fig.align = 'center',
    out.width='95%', 
    dpi = 100
)

# devtools::load_all() # Travis CI fails on load_all()

Anomaly detection is an important part of time series analysis:

  1. Detecting anomalies can signify special events
  2. Cleaning anomalies can improve forecast error

This tutorial will cover:

  1. anomalize(): breaking down, identifying, and cleaning anomalies in one step
  2. plot_anomalies_decomp(), plot_anomalies(), and plot_anomalies_cleaned(): visualizing each stage of the analysis

library(dplyr)
library(purrr)
library(timetk)

Data

This tutorial will use the wikipedia_traffic_daily dataset:

wikipedia_traffic_daily %>% glimpse()

Visualization

Using the plot_time_series() function, we can interactively detect anomalies at scale.

wikipedia_traffic_daily %>%
  group_by(Page) %>%
  plot_time_series(date, value, .facet_ncol = 2)

Anomalize: breakdown, identify, and clean in 1 easy step

The anomalize() function is a feature-rich tool for performing anomaly detection. Anomalize is group-aware, so we can use it as part of a normal dplyr group_by() chain. In one easy step:

anomalize_tbl <- wikipedia_traffic_daily %>%
  group_by(Page) %>%
  anomalize(
      .date_var      = date, 
      .value         = value,
      .iqr_alpha     = 0.05,
      .max_anomalies = 0.20,
      .message       = FALSE
  )

anomalize_tbl %>% glimpse()

The anomalize() function returns:

  1. The original grouping and datetime columns.
  2. The seasonal decomposition: observed, seasonal, seasadj, trend, and remainder. The objective is to remove trend and seasonality such that the remainder is stationary and representative of normal variation and anomalous variations.
  3. Anomaly identification and scoring: anomaly, anomaly_score, anomaly_direction. These identify the anomaly decision (Yes/No), score the anomaly as a distance from the centerline, and label the direction (-1 = down, 0 = not anomalous, +1 = up).
  4. Recomposition: recomposed_l1 and recomposed_l2. Think of these as the lower and upper bands. Any observed data that is below l1 or above l2 is anomalous.
  5. Cleaned data: observed_clean. Cleaned data is automatically provided, which has the outliers replaced with data that is within the recomposed l1/l2 boundaries. With that said, you should always first seek to understand why data is being considered anomalous before simply removing outliers and using the cleaned data.

Most importantly, this data is ready to be visualized and inspected, so you can make any tweaks needed for your use case.
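For instance, to pull out just the flagged observations, the anomaly column can be filtered directly. A minimal sketch (assuming the anomaly column is labeled "Yes"/"No", as returned by anomalize()):

```r
library(dplyr)
library(timetk)

# Keep only the rows flagged as anomalous, along with
# their scores and directions for inspection
anomalize_tbl %>%
  filter(anomaly == "Yes") %>%
  select(Page, date, observed, anomaly_score, anomaly_direction)
```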

Anomaly Visualization 1: Seasonal Decomposition Plot

The first step in my normal process is to analyze the seasonal decomposition. I want to see what the remainders look like, and make sure that the trend and seasonality are being removed such that the remainder is centered around zero.

anomalize_tbl %>%
    group_by(Page) %>%
    plot_anomalies_decomp(
        .date_var = date, 
        .interactive = FALSE
    )

Anomaly Visualization 2: Anomaly Detection Plot

Once I’m satisfied with the remainders, my next step is to visualize the anomalies. Here I’m looking to see if I need to grow or shrink the remainder l1 and l2 bands, which classify anomalies.

anomalize_tbl %>%
    group_by(Page) %>%
    plot_anomalies(
        date,
        .facet_ncol = 2
    )
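For example, to widen the bands (flagging fewer points as anomalies), .iqr_alpha can be decreased. A sketch, reusing the dataset and parameters from above:

```r
# A smaller .iqr_alpha widens the recomposed l1/l2 bands,
# so fewer points fall outside them and get flagged
wikipedia_traffic_daily %>%
  group_by(Page) %>%
  anomalize(
      .date_var      = date,
      .value         = value,
      .iqr_alpha     = 0.02,   # was 0.05; smaller alpha = wider bands
      .max_anomalies = 0.20,
      .message       = FALSE
  ) %>%
  plot_anomalies(date, .facet_ncol = 2)
```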

Anomaly Visualization 3: Anomalies Cleaned Plot

There are pros and cons to cleaning anomalies. I’ll leave that discussion for another time. But, should you be interested in seeing what your data looks like cleaned (with outliers removed), this plot will help you compare before and after.

anomalize_tbl %>%
    group_by(Page) %>%
    plot_anomalies_cleaned(
        date,
        .facet_ncol = 2
    )
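If, after inspecting the comparison, you decide to work with the cleaned series (e.g., as an input to forecasting), the observed_clean column can be extracted and renamed back to value. A sketch:

```r
library(dplyr)

# Swap in the cleaned values as the working series
cleaned_tbl <- anomalize_tbl %>%
  select(Page, date, observed_clean) %>%
  rename(value = observed_clean)

cleaned_tbl %>% glimpse()
```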

Learning More

My Talk on High-Performance Time Series Forecasting

Time series is changing. Businesses now need 10,000+ time series forecasts every day. This is what I call a High-Performance Time Series Forecasting System (HPTSF) - Accurate, Robust, and Scalable Forecasting.

High-Performance Forecasting Systems will save companies MILLIONS of dollars. Imagine what will happen to your career if you can provide your organization a "High-Performance Time Series Forecasting System" (HPTSF System).

I teach how to build an HPTSF System in my High-Performance Time Series Forecasting Course. If you are interested in learning Scalable High-Performance Forecasting Strategies, then take my course.

Unlock the High-Performance Time Series Forecasting Course



business-science/timekit documentation built on Feb. 2, 2024, 2:51 a.m.