t_runningmoments: Compute first K moments over a sliding time-based window


Description

Compute the (standardized) 2nd through kth moments, the mean, and the number of elements over an infinite or finite sliding time-based window, returning a matrix.

Usage

t_running_sd3(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, na_rm = FALSE, min_df = 0L, used_df = 1,
  restart_period = 100L, variable_win = FALSE, wts_as_delta = TRUE,
  check_wts = FALSE, normalize_wts = TRUE)

t_running_skew4(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, na_rm = FALSE, min_df = 0L, used_df = 1,
  restart_period = 100L, variable_win = FALSE, wts_as_delta = TRUE,
  check_wts = FALSE, normalize_wts = TRUE)

t_running_kurt5(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, na_rm = FALSE, min_df = 0L, used_df = 1,
  restart_period = 100L, variable_win = FALSE, wts_as_delta = TRUE,
  check_wts = FALSE, normalize_wts = TRUE)

t_running_sd(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, na_rm = FALSE, min_df = 0L, used_df = 1,
  restart_period = 100L, variable_win = FALSE, wts_as_delta = TRUE,
  check_wts = FALSE, normalize_wts = TRUE)

t_running_skew(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, na_rm = FALSE, min_df = 0L, used_df = 1,
  restart_period = 100L, variable_win = FALSE, wts_as_delta = TRUE,
  check_wts = FALSE, normalize_wts = TRUE)

t_running_kurt(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, na_rm = FALSE, min_df = 0L, used_df = 1,
  restart_period = 100L, variable_win = FALSE, wts_as_delta = TRUE,
  check_wts = FALSE, normalize_wts = TRUE)

t_running_cent_moments(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, max_order = 5L, na_rm = FALSE,
  max_order_only = FALSE, min_df = 0L, used_df = 0,
  restart_period = 100L, variable_win = FALSE, wts_as_delta = TRUE,
  check_wts = FALSE, normalize_wts = TRUE)

t_running_std_moments(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, max_order = 5L, na_rm = FALSE,
  min_df = 0L, used_df = 0, restart_period = 100L, variable_win = FALSE,
  wts_as_delta = TRUE, check_wts = FALSE, normalize_wts = TRUE)

t_running_cumulants(v, time = NULL, time_deltas = NULL, window = NULL,
  wts = NULL, lb_time = NULL, max_order = 5L, na_rm = FALSE,
  min_df = 0L, used_df = 0, restart_period = 100L, variable_win = FALSE,
  wts_as_delta = TRUE, check_wts = FALSE, normalize_wts = TRUE)

Arguments

v

a vector of data.

time

an optional vector of the timestamps of v. If given, must be the same length as v. If not given, we try to infer it by summing the time_deltas.

time_deltas

an optional vector of the deltas of timestamps. If given, must be the same length as v. If not given, and wts are given and wts_as_delta is true, we take the wts as the time deltas. The deltas must be positive. We sum them to arrive at the times.

window

the window size, in time units. If given as a finite integer or double, it is used as-is. If NULL, NA_integer_, NA_real_, or Inf is given and variable_win is true, the window is inferred from the lookback times: the first window is infinite, and the remaining windows are the deltas between successive lookback times. If variable_win is false, these values are equivalent to an infinite window. If the window is negative, an error is thrown.

wts

an optional vector of weights. Weights are ‘replication’ weights, meaning a value of 2 is shorthand for having two observations with the corresponding v value. If NULL, corresponds to equal unit weights, the default. Note that weights are typically only meaningfully defined up to a multiplicative constant, meaning the units of weights are immaterial, with the exception that methods which check for minimum df will, in the weighted case, check against the sum of weights. For this reason, weights less than 1 could cause NA to be returned unexpectedly due to the minimum condition. When weights are NA, the same rules for checking v are applied. That is, the observation will not contribute to the moment if the weight is NA when na_rm is true. When there is no checking, an NA value will cause the output to be NA.
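The 'replication' interpretation can be illustrated with base R alone (no fromo calls): a weight of 2 behaves like the corresponding observation appearing twice.

```r
x <- c(1, 3, 5)
w <- c(2, 1, 1)
# a weight of 2 on x[1] acts like x[1] appearing twice
weighted.mean(x, w) == mean(c(1, 1, 3, 5))  # TRUE: both are 2.5
```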

lb_time

a vector of the times from which lookback will be performed. The output should be the same size as this vector. If not given, defaults to time.

na_rm

whether to remove NA, false by default.

min_df

the minimum df to return a value, otherwise NaN is returned. This can be used to prevent moments from being computed on too few observations. Defaults to zero, meaning no restriction.

used_df

the number of degrees of freedom consumed, used in the denominator of the centered moments computation. These are subtracted from the number of observations.

restart_period

the recompute period. Because subtraction of elements can cause loss of precision, the computation of moments is restarted periodically based on this parameter. Larger values mean fewer restarts and faster, though less accurate, results.

variable_win

if true, and the window is not a concrete number, the computation window becomes the time between lookback times.

wts_as_delta

if true and the time and time_deltas are not given, but wts are given, we take wts as the time_deltas.

check_wts

a boolean for whether the code shall check for negative weights, and throw an error when they are found. Default false for speed.

normalize_wts

a boolean for whether the weights should be renormalized to have a mean value of 1. This mean is computed over elements which contribute to the moments, so if na_rm is set, that means non-NA elements of wts that correspond to non-NA elements of the data vector.

max_order

the maximum order of the centered moment to be computed.

max_order_only

for t_running_cent_moments, if this flag is set, only compute the maximum-order centered moment, returning it in a single column.

Details

Computes the number of elements, the mean, and the 2nd through kth centered (and typically standardized) moments, for k = 2, 3, 4. These are computed via the numerically robust one-pass method of Bennett et al.

Given the length-n vector x, we output a matrix M where M_i,j is the order k - j + 1 moment (i.e. excess kurtosis, skewness, standard deviation, mean, or number of elements) of some elements x_i defined by the sliding time window. Barring NA or NaN, this is over a window of time width window.
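As a sketch of the output layout (assuming fromo is attached; the exact column names may differ by version), t_running_kurt5 has k = 4 and returns five columns, ordered as described above:

```r
library(fromo)

set.seed(123)
x <- rnorm(100)
m <- t_running_kurt5(x, time = seq_along(x), window = 10)
# one row per lookback time (defaulting to the times themselves);
# columns ordered: excess kurtosis, skewness, sd, mean, count
dim(m)  # 100 rows, 5 columns
```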

Value

Typically a matrix, whose first columns are the kth, (k-1)th, through 2nd standardized, centered moments, followed by a column of the mean, then a column of the number of (non-NaN) elements in the input, with the following exceptions:

t_running_cent_moments

Computes arbitrary order centered moments. When max_order_only is set, only a column of the maximum order centered moment is returned.

t_running_std_moments

Computes arbitrary order standardized moments, followed by the standard deviation, the mean, and the count. There is not yet an option for max_order_only, but there probably should be.

t_running_cumulants

Computes arbitrary order cumulants, and returns the kth, k-1th, through the second (which is the variance) cumulant, then the mean, and the count.

Time Windowing

This function supports time-based (or other counter-based) running computation. Here the inputs are the data, x_i, an optional vector of weights, w_i (defaulting to 1), and a vector of time indices, t_i, of the same length as x. The times must be non-decreasing:

t_1 ≤ t_2 ≤ ...

It is assumed that t_0 = -∞. The window, W, is now a time-based window. An optional set of lookback times may also be given, b_j, which may have a different length than x and w. The output corresponds to the lookback times and will be the same length. The jth output is computed over indices i such that

b_j - W < t_i ≤ b_j.
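Under this definition, with no weights or NA values and the default used_df = 1, a single window of t_running_sd should agree with base R's sd over the same indices. A hedged sanity check, assuming fromo is attached:

```r
library(fromo)

set.seed(42)
x <- rnorm(100)
tim <- seq_along(x)
W <- 10

sds <- t_running_sd(x, time = tim, window = W)

# the jth output covers indices i with b_j - W < t_i <= b_j;
# here lb_time defaults to time, so b_j = tim[j]
j <- 50
sel <- tim > tim[j] - W & tim <= tim[j]
all.equal(as.numeric(sds[j]), sd(x[sel]))  # should be TRUE up to roundoff
```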

For comparison functions (like Z-score, rescaling, centering), which compare values of x_i to local moments, the lookbacks may not be given, but a lookahead L is admitted. In this case, the jth output is computed over indices i such that

t_j - W + L < t_i ≤ t_j + L.

If the times are not given, ‘deltas’ may be given instead. If δ_i are the deltas, then we compute the times as

t_i = ∑_{1 ≤ j ≤ i} δ_j.

The deltas must be the same length as x. If times and deltas are not given, but weights are given and the ‘weights as deltas’ flag is set true, then the weights are used as the deltas.
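The three ways of specifying time should therefore agree. A sketch, assuming fromo is attached (with unit deltas, the weights-as-deltas path also leaves the moments unweighted):

```r
library(fromo)

set.seed(1)
x <- rnorm(50)
d <- rep(1, 50)       # unit deltas
tim <- cumsum(d)      # t_i is the sum of the deltas up to i

v1 <- t_running_sd(x, time = tim, window = 5)
v2 <- t_running_sd(x, time_deltas = d, window = 5)
v3 <- t_running_sd(x, wts = d, wts_as_delta = TRUE, window = 5)
# all three should be numerically identical
```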

Sometimes it makes sense for the computational window to be the space between lookback times. That is, the jth output is to be computed over indices i such that

b_{j-1} - W < t_i ≤ b_j.

This can be achieved by setting the 'variable window' flag true and setting the window to NULL. This will not make much sense if the lookback times equal the times, since each moment computation is then over a set of a single index, and most moments are underdefined.
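A sketch of a variable window, assuming fromo is attached: with window = NULL and variable_win = TRUE, each output is computed over the observations since the previous lookback time.

```r
library(fromo)

set.seed(7)
x <- rnorm(60)
tim <- seq_along(x)
# one output per lookback time; each window spans the gap back to
# the previous lookback time (the first window is infinite)
lbt <- c(20, 40, 60)
v <- t_running_sd(x, time = tim, lb_time = lbt,
                  window = NULL, variable_win = TRUE)
# the second output should be sd over indices with 20 < t_i <= 40
all.equal(as.numeric(v[2]), sd(x[21:40]))
```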

Note

The kurtosis is excess kurtosis, with 3 subtracted, and so should be nearly zero for Gaussian input.
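For instance, the sample excess kurtosis of a large Gaussian sample, computed directly in base R, comes out near zero:

```r
set.seed(1234)
x <- rnorm(1e5)
# sample excess kurtosis: fourth central moment over squared variance, minus 3
exk <- mean((x - mean(x))^4) / mean((x - mean(x))^2)^2 - 3
# for n = 1e5 the standard error is roughly sqrt(24 / 1e5), about 0.0155,
# so exk should be small in magnitude
```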

The moment computations provided by fromo are numerically robust, but will often not provide the same results as the 'standard' implementations, due to differences in roundoff. We make every attempt to balance speed and robustness. User assumes all risk from using the fromo package.

Note that when weights are given, they are treated as replication weights. This can have subtle effects on computations which require minimum degrees of freedom, since the sum of weights will be compared to that minimum, not the number of data points. Weight values (much) less than 1 can cause computations to return NA somewhat unexpectedly due to this condition, while values greater than one might cause the computation to spuriously return a value with little precision.

As this code may add and remove observations, numerical imprecision may result in negative estimates of squared quantities, like the second or fourth moments. We do not currently correct for this issue, although it may be somewhat mitigated by setting a smaller restart_period. In the future we will add a check for this case. Post an issue if you experience this bug.

Author(s)

Steven E. Pav shabbychef@gmail.com

References

Terriberry, T. "Computing Higher-Order Moments Online." http://people.xiph.org/~tterribe/notes/homs.html

Bennett, J., et al., "Numerically Stable, Single-Pass, Parallel Statistics Algorithms," Proceedings of the IEEE International Conference on Cluster Computing, 2009. https://www.semanticscholar.org/paper/Numerically-stable-single-pass-parallel-statistics-Bennett-Grout/a83ed72a5ba86622d5eb6395299b46d51c901265

Cook, J. D. "Accurately computing running variance." http://www.johndcook.com/standard_deviation.html

Cook, J. D. "Comparing three methods of computing standard deviation." http://www.johndcook.com/blog/2008/09/26/comparing-three-methods-of-computing-standard-deviation

See Also

running_sd3.

Examples

x <- rnorm(1e5)
xs3 <- t_running_sd3(x, time = seq_along(x), window = 10)
xs4 <- t_running_skew4(x, time = seq_along(x), window = 10)
# but what if you only cared about some middle values?
xs4 <- t_running_skew4(x, time = seq_along(x), lb_time = (length(x) / 2) + 0:10, window = 20)

shabbychef/fromo documentation built on April 11, 2021, 11:03 p.m.