fade | R Documentation |
Reduces the weight of old observations in the data stream.
There are two mechanisms for fading. First, build
has a learning rate parameter
lambda
. If this parameter is set, build
automatically fades all counts before a new data point is added. The second
mechanism is to explicitly call the function fade
whenever fading is needed. This has the advantage that the overhead of
manipulating all counts in the EMM can be reduced and that fading can be used
in a more flexible manner. For example, if data points arrive at an irregular
rate, fade
could be called at regular time intervals
(e.g., every second).
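The difference between the two mechanisms can be illustrated with a toy simulation in plain R (not using rEMM itself; the counts vector and the update loop are hypothetical stand-ins for an EMM's internal counts):

```r
lambda <- 1
decay <- 2^-lambda

# Mechanism 1: fade all counts before each new data point is added
# (what build does automatically when lambda is set).
counts <- c(0, 0, 0)
for (state in c(1, 2, 2, 3)) {
  counts <- counts * decay          # fade on every update
  counts[state] <- counts[state] + 1
}
print(counts)   # older observations carry less weight

# Mechanism 2: add points without fading, then call fade once
# per time interval (here: one explicit fade at the end).
counts2 <- c(0, 0, 0)
for (state in c(1, 2, 2, 3)) {
  counts2[state] <- counts2[state] + 1
}
counts2 <- counts2 * decay          # one explicit fade for the interval
print(counts2)
```

Mechanism 2 touches all counts only once per interval instead of once per data point, which is where the reduced overhead mentioned above comes from.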
fade(x, t, lambda)
x | an object of class "EMM". |
t | number of time intervals (if missing, 1 is used). |
lambda | learning rate (if missing, the lambda set in x is used). |
Old data points are faded by using a weight. We define the weight for data
that is t timesteps in the past by the following strictly decreasing function:

w_t = 2^{-\lambda t}

Since the weight is multiplicative, it can be applied iteratively by
multiplying all counts at each time step by 2^{-\lambda}.
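The equivalence between the closed-form weight and iterative fading can be checked directly in R (lambda and t are arbitrary example values):

```r
lambda <- 0.5
t <- 6

# Closed-form weight for a count observed t timesteps in the past.
w_closed <- 2^(-lambda * t)

# Iterative fading: multiply the count by 2^-lambda once per timestep.
count <- 1
for (i in seq_len(t)) count <- count * 2^(-lambda)

c(closed_form = w_closed, iterated = count)
```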
For the clustering algorithm, the weight of the clusters (the number of data
points assigned to each cluster) is faded. For the EMM, the initial count
vector and all transition counts are faded.
Returns a reference to the changed object x.
EMM and build
data("EMMTraffic")
## For the example we use a very high learning rate;
## build then calls fade after each new data point
emm_l <- EMM(measure="eJaccard", threshold=0.2, lambda = 1)
build(emm_l, EMMTraffic)
## build a regular EMM for comparison
emm <- EMM(measure="eJaccard", threshold=0.2)
build(emm, EMMTraffic)
## compare the transition matrix
transition_matrix(emm)
transition_matrix(emm_l)
## compare graphs
op <- par(mfrow = c(1, 2), pty = "m")
plot(emm, main = "regular EMM")
plot(emm_l, main = "EMM with high learning rate")
par(op)