Be able to find the roots of a 2nd order polynomial
$$ x = \frac{ - b \pm \sqrt {b^2 - 4ac} }{2a} $$
$$ z = a + bi, \qquad i = \sqrt{-1} $$ where $a$ is the real part and $b$ is the imaginary part. The complex conjugate is $$ \bar{z} = a - bi $$
A complex number is basically a vector, so its absolute value is just the magnitude: $|z| = \sqrt{a^2 + b^2}$.
The unit circle is the set of values in the complex plane for which the magnitude of $z$ equals one. If $z$ is a complex number with magnitude greater than one, it lies outside the unit circle.
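Base R handles complex arithmetic directly, so these quantities are easy to check (the specific numbers below are just illustrative):

```r
# Complex arithmetic in base R
z <- complex(real = 3, imaginary = 4)
Mod(z)    # magnitude sqrt(3^2 + 4^2) = 5, so this z is outside the unit circle
Conj(z)   # complex conjugate 3 - 4i
# Points of the form e^{2*pi*i*f} always lie on the unit circle
Mod(exp(2i * pi * 0.3))   # 1
```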
```r
quad_form <- function(a, b, c) {
  rad <- b^2 - 4 * a * c
  if (is.complex(rad) || all(rad >= 0)) {
    rad <- sqrt(rad)
  } else {
    # negative discriminant: coerce to complex so sqrt returns complex roots
    rad <- sqrt(as.complex(rad))
  }
  round(cbind(-b - rad, -b + rad) / (2 * a), 3)
}
quad_form(c(1, 4), c(1, -5), c(6, 2))
```
A filter will turn $Z_t$ into $X_t$
for example:
$$\mathrm{difference:} X_t = Z_t - Z_{t-1}$$
A moving average is also a filter (a smoother).
Consider $ Z_1 = 8 , Z_2 =14, Z_3 = 14, Z_4 = 7$
If we apply the difference filter to this, we have that: $$X_2 = 6, \quad X_3 = 0, \quad X_4 = -7$$
Note: the differenced data are a realization of length n-1
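Base R's `diff` applies this filter directly; on the small example above:

```r
z <- c(8, 14, 14, 7)
diff(z)   # 6 0 -7 -- a realization of length n - 1
```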
We can use the difference filter to remove the wandering behavior.
We can only get a realization of length $n-4$ (in general, $n - (\text{number of points} - 1)$). What does the 5-point moving average do? It averages a point with the two points before it and the two after it. This filter can filter out some frequencies.
```r
mafun <- function(xs, n) {
  stats::filter(xs, rep(1, n)) / n   # n-point moving average
}
dfun <- function(xs, n) {
  diff(xs, lag = n)                  # difference filter
}
th <- ggthemes::theme_few()
data(fig1.21a)
ma <- mafun(fig1.21a, 5)
d  <- dfun(fig1.21a, 1)
fp <- tplot(fig1.21a) + th
mp <- tplot(ma) + th
dp <- tplot(d) + th
```
And here let's view the 5-point moving average:
```r
cowplot::plot_grid(fp, mp, nrow = 2, labels = c("Original Data", "5-point MA"))
```
And now a look at the difference filter
```r
cowplot::plot_grid(fp, dp, nrow = 2, labels = c("Original Data", "1st order difference"))
```
```r
rlz <- gen.sigplusnoise.wge(200, coef = c(5, 3), freq = c(.1, .45),
                            vara = 10, sn = 1, plot = FALSE)
```
```r
pfun <- function(x, n, l) {
  fp <- tplot(x) + th
  mp <- tplot(mafun(x, n)) + th
  dp <- tplot(dfun(x, l)) + th
  list("original" = fp, "MA" = mp, "dif" = dp)
}
plts <- pfun(rlz, 5, 1)
plts$original
plts$MA
plts$dif
```
```r
parzen.wge(rlz)
parzen.wge(na.omit(mafun(rlz, 5)))
parzen.wge(na.omit(dfun(rlz, 1)))
```
A GLP (general linear process) is a linear filter with a white noise input.
We take a white noise input, put it through a filter, and get something that isn't white noise.
$$\sum_{j=0}^\infty \psi_j a_{t-j} = X_t - \mu$$
GLPs are an infinite sum of white noise terms. This might seem strange now, but the concept will return later.
The $\psi_j$'s are called psi weights. AR, MA, and ARMA models are all special cases of GLPs, and the psi weights will be useful later when we study confidence intervals.
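As a sketch of the idea (assuming the AR(1) psi weights are $\psi_j = \phi_1^j$, a result we meet later), we can truncate the infinite sum and confirm the filtered white noise reproduces the AR(1) recursion:

```r
set.seed(1)
phi <- 0.8
a <- rnorm(300)                       # white noise input a_t
psi <- phi^(0:50)                     # truncated psi weights (assumed AR(1) form)
x_glp <- stats::filter(a, psi, sides = 1)             # sum_j psi_j * a_{t-j}
x_ar  <- stats::filter(a, phi, method = "recursive")  # X_t = phi*X_{t-1} + a_t
# After the truncation burn-in, the two series agree closely
max(abs(x_glp[60:300] - x_ar[60:300]))
```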
AR(1): the AR(p) model in which $p = 1$
We will go through well known forms of the AR(1) model.
$$X_t = \beta + \phi_1 X_{t-1} + a_t$$
$$\beta = (1-\phi_1)\mu$$
$\beta$ is the model's constant term (the intercept).
We are saying that the value of $X_t$ depends on some constant, the previous value of $X$, and some random white noise. This looks a lot like regression, except that the predictor is a lagged copy of the response itself.
We can rewrite this as:
$$ X_t = (1-\phi_1)\mu + \phi_1 X_{t-1} + a_t $$
An AR(1) process is stationary if and only if the magnitude of $\phi_1$ is less than 1.
We will deal with this a lot in the near future
Is $E\left[ X_t \right] = \mu$?
$$E \left[ X_t \right] = E \left[ (1-\phi_1)\mu \right] + E \left[ \phi_1 X_{t-1} \right] + E[a_t]$$
By stationarity $E\left[ X_{t-1} \right] = E\left[ X_t \right]$, so we can rewrite this as $$E \left[ X_t \right] = (1-\phi_1) \mu + \phi_1 E \left[ X_{t} \right] + 0$$
$$E \left[ X_t \right] ( 1-\phi_1) = (1-\phi_1) \mu $$ $$E\left[ X_t \right] = \mu $$
The mean does not depend on $t$.
The variance also does not depend on $t$, and if $|\phi_1| < 1$ the variance is finite:
$$\sigma_X^2 = \frac{\sigma_a^2}{1-\phi_1^2}$$
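We can sanity-check this variance formula by simulation (the parameter values here are arbitrary):

```r
set.seed(42)
phi <- 0.9
x <- arima.sim(model = list(ar = phi), n = 100000)  # AR(1) with sigma_a = 1
var(x)            # sample variance, close to the theoretical value
1 / (1 - phi^2)   # sigma_a^2 / (1 - phi_1^2) = 5.263...
```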
and the autocorrelation $\rho_k$ is $\phi_1$ raised to the power $k$:
$$\rho_k = \phi_1^k, \quad k \geq 0$$
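The `ARMAacf` function in base R computes the exact ACF of an ARMA model, so we can verify $\rho_k = \phi_1^k$ without simulating:

```r
phi <- 0.7
ARMAacf(ar = phi, lag.max = 5)   # exact autocorrelations at lags 0..5
phi^(0:5)                        # matches: 1, 0.7, 0.49, 0.343, ...
```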
The spectral density of the AR(1) also does not depend on time; it monotonically increases or decreases in $f$ depending on the sign of $\phi_1$:
$$S_X(f) = \frac{\sigma_a^2}{\sigma_X^2} \left( \frac{1}{\mid 1 - \phi_1 e^{-2\pi i f}\mid^2} \right)$$
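Coding the formula directly shows the monotone behavior and where the peak sits (this is a sketch assuming $\sigma_a = 1$; `ar1_spec` is just a name chosen for the illustration):

```r
ar1_spec <- function(f, phi) {
  sigma_x2 <- 1 / (1 - phi^2)       # sigma_X^2 with sigma_a^2 = 1 (assumption)
  (1 / sigma_x2) / Mod(1 - phi * exp(-2i * pi * f))^2
}
f <- seq(0, 0.5, length.out = 6)
ar1_spec(f,  0.9)   # decreasing in f: peak at f = 0 for positive phi
ar1_spec(f, -0.9)   # increasing in f: peak at f = 0.5 for negative phi
```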
With zero mean, $$X_t = \phi_1 X_{t-1} + a_t$$ or equivalently $$X_t - \phi_1 X_{t-1} = a_t$$
remember we have $$\rho_k = \phi_1^k$$
With positive $\phi_1$, we have:
Realizations are wandering and aperiodic
Autocorrelations are damped exponentials
The spectral density peaks at $f = 0$
Higher values of $\phi_1$ give stronger versions of these characteristics
With negative $\phi_1$, we have:
Realizations are oscillating
Autocorrelations are damped and alternating in sign (a negative number raised to the power $k$ alternates)
The spectral density has a peak at $f = 0.5$ (cycle length of 2)
Higher magnitude of $\phi_1$ gives stronger versions of these characteristics
If $\phi_1 = 1$ or $|\phi_1| > 1$, realizations are from a nonstationary process. With $\phi_1 = 1$ the realization looks okay, and this is actually a special ARIMA model. With $|\phi_1| > 1$ we get explosive realizations. Check this out:
```r
nonstarma <- function(n, phi) {
  x <- rep(0, n)
  a <- rnorm(n)
  for (k in 2:n) {
    x[k] <- phi * x[k - 1] + a[k]   # AR(1) recursion, not constrained to |phi| < 1
  }
  tplot(x)
}
nonstarma(n = 50, phi = 1.5) + th
```