We have the process $$Y_t = 5 + e_t - \frac{1}{2}e_{t-1} + \frac{1}{4}e_{t-2}$$ and begin by working out its variance $$\begin{aligned} \text{Var}(Y_t) & = \text{Var}\left(5 + e_t - \frac{1}{2}e_{t-1} + \frac{1}{4}e_{t-2}\right) \\ & = \text{Var}(e_t) + \frac{1}{4}\text{Var}(e_t) + \frac{1}{16}\text{Var}(e_t) \\ & = \frac{21}{16}\sigma_e^2 \end{aligned}$$ and then the autocovariance at lag 1 $$\begin{aligned} \text{Cov}(Y_t, Y_{t-1}) & = \text{Cov}\left(5 + e_t - \frac{1}{2}e_{t-1} + \frac{1}{4}e_{t-2},\ 5 + e_{t-1} - \frac{1}{2}e_{t-2} + \frac{1}{4}e_{t-3}\right) \\ & = \text{Cov}\left(-\frac{1}{2}e_{t-1}, e_{t-1}\right) + \text{Cov}\left(\frac{1}{4}e_{t-2}, -\frac{1}{2}e_{t-2}\right) \\ & = -\frac{1}{2}\text{Var}(e_{t-1}) - \frac{1}{8}\text{Var}(e_{t-2}) \\ & = -\frac{5}{8}\sigma_e^2 \end{aligned}$$ lag 2 $$\begin{aligned} \text{Cov}(Y_t, Y_{t-2}) & = \text{Cov}\left(5 + e_t - \frac{1}{2}e_{t-1} + \frac{1}{4}e_{t-2},\ 5 + e_{t-2} - \frac{1}{2}e_{t-3} + \frac{1}{4}e_{t-4}\right) \\ & = \frac{1}{4}\text{Var}(e_{t-2}) \\ & = \frac{1}{4}\sigma_e^2 \end{aligned}$$ and lag 3 $$\text{Cov}(Y_t, Y_{t-3}) = \text{Cov}\left(5 + e_t - \frac{1}{2}e_{t-1} + \frac{1}{4}e_{t-2},\ 5 + e_{t-3} - \frac{1}{2}e_{t-4} + \frac{1}{4}e_{t-5}\right) = 0$$ since no error terms overlap, which results in the autocorrelation $$\rho_k = \begin{cases} 1 & k = 0 \\ \frac{-\frac{5}{8}\sigma_e^2}{\frac{21}{16}\sigma_e^2} = -\frac{10}{21} & k = 1 \\ \frac{\frac{1}{4}\sigma_e^2}{\frac{21}{16}\sigma_e^2} = \frac{4}{21} & k = 2 \\ 0 & k \geq 3 \end{cases} \tag*{$\square$}$$
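We can verify these values numerically with `ARMAacf()`; note that `ARMAacf()` writes the MA part with plus signs, so the coefficients from the text enter with flipped signs.

```r
ARMAacf(ma = c(-0.5, 0.25), lag.max = 3)  # 1, -10/21, 4/21, 0
```

The plotting helper `tacf()` used throughout these solutions is not defined in this section. A minimal sketch of what it might look like, assuming it simply plots the theoretical ACF that `stats::ARMAacf()` computes:

```r
tacf <- function(ar = numeric(), ma = numeric(), lag.max = 10) {
  # theoretical ACF at lags 0..lag.max, passed straight to ARMAacf()
  rho <- ARMAacf(ar = ar, ma = ma, lag.max = lag.max)
  plot(seq_along(rho) - 1, rho, type = "h",
       xlab = "Lag", ylab = "ACF", ylim = c(-1, 1))
  abline(h = 0)
  invisible(rho)
}
```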
```r
tacf(ma = c(-0.5, -0.4))
tacf(ma = c(-1.2, 0.7))
tacf(ma = c(1, 0.6))
```
For $$\rho_1 = \frac{-\theta}{1+\theta^2}$$ we retrieve extreme values at $$\frac{\partial}{\partial\theta}\rho_1 = \frac{-1(1+\theta^2) - 2\theta(-\theta)}{(1+\theta^2)^2} = \frac{\theta^2 - 1}{(1+\theta^2)^2} = 0$$ when $\theta = \pm 1$, which gives us $$\begin{aligned} \max \rho_1 & = \frac{-(-1)}{1+(-1)^2} = 0.5 \\ \min \rho_1 & = \frac{-1}{1+1^2} = -0.5 \end{aligned}$$ which we graph in figure \@ref(fig:minmax)
```r
theta <- seq(-10, 10, by = 0.01)
p1 <- (-theta) / (1 + theta^2)
plot(theta, p1, type = "l")
# mark the extreme values at theta = -1 and theta = 1
points(theta[which.max(p1)], max(p1))
points(theta[which.min(p1)], min(p1))
```
$$\frac{-\frac{1}{\theta}}{1 + \left(\frac{1}{\theta}\right)^2} = \frac{-\frac{1}{\theta}\times\theta^2}{\left(1 + \frac{1}{\theta^2}\right)\theta^2} = \frac{-\theta}{1+\theta^2} \tag*{$\square$}$$
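A quick numerical spot check of this invariance, using a hypothetical value $\theta = 0.4$:

```r
theta <- 0.4                         # hypothetical value
(-theta) / (1 + theta^2)             # rho_1 for theta
(-1 / theta) / (1 + (1 / theta)^2)   # rho_1 for 1/theta: identical
```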
```r
theta <- c(0.6, -0.6, 0.95, 0.3)
lag <- c(10, 10, 20, 10)
for (i in seq_along(theta)) {
  print(tacf(ar = theta[i], lag.max = lag[i]))
}
```
$$\begin{aligned} \text{Cov}(\nabla Y_t, \nabla Y_{t-k}) & = \text{Cov}(Y_t - Y_{t-1}, Y_{t-k} - Y_{t-k-1}) \\ & = \text{Cov}(Y_t, Y_{t-k}) - \text{Cov}(Y_{t-1}, Y_{t-k}) - \text{Cov}(Y_t, Y_{t-k-1}) + \text{Cov}(Y_{t-1}, Y_{t-k-1}) \\ & = \frac{\sigma_e^2}{1-\phi^2}\left(\phi^k - \phi^{k-1} - \phi^{k+1} + \phi^k\right) \\ & = \frac{\sigma_e^2}{1-\phi^2}\phi^{k-1}\left(2\phi - \phi^2 - 1\right) \\ & = -\frac{\sigma_e^2}{1-\phi^2}(1-\phi)^2\phi^{k-1} \\ & = -\sigma_e^2\frac{(1-\phi)^2}{(1-\phi)(1+\phi)}\phi^{k-1} \\ & = -\sigma_e^2\frac{1-\phi}{1+\phi}\phi^{k-1} \end{aligned}$$ as required.
$$\begin{aligned} \text{Var}(W_t) & = \text{Var}(Y_t - Y_{t-1}) \\ & = \text{Var}(\phi Y_{t-1} + e_t - Y_{t-1}) \\ & = \text{Var}\left((\phi-1)Y_{t-1} + e_t\right) \\ & = (\phi-1)^2\text{Var}(Y_{t-1}) + \text{Var}(e_t) \\ & = \frac{\sigma_e^2}{1-\phi^2}(\phi^2 - 2\phi + 1) + \sigma_e^2 \\ & = \frac{\sigma_e^2(\phi^2 - 2\phi + 1 + 1 - \phi^2)}{1-\phi^2} \\ & = \frac{2\sigma_e^2(1-\phi)}{1-\phi^2} \\ & = \frac{2\sigma_e^2}{1+\phi} \tag*{$\square$} \end{aligned}$$
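Both results can be checked numerically: in R's sign convention $\nabla Y_t$ is an ARMA(1,1) with `ar = phi` and `ma = -1`, and dividing the autocovariances above by $\text{Var}(W_t)$ gives $\rho_k = -\frac{1-\phi}{2}\phi^{k-1}$. A sketch with a hypothetical $\phi = 0.6$:

```r
phi <- 0.6  # hypothetical value
k <- 1:5
rbind(ARMAacf(ar = phi, ma = -1, lag.max = 5)[-1],  # drop lag 0
      -(1 - phi) / 2 * phi^(k - 1))                 # derived rho_k
```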
Nonzero autocorrelation at lag 1 only.
Nonzero autocorrelation at lags 1 and 2 only; the shape of the ACF depends on the values of the coefficients.
Exponentially decaying correlation from lag 0.
Different ACF patterns depending on whether the roots of the characteristic equation are real or complex.
Exponentially decaying correlations from lag 1.
First, we have variance $$\text{Var}(Y_t) = \text{Var}(\phi_2 Y_{t-2} + e_t) = \phi_2^2\text{Var}(Y_{t-2}) + \sigma_e^2$$ which, assuming stationarity, is equivalent to $$\text{Var}(Y_{t-2}) = \phi_2^2\text{Var}(Y_{t-2}) + \sigma_e^2 \iff \sigma_e^2 = (1-\phi_2^2)\text{Var}(Y_{t-2}) \iff \text{Var}(Y_{t-2}) = \frac{\sigma_e^2}{1-\phi_2^2}$$ which requires $-1 < \phi_2 < 1$ since $\text{Var}(Y_{t-2})$ must be positive and finite.
$$\begin{split} \rho_1 & = 0.6\rho_0 + 0.3\rho_{-1} = 0.6 + 0.3\rho_1 \iff \rho_1 = \frac{0.6}{0.7} \approx 0.8571 \\ \rho_2 & = 0.6\rho_1 + 0.3\rho_0 \approx 0.8143 \\ \rho_3 & = 0.6\rho_2 + 0.3\rho_1 \approx 0.7457 \end{split}$$
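These values agree with the theoretical ACF computed by `ARMAacf()`:

```r
ARMAacf(ar = c(0.6, 0.3), lag.max = 3)  # 1, 0.8571, 0.8143, 0.7457
```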
The roots of the characteristic equation are given by
$$\frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{-2\phi_2} = \frac{0.6 \pm \sqrt{0.6^2 + 4 \times 0.3}}{-2 \times 0.3} = \frac{0.6 \pm 1.2490}{-0.6} = \{-3.0817,\ 1.0817\}.$$
Both roots are real, and since both exceed 1 in absolute value the process is stationary. Next, we sketch the theoretical autocorrelation function (\@ref(fig:ar2a)).
```r
tacf(ar = c(0.6, 0.3))
```
Next we write a function to do the work for us.
```r
ar2solver <- function(phi1, phi2) {
  # roots of the AR(2) characteristic equation 1 - phi1*x - phi2*x^2
  roots <- polyroot(c(1, -phi1, -phi2))
  cat("Roots:\t\t", roots, "\n")
  # for complex roots, report the damping factor and frequency
  if (any(Im(roots) > sqrt(.Machine$double.eps))) {
    damp <- sqrt(-phi2)
    freq <- acos(phi1 / (2 * damp))
    cat("Dampening:\t", damp, "\n")
    cat("Frequency:\t", freq, "\n")
  }
  tacf(ar = c(phi1, phi2))
}
```
```r
ar2solver(-0.4, 0.5)
ar2solver(1.2, -0.7)
ar2solver(-1, -0.6)
ar2solver(0.5, -0.9)
ar2solver(-0.5, -0.6)
```
```r
tacf(ar = 0.7, ma = -0.4)
tacf(ar = 0.7, ma = 0.4)
```
For $k \geq 3$ the white noise terms $e_t$, $e_{t-1}$, and $e_{t-2}$ are all uncorrelated with $Y_{t-k}$, so $$\begin{aligned} \text{Cov}(Y_t, Y_{t-k}) & = \text{E}[(0.8Y_{t-1} + e_t + 0.7e_{t-1} + 0.6e_{t-2})Y_{t-k}] - \text{E}(Y_t)\text{E}(Y_{t-k}) \\ & = \text{E}(0.8Y_{t-1}Y_{t-k} + Y_{t-k}e_t + 0.7e_{t-1}Y_{t-k} + 0.6e_{t-2}Y_{t-k}) - 0 \\ & = 0.8\text{E}(Y_{t-1}Y_{t-k}) + \text{E}(Y_{t-k}e_t) + 0.7\text{E}(e_{t-1}Y_{t-k}) + 0.6\text{E}(e_{t-2}Y_{t-k}) \\ & = 0.8\text{E}(Y_{t-1}Y_{t-k}) \\ & = 0.8\text{Cov}(Y_{t-1}, Y_{t-k}) \\ & = 0.8\gamma_{k-1} & \square \end{aligned}$$
$$\begin{split} \text{Cov}(Y_t, Y_{t-2}) & = \text{E}[(0.8Y_{t-1} + e_t + 0.7e_{t-1} + 0.6e_{t-2})Y_{t-2}] \\ & = \text{E}[(0.8Y_{t-1} + 0.6e_{t-2})Y_{t-2}] \\ & = 0.8\text{Cov}(Y_{t-1}, Y_{t-2}) + 0.6\text{E}(e_{t-2}Y_{t-2}) \\ & = 0.8\gamma_1 + 0.6\text{E}(e_tY_t) \\ & = 0.8\gamma_1 + 0.6\text{E}[e_t(0.8Y_{t-1} + e_t + 0.7e_{t-1} + 0.6e_{t-2})] \\ & = 0.8\gamma_1 + 0.6\sigma_e^2 \iff \\ \rho_2 & = 0.8\rho_1 + 0.6\sigma_e^2/\gamma_0 & \square \end{split}$$
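As a numerical sanity check, the model is `ar = 0.8`, `ma = c(0.7, 0.6)` in R's plus-sign convention, and beyond lag 2 successive autocorrelations should decay by the factor 0.8:

```r
rho <- ARMAacf(ar = 0.8, ma = c(0.7, 0.6), lag.max = 6)
rho[4:7] / rho[3:6]  # rho_k / rho_{k-1} for k = 3, ..., 6: all 0.8
```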
For $\theta_1 = \theta_2 = 1/6$ we have $$\rho_1 = \frac{-\frac{1}{6}+\frac{1}{6}\times\frac{1}{6}}{1 + \left(\frac{1}{6}\right)^2 + \left(\frac{1}{6}\right)^2} = \frac{\frac{1}{6}\left(\frac{1}{6}-1\right)}{1 + \frac{2}{36}} = -\frac{5}{38}.$$
For $\theta_1 = -1$ and $\theta_2 = 6$, $$\rho_1 = \frac{1-6}{1+1^2+36} = -\frac{5}{38}.$$ Similarly, $\rho_2 = \frac{-\theta_2}{1+\theta_1^2+\theta_2^2}$ equals $-\frac{3}{19}$ for both parameter pairs, so the two processes share the same autocorrelation function. $\square$
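Both parameterizations indeed share the same ACF; as before, the MA coefficients enter `ARMAacf()` with flipped signs:

```r
ARMAacf(ma = -c(1/6, 1/6), lag.max = 2)  # rho_1 = -5/38, rho_2 = -3/19
ARMAacf(ma = -c(-1, 6), lag.max = 2)     # identical
```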
For $\theta_1 = \theta_2 = 1/6$ the characteristic equation $1 - \theta_1 x - \theta_2 x^2 = 0$ has roots $$\frac{\frac{1}{6} \pm \sqrt{\frac{1}{36} + 4\times\frac{1}{6}}}{-2\times\frac{1}{6}} = \frac{\frac{1}{6} \pm \frac{5}{6}}{-\frac{1}{3}} = \{-3,\ 2\}$$
and for $\theta_1 = -1$ and $\theta_2 = 6$, $$\frac{-1 \pm \sqrt{1 + 4 \times 6}}{-2\times 6} = \frac{-1 \pm 5}{-12} = \left\{-\frac{1}{3},\ \frac{1}{2}\right\}.$$ The first pair of roots both exceed 1 in absolute value while the second pair do not, so only the parameterization $\theta_1 = \theta_2 = 1/6$ is invertible.
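The same roots can be obtained with `polyroot()`, whose argument lists the coefficients of $1 - \theta_1 x - \theta_2 x^2$ in increasing order of degree:

```r
polyroot(c(1, -1/6, -1/6))  # 2 and -3: both outside the unit circle
polyroot(c(1, 1, -6))       # 1/2 and -1/3: both inside the unit circle
```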
$$\begin{aligned} \text{Var}(Y_{n+1}+Y_n+Y_{n-1}+\dots+Y_1) & = \left((n+1)+2n\rho_1\right)\gamma_0 = \left(1+n(1+2\rho_1)\right)\gamma_0 \\ \text{Var}(Y_{n+1}-Y_n+Y_{n-1}-\dots+Y_1) & = \left((n+1)-2n\rho_1\right)\gamma_0 = \left(1+n(1-2\rho_1)\right)\gamma_0 \end{aligned}$$
Both variances must be nonnegative, so $$\begin{cases} 1+n(1+2\rho_1) \geq 0 \\ 1+n(1-2\rho_1) \geq 0 \end{cases} \iff \begin{cases} 1+n+2n\rho_1 \geq 0 \\ 1+n-2n\rho_1 \geq 0 \end{cases} \iff \begin{cases} \rho_1 \geq -\frac{n+1}{2n} \\ \rho_1 \leq \frac{n+1}{2n} \end{cases} \iff \begin{cases} \rho_1 \geq -\frac{1}{2}\left(1+\frac{1}{n}\right) \\ \rho_1 \leq \frac{1}{2}\left(1+\frac{1}{n}\right) \end{cases}$$ that is, $|\rho_1| \leq \frac{1}{2}\left(1+\frac{1}{n}\right)$ for all $n$; letting $n \to \infty$ yields $|\rho_1| \leq \frac{1}{2}$.
We set $Y_t = e_t - \theta e_{t-1}$ and then we have
$$\begin{split} e_t & = \sum_{j=0}^\infty \theta^j Y_{t-j} = \sum_{j=1}^\infty \theta^j Y_{t-j} + \theta^0 Y_{t-0} \\ & \iff Y_t = e_t - \sum_{j=1}^\infty \theta^j Y_{t-j}, \end{split}$$ which in operator form is $(1 + \theta B + \theta^2 B^2 + \dots)Y_t = e_t$, where $B$ is the backshift operator such that $B^k Y_t = Y_{t-k}$. Multiplying both sides by $(1 - \theta B)$ and using $(1-\theta B)(1+\theta B+\theta^2 B^2+\dots) = 1$ gives back $Y_t = (1-\theta B)e_t = e_t - \theta e_{t-1}$, the definition of an MA(1) process.
$$\begin{split} \text{Var}(Y_t) & = \text{Var}(\phi Y_{t-1}+e_t) = \phi^2\text{Var}(Y_{t-1}) + \sigma_e^2 \\ & = \phi^2\left(\phi^2\text{Var}(Y_{t-2}) + \sigma_e^2\right) + \sigma_e^2 = \phi^4\text{Var}(Y_{t-2}) + (1+\phi^2)\sigma_e^2 \\ & \;\;\vdots \\ & = \phi^{2n}\text{Var}(Y_{t-n}) + \sigma_e^2\sum_{k=0}^{n-1}\phi^{2k} \end{split}$$ With $|\phi| = 1$ this gives $\text{Var}(Y_t) = \text{Var}(Y_{t-n}) + n\sigma_e^2$ for every $n$; under stationarity $\text{Var}(Y_{t-n}) = \text{Var}(Y_t)$, which is impossible since $\sigma_e^2 > 0$.
$$\begin{split} Y_t & = \phi Y_{t-1} + e_t \implies \\ -\sum_{j=1}^\infty \left(\frac{1}{3}\right)^j e_{t+j} & = 3\left(-\sum_{j=1}^\infty \left(\frac{1}{3}\right)^j e_{t-1+j}\right) + e_t \\ -\sum_{j=1}^\infty \left(\frac{1}{3}\right)^{j+1} e_{t+j} & = -\sum_{j=1}^\infty \left(\frac{1}{3}\right)^j e_{t-1+j} + \frac{1}{3}e_t \\ -\sum_{j=1}^\infty \left(\frac{1}{3}\right)^{j+1} e_{t+j} & = -\sum_{j=2}^\infty \left(\frac{1}{3}\right)^j e_{t-1+j} \\ -\sum_{j=1}^\infty \left(\frac{1}{3}\right)^{j+1} e_{t+j} & = -\sum_{j=1}^\infty \left(\frac{1}{3}\right)^{j+1} e_{t+j} & \square \end{split}$$
$$\text{E}(Y_t) = \text{E}\left(-\sum_{j=1}^\infty \left(\frac{1}{3}\right)^j e_{t+j}\right) = 0$$ since the white noise terms all have zero mean.
$$\begin{gathered} \text{Cov}(Y_t, Y_{t-1}) = \text{Cov}\left(-\sum_{j=1}^\infty \left(\frac{1}{3}\right)^j e_{t+j},\ -\sum_{j=1}^\infty \left(\frac{1}{3}\right)^j e_{t-1+j}\right) = \\ \text{Cov}\left(-\frac{1}{3}e_{t+1} - \frac{1}{3^2}e_{t+2} - \dots,\ -\frac{1}{3}e_t - \frac{1}{3^2}e_{t+1} - \dots\right) = \\ \text{Cov}\left(-\frac{1}{3}e_{t+1}, -\frac{1}{3^2}e_{t+1}\right) + \text{Cov}\left(-\frac{1}{3^2}e_{t+2}, -\frac{1}{3^3}e_{t+2}\right) + \dots = \\ \frac{\sigma_e^2}{27}\left(1 + \frac{1}{9} + \frac{1}{9^2} + \dots\right) = \frac{\sigma_e^2}{27}\cdot\frac{1}{1-\frac{1}{9}} = \frac{\sigma_e^2}{24} \end{gathered}$$ which is free of $t$.
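A Monte Carlo sanity check of this covariance, truncating the infinite sum at a hypothetical $J = 40$ terms and taking $\sigma_e^2 = 1$:

```r
set.seed(42)
J <- 40  # truncation point (hypothetical)
n <- 5e4
e <- rnorm(n + J)
# Y[t] = -sum_{j=1}^J (1/3)^j e[t+j]
Y <- vapply(seq_len(n), function(t) -sum((1/3)^(1:J) * e[t + 1:J]), numeric(1))
c(empirical = cov(Y[-1], Y[-n]), theoretical = 1/24)
```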
It is unsatisfactory because $Y_t$ depends on future white noise terms.
$$\begin{gathered} \frac{1}{2}\left(10\left(\frac{1}{2}\right)^{t-1} + e_{t-1} + \frac{1}{2}e_{t-2} + \left(\frac{1}{2}\right)^2 e_{t-3} + \dots + \left(\frac{1}{2}\right)^{n-1}e_{t-n}\right) + e_t = \\ 10\left(\frac{1}{2}\right)^t + \frac{1}{2}e_{t-1} + \left(\frac{1}{2}\right)^2 e_{t-2} + \left(\frac{1}{2}\right)^3 e_{t-3} + \dots + \left(\frac{1}{2}\right)^n e_{t-n} + e_t = \\ 10\left(\frac{1}{2}\right)^t + e_t + \frac{1}{2}e_{t-1} + \left(\frac{1}{2}\right)^2 e_{t-2} + \dots + \left(\frac{1}{2}\right)^n e_{t-n} = Y_t \qquad \square \end{gathered}$$
$\text{E}(Y_t) = 10 \left( \frac{1}{2}\right)^t$ varies with $t$, and thus the process is not stationary.
$$\text{E}(W_t) = \text{E}(Y_t + c\phi^t) = \text{E}(Y_t) + \text{E}(c\phi^t) = 0 + c\phi^t = c\phi^t \tag*{$\square$}$$
$$\phi W_{t-1} + e_t = \phi(Y_{t-1}+c\phi^{t-1})+e_t = \phi Y_{t-1} + c\phi^t + e_t = \phi\left(\frac{Y_t - e_t}{\phi}\right) + c\phi^t + e_t = Y_t + c\phi^t = W_t \tag*{$\square$}$$
No, $\text{E}(W_t)$ is not free of $t$.
This is similar to an AR(1) with $\phi = -0.5$, for which $\rho_k = (-0.5)^k$.
```r
ARMAacf(ar = -0.5, lag.max = 7)
ARMAacf(ma = -c(0.5, -0.25, 0.125, -0.0625, 0.03125, -0.015625))
```
This is like an ARMA(1,1) with $\phi = -0.5$ and $\theta = 0.5$.
$$\begin{cases} \psi_1 = \phi - \theta = -1 \\ \psi_2 = (\phi - \theta)\phi = 0.5 \end{cases} \implies \begin{cases} \phi = -0.5 \\ \theta = 0.5 \end{cases}$$
```r
ARMAacf(ma = -c(1, -0.5, 0.25, -0.125, 0.0625, -0.03125, 0.015625))
ARMAacf(ar = -0.5, ma = -0.5, lag.max = 8)
```
$$\text{Cov}(Y_t, Y_{t-k}) = \text{Cov}(e_{t-1} - e_{t-2} + 0.5e_{t-3},\ e_{t-1-k} - e_{t-2-k} + 0.5e_{t-3-k})$$ gives $$\gamma_k = \begin{cases} \sigma_e^2 + \sigma_e^2 + 0.25\sigma_e^2 = 2.25\sigma_e^2 & k = 0 \\ -\sigma_e^2 - 0.5\sigma_e^2 = -1.5\sigma_e^2 & k = 1 \\ 0.5\sigma_e^2 & k = 2 \\ 0 & k \geq 3 \end{cases}$$
This is an ARMA($p$, $q$) process with $p = 0$ and $q = 2$; that is, it is in fact an MA(2) process $Y_t = e_t - e_{t-1} + 0.5e_{t-2}$ (with the time index of the noise shifted by one) with $\theta_1 = 1$ and $\theta_2 = -0.5$.
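Checking against `ARMAacf()` (signs flipped as before):

```r
ARMAacf(ma = c(-1, 0.5), lag.max = 3)  # -2/3, 2/9, 0
```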
$$1 - \phi_1 x - \phi_2 x^2 - \dots - \phi_p x^p = x^p\left(\left(\frac{1}{x}\right)^p - \phi_1\left(\frac{1}{x}\right)^{p-1} - \dots - \phi_p\right)$$
Thus if $x_1 = G$ is a root of the characteristic equation above, then $\frac{1}{x_1} = \frac{1}{G}$ must be a root of $$x^p - \phi_1 x^{p-1} - \phi_2 x^{p-2} - \dots - \phi_p = 0.$$
$$\begin{gathered} \text{Cov}(Y_t - \phi Y_{t+1}, Y_{t-k} - \phi Y_{t-k+1}) = \\ \text{Cov}(Y_t, Y_{t-k}) - \phi\text{Cov}(Y_t, Y_{t+1-k}) - \phi\text{Cov}(Y_{t+1}, Y_{t-k}) + \phi^2\text{Cov}(Y_{t+1}, Y_{t+1-k}) = \\ \frac{\sigma_e^2}{1-\phi^2}\left(\phi^k - \phi\,\phi^{k-1} - \phi\,\phi^{k+1} + \phi^2\phi^k\right) = \\ \frac{\sigma_e^2}{1-\phi^2}\left(\phi^k - \phi^k - \phi^{k+2} + \phi^{k+2}\right) = 0 \end{gathered}$$
First,
$$Y_{t+k} = \phi Y_{t+k-1} + e_{t+k} \implies$$
$$\begin{split} \text{Cov}(b_t, Y_{t+k}) & = \text{Cov}(Y_t - \phi Y_{t+1}, Y_{t+k}) \\ & = \text{Cov}(Y_t, Y_{t+k}) - \phi\text{Cov}(Y_{t+1}, Y_{t+k}) \\ & = \frac{\sigma_e^2}{1-\phi^2}\phi^k - \phi\frac{\sigma_e^2}{1-\phi^2}\phi^{k-1} \\ & = 0 & \square \end{split}$$
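A simulation sketch of this orthogonality, with hypothetical values $\phi = 0.6$ and $k = 2$:

```r
set.seed(1)
phi <- 0.6  # hypothetical value
n <- 1e5
Y <- arima.sim(model = list(ar = phi), n = n)
b <- Y[1:(n - 2)] - phi * Y[2:(n - 1)]  # b_t = Y_t - phi * Y_{t+1}
cor(b, Y[3:n])                          # approximately zero
```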
$$\begin{split} \text{E}(Y_0) & = \text{E}(c_1e_0) = c_1\text{E}(e_0) = 0 \\ \text{E}(Y_1) & = \text{E}(c_2Y_0 + e_1) = c_2\text{E}(Y_0) = 0 \\ \text{E}(Y_2) & = \text{E}(\phi_1Y_1 + \phi_2Y_0 + e_2) = 0 \\ & \;\;\vdots \\ \text{E}(Y_t) & = \text{E}(\phi_1Y_{t-1} + \phi_2Y_{t-2} + e_t) = 0 \end{split}$$
We have $$\begin{cases} \text{Var}(Y_0) = c_1^2\text{Var}(e_0) = c_1^2\sigma_e^2 \\ \text{Var}(Y_1) = c_2^2\text{Var}(Y_0) + \text{Var}(e_1) = c_2^2c_1^2\sigma_e^2 + \sigma_e^2 = \sigma_e^2(1+c_1^2c_2^2) \end{cases} \implies$$
$$c_1^2\sigma_e^2 = \sigma_e^2(1+c_1^2c_2^2) \iff c_1^2(1-c_2^2) = 1 \iff c_1^2 = \frac{1}{1-c_2^2} \iff c_1 = \frac{1}{\sqrt{1-c_2^2}}$$ with covariance $$\begin{gathered} \text{Cov}(Y_0, Y_1) = \text{Cov}(c_1e_0, c_2c_1e_0 + e_1) = \text{Cov}(c_1e_0, c_2c_1e_0) + \text{Cov}(c_1e_0, e_1) = \\ c_1^2c_2\sigma_e^2 + c_1\text{Cov}(e_0, e_1) = c_1^2c_2\sigma_e^2 + 0 \end{gathered}$$ and autocorrelation $$\rho_1 = \frac{c_1^2c_2\sigma_e^2}{\sqrt{\text{Var}(Y_0)\text{Var}(Y_1)}} = \frac{c_1^2c_2\sigma_e^2}{c_1^2\sigma_e^2} = c_2,$$ so we must choose $$\begin{cases} c_2 = \frac{\phi_1}{1-\phi_2} \\ c_1 = \frac{1}{\sqrt{1-c_2^2}}. \end{cases}$$
The process can be transformed by rescaling it to any desired variance $\gamma_0$ and then shifting it by any given mean $\mu$: $$\frac{\sqrt{\gamma_0}}{c_1}\,Y_t + \mu.$$
$$\begin{split} Y_t & = \phi Y_{t-1} + e_t \\ & = \phi(\phi Y_{t-2} + e_{t-1}) + e_t = \phi^2Y_{t-2} + \phi e_{t-1} + e_t \\ & \;\;\vdots \\ & = \phi^t Y_0 + e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \dots + \phi^{t-1}e_1 & \square \end{split}$$
$$\text{E}(Y_t) = \text{E}(\phi^t Y_0 + e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \dots + \phi^{t-1}e_1) = \phi^t\mu_0$$
$$\begin{split} \text{Var}(Y_t) & = \text{Var}(\phi^t Y_0 + e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \dots + \phi^{t-1}e_1) \\ & = \phi^{2t}\sigma_0^2 + \sigma_e^2\sum_{k=0}^{t-1}(\phi^2)^k \\ & = \phi^{2t}\sigma_0^2 + \sigma_e^2\frac{1-\phi^{2t}}{1-\phi^2} \quad \text{if } |\phi| \neq 1, \text{ and otherwise} \\ & = \text{Var}(Y_0) + t\sigma_e^2 = \sigma_0^2 + t\sigma_e^2 \end{split}$$
If $\mu_0 = 0$ then $\text{E}(Y_t) = 0$ for all $t$, but for $\text{Var}(Y_t)$ to be free of $t$ we need $|\phi| < 1$ together with $\sigma_0^2 = \sigma_e^2/(1-\phi^2)$; in particular, $\phi$ cannot be 1.
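The linear variance growth in the $|\phi| = 1$ case is easy to see by simulating random walks:

```r
set.seed(1)
# 2000 random walks of length 200; the cross-sectional variance at
# time t grows like sigma_e^2 * t (here sigma_e^2 = 1)
Y <- apply(matrix(rnorm(200 * 2000), nrow = 200), 2, cumsum)
plot(apply(Y, 1, var), type = "l", xlab = "t", ylab = "Variance")
abline(0, 1, lty = 2)
```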
$$\text{Var}(Y_t) = \phi^2\text{Var}(Y_{t-1}) + \sigma_e^2 = \phi^2\text{Var}(Y_t) + \sigma_e^2$$ by stationarity, and so $$\text{Var}(Y_t)(1-\phi^2) = \sigma_e^2 \implies \text{Var}(Y_t) = \frac{\sigma_e^2}{1-\phi^2},$$ and for this to be positive and finite we must have $|\phi| < 1$.
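A quick simulation check of the stationary variance, with a hypothetical $\phi = 0.7$ and $\sigma_e^2 = 1$:

```r
set.seed(1)
phi <- 0.7  # hypothetical value
Y <- arima.sim(model = list(ar = phi), n = 1e5)
c(empirical = var(Y), theoretical = 1 / (1 - phi^2))
```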