# mou.loglik: Loglikelihood for the multivariate Ornstein-Uhlenbeck process

In msde: Bayesian Inference for Multivariate Stochastic Differential Equations

## Description

Computes the exact loglikelihood of the Euler discretization of a multivariate Ornstein-Uhlenbeck process, for any amount of missing data, using a Kalman filter.

## Usage

```
mou.loglik(X, dt, nvar.obs, Gamma, Lambda, Phi, mu0, Sigma0)
```

## Arguments

- `X`: An `nobs x ndims` matrix of complete data.
- `dt`: A scalar or length `nobs-1` vector of interobservation times.
- `nvar.obs`: A scalar or length-`nobs` vector of integers between 0 and `ndims` denoting the number of observed SDE variables in each row of `X`. Defaults to `ndims`. See `sde.init` for details.
- `Gamma`: An `ndims x ndims` matrix of linear-drift parameters. See Details.
- `Lambda`: A length-`ndims` vector of constant-drift parameters. See Details.
- `Phi`: An `ndims x ndims` positive definite variance matrix. See Details.
- `mu0, Sigma0`: Mean and variance of the marginal multivariate normal distribution of `X[1,]`. Defaults to iid standard normals for each component.

## Details

The p-dimensional multivariate Ornstein-Uhlenbeck (mOU) process Y_t = (Y_1t, …, Y_pt) satisfies the SDE

dY_t = (Γ Y_t + Λ)dt + Φ^(1/2) dB_t,

where B_t = (B_1t, …, B_pt) is p-dimensional Brownian motion. Its Euler discretization is of the form

Y_n+1 = Y_n + (Γ Y_n + Λ) Δ_n + Φ^(1/2) Δ B_n,

where Y_n = Y(t_n), Δ_n = t_n+1 - t_n and

Δ B_n = B(t_n+1) - B(t_n) ~ iid N(0, Δ_n).

Thus, Y_0, …, Y_N is a multivariate normal Markov chain, for which the marginal distribution of any subset of timepoints and/or components can be efficiently calculated using the Kalman filter. This can be used to check the MCMC output of `sde.post`, as in the example.
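With fully observed data, the Euler discretization above implies that the loglikelihood is simply a sum of multivariate normal log-densities: log N(Y_0 | mu0, Sigma0) plus, for each step, log N(Y_{n+1} | Y_n + (Γ Y_n + Λ) Δ_n, Φ Δ_n). The following base-R sketch (not using the msde API; `ldmvnorm` and all parameter values are illustrative) simulates a short Euler chain and computes this complete-data loglikelihood directly:

```r
# log-density of a multivariate normal N(mean, V) at x (illustrative helper)
ldmvnorm <- function(x, mean, V) {
  p <- length(x)
  R <- chol(V)                               # V = R'R, R upper triangular
  z <- backsolve(R, x - mean, transpose = TRUE)  # z = R'^{-1}(x - mean)
  -.5 * (p * log(2*pi) + sum(z^2)) - sum(log(diag(R)))
}

set.seed(1)
p <- 2; N <- 50; dt <- .01
Gamma <- matrix(c(-.5, .1, .2, -.3), p, p)   # linear-drift matrix
Lambda <- c(1, -1)                           # constant drift
Phi <- diag(c(.2, .3))                       # diffusion variance
mu0 <- rep(0, p); Sigma0 <- diag(p)          # distribution of Y_0

# simulate the Euler chain: Y_{n+1} | Y_n ~ N(Y_n + (Gamma Y_n + Lambda) dt, Phi dt)
Y <- matrix(NA, N, p)
Y[1,] <- mu0 + drop(t(chol(Sigma0)) %*% rnorm(p))
for(n in 1:(N-1)) {
  mu <- Y[n,] + drop(Gamma %*% Y[n,] + Lambda) * dt
  Y[n+1,] <- mu + drop(t(chol(Phi * dt)) %*% rnorm(p))
}

# complete-data Euler loglikelihood: sum of Gaussian one-step terms
ll <- ldmvnorm(Y[1,], mu0, Sigma0)
for(n in 1:(N-1)) {
  mu <- Y[n,] + drop(Gamma %*% Y[n,] + Lambda) * dt
  ll <- ll + ldmvnorm(Y[n+1,], mu, Phi * dt)
}
ll
```

When some components or timepoints are missing, this direct sum is no longer available, and the Kalman filter integrates out the unobserved coordinates instead; that is the case `mou.loglik` handles via `nvar.obs`.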

## Value

Scalar value of the loglikelihood. See Details.

## Examples

```
# bivariate OU model
bmod <- sde.examples("biou")

# simulate some data

# true parameter values
Gamma0 <- .1 * crossprod(matrix(rnorm(4), 2, 2))
Lambda0 <- rnorm(2)
Phi0 <- crossprod(matrix(rnorm(4), 2, 2))
Psi0 <- chol(Phi0) # precompiled model uses the Cholesky scale
theta0 <- c(Gamma0, Lambda0, Psi0[c(1,3,4)])
names(theta0) <- bmod$param.names

# initial value
Y0 <- rnorm(2)
names(Y0) <- bmod$data.names

# simulation
dT <- runif(1, max = .1) # time step
nObs <- 10
bsim <- sde.sim(bmod, x0 = Y0, theta = theta0,
                dt = dT, dt.sim = dT, nobs = nObs)
YObs <- bsim$data

# inference via MCMC
binit <- sde.init(bmod, x = YObs, dt = dT, theta = theta0,
                  nvar.obs = 1) # second component is unobserved
# only Lambda1 is unknown
fixed.params <- rep(TRUE, bmod$nparams)
names(fixed.params) <- bmod$param.names
fixed.params["Lambda1"] <- FALSE
# prior on (Lambda1, Y_0)
hyper <- list(mu = c(0,0), Sigma = diag(2))
names(hyper$mu) <- bmod$data.names
dimnames(hyper$Sigma) <- rep(list(bmod$data.names), 2)

# posterior sampling
nsamples <- 1e5
burn <- 1e3
bpost <- sde.post(bmod, binit, hyper = hyper,
                  fixed.params = fixed.params,
                  nsamples = nsamples, burn = burn)
L1.mcmc <- bpost$params[,"Lambda1"]

# analytic posterior
L1.seq <- seq(min(L1.mcmc), max(L1.mcmc), len = 500)
L1.loglik <- sapply(L1.seq, function(l1) {
  lambda <- Lambda0
  lambda[1] <- l1
  mou.loglik(X = YObs, dt = dT, nvar.obs = 1,
             Gamma = Gamma0, Lambda = lambda, Phi = Phi0,
             mu0 = hyper$mu, Sigma0 = hyper$Sigma)
})
# normalize density
L1.Kalman <- exp(L1.loglik - max(L1.loglik))
L1.Kalman <- L1.Kalman/sum(L1.Kalman)/(L1.seq[2]-L1.seq[1])

# compare
hist(L1.mcmc, breaks = 100, freq = FALSE,
     main = expression(p(Lambda*" | "*bold(Y))),
     xlab = expression(Lambda))
lines(L1.seq, L1.Kalman, col = "red")
legend("topright", legend = c("Analytic", "MCMC"),
       pch = c(NA, 22), lty = c(1, NA), col = c("red", "black"))
```

msde documentation built on May 2, 2019, 2:05 a.m.