timedelay-package: Time Delay Estimation for Stochastic Time Series of Gravitationally Lensed Quasars


Description

The R package timedelay provides a toolbox for estimating the time delay between the brightness time series of gravitationally lensed quasar images via Bayesian and profile likelihood approaches. The model is based on a state-space representation for irregularly observed time series data generated from a latent continuous-time Ornstein-Uhlenbeck process. Our Bayesian method adopts scientifically motivated hyper-prior distributions and a Metropolis-Hastings within Gibbs sampler, producing posterior samples of the model parameters, including the time delay. The profile likelihood of the time delay is a simple approximation to its marginal posterior distribution. The two approaches complement each other and produce almost identical results; the Bayesian approach is more principled, but the profile likelihood is easier to implement. Version 1.0.9 adds functionality for estimating the time delay between doubly lensed light curves observed in two bands.
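
As a rough illustration of the latent process (a minimal sketch, not part of the package), the code below simulates an Ornstein-Uhlenbeck process at irregular observation times, assuming the common parameterization in which the stationary variance is tau * sigma^2 / 2.

  # Minimal sketch of the latent O-U process at irregular times (illustration only):
  # X(t[j]) | X(t[j-1]) ~ N(mu + a * (X(t[j-1]) - mu), (tau * sigma^2 / 2) * (1 - a^2)),
  # where a = exp(-(t[j] - t[j-1]) / tau).
  set.seed(1)
  mu <- 0; sigma <- 0.01; tau <- 200
  times <- sort(runif(100, 0, 1000))   # irregular observation times
  x <- numeric(length(times))
  x[1] <- rnorm(1, mu, sqrt(tau * sigma^2 / 2))
  for (j in 2:length(times)) {
    a <- exp(-(times[j] - times[j - 1]) / tau)
    x[j] <- rnorm(1, mu + a * (x[j - 1] - mu), sqrt((tau * sigma^2 / 2) * (1 - a^2)))
  }
  # plot(times, x, type = "l")   # one realization of the latent light curve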

Details

Package: timedelay
Type: Package
Version: 1.0.11
Date: 2020-05-18
License: GPL-2
Main functions: bayesian, bayesian.multiband, entirelogprofilelikelihood

Author(s)

Hyungsuk Tak, Kaisey Mandel, David A. van Dyk, Vinay L. Kashyap, Xiao-Li Meng, and Aneta Siemiginowska

Maintainer: Hyungsuk Tak <hyungsuk.tak@gmail.com>

References

Hyungsuk Tak, Kaisey Mandel, David A. van Dyk, Vinay L. Kashyap, Xiao-Li Meng, and Aneta Siemiginowska (2017). "Bayesian Estimates of Astronomical Time Delays between Gravitationally Lensed Stochastic Light Curves," The Annals of Applied Statistics, 11 (3), 1309-1348.

Hyungsuk Tak, Xiao-Li Meng, and David A. van Dyk (2018). "A Repelling-Attracting Metropolis Algorithm for Multimodality," Journal of Computational and Graphical Statistics, 27 (3), 479-490.

Zhirui Hu and Hyungsuk Tak (2020+). "Modeling Stochastic Variability in Multi-Band Time Series Data," arXiv:2005.08049.

Examples

  # Loading datasets
  data(simple)
  head(simple)

  # Subset (data for image A) of the typical quasar data set
  lcA <- simple[, 1 : 3]

  # Another subset (data for image B) of the typical quasar data set
  # The observation times for image B are not necessarily the same as those for image A
  lcB <- simple[, c(1, 4, 5)]

  # The two subsets do not need to have the same number of observations
  # For example, here we add one more observation time for image B
  lcB <- rbind(lcB, c(290, 1.86, 0.006))

  dim(lcA)
  dim(lcB)
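
  # A quick visual check of the two light curves (a minimal sketch, not part of
  # the original example): column 1 holds observation times, column 2 magnitudes.
  # plot(lcA[, 1], lcA[, 2], type = "b", xlab = "Time", ylab = "Magnitude")
  # points(lcB[, 1], lcB[, 2], type = "b", col = "red")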

  ###############################################
  # Time delay estimation via Bayesian approach #
  ###############################################


  # Cubic microlensing model (m = 3)
  output <- bayesian(data.lcA = lcA, data.lcB = lcB, 
                     data.flux = FALSE, theta.ini = c(0, 0.01, 200), 
                     delta.ini = 50, delta.uniform.range = c(0, 100), 
                     delta.proposal.scale = 1, 
                     tau.proposal.scale = 3, 
                     tau.prior.shape = 1, tau.prior.scale = 1,
                     sigma.prior.shape = 1, sigma.prior.scale = 2 / 10^7, 
                     asis = TRUE, micro = 3,
                     sample.size = 100, warmingup.size = 50)

  names(output)
  # hist(output$delta)
  # plot(output$delta, type = "l")
  # acf(output$delta)
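
  # A minimal follow-up sketch (not part of the original example): summarizing
  # the posterior sample of the time delay stored in output$delta.
  median(output$delta)                      # posterior median of the time delay
  quantile(output$delta, c(0.025, 0.975))   # 95% credible interval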

  ### Argument description

  # data.lcA: An n1 by three matrix for image A (light curve A) with 
  #           n1 observation times in the first column,
  #           n1 observed magnitudes (or fluxes) in the second column,
  #           n1 measurement errors for image A in the third column.

  # data.lcB: An n2 by three matrix for image B (light curve B) with 
  #           n2 observation times in the first column,
  #           n2 observed magnitudes (or fluxes) in the second column,
  #           n2 measurement errors for image B in the third column.

  # data.flux: "TRUE" if data are recorded on the flux scale or "FALSE" if on the magnitude scale.
  # If the observed time series can take negative values, use data.flux = FALSE.

  # theta.ini: Initial values of theta = (mu, sigma, tau) for the MCMC.
  # delta.ini: Initial value of the time delay for the MCMC.
  # delta.uniform.range: The range of the Uniform prior distribution for the time delay.
  #                      The entire feasible range is 
  #                      c(min(simple[, 1]) - max(simple[, 1]), max(simple[, 1]) - min(simple[, 1]))

  # delta.proposal.scale: The proposal scale of the Metropolis step for the time delay.
  # tau.proposal.scale: The proposal scale of the Metropolis-Hastings step for tau.

  # tau.prior.shape: The shape parameter of the Inverse-Gamma hyper-prior distribution for tau. 
  # tau.prior.scale: The scale parameter of the Inverse-Gamma hyper-prior distribution for tau. 
  # sigma.prior.shape: The shape parameter of 
  #                    the Inverse-Gamma hyper-prior distribution for sigma^2.
  # sigma.prior.scale: The scale parameter of 
  #                    the Inverse-Gamma hyper-prior distribution for sigma^2.
  # micro: The order of the polynomial regression model that accounts 
  #        for the difference between the microlensing trends. Default is 3. 
  #        When zero is assigned, the model reduces to a curve-shifted model.

  # asis: "TRUE" if we use the ancillarity-sufficiency interweaving strategy (ASIS) 
  #       for c (always recommended)
  # multimodality: "TRUE" if we use the repelling-attracting Metropolis algorithm
  #                for sampling Delta in the presence of multimodality.
  #                Please make sure that "adaptive.delta = FALSE" so that 
  #                adaptive MCMC for Delta is not used in the presence of multimodality.
  #                (A sketch combining these arguments appears after this argument list.)

  # adaptive.frequency: The adaptive MCMC is applied at the specified frequency. 
  #                     If it is specified as 100, 
  #                     the adaptive MCMC is applied at every 100th iteration.
  # adaptive.delta: "TRUE" if we use the adaptive MCMC for the time delay.
  # adaptive.delta.factor: The factor, exp(adaptive.delta.factor) or exp(-adaptive.delta.factor), 
  #                        multiplied to the proposal scale of the time delay for adaptive MCMC.
  #                        Default is 0.01.

  # adaptive.tau: "TRUE" if we use the adaptive MCMC for tau.
  # adaptive.tau.factor: The factor, exp(adaptive.tau.factor) or exp(-adaptive.tau.factor), 
  #                      multiplied to the proposal scale of tau for adaptive MCMC.

  # sample.size: The number of posterior samples to draw for each parameter.
  # warmingup.size: The number of burn-in (warming-up) iterations.

  # Type ?bayesian for further details on graphical model checking.
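
  # A sketch (not part of the original example) of how the multimodality and
  # adaptive-MCMC arguments described above might be combined, reusing the same
  # data and tuning values as before; see ?bayesian for the authoritative list.
  # output.ram <- bayesian(data.lcA = lcA, data.lcB = lcB, 
  #                        data.flux = FALSE, theta.ini = c(0, 0.01, 200), 
  #                        delta.ini = 50, delta.uniform.range = c(0, 100), 
  #                        delta.proposal.scale = 1, 
  #                        tau.proposal.scale = 3, 
  #                        tau.prior.shape = 1, tau.prior.scale = 1,
  #                        sigma.prior.shape = 1, sigma.prior.scale = 2 / 10^7, 
  #                        asis = TRUE, micro = 3,
  #                        multimodality = TRUE, adaptive.delta = FALSE,
  #                        sample.size = 100, warmingup.size = 50)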

  ################################################
  # Time delay estimation via profile likelihood #
  ################################################

  ###### The entire log profile likelihood evaluated on a grid of time-delay values.


  # Cubic microlensing model
  ti1 <- lcB[, 1]
  ti2 <- lcB[, 1]^2
  ti3 <- lcB[, 1]^3
  ss <- lm(lcB[, 2] - mean(lcA[, 2]) ~ ti1 + ti2 + ti3)

  initial <- c(mean(lcA[, 2]), log(0.01), log(200), ss$coefficients)
  delta.uniform.range <- c(0, 100)
  grid <- seq(0, 100, by = 0.1) 
  # A grid interval of 0.1 ("by = 0.1") is recommended for accuracy;
  # users can set a finer grid of time-delay values at additional computational cost.

  ###  Running the following code takes longer than the CRAN check policy allows,
  ###  so it is commented out. Remove the leading "#" to run the function and see the results.

  #  logprof <- entirelogprofilelikelihood(data.lcA = lcA, data.lcB = lcB, grid = grid, 
  #                                        initial = initial, data.flux = FALSE, 
  #                                        delta.uniform.range = delta.uniform.range, micro = 3)

  #  plot(grid, logprof, type = "l", 
  #       xlab = expression(bold(Delta)),       
  #       ylab = expression(bold(paste("log L"[prof], "(", Delta, ")"))))
  #  prof <- exp(logprof - max(logprof))  # normalization
  #  plot(grid, prof, type = "l", 
  #       xlab = expression(bold(Delta)),       
  #       ylab = expression(bold(paste("L"[prof], "(", Delta, ")"))))

  ### Argument description

  # data.lcA: The data set (n1 by 3 matrix) for light curve A 
  # (1st column: observation times, 2nd column: fluxes or magnitudes,
  #  3rd column: measurement errors)
  # data.lcB: The data set (n2 by 3 matrix) for light curve B 
  # (1st column: observation times, 2nd column: fluxes or magnitudes,
  #  3rd column: measurement errors)
  # grid: the vector of grid values of the time delay 
  #       on which the log profile likelihood values are calculated.
  # initial: The initial values of the other model parameters (mu, log(sigma), log(tau), beta)
  # data.flux: "True" if data are recorded on flux scale or "FALSE" if data are on magnitude scale.
  # delta.uniform.range: The range of the Uniform prior distribution for the time delay.
  # micro: The order of the polynomial regression model that accounts 
  #        for the difference between the microlensing trends. Default is 3. 
  #        When zero is assigned, the model reduces to a curve-shifted model.
