

Using the 'BayesSingleSub' package, version 0.9.2+

Richard D. Morey and Rivka M. de Vries



Below we show how to compute the Bayes factors $latex B_{ar}$, $latex B_{trend}$, $latex B_{int}$, and $latex B_{i+t}$, which are presented in De Vries and Morey (2013), and how to obtain and plot the posterior distributions of the model parameters.

First, download the R statistical environment from http://cran.r-project.org/ and install the BayesSingleSub package using the R command:

install.packages("BayesSingleSub") 

Then, load the BayesSingleSub package with the library function:

library(BayesSingleSub)

For the purposes of this demonstration, we compute the Bayes factors for the data shown in Figure 1 of De Vries and Morey (2013). We first define the data and the number of observations in the pre- and post-treatment phases:

data = c(87.5, 82.5, 53.4, 72.3, 94.2, 96.6, 57.4, 78.1, 
         47.2, 80.7, 82.1, 73.7, 49.3, 79.3, 73.3, 57.3, 31.7, 
         50.4, 77.8, 67, 40.5, 1.6, 38.6, 3.2, 24.1)

n1 = 10
n2 = 15

For convenience, we divide the data before and after the intervention into separate vectors:

ypre =  data[1:n1]
ypost = data[n1 + 1:n2]

The logarithm of the JZS+AR Bayes factor $latex B_{ar}$ can be obtained by using the ttest.Gibbs.AR function, and the logarithm of the TAR Bayes factors $latex B_{int}$, $latex B_{trend}$, and $latex B_{i+t}$ by using the trendtest.Gibbs.AR function:

logBAR = ttest.Gibbs.AR(ypre, ypost,
        iterations = 10000, return.chains = FALSE,
        r.scale = 1, betaTheta = 5, sdMet = 0.3)
logBTRENDS = trendtest.Gibbs.AR(ypre, ypost,
        iterations = 10000, return.chains = FALSE,
        r.scaleInt = 1, r.scaleSlp = 1, betaTheta = 5, sdMet = 0.3)

which will compute the Bayes factors while setting the $latex r$ scales of the Cauchy priors to 1, and the parameter $latex b$ of the beta priors on $latex \rho$ to 5. These are the default values for the ttest.Gibbs.AR and trendtest.Gibbs.AR functions.

The first and second arguments of both functions are the series of observations in Phase 1 and Phase 2, respectively. The iterations argument controls the number of Gibbs sampler iterations; more iterations will increase the accuracy of the estimate of the Bayes factor. The accuracy of the estimate can be checked by comparing the estimate from the Gibbs sampler with the Monte Carlo estimate, discussed below. Substantial disagreement implies that the Gibbs sampler has not yet converged and the number of iterations should be increased. Setting the return.chains argument to FALSE ensures that the MCMC chains are not returned. They can be returned if they are needed, as we show below. The values for $latex r$ can be changed by changing the r.scale argument of the ttest.Gibbs.AR function and by changing the r.scaleInt (for the intercept differences) and r.scaleSlp (for the trend differences) arguments of the trendtest.Gibbs.AR function.

In both functions the value for $latex b$ can be changed by changing the betaTheta argument. If desired, $latex r$ and $latex b$ can be changed a few times and the resulting Bayes factors compared (see the sketch below). Finally, the acceptance rate reported by the function is an index of the quality of the MCMC sampling of $latex \rho$ (the Metropolis-Hastings acceptance rate; Hastings, 1970; Ross, 2002). This number should be between .25 and .5 for the most efficient estimation. If needed, the acceptance rate can be increased or decreased by decreasing or increasing, respectively, the sdMet argument, but the default setting should suffice for almost all analyses. For more information about a function's arguments, see the R help files for the corresponding function (e.g., help("ttest.Gibbs.AR")).
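
For example, a simple sensitivity check might rerun trendtest.Gibbs.AR with a couple of different prior settings and compare the resulting Bayes factors. The sketch below is illustrative only; the alternative r scale of 0.5 is not a recommendation.

# Default priors (r scales = 1, b = 5), as above
logB.r1 = trendtest.Gibbs.AR(ypre, ypost,
        iterations = 10000, return.chains = FALSE,
        r.scaleInt = 1, r.scaleSlp = 1, betaTheta = 5, sdMet = 0.3)
# Narrower Cauchy priors (r scales = 0.5), chosen only for demonstration
logB.r05 = trendtest.Gibbs.AR(ypre, ypost,
        iterations = 10000, return.chains = FALSE,
        r.scaleInt = 0.5, r.scaleSlp = 0.5, betaTheta = 5, sdMet = 0.3)
exp(logB.r1)
exp(logB.r05)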

The logBAR variable now contains an estimate of the logarithm of the JZS+AR Bayes factor, and the logBTRENDS variable contains the logarithm of the three trend Bayes factors. We can exponentiate these log Bayes factors to obtain the Bayes factors:

logBAR
logBTRENDS
exp(logBAR) 
exp(logBTRENDS)

Every time the code above is run, the values will be slightly different, due to the random nature of MCMC estimation. However, with sufficient iterations (typically 2,000 or greater, in this application) the estimate should be consistent across calls to the ttest.Gibbs.AR and trendtest.Gibbs.AR functions.
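
As a quick check on this consistency, one could rerun the sampler a few times with identical settings and compare the exponentiated estimates; this is a minimal sketch using the same arguments as above.

# Three independent runs; the estimates should agree closely
reps = replicate(3, ttest.Gibbs.AR(ypre, ypost,
        iterations = 10000, return.chains = FALSE,
        r.scale = 1, betaTheta = 5, sdMet = 0.3))
exp(reps)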

If we wish to examine the posterior distribution for any parameter or the interval null Bayes factors for $latex \delta$ and $latex \beta_1$, we may do so by first calling the trendtest.Gibbs.AR function with the return.chains argument set to TRUE and the bounds under the null hypotheses defined¹:

output.trend = trendtest.Gibbs.AR(ypre,ypost,
        iterations = 10000, return.chains = TRUE,
        r.scaleInt = 1, r.scaleSlp = 1, betaTheta = 5, sdMet = 0.3,
        intArea = c(-0.2, 0.2), slpArea = c(-0.1, 0.1))

The interval null Bayes factors are only returned if the return.chains argument is set to TRUE. The default bounds under each of the null hypotheses can be changed by changing the areaNull argument of the ttest.Gibbs.AR function and the intArea (for the intercept) and slpArea (for the trend) arguments of the trendtest.Gibbs.AR function. Note that the chains contain unstandardized parameters, while the interval null Bayes factors are based on standardized effect sizes.
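
For instance, a corresponding call for the JZS+AR Bayes factor might look like the sketch below; the interval bounds are illustrative, and the output is assumed (per Footnote 1) to have the same components as the trend output described next.

# Illustrative interval null for delta; bounds chosen for demonstration only
output.ttest = ttest.Gibbs.AR(ypre, ypost,
        iterations = 10000, return.chains = TRUE,
        r.scale = 1, betaTheta = 5, sdMet = 0.3,
        areaNull = c(-0.2, 0.2))
exp(output.ttest$logbfArea)  # assumes the same logbfArea component as below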

The variable output.trend now contains four components: logbf, which contains an estimate of the Bayes factor(s), chains, which contains the MCMC chains, acc, which contains the Metropolis-Hastings acceptance rate described above, and logbfArea, which contains the Bayes factor(s) for interval null hypotheses.

logIntervalNullBF.trend = output.trend$logbfArea
chains.trend = output.trend$chains
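
The remaining two components can be inspected directly; for example, the acceptance rate can be checked against the .25 to .5 guideline mentioned above.

output.trend$logbf  # log Bayes factors, as returned by the earlier call
output.trend$acc    # Metropolis-Hastings acceptance rate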

The variable logIntervalNullBF.trend contains the logarithm of the interval-null Bayes factors for the intercept and slope, respectively. Exponentiating the log Bayes factors gives the Bayes factors:

logIntervalNullBF.trend
exp(logIntervalNullBF.trend)

As before, every time the code above is run, the values will be slightly different, due to the random nature of MCMC estimation.

The variable chains.trend contains a matrix with eight columns, one for each parameter of the trend model. Each row represents an MCMC sample from the posterior distribution of a parameter. The parameters likely of interest to researchers, along with their names in the manuscript and column numbers, are shown in the table below:

| Name in R | Name in manuscript | Column number in chains |
|-----------|--------------------|--------------------------|
| mu0 | $latex \mu_0$ | 1 |
| sig*delta | $latex \sigma_z \delta$ | 2 |
| beta0 | $latex \beta_0$ | 3 |
| sig*beta1 | $latex \sigma_z \beta_1$ | 4 |
| sig2 | $latex \sigma^2_z$ | 5 |
| rho | $latex \rho$ | 8 |
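
As a small illustration of this mapping, the posterior means of the standardized intercept change and trend change can be computed from columns 2 and 4:

# Posterior means of sig*delta (column 2) and sig*beta1 (column 4)
colMeans(chains.trend[, c(2, 4)])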

We can draw histograms of the samples for the parameters, as shown in the figure below. These histograms are approximations to the posterior distributions; for example, we can draw a histogram for the $latex \rho$ parameter from the trend model:

hist(chains.trend[, 8], main = "Posterior for autocorrelation coeff.",
     xlab = expression(rho), xlim = c(0, 1))

It is also easy to get posterior summary statistics:

summary(chains.trend[, 8])
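
For example, a central 95% credible interval for $latex \rho$ can be approximated from the quantiles of the samples:

# Approximate 95% credible interval for rho
quantile(chains.trend[, 8], probs = c(0.025, 0.975))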

In addition to the ttest.Gibbs.AR and trendtest.Gibbs.AR functions, the BayesSingleSub package also contains the ttest.MCGQ.AR and trendtest.MC.AR functions. We have not discussed these functions so far because they do not provide qualitatively different information from that provided by ttest.Gibbs.AR and trendtest.Gibbs.AR, and we did not want to confuse the reader by discussing several functions at once. However, ttest.MCGQ.AR and trendtest.MC.AR provide faster estimates of the Bayes factors than ttest.Gibbs.AR and trendtest.Gibbs.AR, respectively. Their only disadvantage is that they do not return posterior distributions or interval null Bayes factors, because they estimate the Bayes factors using Monte Carlo integration rather than Gibbs sampling. When only the Bayes factor estimates are required, the ttest.MCGQ.AR and trendtest.MC.AR functions can therefore be used instead of the slower ttest.Gibbs.AR and trendtest.Gibbs.AR functions. As before, the functions require the Phase 1 and Phase 2 data and the number of iterations:

logBAR = ttest.MCGQ.AR(ypre, ypost, 
    iterations = 10000,  r.scale = 1,  betaTheta = 5)
logBTRENDS = trendtest.MC.AR(ypre, ypost, 
    iterations = 10000, r.scaleInt = 1, r.scaleSlp = 1,  betaTheta = 5)

Again, the resulting log Bayes factors can be exponentiated to obtain the Bayes factors, and the values for $latex r$ and $latex b$ can be changed by changing the r.scale, r.scaleInt, r.scaleSlp, and betaTheta arguments. For comparison with the previously computed Bayes factors, we print the new Bayes factors:

logBAR
logBTRENDS
exp(logBAR)  
exp(logBTRENDS)

Although these Bayes factors are somewhat different from those computed using the ttest.Gibbs.AR and trendtest.Gibbs.AR functions, due to random sampling, they are similar. Increasing the number of iterations will give more precise estimates and hence closer agreement between the two methods.
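
One way to make this comparison concrete is to compute both estimates with a larger number of iterations and place them side by side; the iteration count below is illustrative.

# Gibbs-sampler and Monte Carlo estimates of the same log Bayes factor
logBAR.gibbs = ttest.Gibbs.AR(ypre, ypost,
        iterations = 20000, return.chains = FALSE,
        r.scale = 1, betaTheta = 5, sdMet = 0.3)
logBAR.mc = ttest.MCGQ.AR(ypre, ypost,
        iterations = 20000, r.scale = 1, betaTheta = 5)
exp(logBAR.gibbs)
exp(logBAR.mc)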

Footnotes


1. The same process holds for the JZS+AR Bayes factor, but we only show the TAR Bayes factor for brevity.

References


De Vries, R. M., & Morey, R. D. (2013). Bayesian hypothesis testing for single-subject designs. Psychological Methods, 18(2), 165-185.

Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97-109.

Ross, S. M. (2002). Simulation (3rd edition). London: Academic Press.


