Using the Bayesian radiocarbon chronology package Bchron



Bchron is an R package that enables quick calibration of radiocarbon dates under various calibration curves (including user-generated ones); age-depth modelling as per the algorithm of Haslett and Parnell (2008); relative sea level rate estimation incorporating time uncertainty in polynomial regression models (Parnell and Gehrels 2015); non-parametric phase modelling via Gaussian mixtures as a means to determine the activity of a site (and as an alternative to the Oxcal function SUM; currently unpublished); and reverse calibration of dates from calibrated into un-calibrated years.

You will find Bchron far easier to use if you know some basics of how to use R. I recommend the book by Norman Matloff, or the online intro course by Code School.

If you find bugs or want to suggest new features please visit the Bchron GitHub issues page.

Installing Bchron

Bchron will run on Windows, Mac OS X, or Linux. To install Bchron you first need to install R. I would also recommend installing RStudio as a nice desktop environment for using R. Once in R you can type:

install.packages('Bchron')

at the R command prompt to install Bchron. If you then type:

library(Bchron)

it will load in all the Bchron functions.

If you want to install the development version of Bchron please visit the Bchron GitHub page. The GitHub version contains a few more features, but some of these may be largely untested, and occasionally this version will break while it is being updated.

Calibrating radiocarbon dates

Bchron will calibrate single or multiple dates under multiple (even user defined) calibration curves. By default, the intcal13, shcal13 and marine13 calibration curves are included. You can calibrate a single radiocarbon date with, e.g.

ages1 = BchronCalibrate(ages=11553,
                        ageSds=230,
                        calCurves='intcal13',
                        ids='Date-1') # the id label is optional
summary(ages1)
plot(ages1)

This will calibrate the radiocarbon age of 11,553 14C years BP with standard error 230 14C years on the intcal13 calibration curve. The id given is optional and only used for summarising and plotting. The summary command then gives the highest density regions of the calibrated date and the plot command produces a simple plot of the density, together with a shaded region for the 95% highest density region.

Bchron can calibrate multiple dates simultaneously by inputting the dates as vectors:

ages2 = BchronCalibrate(ages=c(3445,11553,7456),
                        ageSds=c(50,230,110),
                        calCurves=c('intcal13','intcal13','shcal13'))
summary(ages2)
plot(ages2)

This will calibrate three different 14C ages with the calibration curves as specified in the calCurves argument. The summary and plot commands will produce individual highest density regions and density plots for the three dates.

Finally, if you provide position information (e.g. depths) to the BchronCalibrate function it will create a plot with position on the y-axis, e.g.:

ages3 = BchronCalibrate(ages=c(3445,11553),
                        ageSds=c(50,230),
                        positions=c(100,150),
                        calCurves=c('intcal13','normal'))
summary(ages3)
plot(ages3)

Credible intervals

If, alternatively, you want a credible interval (i.e. a single contiguous range) rather than an HDR, you could use the sampleAges function:

# First create age samples for each date
age_samples = sampleAges(ages3)
# Now summarise them with quantile - this gives a 95% credible interval
apply(age_samples, 2, quantile, prob=c(0.025,0.975))

The sampleAges function introduces some uncertainty as it is a simple random sample, so repeatedly running it will give slightly different results. If you are interested in precision at a yearly level, you might want to increase the n_samp argument, though this will slow the function down a little.
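As an aside, these sampling-based summaries are just quantiles of a vector of draws. Here is a self-contained base-R sketch using simulated normal draws in place of a real sampleAges column (so the numbers are illustrative only):

```r
set.seed(123)
# Stand-in for one column of sampleAges output (values are made up)
fake_samples = rnorm(10000, mean = 13200, sd = 150)

# 95% credible interval: the central 95% of the sampled ages
ci95 = quantile(fake_samples, probs = c(0.025, 0.975))
ci95
```

Increasing the number of draws (the analogue of n_samp) shrinks the Monte Carlo error in summaries like this.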

If you wanted to find each date's median age (though this is often an inappropriate summary and should be treated with caution), use:

apply(age_samples, 2, quantile, prob=c(0.5))

The calibration code is very fast. On a standard PC you should have no trouble calibrating thousands of dates simultaneously without a noticeably long wait.

Comparing ages

If you want to compare two dates, a good method is to look at the posterior difference between them. Because Bchron provides all the information about the dates through the sampleAges function, this difference can be calculated in just a few lines of code.

age_diff = age_samples[,2] - age_samples[,1]
hist(age_diff, breaks = 30, freq = FALSE,
     main = 'Age difference between 3445+/-50 and 11553+/-230')
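Because age_samples contains posterior draws, other summaries of the difference drop out in one line. For example, the posterior probability that the second date is older than the first is just the proportion of positive differences, sketched here with simulated draws standing in for the two columns:

```r
set.seed(1)
# Stand-in draws for two calibrated dates (values are made up)
dateA = rnorm(10000, mean = 3800, sd = 60)
dateB = rnorm(10000, mean = 13400, sd = 250)
age_diff = dateB - dateA

# Posterior probability that date B is older than date A
mean(age_diff > 0)
```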

Running the Bchronology age-depth model

The Bchronology function fits the age-depth model outlined by Haslett and Parnell (2008). An illustrative data set from Glendalough is provided with the package, containing 5 radiocarbon dates and a known age for the top of the core. It can be called in via:

data(Glendalough)

The top date is from the present and has the calibration curve 'normal' as it is not a 14C date. This core can be run through Bchron via:

GlenOut = Bchronology(ages=Glendalough$ages,
                      ageSds=Glendalough$ageSds,
                      calCurves=Glendalough$calCurves,
                      positions=Glendalough$position,
                      positionThicknesses=Glendalough$thickness,
                      ids=Glendalough$id,
                      predictPositions=seq(0,1500,by=10))

There are other arguments you can supply to Bchronology, including the date the core was extracted, the outlier probabilities for each individual date, and the number of iterations for which to run the algorithm. For more details see:

help(Bchronology)
Once run, the summary commands will show various output:

summary(GlenOut)
summary(GlenOut, type='convergence')
summary(GlenOut, type='outliers')

The first summary command produces ages for each position supplied in the predictPositions argument above (output not shown as it's too long). The second provides convergence diagnostics. The third gives outlier probabilities. The plot command will produce an age-position plot:

plot(GlenOut,
     xlab='Age (cal years BP)',
     ylab='Depth (cm)')

Finally, the predict command will produce predicted ages for a newly specified set of positions with optional thicknesses:

predictAges1 = predict(GlenOut,
                       newPositions = c(150,725,1500),
                       newPositionThicknesses = c(5,0,20))
predictAges2 = predict(GlenOut,
                       newPositions = seq(0,1500,by=10))

To run this model on a data set of your own, you will need to load in your data set via, e.g.

mydata = read.table(file='path/to/file.txt',header=TRUE)
run = Bchronology(ages=mydata[,1],ageSds=mydata[,2], ...

Obtaining sedimentation and accumulation rates

The summary function used above also allows for calculation of sedimentation and accumulation rates. Here is an example of calculating and plotting the accumulation rate for the GlenOut object created above:

acc_rate = summary(GlenOut, type = 'acc_rate')
plot(acc_rate[,'age_grid'], acc_rate[,'50%'],
     type='l',
     ylab='cm per year',
     xlab='Age (cal years BP)',
     ylim=range(acc_rate[,-1]))
lines(acc_rate[,'age_grid'], acc_rate[,'2.5%'], lty='dotted')
lines(acc_rate[,'age_grid'], acc_rate[,'97.5%'], lty='dotted')

Clearly these accumulation rates are very uncertain, which is unsurprising given the paucity of dates.

To calculate sedimentation rates, you need to either make sure that the predictPositions argument in the Bchronology call above is given with unit differences (e.g. predictPositions=seq(0,1500,by=1)) or set the argument useExisting=FALSE in the command below:

sed_rate = summary(GlenOut, type = 'sed_rate', useExisting = FALSE)

This will be quite slow as it needs to predict ages on a regular unit depth grid. However, the plots can be created in the same way:

plot(sed_rate[,'position_grid'], sed_rate[,'50%'],
     type='l',
     ylab='Years per cm',
     xlab='Depth (cm)',
     ylim=range(sed_rate[,-1]))
lines(sed_rate[,'position_grid'], sed_rate[,'2.5%'], lty='dotted')
lines(sed_rate[,'position_grid'], sed_rate[,'97.5%'], lty='dotted')

You could then, for example, save this to a csv file with:

write.csv(sed_rate, file = 'Glendalough_sed_rates.csv', quote = FALSE, row.names = FALSE)

Identifying influential dates and positions

Bchron contains two functions that can help with future dating strategies. They are:

  1. Identifying the position at which the maximum age variance occurs. This is useful if you wanted to pick the next position to date.
  2. Identifying which date is the most influential on the chronology. This is useful if you are concerned that one date might be leading the chronology astray.

Identifying the position of maximum age variance can be done with the summary command:

summary(GlenOut, type = 'max_var')

You can control the number of positions reported with the numPos argument. For example, you could now add a line marking the most variable layer to the plot with:

max_var = summary(GlenOut, type = 'max_var', numPos = 1)
plot(GlenOut,
     xlab='Age (cal years BP)',
     ylab='Depth (cm)')
abline(h = max_var, lty = 'dotted')

Finding the date which has maximum influence in the core is a more computationally intensive procedure and involves running the Bchron model repeatedly, leaving each date out in turn, and quantifying the effect on the chronology of removing each date. The function is called dateInfluence. Note that you provide this function with a full run of the Bchron model.

To find the influence of, say, date Beta-100901, measured as the median date shift in years when it is removed, we would run:

dateInfluence(GlenOut,
              whichDate = 'Beta-100901',
              measure = 'absMedianDiff')

Alternatively, to find the influence of all the dates in terms of the median shift we could use:

dateInfluence(GlenOut,
              whichDate = 'all',
              measure = 'absMedianDiff')

A more complete measure of influence is the Kullback-Leibler (KL) divergence, which takes account of the change in the full probability distribution rather than just the mean or median age difference when that particular date is left out. The KL divergence version can then be run using:

dateInfluence(GlenOut,
              whichDate = 'all',
              measure = 'KL')

The interpretation here is relative. The dates with the largest KL divergence values have the most influence on the chronology in the core.
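For intuition about what the KL measure is comparing, here is a self-contained sketch of the discrete Kullback-Leibler divergence between two probability vectors on a common age grid (the continuous version used internally is analogous):

```r
# Discrete KL divergence of p from q: sum of p * log(p / q)
# Assumes p and q are probability vectors over the same grid points
kl_div = function(p, q) {
  keep = p > 0 & q > 0  # skip zero-probability grid points to avoid log(0)
  sum(p[keep] * log(p[keep] / q[keep]))
}

p = c(0.1, 0.4, 0.4, 0.1)
q = c(0.25, 0.25, 0.25, 0.25)
kl_div(p, q)  # positive: the distributions differ
kl_div(p, p)  # exactly 0: identical distributions
```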

Running RSL rate estimation

The function BchronRSL will produce estimated relative sea level rates from a regression model, taking into account the uncertainties in age provided by a Bchronology run as above. Two example data sets are provided:

data(TestChronData)
data(TestRSLData)
These can be run through Bchronology and BchronRSL via:

RSLchron = Bchronology(ages = TestChronData$ages,
                       ageSds = TestChronData$ageSds,
                       positions = TestChronData$position,
                       positionThicknesses = TestChronData$thickness,
                       ids = TestChronData$id,
                       calCurves = TestChronData$calCurves,
                       jitterPositions = TRUE,
                       predictPositions = TestRSLData$Depth)
RSLrun = BchronRSL(RSLchron,
                   RSLmean = TestRSLData$RSL,
                   RSLsd = TestRSLData$Sigma,
                   degree = 3)

The Bchronology run is as described in the section above. The BchronRSL run takes this object, an estimate of the RSL means and standard deviations, and a value of degree (here 3, indicating cubic regression). They can then be summarised and plotted via:

summary(RSLrun, type = 'RSL', age_grid = seq(0, 2000, by  = 250))
plot(RSLrun, type = 'RSL', main = 'Relative sea level plot')
plot(RSLrun, type = 'rate', main = 'Rate of RSL change')

See the help files for more options, including outputting parameter values, and plots of acceleration of RSL itself.

Running non-parametric phase estimation

Bchron contains two functions for running non-parametric phase models for estimating activity levels in a site/region. The first, BchronDensity, fits a full Bayesian Gaussian mixture model to the radiocarbon dates, whilst the second, BchronDensityFast, fits an approximate version that will run on much larger data sets. An example run is:

data(Sluggan)
SlugDens = BchronDensity(ages=Sluggan$ages,
                         ageSds=Sluggan$ageSds,
                         calCurves=Sluggan$calCurves)

You can then output the possible start/end dates of phases:

summary(SlugDens, prob = 0.95)

The prob argument specifies the sensitivity of the algorithm to finding phases. Lower values of prob will lead to more discovered phases.

You can plot the density with:

plot(SlugDens,xlab='Age (cal years BP)', xaxp=c(0, 16000, 16))

Phases will be shown with a thick line. These can be turned off by setting the argument plotPhase=FALSE, whilst the sensitivity can be changed with phaseProb.

BchronDensityFast is identical except for the function call:

SlugDensFast = BchronDensityFast(ages=Sluggan$ages,
                                 ageSds=Sluggan$ageSds,
                                 calCurves=Sluggan$calCurves)

The BchronDensityFast function outputs the age grid and density by default so requires no extra arguments to store it.
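The idea behind these density estimates can be sketched in base R: a Gaussian mixture density is just a weighted sum of normal densities, and peaks in that density are read as periods of higher activity. The weights, means, and standard deviations below are made up for illustration, not values estimated by Bchron:

```r
# Two-component Gaussian mixture density; weights must sum to 1
mix_dens = function(x, w = c(0.6, 0.4), mu = c(4000, 9000), sigma = c(400, 700)) {
  w[1] * dnorm(x, mu[1], sigma[1]) + w[2] * dnorm(x, mu[2], sigma[2])
}

ageGrid = seq(0, 14000, by = 10)
dens = mix_dens(ageGrid)
# plot(ageGrid, dens, type = 'l')  # two bumps = two periods of higher activity
```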

Including user-defined calibration curves

Bchron allows for user-defined calibration curves to be created through the function CreateCalCurve. The inputs to this function are the name of the new calibration curve and the data that define it (the calibrated ages, the un-calibrated ages, and the one-sigma errors).

Once run, the calibration curve needs to be moved from the chosen path (by default the working directory) to the location of the other calibration curves, which is by default system.file('data', package='Bchron'). The resulting calibration curve can then be used for all future runs until the package is deleted or re-installed, simply by calling it by the supplied name in any other Bchron function. Note that no name checking is done, so it is possible to over-write the pre-defined calibration curves if you re-use one of their names.

As an example, consider installing the intcal09 calibration curve. We can run the following code:

# Load in the calibration curve with:
intcal09 = read.table('', sep=',')
# Run CreateCalCurve
CreateCalCurve(name='intcal09',
               calAges=intcal09[,1],
               uncalAges=intcal09[,2],
               oneSigma=intcal09[,3])

The new calibration curve now needs to be moved to the Bchron package directory. It can then be used via its name. For example, we can now compare the calibration of a date under the intcal09 curve with that of intcal13:

age_09 = BchronCalibrate(ages=15500, ageSds=150, calCurves='intcal09', ids='My Date')
age_13 = BchronCalibrate(ages=15500, ageSds=150, calCurves='intcal13')

Uncalibrating dates

Sometimes it is useful to turn a calendar age estimate into an un-calibrated age. This might be for simulation or model checking purposes. 'Un-calibrating' dates can be implemented in Bchron using the unCalibrate function. There are two versions: one where you have a single age and just want a quick look-up of the 14C age, and another where you have the full distribution of the calibrated age and want to estimate both the 14C age and its associated error.

The simple version can be run with, e.g.:

unCal1 = unCalibrate(2350, type = 'ages')

The above un-calibrates the age 2350 BP to the age 2346 14C years BP on the intcal13 calibration curve. A more advanced version would be:

unCal2 = unCalibrate(calAge = c(2350, 4750, 11440),
                     calCurve = 'shcal13',
                     type = 'ages')
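Conceptually, the simple look-up above amounts to reading the calibration curve off at the requested calendar age. A toy base-R sketch with invented curve values (not the real intcal13 numbers):

```r
# Invented calibration-curve points: calendar age vs 14C age (illustrative only)
calBP = c(2300, 2325, 2350, 2375, 2400)
c14BP = c(2290, 2320, 2346, 2370, 2398)

# Un-calibrating one calendar age = interpolating the curve at that age
lookup14C = function(calAge) approx(calBP, c14BP, xout = calAge)$y
lookup14C(2350)  # 2346 on this toy curve
```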

A fun experiment can then be run to see what the distribution of 14C ages would look like for an even distribution of calendar ages:

ageRange = seq(8000, 9000, by = 5)
c14Ages = unCalibrate(ageRange,
                      type = 'ages')
load(system.file('data/intcal13.rda', package = 'Bchron'))
plot(intcal13[,1], intcal13[,2],
     xlim = range(ageRange),
     ylim = range(c14Ages),
     type = 'l',
     las = 1,
     xlab = 'Cal years BP',
     ylab = '14C years BP')
axis(side = 1, at = ageRange, labels = FALSE, tcl = 0.5)
axis(side = 2, at = c14Ages, labels = FALSE, tcl = 0.5)

Unsurprisingly, the places where the calibration curve is flat (or reverses) lead to 'clumps' of 14C ages.

The more complicated version is run by providing it with a (preferably large) sample of ages which form a calibrated date. Such a set is created with Bchron via sampleAges, e.g.:

calAge = BchronCalibrate(ages = 11255,
                         ageSds = 25,
                         calCurves = 'intcal13')
calSampleAges = sampleAges(calAge)

We can now un-calibrate this set of samples:

unCal = unCalibrate(calSampleAges,
                    type = 'samples')

This will provide a single best estimate of the mean and standard deviation of the uncalibrated (here 14C) date.


References

For a description of the kind of things that Bchron can do:
Parnell, A. C., Haslett, J., Allen, J. R. M., Buck, C. E., & Huntley, B. (2008). A flexible approach to assessing synchroneity of past events using Bayesian reconstructions of sedimentation history. Quaternary Science Reviews, 27(19-20), 1872–1885.

For the maths behind Bchron:
Haslett, J., & Parnell, A. (2008). A simple monotone process with application to radiocarbon-dated depth chronologies. Journal of the Royal Statistical Society: Series C (Applied Statistics), 57(4), 399–418.

For a review of chronology models:
Parnell, A. C., Buck, C. E., & Doan, T. K. (2011). A review of statistical chronology models for high-resolution, proxy-based Holocene palaeoenvironmental reconstruction. Quaternary Science Reviews, 30(21-22), 2948–2960.

For relative sea level rate estimation:
Parnell, A. C., & Gehrels, W. R. (2015). Using chronological models in late Holocene sea level reconstructions from salt marsh sediments. In: I. Shennan, B.P. Horton, and A.J. Long (eds). Handbook of Sea Level Research. Chichester: Wiley.


Bchron documentation built on March 18, 2018, 2:17 p.m.