Intigeo tag on a Black-tailed godwit analysis example

FLightR analysis

Appendix A6 to Rakhimberdiev, E., Senner, N. R., Verhoeven, M. A., Winkler, D. W., Bouten, W. and Piersma, T. 2016. Comparing inferences of solar geolocation data against high-precision GPS data: annual movements of a double-tagged Black-Tailed Godwit. - Journal of Avian Biology 47(4): 589–596. DOI

Install package

We used FLightR version 0.3.6, so you may want to install that exact version if you want to reproduce the same results.

NB: Updated workflow for FLightR >= 0.3.9 is here

library(devtools)
install_github("eldarrak/FLightR@0.3.6") # note the version
library(FLightR)

Download data from GitHub

There are two files available in the directory: .lux and .csv. The *.lux file is the original file you get from Migrate Technology Ltd; it does not have twilights defined. In Appendix A4 we already defined twilights with the [BAStag package](https://github.com/SWotherspoon/BAStag) and saved them. You can define the twilights yourself or just download the file in the predefined format. Let's now assume that you have done the previous step and have a .csv file. Now you can process these data:

download.file(
 "https://raw.githubusercontent.com/eldarrak/FLightR/master/examples/Black-Tailed_Godwit_JAB_example/A3_TAGS_format.csv",
 "A3_TAGS_format.csv")
TAGS.twilights<-read.csv("A3_TAGS_format.csv", stringsAsFactors =F)

TAGS.twilights$light<-exp(TAGS.twilights$light) # this is needed because
                      # we log transformed data in convert.lux.to.tags()
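
A quick base-R sanity check (with toy light values, not the real tag data) shows that this exp() call simply inverts the earlier log transform:

```r
# toy light values on the original (linear) scale
raw_light <- c(1, 64, 1024, 74000)

# convert.lux.to.tags() stored log-transformed light...
logged <- log(raw_light)

# ...so exp() restores the original scale before further processing
restored <- exp(logged)

stopifnot(isTRUE(all.equal(restored, raw_light)))
```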

Now that the data are read in, we want to load them with the read.tags.light.twilight() function:

FLightR.data<-read.tags.light.twilight(TAGS.twilights,
                      start.date="2013-06-10", end.date="2014-05-17")

and process them with

Proc.data<-process.twilights(FLightR.data$Data, FLightR.data$twilights, 
                             measurement.period=60, saving.period=300)
-`measurement.period` - how often the tag measures light (in seconds);
-`saving.period` - how often the tag saves data (in seconds).

The current tag measures light every minute (measurement.period=60) and saves the maximum over every 5 minutes (saving.period=300).
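
As a toy base-R illustration of what these two periods mean (this is not FLightR code): with measurement.period=60 and saving.period=300, the tag stores the maximum of every five consecutive one-minute measurements.

```r
measurement.period <- 60   # one measurement per minute
saving.period      <- 300  # one stored value per 5 minutes

per_interval <- saving.period / measurement.period  # 5 measurements per saved value

# 10 minutes of toy one-minute light measurements
measurements <- c(3, 5, 2, 8, 4,  7, 1, 9, 6, 2)

# the tag keeps only the maximum over each 5-minute block
saved <- tapply(measurements,
                rep(seq_len(length(measurements) / per_interval),
                    each = per_interval),
                max)
as.numeric(saved)  # 8 9
```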

Add the calibration location as (x, y), i.e. longitude and latitude:

start=c(5.43, 52.93)

log.light.borders=c(1.5, 9) # default values for Intigeo tag
log.irrad.borders=c(-3, 3) # default values for Intigeo tag

Calibration

We need to select days when the bird was at a known location. These are typically days at the beginning or at the end of the data. To do so, we first plot all sun slopes over the whole period and then decide on our calibration period.

Calibration.periods<-data.frame(calibration.start=as.POSIXct("2000-01-01"),
                                calibration.stop=as.POSIXct("2020-01-01"),
                                lon=start[1], lat=start[2])
# note - we select dates outside the range 
calibration.parameters<-get.calibration.parameters(Calibration.periods, Proc.data,
                            model.ageing=F, log.light.borders=log.light.borders, 
                            log.irrad.borders=log.irrad.borders)
# and log irradiance boundaries also outside normal range 
#  as we want to see the whole track first.

plot.slopes(calibration.parameters$All.slopes)

Now we have to select the calibration periods. Try playing with abline() to find the proper boundaries for the calibration. A calibration period is characterized by more or less coinciding dawn and dusk lines and the absence of a strong pattern - the lines should be horizontal.

abline(v=as.POSIXct("2013-08-20")) # end of first calibration period
abline(v=as.POSIXct("2014-05-05")) # start of the second calibration period

I will use both calibration periods - one at the beginning and another at the end. Now we create a data.frame where each line is one calibration period and the columns are start, end, x, y.

Calibration.periods<-data.frame(calibration.start=as.POSIXct(c("2000-01-01", "2014-05-05")),
                                calibration.stop=as.POSIXct(c("2013-08-20", "2020-01-01")),
                                lon=start[1], lat=start[2])

model.ageing=FALSE # set this to FALSE if you are not going to model tag ageing.
# Ageing can be modelled only if there are light data
# at a known position both at the beginning and at the end of the logger data.

And now we estimate the calibration parameters with the real boundaries:

calibration.parameters<-get.calibration.parameters(Calibration.periods, Proc.data, 
    model.ageing=model.ageing, log.light.borders=log.light.borders,
    log.irrad.borders=log.irrad.borders)

plot.slopes(calibration.parameters$All.slopes)

The next part excludes very strong outliers, if any are present:

if (length(calibration.parameters$calib_outliers)>0) {
   FLightR.data$twilights$excluded[which(sapply(FLightR.data$twilights$datetime,
       FUN=function(x) min(abs(calibration.parameters$calib_outliers-as.numeric(x))))<3600)]<-1
   Proc.data<-process.twilights(FLightR.data$Data,
       FLightR.data$twilights[FLightR.data$twilights$excluded==0,],
       measurement.period=60, saving.period=300,
       impute.on.boundaries=Proc.data$impute.on.boundaries)
   calibration.parameters<-get.calibration.parameters(Calibration.periods, Proc.data,
       model.ageing=model.ageing, log.light.borders=log.light.borders,
       log.irrad.borders=log.irrad.borders)
   plot.slopes(calibration.parameters$All.slopes)
}
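
The exclusion rule in the block above marks a twilight as excluded when its nearest calibration outlier is less than one hour away. Here is that matching logic in isolation, as a base-R sketch with made-up timestamps:

```r
# toy twilight times and one detected outlier time, as numeric seconds
twilight_times <- as.numeric(as.POSIXct(c("2013-07-01 04:00", "2013-07-02 04:00",
                                          "2013-07-03 04:00"), tz = "UTC"))
outlier_times  <- as.numeric(as.POSIXct("2013-07-02 04:30", tz = "UTC"))

# distance from each twilight to its nearest outlier
nearest <- sapply(twilight_times,
                  function(x) min(abs(outlier_times - x)))

# flag twilights that fall within one hour (3600 s) of an outlier
excluded <- as.integer(nearest < 3600)
excluded  # 0 1 0 - only the second twilight is flagged
```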

Now we create a preliminary calibration and use it to check whether there are serious outliers that should be excluded beforehand.

Calibration=create.calibration(calibration.parameters$All.slopes, Proc.data,
        FLightR.data, start, 
        log.light.borders=log.light.borders, log.irrad.borders=log.irrad.borders,
        ageing.model=calibration.parameters$ageing.model)

library(parallel) # for detectCores()
Threads=detectCores()-1 # setting how many cores we allow to use

Outliers<-detect.tsoutliers(Calibration, Proc.data, plot=T,
                Threads=Threads, max.outlier.proportion=0.075, simple.version=F)

# max.outlier.proportion sets the maximum proportion of outliers we allow to exclude.

Now it is very important to decide whether we are going to exclude outliers or not. Generally I would recommend to

  1. run without any outlier exclusion and see what you have got;
  2. if you are not satisfied, run it with outlier detection, but for schedules of migration I would still use the version without outlier detection, as it often excludes migration points.

In the current case we will not exclude any outliers. So we set exclude.detected.outliers to FALSE and R will skip the next part.

exclude.detected.outliers<-FALSE
if (exclude.detected.outliers) {
Proc.data<-Outliers$Proc.data
FLightR.data$twilights$excluded[which(!as.numeric(FLightR.data$twilights$datetime) %in%
    c(Proc.data$Twilight.time.mat.dusk[25,]+Calibration$Parameters$saving.period
    -Calibration$Parameters$measurement.period, Proc.data$Twilight.time.mat.dawn[25,]) & 
    FLightR.data$twilights$excluded!=1 )]<-2
# end of outlier exclusion

# recalibration with outliers excluded..

Proc.data<-process.twilights(FLightR.data$Data,
    FLightR.data$twilights[FLightR.data$twilights$excluded==0,],
    measurement.period=Proc.data$measurement.period,
    saving.period=Proc.data$saving.period,
    impute.on.boundaries=Proc.data$impute.on.boundaries)

calibration.parameters<-get.calibration.parameters(Calibration.periods, Proc.data,
               model.ageing=model.ageing, log.light.borders=log.light.borders,  
               log.irrad.borders=log.irrad.borders)

plot.slopes(calibration.parameters$All.slopes)

Calibration<-create.calibration(calibration.parameters$All.slopes, Proc.data,
        FLightR.data, log.light.borders=log.light.borders, 
        log.irrad.borders=log.irrad.borders,
        start, ageing.model=calibration.parameters$ageing.model)

}

Now we need one more line before moving on to the spatial part:

Processed.light<-make.processed.light.object(FLightR.data)

Spatial extent

Now we set up a grid.

ylim = c(30, 57)
xlim = c(-14, 13)

library(geosphere) # regularCoordinates() comes from the geosphere package
Globe.Points<-regularCoordinates(200) # 50 km between each point

All.Points.Focus<-Globe.Points[Globe.Points[,1]>xlim[1] &
                  Globe.Points[,1]<xlim[2] & 
                  Globe.Points[,2]>ylim[1] &
                  Globe.Points[,2]<ylim[2],]

# here we could cut by the sea but we will not do it now

plot(All.Points.Focus, type="n")
library(maps) # for map()
map('state',add=TRUE, lwd=1,  col=grey(0.5))
map('world',add=TRUE, lwd=1.5,  col=grey(0.8))
abline(v=start[1])
abline(h=start[2])

Grid<-cbind(All.Points.Focus, Land=1)

There are two main ideas in setting the extent: 1. delete the points you do not want to allow (this also speeds up the process); 2. set 0 in the third column if you want to allow the bird to move through a point but not to stay there between twilights (still an experimental option).
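
A minimal base-R sketch of both ideas, using a toy three-point grid rather than the real All.Points.Focus:

```r
# toy grid: lon, lat, and a third column that FLightR reads as
# 1 = bird may stay here between twilights, 0 = transit only
Grid <- cbind(lon  = c(-5, 0, 5),
              lat  = c(40, 45, 50),
              Land = 1)

# idea 2: disallow staying at points south of 42 N while still
# allowing movement through them
Grid[Grid[, "lat"] < 42, "Land"] <- 0

# idea 1: deleting rows removes points entirely (and speeds things up);
# here all toy points survive the cut
Grid <- Grid[Grid[, "lon"] > -10, , drop = FALSE]

as.numeric(Grid[, "Land"])  # 0 1 1
```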

Now we will finalize the object preparation

Index.tab<-create.proposal(Processed.light, start=start, Grid=Grid)
Index.tab$Decision<-0.1 # prob of migration
Index.tab$Direction<- 0 # direction 0 - North
Index.tab$Kappa<-0 # distr concentration 0 means even
Index.tab$M.mean<- 300 # distance mu
Index.tab$M.sd<- 500 # distance sd

all.in<-geologger.sampler.create.arrays(Index.tab, Grid, start=start, stop=start)

all.in$Calibration<-Calibration
all.in$Data<-FLightR.data

Now we estimate likelihoods for every point of the grid at every twilight.

# the next step might take some time
# with the current example it takes about 3 min on an 8-core workstation

Threads= detectCores()-1
Phys.Mat<-get.Phys.Mat.parallel(all.in, Proc.data$Twilight.time.mat.dusk,
        Proc.data$Twilight.log.light.mat.dusk,
    Proc.data$Twilight.time.mat.dawn,
    Proc.data$Twilight.log.light.mat.dawn,
    threads=Threads, calibration=all.in$Calibration)

all.in$Spatial$Phys.Mat<-Phys.Mat

Now let's do some preliminary checks. First, we plot the likelihood surface for a sample twilight:

t=20
my.golden.colors <- colorRampPalette(c("white","#FF7100"))

library(fields) # image.plot() and as.image() come from the fields package
image.plot(as.image(all.in$Spatial$Phys.Mat[,t], x=all.in$Spatial$Grid[,1:2],
    nrow=30, ncol=30), col=my.golden.colors(64), main=paste("twilight number", t))
library(maps)
map('world', add=T)
map('state', add=T)
abline(v=start[1])
abline(h=start[2])          

And second, we multiply the likelihoods of consecutive twilights and see where the combined estimate ends up.

my.golden.colors <- colorRampPalette(c("white","#FF7100"))

par(mfrow=c(3,3), ask=T)
for (t in seq(1,dim(all.in$Spatial$Phys.Mat)[2]-30, by=30)) {
# ok now I want to see how stable my estimates are.
image.plot(as.image(apply(all.in$Spatial$Phys.Mat[,t:(t+30)],1,  FUN=prod),
           x=all.in$Spatial$Grid[,1:2], nrow=60, ncol=60),
           col=my.golden.colors(64), main=paste("twilight number", t))
library(maps)
map('world', add=T)
map('state', add=T)
#abline(v=start[1])
#abline(h=start[2])
}

dev.off()
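
One caveat about multiplying many likelihoods directly, as apply(..., FUN=prod) does above: the product can underflow to zero over long windows. A base-R sketch of the standard remedy, summing log-likelihoods instead (toy numbers, shown only to illustrate the numerical issue):

```r
set.seed(1)
# 200 small per-twilight likelihoods for one grid point
lik <- runif(200, min = 1e-6, max = 1e-3)

direct <- prod(lik)   # underflows: the true product is ~1e-600 or smaller
direct == 0           # TRUE in double precision

# summing logs keeps the result representable (up to a constant)
log_lik <- sum(log(lik))
is.finite(log_lik)    # TRUE
```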

Main run

For the main run you might want to select:

  1. nParticles - 1e4 is for a test run, 1e6 is for the main run;
  2. known.last - select TRUE if you know that at the end of data collection the tag was in a known place;
  3. check.outliers - additional on-the-fly outlier detection; normally should be set to TRUE.

nParticles=1e6
Threads= detectCores()-1
a= Sys.time()
Result<-run.particle.filter(all.in, save.Res=F, cpus=min(Threads,6),
                nParticles=nParticles, known.last=TRUE,
        precision.sd=25, save.memory=T, k=NA,
        parallel=T,  plot=T, prefix="pf",
        extend.prefix=T, cluster.type="SOCK",
        a=45, b=1500, L=90, adaptive.resampling=0.99, check.outliers=F)
b= Sys.time()
b-a
save(Result, file="Result.bltg.ageing.model.noOD.RData")

This is it. Now we can do some plotting.

Plotting results

Plot a simple map

par(mfrow=c(1,1))
par(mar=c(4,4,3,1),las=1,mgp=c(2.25,1,0))

#--------------------------------------------------------
# we can plot either mean or median.

Mean_coords<-cbind(Result$Results$Quantiles$Meanlon, Result$Results$Quantiles$Meanlat)
if (is.null(Result$Results$Quantiles$MedianlatJ)) {
    Median_coords<-cbind(Result$Results$Quantiles$Medianlon, Result$Results$Quantiles$Medianlat)
} else {
    Median_coords<-cbind(Result$Results$Quantiles$MedianlonJ, Result$Results$Quantiles$MedianlatJ)
}
plot(Median_coords, type = "n",ylab="Latitude",xlab="Longitude")
library(maptools)
data(wrld_simpl)
plot(wrld_simpl, add = T, col = "grey95", border="grey70")
lines(Median_coords, col = "darkgray", cex = 0.1)
points(Median_coords, pch = 16, cex = 0.75, col = "darkgray")
lines(Mean_coords, col = "blue", cex = 0.1)
points(Mean_coords, pch = 16, cex = 0.75, col = "blue")
box()

Plot lon lat graph

Quantiles<-Result$Results$Quantiles
par(mfrow=c(2,1))
par(mar=c(2,4,3,1),cex=1)
 Sys.setlocale("LC_ALL", "English")  

#Longitude
plot(Quantiles$Medianlon~Quantiles$time, las=1,col=grey(0.1),pch=16,
     ylab="Longitude",xlab="",lwd=2, ylim=range(c( Quantiles$LCI.lon,
     Quantiles$UCI.lon )), type="n")


polygon(x=c(Quantiles$time, rev(Quantiles$time)), y=c(Quantiles$LCI.lon, rev(Quantiles$UCI.lon)),
         col=grey(0.9), border=grey(0.5))

polygon(x=c(Quantiles$time, rev(Quantiles$time)), y=c(Quantiles$TrdQu.lon, rev(Quantiles$FstQu.lon)),
         col=grey(0.7), border=grey(0.5))

lines(Quantiles$Medianlon~Quantiles$time, col=grey(0.1),lwd=2)


abline(v=as.POSIXct("2013-09-22 21:34:30 EDT"), col=1, lwd=1, lty=2)
abline(v=as.POSIXct("2014-03-22 21:34:30 EDT"), col=1, lwd=1, lty=2)

#Latitude
par(mar=c(3,4,1,1))

plot(Quantiles$Medianlat~Quantiles$time, las=1,col=grey(0.1),
     pch=16,ylab="Latitude",xlab="",lwd=2,
     ylim=range(c( Quantiles$UCI.lat, Quantiles$LCI.lat )), type="n")

polygon(x=c(Quantiles$time, rev(Quantiles$time)), y=c(Quantiles$LCI.lat, rev(Quantiles$UCI.lat)),
           col=grey(0.9), border=grey(0.5))

polygon(x=c(Quantiles$time, rev(Quantiles$time)), y=c(Quantiles$TrdQu.lat, rev(Quantiles$FstQu.lat)),
           col=grey(0.7), border=grey(0.5))

lines(Quantiles$Medianlat~Quantiles$time, col=grey(0.1),lwd=2)

abline(v=as.POSIXct("2013-09-22 21:34:30 EDT"), col=1, lwd=1, lty=2)
abline(v=as.POSIXct("2014-03-22 21:34:30 EDT"), col=1, lwd=1, lty=2)

Migration schedules

We can extract departure and arrival estimates from the results we have got. First, select the grid points that are of interest. For example, in the current data we want to figure out when our bird left the Netherlands. We will place the boundary at longitude 2° E:

Index<-which(Result$Spatial$Grid[,1]>(2))

And now I estimate the probability of being in the area for each twilight:

Prob.of.being.in.NL<-get.prob.of.being.in(Result, Index)

Times.NL=find.times.distribution(Prob.of.being.in.NL,Result$Indices$Matrix.Index.Table$time)

Times.NL
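
Conceptually, the probability of being in the area at a given twilight is the share of the posterior mass (particles) that falls on the selected grid points. A toy base-R sketch of that idea, with hypothetical particle indices rather than the FLightR internals:

```r
# toy posterior: particle grid-point indices at each of three twilights
particles <- list(t1 = c(1, 1, 2, 3),
                  t2 = c(3, 4, 4, 4),
                  t3 = c(4, 4, 4, 4))

# grid points considered "inside" the area of interest
Index <- c(3, 4)

# probability of being inside = proportion of particles on those points
prob_in <- sapply(particles, function(p) mean(p %in% Index))
as.numeric(prob_in)  # 0.25 1.00 1.00
```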

This is it so far... There are, of course, many more things one could do with these data, and we plan to show them in other manuscripts.


