Telemetry error

Error calibration

The first step to handling errors is to quantify them. Make sure that your data's "dilution of precision" (DOP) and error columns import correctly into ctmm. In the following wood turtle dataset, we have some calibration data and a turtle track. Note that the calibration data must be collected from the same model device as the tracking data, and their data formatting must be similar enough that the same location classes are detected by as.telemetry.

names(turtle[[1]]) # data are not yet calibrated, but HDOP and location class are present
names(turtle) # two calibration datasets and two turtle datasets
plot(turtle[1:2],col=rainbow(2)) # calibration data only

The uere.fit command is used to estimate the RMS UERE parameter(s) from calibration data. Do not use this command on animal tracking data, as movement would then be mistaken for error.

UERE <- uere.fit(turtle[1:2]) # only using calibration data
summary(UERE) # RMS UERE estimates with confidence intervals

For GPS data, the RMS UERE will typically be 10-15 meters. Here we have two location classes, "2D" and "3D", which for this device have substantially different RMS UERE values. The UERE parameters can then be assigned to a dataset with the uere()<- command.

uere(turtle) <- UERE
summary(uere(turtle[[3]])) # this should be the same as summary(UERE)
plot(turtle[[3]],error=2) # turtle plot with 95% error discs

If you have Argos data, they will import with calibration already applied to the horizontal dimensions, but not the vertical dimension.

names(pelican$argos) # error ellipse information (COV and VAR) already present
plot(pelican$argos) # pelican Argos plot with 95% error ellipses

Error model selection

Not all GPS devices provide reliable DOP values, and sometimes it is not obvious which error information will prove the most predictive. Generally speaking, as.telemetry will attempt to import the "best" column from among those that estimate error, DOP value, number of satellites, and fix type, including timed-out fixes (see the timeout argument in help(as.telemetry)). Researchers may want to import their data with different error information, run uere.fit on each candidate, and then select among the resulting error models. Here we do this by comparing the turtle calibration data with and without HDOP values.

First, we consider whether or not the HDOP and location class information are informative.

t.noHDOP  <- lapply(turtle,function(t){ t$HDOP  <- NULL; t })
t.noclass <- lapply(turtle,function(t){ t$class <- NULL; t })
t.nothing <- lapply(turtle,function(t){ t$HDOP  <- NULL; t$class <- NULL; t })

In other cases, this manipulation will need to be performed before importing, so that as.telemetry can format the telemetry object properly. Running uere.fit now results in errors calibrated under the assumption of homoskedastic errors.

UERE.noHDOP  <- uere.fit(t.noHDOP[1:2])
UERE.noclass <- uere.fit(t.noclass[1:2])
UERE.nothing <- uere.fit(t.nothing[1:2])

We can now apply model selection to the UERE model fits by summarizing them in a list.
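A minimal sketch of this comparison, assuming the four UERE objects fit above; summary applied to a named list of UERE objects returns a model-selection table:

```r
# compare error models by AICc and reduced Z squared
summary(list(HDOP = UERE, noHDOP = UERE.noHDOP,
             noclass = UERE.noclass, nothing = UERE.nothing))
```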


We can see that the combination of location class and HDOP values yields the best error model, both in terms of AICc and in terms of reduced Z squared. Reduced Z squared is a goodness-of-fit statistic akin to reduced chi squared, but designed for comparing error models.

We had two calibration datasets, so we can also see if there is any substantial difference between the two GPS tags. We do this by making a list of individual UERE objects to compare to the previous joint UERE model.

UERES <- lapply(turtle[1:2],uere.fit)
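The comparison between the joint model and the list of individual models can then be sketched the same way (assuming the UERE objects above):

```r
# compare joint error model to individualized error models
summary(list(joint = UERE, individual = UERES))
```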

In this case, the individualized and joint error models perform comparably, and AICc selects the joint model.

Outlier detection

Now we come to the task of identifying outliers. The outlie function uses error information to estimate straight-line speeds between sampled times and distances from the bulk of the data.

outlie(turtle[[3]]) -> OUT

High-speed segments are colored in blue, while distant locations are colored in red. More emphasis is placed on the more extreme locations in the outlie plot. Visually we can see at least one outlier in the wood turtle data. The output of outlie also contains the error-informed speed and distance estimates (in SI units) used to generate the plot.
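For example (a sketch; OUT is the object returned by outlie above and behaves like a data frame):

```r
# error-informed distance (from median) and speed estimates, in SI units
head(OUT)
```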


A sustained speed of 0.1 m/s is not biologically implausible for a wood turtle, but this location is highly suspicious, both in terms of speed and lack of proximity. After removing the outlier, we can check the data again.

BAD <- OUT$speed>0.08 # not appropriate for other species!
turtle[[3]] <- turtle[[3]][!BAD,]
outlie(turtle[[3]]) -> OUT

Datasets may have multiple outliers. In pathological situations, there may be no clear separation between the normative data and the outliers. This necessitates a better error model, either by improving inadequate (or absent) HDOP estimates or by employing a heavier tailed error distribution (not yet supported).

Variograms and model selection


If we were working with Argos data or high-resolution GPS data on a small animal, then we could get a "nugget" effect in the variogram that looks like an initial discontinuity at short time lags.

# Argos type errors
curve(1+x,0,5,xlab="Short time lag",ylab="Semi-variance",ylim=c(0,6))
points(0,1) # height of the initial discontinuity
title("Argos")
# detector array type errors (qualitatively only)
curve((1-exp(-2*x))/(1-exp(-2/4)),0,1/4,xlab="Short time lag",ylab="Semi-variance",ylim=c(0,6),xlim=c(0,5))
curve(3/4+x,1/4,5,add=TRUE)
points(1/4,1) # end of the slope discontinuity
title("Detector Array")

The height of this initial discontinuity corresponds to the variance of uncorrelated location errors. The second plot is the kind of initial discontinuity one has with detector array data. The end of the (slope) discontinuity is highlighted with a circle. This discontinuity is smooth because the movement and detection are correlated. The height of this initial discontinuity is also (at least roughly) the variance of the location errors.
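With the calibrated turtle data, we can compute a variogram and zoom in on the short time lags to look for such a nugget (a sketch; error=TRUE is assumed here to incorporate the calibrated location errors into the variogram calculation):

```r
# error-informed variogram of the turtle track
SVF <- variogram(turtle[[3]], error = TRUE)
plot(SVF, fraction = 0.005) # zoom in on short time lags
```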

Model fitting

Because of some convenient mathematical relations, fitting with telemetry errors involves numerically fitting 1-4 more parameters and is, therefore, slower and less reliable at converging on the MLE than fitting without telemetry error. Therefore, by default, telemetry error is not turned on in ctmm models (error=FALSE). Furthermore, in cases with an error model, ctmm.select is safer to use than direct application of ctmm.fit, because ctmm.select will start with the simplest "compatible" model and then consider further model features that require numerical optimization in stepwise fashion.

Fitting with calibrated data

For calibrated errors, we have to set error=TRUE in the model guess to fit with telemetry error accounted for.

# automated guesstimate for calibrated data
GUESS <- ctmm.guess(turtle[[3]],CTMM=ctmm(error=TRUE),interactive=FALSE)
# stepwise fitting # CRAN policy limits us to 2 cores
FIT <- ctmm.select(turtle[[3]],GUESS,trace=TRUE,cores=2)
# if you get errors on your platform, then try cores=1

trace=TRUE allows us to see the models considered. verbose=TRUE would save each model. Since we started with a finite amount of calibration data, the location-error parameter estimates were updated by the tracking data, and we can compare the results here with the previous estimates in summary(UERE).
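The fitted model's updated error estimates can be inspected with summary (a sketch):

```r
summary(FIT) # compare the error estimates here to summary(UERE)
```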

Simultaneous fitting with uncalibrated data

Fitting with unknown errors is a method of last resort. We have to provide an initial guess for error, which will be the device's RMS UERE. error=10 is usually a good guess for GPS data (other than e-obs). Unlike uere.fit, ctmm.fit is not yet coded to use location classes when fitting errors simultaneously, as this approach becomes increasingly dangerous with more location classes.

# delete UERE information
uere(turtle) <- NULL

The only difference is that you have to provide an initial guess for the RMS UERE. Otherwise, the steps are the same.

# automated guesstimate for uncalibrated data (with 10 meter RMS UERE guess)
GUESS <- ctmm.guess(turtle[[3]],CTMM=ctmm(error=10),interactive=FALSE)
# fit and select models # CRAN policy limits us to 2 cores
FIT <- ctmm.select(turtle[[3]],GUESS,trace=TRUE,cores=2)
# if you get errors on your platform, then try cores=1

Here, fitting the RMS UERE simultaneously with the movement model produced a largely consistent result. This will not always be the case.

Fitting with uncalibrated data under a prior

An alternative method of last resort is to simply assign a range of credible RMS UERE values to the data. Again, 10-meter error is usually a good guess for 3D GPS data. For e-obs, and other devices that report error in meters, a number on the order of 1 is usually more relevant.

# these data have two location classes: 2D & 3D
# assign 20-meter 2D RMS UERE and 10-meter 3D RMS UERE
uere(turtle) <- c(20,10)
# for one location class, the above would likely be unnecessary, but would look like
# uere(turtle) <- 10

# the default uncertainty is none for numerical assignments
UERE <- uere(turtle)
# this is because the degrees-of-freedom are set to Inf
# here I set the DOF to a smaller value
UERE$DOF[] <- 2
# which now gives plausible credible intervals
# assign the prior to the data
uere(turtle) <- UERE
# automated guesstimate for calibrated data
GUESS <- ctmm.guess(turtle[[3]],CTMM=ctmm(error=TRUE),interactive=FALSE)
# stepwise fitting # CRAN policy limits us to 2 cores
FIT <- ctmm.select(turtle[[3]],GUESS,trace=TRUE,cores=2)
# if you get weird errors on your platform, then try cores=1

This method is still being developed and so this material is subject to change.


ctmm documentation built on Sept. 24, 2023, 1:06 a.m.