To install the latest version of the ef package, run:
remotes::install_github("faskally/ef")
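If the remotes package is not already installed, it can first be installed from CRAN:
install.packages("remotes")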
library(ef)
library(mgcv)
library(dplyr)
library(reshape)
library(lattice)
data(ef_data)
Create a factor for pass with separate levels for the first and subsequent passes. NOTE: assuming that the second and third passes have the same capture probability avoids problems with model identifiability (Millar et al., 2016).
ef_data$pass12 <- factor(replace(ef_data$pass, ef_data$pass > 2, 2))
Create a sampleID for grouping EF counts together. A sample is a unique combination of site visit x species x lifestage and is required by the EF function
ef_data$sampleID <- as.numeric( factor( paste(ef_data$date, ef_data$siteID, ef_data$lifestage, ef_data$species) ) )
Create a visitID for unique site visits. This is required by the overdispersion function (see below).
ef_data$visitID <- factor(paste(ef_data$date, ef_data$siteID))
Create a factor for year
ef_data$fyear <- factor(ef_data$year)
Fit a simple model. This contains a factor for pass (2 levels), lifestage (2 levels), site (3 levels) and year (5 levels). With a larger dataset it would also be possible to add continuous variables as linear or smoothed terms providing the degrees of freedom are specified (typically up to k=3 (2 d.f.), see Millar et al., 2016 for further details)
m1 <- efp( count ~ siteID + fyear + pass12 + lifestage, data = ef_data, pass = pass, id = sampleID )
m1
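As a sketch of the smoothed-term syntax described above (shown commented out because width is a hypothetical covariate, not a column in ef_data):
# hypothetical example only: 'width' is not a column in ef_data
# m2 <- efp(count ~ siteID + fyear + pass12 + lifestage + s(width, k = 3),
#           data = ef_data, pass = pass, id = sampleID)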
The model needs to be reformatted to allow predictions (i.e. to obtain the capture probabilities, P), using the as.gam function.
m1_gam <- as.gam(m1)
Predict P from the model for the full dataset. Note that if all the data are multi-pass you can simply use the fitted values. However, if you want P for both single- and multi-pass data, predict to the new data frame.
ef_data$prob <- predict.gam(m1_gam, newdata=ef_data, type = "response")
Aggregate counts by sampleID (i.e. counts of fry or parr per site visit)
counts <- aggregate(count ~ sampleID, ef_data, sum)
Reshape to get capture probability by species x lifestage x site visit x pass. This facilitates the summary of P.
probs <- cast(ef_data, formula = sampleID ~ pass, value = "prob")
Get cumulative capture probability by sampleID (see Glover et al., 2018)
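That is, for a sample fished over $k$ passes with pass-specific capture probabilities $p_1, \ldots, p_k$, the cumulative capture probability is $P = 1 - \prod_{i=1}^{k}(1 - p_i)$.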
probs $ prob <- apply(probs[,2:4], 1, function(x) 1 - prod(1-x))
Remove individual pass capture probabilities
probs <- select(probs, sampleID, prob)
Remove pass and pass-specific data from dataframe
densdata <- unique(select(ef_data, -count, -prob, -pass, -pass12))
Join data and cumulative counts by sampleID
densdata <- left_join(densdata, counts, by = "sampleID")
Join data and cumulative capture probability by sampleID
densdata <- left_join(densdata, probs, by = "sampleID")
Density (N m^-2^) is estimated as $\frac{count}{area \times P}$
densdata $ Density_Estimate <- densdata $ count / (densdata $ area * densdata $ prob)
If you want to model counts directly (Poisson model), you need to have cumulative P and area in the offset. For a Poisson model the offset is $\log(area \times cumulative\ P)$.
densdata $ offset <- log(densdata $ area * densdata $ prob)
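As an illustrative sketch only (the covariates below are chosen for illustration, not as a recommended model), such a count model could then be fitted with a standard Poisson GLM using this offset:
m_pois <- glm(count ~ fyear + siteID + lifestage, family = poisson,
              offset = offset, data = densdata)
summary(m_pois)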
Plot to check
densdata <- densdata[order(densdata $ year), ]
xyplot(Density_Estimate ~ fyear | lifestage, data = densdata, groups = siteID,
       type = 'b', auto.key = TRUE, scales = list(alternating = FALSE),
       xlab = "Year", ylab = expression(paste("Fish m"^-2)))
Because electrofishing count data are typically overdispersed and because the modelling framework cannot incorporate random effects, it is necessary to estimate the amount of overdispersion in the data and to account for this when comparing models.
Overdispersion is estimated by fitting a large model that captures most of the systematic variation in the data and comparing this to a visit-wise and saturated model (see Millar et al., 2016):
large_model <- efp(count ~ fyear + pass12 + lifestage, data = ef_data, pass = pass, id = sampleID)
Use the overdispersion function in the ef package and specify the data, visit ID, site ID, sample ID and the large model.
od_estimate <- overdispersion(data = ef_data, visitID = "visitID", sampleID = "sampleID", control = list(maxit = 100), largemodel = large_model)
View overdispersion estimate. The 'disp' column contains the estimates of within and between sample overdispersion. The largest of these values is used to adjust the BIC.
od_estimate
Compare models using the adjusted BIC function (Equation 4, Millar et al., 2016)
mfull <- efp(count ~ siteID + year + lifestage + pass12, data = ef_data, pass = pass, id = sampleID)
BICadj(mfull, od_estimate)
mlife <- efp(count ~ siteID + year + pass12, data = ef_data, pass = pass, id = sampleID)
BICadj(mlife, od_estimate)
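The model with the smaller adjusted BIC is preferred; here the two candidate models differ only in the lifestage term, so the comparison indicates whether lifestage-specific capture probabilities are supported by the data.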
After making changes to the code, run:
devtools::document()
to update help files and the NAMESPACE
then
devtools::check()
to check for errors, and fix warnings and notes
Finally, the readme can be rebuilt using
rmarkdown::render("README.Rmd")
then changes can be committed and pushed.