knitr::opts_chunk$set(
  echo = TRUE,
  comment = "",
  prompt = FALSE,
  fig.width = 10,
  fig.height = 10
)

1. Introduction

Passive acoustic monitoring is an effective means of monitoring marine mammals; however, the value of acoustic detections depends on our ability to identify the source of the sounds we detect. Manual classification by trained acousticians can be used to develop a set of training data for supervised classification algorithms, such as BANTER (Bio-Acoustic eveNT classifiER).

A BANTER acoustic classifier is implemented in the open source software R and requires minimal human intervention, providing more consistent results with fewer biases and errors. BANTER also produces a classification error rate, which is a critical component when there is no independent verification of species identity. BANTER has been developed in a general manner such that it can be applied to sounds from any source (anthropogenic, terrestrial animals, marine animals).

BANTER is a flexible, hierarchical supervised machine learning algorithm for classifying acoustic events. It consists of two stages, each built from a set of Random Forest classifiers (Rankin et al. 2017). The first stage is the Detector Model, where individual classification models are built for any number of call types (detection types). The second stage applies the results of the first stage call classifiers and adds any event-level variables to create an Event Model that classifies the event. BANTER classifiers (Detector and Event Models) are based on the Random Forest supervised learning algorithm.

Random Forest creates a large number of decision trees, each trained on a different subset of the data, and aggregates predictions across all trees (the forest). For each decision tree in the forest, some portion of the samples is left out during the construction of that tree (referred to as the Out-Of-Bag or OOB dataset). The model automatically evaluates its own performance by running each of the samples in the OOB dataset through the forest and comparing its predicted group classification to the a priori designated group. Thus, there is no need for separate cross-validation or another test to get an unbiased estimate of the error. Random Forest can handle a large number of input variables, which can be continuous or categorical, and is not prone to issues related to correlated variables. The random subsetting of samples and variables, and the use of OOB data, prevents overfitting of the model.
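As a stand-alone illustration of this built-in OOB error estimate (a minimal sketch using the randomForest package, which is installed as a dependency of rfPermute, on R's built-in iris data rather than acoustic data):

library(randomForest)
# fit a small random forest; the printed summary reports the OOB error estimate
rf.demo <- randomForest(Species ~ ., data = iris, ntree = 500)
# the final row of err.rate holds the OOB error after all 500 trees
rf.demo$err.rate[nrow(rf.demo$err.rate), "OOB"]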

Here we present a user guide for the BANTER acoustic classification algorithm, using the built-in dataset provided in the BANTER package.

2. Methods

At a minimum, BANTER requires data to train a classifier which can then be applied to predict species identity on a novel dataset that has the same predictors. Here we will use some of the data provided within the BANTER R package for testing.

Once you have training data, you first need to initialize a BANTER model. The BANTER model can be developed in stages (first the Detector Model, then the Event Model) or as a single unit. We suggest running these separately so that each model can be modified to improve performance and ensure stability. Once the models are optimized, we present options for summarizing and interpreting your results.

This guide was developed based on BANTER v0.9.3.

2.1 Data Requirements and Limitations

BANTER has flexible data requirements which allow it to be applied to a wide array of training data. BANTER consists of two stages: (1) the Detector Model and (2) the Event Model. At its core, BANTER is an event classifier: it classifies a group of sounds observed at the same time. Multiple call type detectors can be included; even if your species of interest only produces a single call type, we have found that minor changes to the detector settings can lead to differences between species that can be informative (see Rankin et al. 2017).

BANTER accepts data in a generic R data frame format. There can be one or more detector data frames, but only one event data frame.

The event data frame must have one row per event. At a minimum, the columns must include a unique event identifier (event.id) and the species of each event; any additional columns are treated as event-level predictors.

A detector data frame must have one row per call. At a minimum, the columns must include the identifier of the event the call belongs to (event.id) and a unique call identifier (call.id); the remaining columns are the call-level measures used as predictors.
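For illustration only, a minimal sketch of these two data frames might look like the following (the column names event.id, species, and call.id follow the banter documentation; the predictor columns duration and peak.freq are hypothetical):

# hypothetical event data: one row per event
example.events <- data.frame(
  event.id = c("evt1", "evt2"),            # unique event identifier
  species = c("D.delphis", "D.capensis"),  # known species for training
  duration = c(120, 95)                    # optional event-level predictor
)
# hypothetical detector data: one row per call
example.clicks <- data.frame(
  event.id = c("evt1", "evt1", "evt2"),    # event each call belongs to
  call.id = c("c1", "c2", "c3"),           # unique call identifier
  peak.freq = c(12.1, 9.8, 14.3)           # call-level acoustic measure
)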

If you use the PAMGuard open source software (pamguard.org), you can process your data and export it in a BANTER-ready format using the export_banter() function in the PAMpal package (https://cran.r-project.org/web/packages/PAMpal/PAMpal.pdf).

BANTER cannot accommodate missing data (NA). Any predictors with missing data will be excluded from the model. As BANTER needs to both train and test the model, there must be a minimum of two events for each species in your model. Any species with fewer than 2 events will be excluded from the model. If a species is excluded from one of your detector models, but occurs in other detector models, then it can be used in the event model.
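As a quick check (using the example data and packages loaded in the next section), you can count the events per species to see which, if any, would be excluded:

# count events per species; any species with fewer than 2 events is excluded
table(train.data$events$species)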

BANTER is a supervised machine learning classification model, and the strength of the classifications necessarily relies on the quality of the training data. Likewise, if you are applying a classifier you built to predict novel data, it is imperative that the novel data be collected in the same manner, and have the same variables, as the training data. Here we provide tools to help you assess your model, but we recommend that you dive into your data to understand its strengths and limitations.

First, install the following R packages:

install.packages(c("banter", "rfPermute", "dplyr", "ggplot2"))

Then load the R packages:

library(banter)
library(rfPermute)
library(dplyr)
library(ggplot2)

2.2 Create BANTER Model

The first step requires the initialization of a BANTER model with a data.frame of events that you provide.

We will use the data provided in the BANTER package (train.data). We must first load the training data, and take a look at the first few lines of data.

# load example data
data(train.data) 
# show names of train.data list
names(train.data)

The train.data object is a list that contains both the event data frame (train.data$events) and a list of data frames for each of three call detectors (train.data$detectors).
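To confirm the structure (a quick, optional look):

# first few rows of the event data
head(train.data$events)
# number of calls in each detector data frame
sapply(train.data$detectors, nrow)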

2.2.1 Initialize BANTER Model

Once we have our data, the next step is to initialize the BANTER model.

# initialize BANTER model
bant.mdl <- initBanterModel(train.data$events) 

BANTER is a hierarchical random forest model, with 2 stages. The first stage is the Detector Model, where a random forest model is created for each detector in your dataset. The second stage is the Event Model, which uses information derived from the Detector Model, along with any additional event level predictors. We can develop each of these models independently, or we can approach them as a single function. Here we will approach the Detector Model and the Event Model separately. Please see https://github.com/EricArcher/banter for more information on combining the models into a single function.

When the BANTER model has been initialized, it is good to check the summary() to see the distribution of the number of events per species:

# summarize BANTER model
summary(bant.mdl)

2.2.2 Adding Detectors

The addBanterDetector() function adds Detectors to your model, where the detector information is tagged by Event. If the detector data is a single data frame, then the name of the detector (for example, name = "bp" for the burst pulse detector) needs to be provided. If the detector data is a named list of data frames, the name does not need to be provided (can be NULL). The addBanterDetector() function can be called repeatedly to add additional detectors, or detectors can be added all at once. If your models require different parameters for different detectors, you may want to model them separately. Here we will lump all detectors into a single Detector Model.
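For reference, a hedged sketch of adding a single named detector is shown below (commented out, since we add all detectors at once in the next code block):

# Alternative (not run): add one detector at a time, providing its name
# bant.mdl <- addBanterDetector(
#   bant.mdl,
#   data = train.data$detectors$bp, # a single data frame of burst pulse calls
#   name = "bp",                    # name is required for a single data frame
#   ntree = 100,
#   sampsize = 2,
#   importance = TRUE
# )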

# Add BANTER Detectors and Run Detector Models
bant.mdl <- addBanterDetector(
  bant.mdl, 
  data = train.data$detectors, # Identify all detectors in the train.data dataset
  ntree = 100, # Number of trees to run. See section on 'Tune BANTER Model' for more information.
  importance = TRUE, # Retain the importance information for downstream analysis
  sampsize = 2 # Number of samples used for each tree. See section on 'Tune BANTER Model' for more information.
)

This will create the Random Forest detector models for every detector added. The function will generate reports of species excluded from models due to an insufficient number of samples. When complete, a summary of the model shows mean classification rates of each species in each detector:

summary(bant.mdl)

You can then create and examine the Error Trace Plot to determine the stability of your model. You may want to modify the sampsize and ntree parameters in the model to improve performance and ensure a stable model. See the section on Tune BANTER Model for more information on interpreting these plots and tuning your model.

plotDetectorTrace(bant.mdl)

Once you are satisfied with the Detector Model, you are ready to run your final BANTER model. This model will include output from the Detector Models, as well as any event-level variables you may have.

This model also uses the ntree and sampsize parameters, which can be modified to improve performance and model stability. We have purposefully set these values to provide poor results. The next step will be to tune this model to improve performance (see Tune BANTER Model).

bant.mdl <- runBanterModel(bant.mdl, ntree = 10, sampsize = 1)

The next step is to evaluate and tune your model.

2.3 Tune BANTER Model

The BANTER models (Detector Models and Event Models) use Random Forest, an ensemble approach to classification that builds a large number of classification trees (ntree), where each tree is constructed from a random sample of the data (n = sampsize) and a random subset of the variables. Each tree gives a classification (a 'vote'), and the forest assigns the classification receiving the most votes across the trees in the forest. We can tune these two parameters, ntree and sampsize, to improve performance and ensure stability of the models. Here we will examine the parameters used in the model(s), as well as the summary text and plots of the model, to evaluate the model and tune it to improve the results.

The two arguments of the Detector and/or Event models that we will tune are ntree, the number of trees in the forest, and sampsize, the number of samples used to build each tree.

The model will use n = sampsize samples for creating each tree in the model, and the remaining samples will be used as out-of-bag (OOB) samples for model testing. At a maximum, sampsize should be half of the smallest sample size among all species, which ensures a balanced and unbiased model. Models will run faster with a low sample size and a large number of trees rather than vice-versa (there is little computational cost to running a very large number of trees). Simulated tests showed that we can obtain the same performance with sample sizes as low as 1-2 per species and very large numbers of trees (F. Archer, unpublished methods).
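As a rough guide (our own convenience check, not a banter function), you can compute half of the smallest per-species event count to get an upper bound for sampsize:

# smallest number of events for any single species
min.n <- min(table(train.data$events$species))
# suggested upper bound for sampsize
floor(min.n / 2)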

It is important that your BANTER model is stable: the results should not change when you rerun the model. We will explain how to tune the model using the case of the poor performing BANTER Event Model we created above. These same methods can be applied to the Detector Models, to ensure that your stage 1 models are stable (in this small case they are reasonably stable).

The first tool we have is the Error Trace plot (the top plot after you run the summary function, below), which shows the error (y-axis) as we average across an increasing number of trees (x-axis). The goal is to have a stable Error Trace (flat lines). The second tool we have is the distribution of the percentage of trees in which each sample was 'inbag' (used to build the tree). You can get these plots by applying the summary function to your BANTER model after the Event Model has been run.

summary(bant.mdl)

The top plot is the Error Trace, or the trace of the error by the number of trees. This gives you an idea of the stability of the model. This plot is created using the plotTrace() function from rfPermute. The lower plot is the Inbag distribution plot, a histogram of the percentage of trees in which each sample was 'in bag' (used in the training dataset for that tree); the red lines are the expected 'in bag' frequency for this model. This plot shows the minimum percentage of trees that every sample should have appeared in, and thus how well the samples are represented in the model. We want to use all of the samples, so we want enough trees that every sample was used, and used at the appropriate rate. If too few trees were used, this plot would show peaks at zero and the distributions would not be centered around the red lines. Ideally, the distributions should be tightly centered on the red lines.

To tune the model, you want to run enough trees that the error trace is flat, with the noise occurring in the first 5% of the error trace plot, and you want the Inbag distribution to show the frequency of inbag samples centered around the red lines.

Remember that for our BANTER model, we used sampsize = 1 and ntree = 10 (bant.mdl <- runBanterModel(bant.mdl, ntree = 10, sampsize = 1)). Clearly these were insufficient. We will need to increase the sample size and/or the number of trees in our model to improve performance. We suggest first increasing ntree until the trace is flat (or close), and then increasing sampsize incrementally until you are satisfied with the performance. Remember that it is best to keep sampsize less than or equal to half of the smallest species frequency.

Here we will rerun our model with an improved set of parameters and examine the difference in the results and summary information.

bant.mdl <- runBanterModel(bant.mdl, ntree = 50000, sampsize = 2)
summary(bant.mdl)

Once you are satisfied with your model, you can extract the Random Forest model (and model data) as separate objects for further analysis.

bant.rf <- getBanterModel(bant.mdl)
bantData.df <- getBanterModelData(bant.mdl)

You can also save each Detector Model for downstream processing.

bant.dw.rf <- getBanterModel(bant.mdl, "dw")
bant.bp.rf <- getBanterModel(bant.mdl, "bp")
bant.ec.rf <- getBanterModel(bant.mdl, "ec")

You are now ready to summarize and interpret your models and results.

2.4 Interpret BANTER Results

The summary() function provides information regarding your model results; however, conducting a 'deep dive' into these results will give you a better understanding of the strengths and limitations of your results and may guide you towards improving those results. Here we explain a number of options for interpreting your BANTER results.

Model Information

Detector Names & Sample Sizes: Show the detector names and sample sizes for your BANTER model.

# Get detector names for your BANTER model
getDetectorNames(bant.mdl)
# Get Sample sizes
getSampSize(bant.mdl)

Number of Calls & Events, Proportion of Calls: The number of calls (numCalls()), proportion of calls (propCalls()), and number of events (numEvents()) in your BANTER detector models (or summarized by event or species).

# number of calls in detector model
numCalls(bant.mdl)
# number of calls by species (can also do by event)
numCalls(bant.mdl, "species")

# proportion of calls in detector model
propCalls(bant.mdl)
# proportion of calls by event (can also do by species)
#propCalls(bant.mdl, "event")
#[this is commented out as printout is long]

# number of events, with default for Event Model
numEvents(bant.mdl)
# number of events for a specific detector 
numEvents(bant.mdl, "bp")

Confusion Matrix: The confusion matrix is the most commonly used output for a Random Forest model, and is provided by summary(). The output includes the percent correctly classified for each species, the lower and upper confidence limits, and the priors (expected classification rate).

By default, summary() reports the 95% confidence interval of the percent correctly classified. By using the confusionMatrix() function, we can specify a different confidence level if desired. However, unlike summary(), confusionMatrix() takes a randomForest object like the one we extracted above.

# Confusion Matrix
confusionMatrix(bant.rf, conf.level = 0.75)

The confusionMatrix() function also has a threshold argument that reports the binomial probability that the true classification probability (given infinite data) is greater than or equal to this value. For example, if we want to know the probability that the true classification probability for each species is >= 0.80, we set threshold = 0.8:

# Confusion Matrix with medium threshold
confusionMatrix(bant.rf, threshold = 0.8)

This shows that D. capensis has a high probability of having a true classification score above 0.8 (Pr.gt_0.8 = 79.0). Conversely, the probability that the classification rate for D.delphis is above 0.8 is very low (Pr.gt_0.8 = 6.8).

An alternative view of the confusion matrix comes in the form of a heat map.

# Plot Confusion Matrix Heatmap
plotConfMat(bant.rf, title = "Confusion Matrix HeatMap")

Plot Random Forest Trace
The plotTrace() function allows us to plot the Error Trace directly.

# Plot trace of OOB error rate by number of trees
plotTrace(bant.rf)

Plot In-Bag Distributions: The InBag samples are the events used to build each tree in the model. The InBag distribution plot provides a visual of the percent of trees in which each sample was in-bag. The OOB plot provides a visual of the number of times a sample was out of bag. A high OOB and a low InBag suggest highly random sampling.

# Plot inbag distribution
plotInbag(bant.rf)

Percent Correct: A measure of how well a classifier works is to compare the percent correct score for a given threshold (a specified percent of trees in the forest voting for that species) with the error rate you would expect based on random assignment and class sizes.

# Percent Correct for a series of thresholds
pctCorrect(bant.rf, pct = c(seq(0.2, 0.6, 0.2), 0.95))

Model Percent Correct: Provides a summary data frame with the percent correctly classified for each detector model and the event model.

modelPctCorrect(bant.mdl)

Plot Predicted Probabilities: Histograms of the assignment probabilities to the predicted species class. Ideally, all events would be classified to the correct species (identified by the color) and would be strongly classified to that species (a high probability of assignment). This plot can be used to understand the distribution of these classifications, and how strong the misclassifications were, by species.

plotPredictedProbs(bant.rf, bins = 30, plot = TRUE)

Model Interpretation

Proximity Plot: The proximity plot provides a view of the distribution of events within the tree space. It shows the relative distance between events based on how often they fall in the same terminal nodes across the trees of the forest. For each event in the plot, the color of the central dot represents the true species identity, and the color of the surrounding circle represents the BANTER classification. Ideally, these would form fairly distinct clusters, one for each species. The wider the spread of the events in this feature space, the more variation there is in these predictors. Some species differentiation may not be apparent in this pair of dimensions and may instead be driven by other predictors.

# Proximity Plot
plotProximity(bant.rf)

Plot Votes: The strength of a classification model depends on the number of trees that 'voted' for the correct species. We can look at the votes from each tree in the forest for an event to see how many of them were correct. This plot shows these votes, where each vertical slice is an event and the percentage of votes for each species is represented by its color. If a species were correctly classified by all of the trees (votes) in the forest, then the plot for that species would be solid in the color that represents that species.

# Plot Vote distribution
plotVotes(bant.rf) 

Importance Heat Map: The importance heat map provides a visual assessment of the important predictors for the overall model. The BANTER event model relies on the mean assignment probability for each of the detectors in our detector model, as well as any event-level measures. For example, in this heat map, the first variable is 'dw.D.delphis', the mean probability that a detection was assigned to the species 'D.delphis' by the whistle detector. Digging down to the specific whistle measures that are the important predictors within the whistle detector requires extra steps.

# Importance Heat Map
plotImportance(bant.rf, plot.type = "heatmap")
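One way to take that extra step (our own sketch, using the whistle detector model extracted earlier as bant.dw.rf, which was built with importance = TRUE):

# importance scores for the call-level measures in the whistle (dw) detector model
dw.imp <- data.frame(importance(bant.dw.rf))
# top five call measures by mean decrease in accuracy
head(dw.imp[order(dw.imp$MeanDecreaseAccuracy, decreasing = TRUE), ], 5)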

Mis-Classified Events

By segregating the misclassified events, you can dive deeper into these data to understand why the model failed. Perhaps they were incorrectly classified in the first place (inaccurate training data), or the misclassification could be due to natural variability in the call characteristics. There are any number of possibilities, and by diving into the misclassifications, you can learn a lot about your data and your model. We do not recommend eliminating misclassifications simply because they are misclassifications. The point is to learn more about your data, not to cherry pick your data to get the best performing model. If it is important to only include strong classification results in your final model, then apply the appropriate threshold in the confusionMatrix() function, above.

First, identify your misclassified events and save them as an R object and a separate csv file.

Case Predictions: You can also save a separate data.frame for your training data that includes the vote distributions. This can be useful for downstream processing and summaries.

casePredict <- casePredictions(bant.rf)

misclass <- casePredict %>% 
  filter(!is.correct) %>%
  select(id)
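To keep a record outside of R, as mentioned above, the misclassified event ids can also be written to a csv file (the file name here is arbitrary):

# save misclassified event ids to a csv file for later review
write.csv(misclass, "misclassified_events.csv", row.names = FALSE)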

We can then look closer at these events to learn more about them.

First, identify the most important variables in your event model:

# Get importance scores and convert to a data frame
bant.imp <-  data.frame(importance(bant.rf))

# Select top 4 important event stage predictors
bant.4imp <- bant.imp[order(bant.imp$MeanDecreaseAccuracy, decreasing = TRUE), ][1:4, ]

# Look at the predictors to identify your next steps
bant.4imp

The predictors that showed the greatest importance came from the whistle (dw) and burst pulse (bp) detectors. We can plot the distribution of these predictor variables by class (in this case, a violin plot for each of the four most important variables).

plotImpPreds(bant.rf, bantData.df, "species", max.vars = 4)

2.5 Predict

The goal of building an acoustic classifier is to ultimately apply this classifier to novel data. It is critical to understand that we should apply our BANTER classifier only to data collected in the same manner as the training data. All variables (detectors, detector measures, event-level variables) must also be the same (with the same labels). For example, novel data collected using a different hydrophone with different sensitivity curves may result in different measurements than your original model (unless the data are calibrated). Even in the case where a classifier is applied to the appropriate data, it is wise to validate a subset of this novel data.

To run a prediction model, you must have your BANTER model, and new data. Here we will use the bant.mdl object we made previously, and apply it to the test.data provided in the BANTER package.

Predict: The predict() function will apply your BANTER model to novel data and provide you with a data frame of the events used in the Event Model for prediction, and a data frame of the predicted species and assignment probabilities for each event.

data(test.data)
predict(bant.mdl, test.data)
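To work with the predictions further, save the result and inspect its structure (we inspect it generically here, since the element names of the returned list depend on the banter version):

# store the predictions and take a quick look at the returned object
test.pred <- predict(bant.mdl, test.data)
str(test.pred, max.level = 1)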

3. Discussion

BANTER has been developed in a general manner such that it can be applied to a wide range of acoustic data (biological, anthropogenic). We have encouraged development of additional software (PAMpal) to facilitate BANTER classification of data analyzed in PAMGuard software. We encourage development of additional open source software to simplify BANTER classification of data analyzed using other signal processing software. While this classifier is easy to use, and can be powerful, we highly recommend that users examine their data and their results to ensure the data are appropriately applied. This is especially important when a classifier is applied to novel data for prediction purposes.

Acknowledgements

Many thanks to our original co-authors for their help in developing the original BANTER trial. Thoughtful reviews were provided by Anne Simonis and Marie Zahn. Funding for development of BANTER was provided by NOAA's Advanced Sampling Technology Working Group.

References

Rankin, S., Archer, F., Keating, J. L., Oswald, J. N., Oswald, M., Curtis, A. and Barlow, J. (2017) Acoustic classification of dolphins in the California Current using whistles, echolocation clicks, and burst pulses. Mar Mam Sci, 33: 520-540.


