singleSat                                                        R Documentation

Single Soundscape Saturation Index
Usage

singleSat(
soundfile,
channel = "stereo",
timeBin = 60,
dbThreshold = -90,
targetSampRate = NULL,
wl = 512,
window = signal::hamming(wl),
overlap = ceiling(length(window)/2),
histbreaks = "FD",
DCfix = TRUE,
powthr = 10,
bgnthr = 0.8,
beta = TRUE
)
Arguments

soundfile
    a tuneR Wave object, a Ruido noise.matrix object, or a path to a valid audio file.

channel
    channel from which the background noise values will be extracted. Defaults to "stereo".

timeBin
    size (in seconds) of the time bin. Defaults to 60.

dbThreshold
    minimum allowed dB value for the spectrograms. Defaults to -90.

targetSampRate
    sample rate of the audio. Defaults to NULL.

wl
    window length of the spectrogram. Defaults to 512.

window
    window used to smooth the spectrogram. Defaults to signal::hamming(wl).

overlap
    overlap between the spectrogram windows. Defaults to ceiling(length(window)/2).

histbreaks
    breaks used to calculate Background Noise. Defaults to "FD".

DCfix
    whether the DC offset should be removed before the metrics are calculated. Defaults to TRUE.

powthr
    a single value used to evaluate the activity matrix for Soundscape Power (in %dB). Defaults to 10.

bgnthr
    a single value used to evaluate the activity matrix for Background Noise (in %). Defaults to 0.8.

beta
    how the BGN thresholds are calculated. If TRUE, BGN thresholds are computed using all recordings combined. Defaults to TRUE.
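As a rough guide to how the framing defaults interact, the sketch below works through the implied hop size and time resolution using standard STFT framing arithmetic. This is an assumption about typical spectrogram framing, not the package's exact internals; the sample rate used is hypothetical, and the Hamming window is computed in base R so the example does not require the signal package.

```r
# Back-of-envelope framing arithmetic for the defaults above (an assumption
# about standard STFT framing, not the package's exact internals).
wl <- 512
# Hamming window in base R, matching signal::hamming(wl)'s shape
window <- 0.54 - 0.46 * cos(2 * pi * (0:(wl - 1)) / (wl - 1))
overlap <- ceiling(length(window) / 2)   # default overlap: 256 samples
hop <- wl - overlap                      # samples advanced per spectrogram column
sampRate <- 22050                        # hypothetical sample rate
framesPerSecond <- sampRate / hop        # spectrogram columns per second
c(hop = hop, framesPerSecond = round(framesPerSecond, 1))
```

With these defaults, each one-minute time bin therefore contains on the order of a few thousand spectrogram columns, which is what the activity matrix is evaluated over.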
Details

Soundscape Saturation (SAT) is a measure of the proportion of frequency bins that are acoustically active in a given window of time. It was developed by Burivalova et al. (2018) as an index to test the acoustic niche hypothesis.

To calculate this index, we first generate an activity matrix for each time bin of the recording with the following rule:

a_{mf} = 1 if (BGN_{mf} > \theta_{1}) or (POW_{mf} > \theta_{2}); otherwise a_{mf} = 0,

where \theta_{1} is a threshold on BGN values and \theta_{2} is a threshold on dB values.

Since this function defines a single threshold for each quantity, a saturation value does not have to be generated for many different threshold combinations.

For the selected thresholds, soundscape saturation is then computed as

S_{m} = \frac{\sum_{f = 1}^{N} a_{mf}}{N},

where m indexes time bins, f indexes frequency bins, and N is the number of frequency bins.
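The two formulas above can be illustrated with a minimal base-R sketch. This is not the package implementation: the spectrogram is a toy random matrix, the threshold is hypothetical, and for brevity only the dB-threshold half of the activity rule (the POW comparison) is applied.

```r
# Minimal sketch of the activity matrix and saturation computation
# (toy data and a hypothetical threshold, not the package internals).
set.seed(1)
# Toy dB spectrogram: 64 frequency bins (rows) x 20 time frames (columns)
spec <- matrix(rnorm(64 * 20, mean = -70, sd = 10), nrow = 64)
theta2 <- -60                  # hypothetical dB threshold (stands in for powthr)
# Activity matrix: 1 where the bin exceeds the threshold, 0 otherwise
a <- (spec > theta2) * 1
# Saturation per time frame: proportion of acoustically active frequency bins
S <- colSums(a) / nrow(a)
round(range(S), 2)
```

Each element of S is a proportion in [0, 1]; singleSat() additionally aggregates frames into the requested time bins and applies the BGN half of the rule.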
Value

A list containing the saturation values for all time bins of the input file, with one element per analysed channel (e.g. $left and $right for stereo input, or $mono for mono input).
References

Burivalova, Z., Towsey, M., Boucher, T., Truskinger, A., Apelis, C., Roe, P., & Game, E. T. (2018). Using soundscapes to detect variable degrees of human influence on tropical forests in Papua New Guinea. Conservation Biology, 32(1), 205-215. https://doi.org/10.1111/cobi.12968
See Also

soundSat() and soundMat() to work with multiple audio files, and activity() to get only the activity matrix.
Examples

# First example: Using a Ruido noise.matrix object
# We are going to load a sample noise.matrix object to demonstrate the basic usage of singleSat()
# To understand about the origin of this noise.matrix, check: ?sampleBGN
data("sampleBGN")
# View the sample noise.matrix object
sampleBGN
# Run the function
SAT <- singleSat(sampleBGN)
# View the results
SAT
# Now let's plot our results to see the dynamics of soundscape saturation by minute
maxV <- max(unlist(SAT))
minV <- min(unlist(SAT))
plot(x = c(1, 3), y = c(minV, maxV), type = "n",
xlab = "Minute", ylab = "Soundscape Saturation (%)", xaxt = "n")
lines(x = 1:3, SAT$left, col = "#1ECBE1", type = "b", pch = 16)
axis(1, at = 1:3)
lines(x = 1:3, SAT$right, col = "#E1341E", type = "b", pch = 16)
legend("topright", legend = c("Left", "Right"), col = c("#1ECBE1", "#E1341E"), lty = 1, pch = 16)
# Second example: Using a tuneR Wave-class object
# Let's produce an artificial audio with the tuneR package to demonstrate that
# the function can also read Wave-class objects (this is the same object used in
# the example of bgNoise!)
library(tuneR)
oldpar <- par(no.readonly = TRUE)
# Define parameters for the artificial audio
samprate <- 12050
dur <- 59
n <- samprate * dur
# White noise
set.seed(413)
noise <- rnorm(n)
# Linear fade-out envelope
fade <- seq(1, 0, length.out = n)
# Apply fade
signal <- noise * fade
# Create Wave object
wave <- Wave(
left = signal,
samp.rate = samprate,
bit = 16
)
# Running singleSat() on the artificial audio
sat <- singleSat(wave, timeBin = 10)
# Now we can plot the results
# On the left we have a periodogram and on the right the saturation values
# along one minute
par(mfrow = c(1,2))
image(periodogram(wave, width = 8192, normalize = FALSE), xlab = "Time (s)",
ylab = "Frequency (Hz)", axes = FALSE)
axis(1, labels = seq(0,60, 10), at = seq(0,7e5,length.out = 7))
axis(2)
plot(sat$mono, xlab = "Time (s)", ylab = "Soundscape Saturation (%)",
type = "b", pch = 16, axes = FALSE)
axis(1, labels = paste0(c("0-10","10-20","20-30","30-40","40-50","50-59"),
"s"), at = 1:6)
axis(2)
par(oldpar)
# Third example: Reading a file directly
# Let's begin by loading an audio file from the online Zenodo library and
# reading it directly with the function
# Getting audiofile from the online Zenodo library
dir <- paste(tempdir(), "forExample", sep = "/")
dir.create(dir)
rec <- paste0("GAL24576_20250401_", sprintf("%06d", 0),".wav")
recDir <- paste(dir,rec , sep = "/")
url <- paste0("https://zenodo.org/records/17575795/files/", rec, "?download=1")
# Downloading the file; this might take some time depending on your internet connection
download.file(url, destfile = recDir, mode = "wb")
# Now we calculate soundscape saturation for both sides of the recording
sat <- singleSat(recDir)
# Printing the results
print(sat)
barplot(unlist(sat), col = c("darkgreen", "red"),
names.arg = c("Left", "Right"), ylab = "Soundscape Saturation (%)")
unlink(dir, recursive = TRUE)