Calculate rarity weights for a single scale or for multiple scales on the basis of the selected weighting function(s).
occData: vector, matrix or data.frame. Occurrence data for a single scale (vector) or several scales (matrix or data.frame).
Qmax: integer. Maximum occurrence (see details). By default, the maximum occurrence in the dataset is used (i.e., the maximum occurrence among the provided set of species); it can be changed to another value, e.g. to the number of possible sites.
Qmin: integer. Minimum occurrence (see details). By default, the minimum occurrence in the dataset is used (i.e., the minimum occurrence among the provided set of species).
rCutoff: a decimal or a vector of values between 0 and 1, or one of "Gaston" or "Leroy" (see details).
normalised: TRUE or FALSE. If TRUE, weights are normalised between 0 and 1.
assemblages: matrix or data.frame. Set of assemblages of species used to calculate the rarity cutoff point(s) with the Leroy method.
extended: TRUE or FALSE. Useful for multiple scales only. If TRUE, weights are given for every input scale in addition to the multiscale weights. If FALSE, only the multiscale weights are provided.
rounding: an integer or FALSE. If an integer, weights are rounded to that number of digits. If FALSE, weights are not rounded.
To calculate single-scale weights, simply provide a vector with species occurrences. To calculate multiscale rarity weights, provide either a matrix or a data.frame where species are in rows, and each column provides occurrence for a particular scale.
The minimum and maximum occurrences can be set manually, or automatically calculated with the default parameters. Default parameters: if occData is a vector, Qmin = min(Q) and Qmax = max(Q). If occData is a matrix or a data.frame, Qmin = apply(occData, 2, min) and Qmax = apply(occData, 2, max).
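A minimal base R sketch of these defaults (using a made-up occurrence matrix rather than the package's own data):

```r
# Made-up occurrence data: 5 species at 2 scales (species in rows)
occData <- cbind(regional = c(1, 4, 12, 30, 57),
                 national = c(2, 9, 25, 61, 112))

# Vector input (single scale): scalar defaults
Q <- occData[, "regional"]
Qmin <- min(Q)
Qmax <- max(Q)

# Matrix or data.frame input (multiscale): one value per scale (column)
Qmin.multi <- apply(occData, 2, min)
Qmax.multi <- apply(occData, 2, max)
```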
Three weighting methods are available (more will become available later):
W: This is the method described in Leroy et al. (2013). We recommend using this method for both single and multiscale weight calculations.
exp(-(((Qi - Qmin)/(r * Qmax - Qmin)) * 0.97 + 1.05)^2)
where Qi is the occurrence of species i, Qmin and Qmax are respectively the minimum and maximum occurrences in the species pool, and r is the chosen rarity cut-off point (as a percentage of occurrence).
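This formula can be sketched in base R as follows (a standalone illustration on made-up occurrences; the rWeights function itself additionally handles normalisation, rounding and multiple scales):

```r
# Rarity weighting of Leroy et al. (2013): weights decrease with occurrence
# and approach 0 beyond the rarity cut-off r (a percentage of occurrence)
w.2013 <- function(Q, r, Qmin = min(Q), Qmax = max(Q)) {
  exp(-(((Q - Qmin) / (r * Qmax - Qmin)) * 0.97 + 1.05)^2)
}

occ <- c(1, 5, 20, 80, 200)      # made-up species occurrences
w <- w.2013(occ, r = 0.25)       # 25% rarity cut-off
# Weights decrease monotonically from the rarest to the commonest species
```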
invQ: This is the inverse of occurrence, 1/Qi, where Qi is the occurrence of the ith species. The inverse of occurrence should be avoided as a weighting procedure because it cannot be adjusted to the considered species pool, and it does not attribute weights of 0 to common species (see discussion in Leroy et al. (2012)).
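For comparison, a one-line sketch of invQ on the same kind of made-up data, showing why it is discouraged:

```r
occ <- c(1, 5, 20, 80, 200)   # made-up species occurrences
w.invQ <- 1 / occ
# Even the most common species keeps a strictly positive weight (1/200 here),
# so common species are never attributed a weight of 0
```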
oldW: This is the original method described in Leroy et al. (2012). As this method was improved in Leroy et al. (2013), we recommend using W instead.
exp(-((Qi/Qmin) * n + 1)^2)
where Qi is the occurrence of species i, Qmin is the minimum occurrence in the species pool, and n is an adjustment coefficient numerically approximated to fit the chosen rarity cut-off point.
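A sketch of the oldW formula itself, with an arbitrary illustrative value of n (the numerical approximation of n to fit the chosen cut-off, as done internally, is not reproduced here):

```r
# Original weighting of Leroy et al. (2012)
w.2012 <- function(Q, n, Qmin = min(Q)) {
  exp(-((Q / Qmin) * n + 1)^2)
}

occ <- c(1, 5, 20, 80, 200)    # made-up species occurrences
w <- w.2012(occ, n = 0.2)      # n chosen arbitrarily for illustration
```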
For both W and oldW, a rarity cutoff point is required. The rarity cutoff point can either be entered manually (a single value for a single scale, a vector of values for multiple scales), or the methods of Gaston or Leroy can be used (see references):
Gaston method: the rarity cutoff point is the first quartile of species occurrences, i.e. rare species are the 25 percent of species with the lowest occurrences.
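In base R this amounts to taking the first quartile of the occurrence vector (a sketch on made-up data; the exact quartile convention used by rWeights is not guaranteed to match quantile's default):

```r
occ <- c(1, 2, 3, 5, 8, 13, 40, 120)           # made-up species occurrences
gaston.cutoff <- unname(quantile(occ, 0.25))   # first quartile of occurrences
rare <- occ <= gaston.cutoff                   # rarity status under this cut-off
```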
Leroy method: the rarity cutoff point is the occurrence at which the average proportion of rare species in local assemblages is 25 percent. This method requires assemblages to calculate the average proportion of rare species in assemblages.
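The idea can be sketched by hand on simulated data (an illustrative reimplementation, not the package's internal code): for each candidate cut-off, flag species at or below it as rare, compute the proportion of rare species in each assemblage, and keep the cut-off whose average proportion is closest to 25 percent.

```r
set.seed(42)
occ <- rpois(50, lambda = 10) + 1          # made-up occurrences for 50 species
asm <- replicate(5, rbinom(50, 1, 0.4))    # 5 fictive presence/absence assemblages

# Average proportion of rare species across assemblages for a candidate cut-off
prop.rare <- function(cutoff) {
  rare <- occ <= cutoff                     # rarity status at this cut-off
  mean(colSums(asm * rare) / colSums(asm))  # proportion of rare species per assemblage
}

candidates <- sort(unique(occ))
props <- sapply(candidates, prop.rare)
leroy.cutoff <- candidates[which.min(abs(props - 0.25))]
```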
NAs are properly handled by the function.
A data.frame containing the results: species occurrences, rarity statuses, rarity weights and the rarity cut-offs used.
If occData is a vector (single-scale weights): a data.frame with 4 columns: Q (species occurrence), R (species rarity status), W (species rarity weights) and cut.off (rarity cut-off used for weight calculation).
If occData is a matrix or a data.frame (multiscale rarity weights): a data.frame with n columns Q (species occurrences), n columns R (species rarity statuses), one (if extended = FALSE) or n + 1 (if extended = TRUE) columns W (species rarity weights), and n columns cut.off (rarity cut-offs used for weight calculation), where n is the number of scales (number of columns of occData).
By default, weights are rounded to 3 digits, which should be sufficient in most cases. Another number of digits can be chosen, and setting the rounding argument to FALSE will remove the rounding.
Boris Leroy [email protected]
Leroy B., Petillon J., Gallon R., Canard A., & Ysnel F. (2012) Improving occurrence-based rarity metrics in conservation studies by including multiple rarity cut-off points. Insect Conservation and Diversity, 5, 159-168.
Leroy B., Canard A., & Ysnel F. (2013) Integrating multiple scales in rarity assessments of invertebrate taxa. Diversity and Distributions, 19, 794-803.
# 1. Single scale rarity weights
data(spid.occ)
head(spid.occ)

regional.occ <- spid.occ$occurMA
names(regional.occ) <- rownames(spid.occ)
head(regional.occ)

# Calculation of rarity weights at a single scale:
rWeights(regional.occ, rCutoff = "Gaston")
rWeights(regional.occ, rCutoff = 0.1)
rWeights(regional.occ, wMethods = "invQ")
rWeights(regional.occ, wMethods = c("W", "invQ"))

# Calculation of rarity weights with the method of Leroy
# Creating a fictive assemblage matrix of 5 assemblages
# Warning: this is to provide an example of how the function works!
# The correct use of this method requires a matrix of actually sampled species.
assemblages.matrix <- cbind(assemblage.1 = sample(c(0, 1), 708, replace = TRUE),
                            assemblage.2 = sample(c(0, 1), 708, replace = TRUE),
                            assemblage.3 = sample(c(0, 1), 708, replace = TRUE),
                            assemblage.4 = sample(c(0, 1), 708, replace = TRUE),
                            assemblage.5 = sample(c(0, 1), 708, replace = TRUE))
# Rownames of assemblages.matrix must correspond to rownames in occurrences
rownames(assemblages.matrix) <- names(regional.occ)
head(assemblages.matrix)

rWeights(regional.occ, wMethods = "W", rCutoff = "Leroy",
         assemblages = assemblages.matrix)

# 2. Multiscale rarity weights
data(spid.occ)
head(spid.occ)
rWeights(spid.occ, wMethods = "W", rCutoff = "Gaston")
rWeights(spid.occ, wMethods = "W", rCutoff = "Gaston", extended = TRUE)
rWeights(spid.occ, wMethods = c("W", "invQ"), rCutoff = "Gaston", extended = TRUE)
# Provided that you have created "assemblages.matrix" as above:
rWeights(spid.occ, wMethods = c("W", "invQ"), rCutoff = "Leroy",
         assemblages = assemblages.matrix, extended = TRUE)