This function calculates generalized linear models for a set of (species) presence/absence records in a data frame, with a wide set of options for data partition, variable selection, and output form.
multGLM(data, sp.cols, var.cols, id.col = NULL, family = "binomial",
test.sample = 0, FDR = FALSE, correction = "fdr", corSelect = FALSE,
cor.thresh = 0.8, step = TRUE, trace = 0, start = "null.model",
direction = "both", select = "AIC", Y.prediction = FALSE,
P.prediction = TRUE, Favourability = TRUE, group.preds = TRUE,
trim = TRUE, ...)

data 
a data frame in wide format containing, in separate columns, the binary (presence/absence, 1-0) species data and the predictor variables (see splist2presabs for converting species lists from long to wide format). 
sp.cols 
index numbers of the columns containing the species data to be modelled. 
var.cols 
index numbers of the columns containing the predictor variables to be used for modelling. 
id.col 
(optional) index number of the column containing the row identifiers; if defined, it will be included in the output predictions data frame. 
family 
argument to be passed to the glm function; currently, only "binomial" is implemented. 
test.sample 
a subset of data to set aside for subsequent model testing. Can be a value between 0 and 1 for a proportion of the data to choose randomly (e.g. 0.2 for 20%), an integer number of cases to choose randomly among the records in data, or the character value "Huberty" to use Huberty's (1994) rule of thumb for the proportion of test data. 
FDR 
logical value indicating whether to do a preliminary exclusion of variables based on the false discovery rate (see the FDR function). The default is FALSE. 
correction 
argument to pass to the FDR function (used only if FDR = TRUE) indicating the correction procedure to apply to the p-values; the default is "fdr". 
corSelect 
logical value indicating whether to do a preliminary exclusion of highly correlated variables (see the corSelect function). The default is FALSE. 
cor.thresh 
numerical value indicating the correlation threshold to pass to corSelect (used only if corSelect = TRUE). 
step 
logical, whether to use the step function for stepwise selection of variables. The default is TRUE. 
trace 
argument to pass to the step function; if positive, information is printed while step runs. 
start 
character, whether to start with the 'null.model' (so that variable selection starts forward) or with the 'full.model' (so selection starts backward). Used only if step = TRUE. 
direction 
argument to be passed to step specifying the direction of variable selection ("both", "backward" or "forward"). Used only if step = TRUE. 
select 
character string specifying the criterion for stepwise selection of variables. Options are "AIC" (Akaike's Information Criterion; Akaike, 1973), the default; or "BIC" (Bayesian Information Criterion, also known as Schwarz criterion, SBC or SBIC; Schwarz, 1978). Used only if step = TRUE. 
Y.prediction 
logical, whether to include output predictions in the scale of the predictor variables (i.e. the linear predictor, Y). The default is FALSE. 
P.prediction 
logical, whether to include output predictions in the scale of the response variable, i.e. probability (P). The default is TRUE. 
Favourability 
logical, whether to apply the Fav function to convert probability into prevalence-independent favourability (F; Real et al. 2006). The default is TRUE. 
group.preds 
logical, whether to group together predictions of similar type (Y, P or F) in the output data frame, rather than keeping each species' predictions side by side. The default is TRUE. 
trim 
logical, whether to trim non-significant variables off the models using the modelTrim function. The default is TRUE. 

... 
additional arguments to be passed to the modelTrim function. 
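A typical call, putting several of the arguments above together, might look like the sketch below. It assumes the fuzzySim package is installed and uses its rotif.env sample data set; the column indices shown (species in columns 18:20, predictors in 5:17, identifiers in column 1) are illustrative and must match your own data.

```r
# sketch, not a definitive recipe: requires the fuzzySim package
library(fuzzySim)
data(rotif.env)  # sample data: rotifer occurrence records + environmental variables

# model three species columns from the environmental predictors,
# with FDR pre-selection of variables and trimming of the final models
mods <- multGLM(rotif.env, sp.cols = 18:20, var.cols = 5:17,
                id.col = 1, step = TRUE, FDR = TRUE, trim = TRUE)

names(mods)            # "predictions" "models"
head(mods$predictions) # P and F columns for each modelled species
```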
This function automatically calculates binomial GLMs for one or more species (or other binary variables) in a data frame. By default, it performs stepwise variable selection instead of forcing all variables into the models, starting from either the null model (the default, so selection starts forward) or from the full model (so selection starts backward), and using Akaike's information criterion (AIC) as the variable selection criterion. Instead of, or in addition to, stepwise selection, it can also perform stepwise removal of non-significant variables from the models using the modelTrim function.
There is also an optional preliminary selection of non-correlated variables, and/or of variables with a significant bivariate relationship with the response, based on the false discovery rate (FDR). Note, however, that a variable can be significant in a multivariate model even if it would not have been selected by the FDR procedure.
Favourability is also calculated by default, removing the effect of species prevalence from occurrence probability and thus allowing direct comparisons between models (Real et al. 2006).
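The core of the favourability transformation of Real et al. (2006) can be sketched in base R. The Fav function in fuzzySim is the canonical implementation; the formula below, F = (P/(1-P)) / (n1/n0 + P/(1-P)), where n1 and n0 are the numbers of presences and absences, is only a minimal illustration of it.

```r
# minimal sketch of the favourability formula of Real et al. (2006);
# fuzzySim's Fav function is the proper implementation
favourability <- function(P, n1, n0) {
  odds <- P / (1 - P)       # odds of occurrence
  odds / (n1 / n0 + odds)   # remove the effect of prevalence n1 / (n1 + n0)
}

# at P equal to the species prevalence, favourability is exactly 0.5:
n1 <- 30; n0 <- 70
favourability(n1 / (n1 + n0), n1, n0)  # 0.5
```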
By default, all data are used for model training, but you can define an optional test.sample to be reserved for model testing afterwards. You may also want to check beforehand for multicollinearity among variables, e.g. with the variance inflation factor (VIF; see multicol).
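In rough outline, reserving a proportional test sample (what test.sample = 0.2 asks multGLM to do internally) amounts to a random row split, which can be sketched in base R:

```r
# sketch: randomly reserve 20% of rows for testing, analogous to test.sample = 0.2
set.seed(42)                       # only to make this example reproducible
n <- 100                           # pretend the data frame has 100 rows
test.rows  <- sample(n, size = round(0.2 * n))
train.rows <- setdiff(seq_len(n), test.rows)
length(test.rows)                  # 20 rows set aside for model testing
```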
The multGLM function creates a list of the resulting models (each named after the corresponding species column) and a data frame with their predictions (Y, P and/or F, all of which are optional). If you plan to display these predictions in a GIS based on .dbf tables, remember that .dbf allows only up to 10 characters in column names; multGLM appends 2 characters (_Y, _P and/or _F) to each of your species column names, so use species names/codes with up to 8 characters in the data set that you are modelling. You can create (sub)species name abbreviations with the spCodes function.
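A quick base-R check of that 8-character limit can be sketched as below; the species names are hypothetical examples, and plain truncation is only a crude stand-in for spCodes, which builds abbreviations that are guaranteed unique.

```r
# sketch: flag species names longer than 8 characters and crudely truncate them
# (spCodes in fuzzySim is the proper tool, as it keeps abbreviations unique)
sp.names <- c("Brachionus_plicatilis", "Keratella_cochlearis", "Lecane_bulla")
too.long <- nchar(sp.names) > 8
short    <- ifelse(too.long, substr(sp.names, 1, 8), sp.names)
short  # "Brachion" "Keratell" "Lecane_b"
```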
This function returns a list with the following components:
predictions 
a data frame with the model predictions (if any of Y.prediction, P.prediction or Favourability is TRUE). 
models 
a list of the resulting model objects. 
A. Marcia Barbosa
Akaike, H. (1973) Information theory and an extension of the maximum likelihood principle. In: Petrov B.N. & Csaki F., 2nd International Symposium on Information Theory, Tsahkadsor, Armenia, USSR, September 2-8, 1971. Budapest: Akademiai Kiado, p. 267-281.
Fielding A.H. & Bell J.F. (1997) A review of methods for the assessment of prediction errors in conservation presence/absence models. Environmental Conservation 24: 38-49.
Huberty C.J. (1994) Applied Discriminant Analysis. Wiley, New York, 466 pp.
Schaafsma W. & van Vark G.N. (1979) Classification and discrimination problems with applications. Part IIa. Statistica Neerlandica 33: 91-126.
Real R., Barbosa A.M. & Vargas J.M. (2006) Obtaining environmental favourability functions from logistic regression. Environmental and Ecological Statistics 13: 237-245.
Schwarz, G.E. (1978) Estimating the dimension of a model. Annals of Statistics 6 (2): 461-464.
glm, Fav, step, modelTrim, multicol, corSelect