alm

Function that estimates a model based on the selected distribution.
alm(formula, data, subset, na.action, distribution = c("dnorm", "dlaplace",
"ds", "dgnorm", "dlogis", "dt", "dalaplace", "dlnorm", "dllaplace", "dls",
"dlgnorm", "dbcnorm", "dinvgauss", "dgamma", "dexp", "dfnorm", "drectnorm",
"dpois", "dnbinom", "dbinom", "dgeom", "dbeta", "dlogitnorm", "plogis",
"pnorm"), loss = c("likelihood", "MSE", "MAE", "HAM", "LASSO", "RIDGE",
"ROLE"), occurrence = c("none", "plogis", "pnorm"), scale = NULL,
orders = c(0, 0, 0), parameters = NULL, fast = FALSE, ...)
formula
an object of class "formula" (or one that can be coerced to that class): a symbolic description of the model to be fitted. Can also include trend, which would add the global trend (see the examples below).
data
a data frame or a matrix, containing the variables in the model.
subset
an optional vector specifying a subset of observations to be used in the fitting process.
na.action
a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The factory-fresh default is na.omit. Another possible value is NULL, meaning no action. Value na.exclude can be useful.
distribution
what density function to use in the process. The full name of the distribution should be provided here. Values with "d" in the beginning of the name refer to the density function, while "p" stands for "probability" (cumulative distribution function). The names align with the names of distribution functions in R. For example, see dnorm.
loss
the type of loss function used in the optimisation. In the case of LASSO / RIDGE, the variables are not normalised prior to the estimation, but the parameters are divided by the standard deviations of the explanatory variables inside the optimisation. As a result, the parameters of the final model have the same interpretation as in the case of classical linear regression. Note that in these cases the user is expected to provide the penalty parameter (lambda). A user can also provide their own loss function here, making sure that it accepts the parameters actual, fitted and B (see the custom loss example in the Examples section). See vignette("alm","greybox") for further details.
occurrence
what distribution to use for the occurrence variable. Can be "none", "plogis" (logistic regression for the occurrence part) or "pnorm" (probit model for the occurrence part). If this is not "none", then the model is estimated in two steps: the occurrence part first, and then the sizes part on the non-zero observations (see the mixture model example in the Examples section).
scale
formula for the scale parameter of the model. If NULL, then the scale is assumed to be constant.
orders
the orders of ARIMA to include in the model. Only non-seasonal orders are accepted.
parameters
vector of parameters of the linear model. When NULL, the parameters are estimated.
fast
if TRUE, then some of the checks of the data and of the model specification are skipped in order to speed up the estimation.
...
additional parameters to pass to the distribution functions (e.g. alpha for the Asymmetric Laplace distribution) and to the optimiser (e.g. maxeval, xtol_rel), as sketched below. You can read more about the optimiser parameters by running the function nloptr.print.options.
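A minimal sketch of passing optimiser settings via ...: the call below caps the number of evaluations and tightens the relative tolerance. maxeval and xtol_rel are standard nloptr options; whether alm forwards them unchanged to its optimiser is assumed here, so treat this as an illustration rather than the definitive interface.

# Pass optimiser options through '...' (assumed pass-through to nloptr)
ourModel <- alm(mpg~., mtcars, distribution="dnorm", maxeval=1000, xtol_rel=1e-8)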
This is a function similar to lm, but relying on likelihood estimation and supporting several non-normal distributions. These include:
dnorm - Normal distribution,
dlaplace - Laplace distribution,
ds - S-distribution,
dgnorm - Generalised Normal distribution,
dlogis - Logistic Distribution,
dt - T-distribution,
dalaplace - Asymmetric Laplace distribution,
dlnorm - Log-Normal distribution,
dllaplace - Log-Laplace distribution,
dls - Log-S distribution,
dlgnorm - Log-Generalised Normal distribution,
dfnorm - Folded normal distribution,
drectnorm - Rectified normal distribution,
dbcnorm - Box-Cox normal distribution,
dinvgauss - Inverse Gaussian distribution,
dgamma - Gamma distribution,
dexp - Exponential distribution,
dlogitnorm - Logit-normal distribution,
dbeta - Beta distribution,
dpois - Poisson Distribution,
dnbinom - Negative Binomial Distribution,
dbinom - Binomial Distribution,
dgeom - Geometric Distribution,
plogis - Cumulative Logistic Distribution,
pnorm - Cumulative Normal distribution.
This function can be considered an analogue of glm, but with the focus on time series. This is why, for example, the function has the orders parameter for ARIMA and produces time series analysis plots with plot(alm(...)).
This function is slower than lm, because it relies on likelihood estimation of parameters, Hessian calculation and matrix multiplication. So think twice when using distribution="dnorm" here.
The estimation is done via the maximisation of the likelihood of the selected distribution, so the number of estimated parameters always includes the scale. Thus the number of degrees of freedom of the model in the case of alm will typically be lower than in the case of lm.
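As a small illustration of this difference, the snippet below compares the residual degrees of freedom of alm and lm on the same arbitrary formula (alm counts the scale among the estimated parameters, so its df.residual is typically smaller):

almModel <- alm(mpg~cyl+disp, mtcars, distribution="dnorm")
lmModel <- lm(mpg~cyl+disp, mtcars)
# alm: n - (coefficients + scale); lm: n - coefficients
c(alm=almModel$df.residual, lm=lmModel$df.residual)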
See more details and examples in the vignette for "ALM": vignette("alm","greybox")
The function returns the final model of class "alm", which contains:
coefficients - estimated parameters of the model,
FI - Fisher Information of parameters of the model. Returned only when FI=TRUE,
fitted - fitted values,
residuals - residuals of the model,
mu - the estimated location parameter of the distribution,
scale - the estimated scale parameter of the distribution. If a formula was provided for scale, then an object of class "scale" will be returned.
distribution - distribution used in the estimation,
logLik - log-likelihood of the model. Only returned when loss="likelihood" or loss="ROLE", and in several special cases of distribution and loss combinations (e.g. loss="MSE", distribution="dnorm"),
loss - the type of the loss function used in the estimation,
lossFunction - the loss function, if the custom is provided by the user,
lossValue - the value of the loss function,
res - the output of the optimisation (nloptr function),
df.residual - number of degrees of freedom of the residuals of the model,
df - number of degrees of freedom of the model,
call - how the model was called,
rank - rank of the model,
data - data used for the model construction,
terms - terms of the data. Needed for some additional methods to work,
occurrence - the occurrence model used in the estimation,
B - the value of the optimised parameters. Typically, this is a duplicate of coefficients,
other - the list of all the other parameters either passed to the function or estimated in the process, but not included in the standard output (e.g. alpha for the Asymmetric Laplace distribution),
timeElapsed - the time elapsed for the estimation of the model.
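A brief sketch of inspecting the returned object; the elements accessed below are the ones listed above (the exact values will depend on the data and distribution used):

ourModel <- alm(mpg~., mtcars, distribution="dnorm")
ourModel$coefficients   # estimated parameters of the model
ourModel$scale          # estimated scale of the distribution
ourModel$df.residual    # residual degrees of freedom
ourModel$logLik         # log-likelihood (available for loss="likelihood")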
Ivan Svetunkov, ivan@svetunkov.com
stepwise, lmCombine,
xregTransformer
### An example with mtcars data and factors
mtcars2 <- within(mtcars, {
vs <- factor(vs, labels = c("V", "S"))
am <- factor(am, labels = c("automatic", "manual"))
cyl <- factor(cyl)
gear <- factor(gear)
carb <- factor(carb)
})
# The standard model with Log-Normal distribution
ourModel <- alm(mpg~., mtcars2[1:30,], distribution="dlnorm")
summary(ourModel)
plot(ourModel)
# Produce table based on the output for LaTeX
xtable(summary(ourModel))
# Produce predictions with the one sided interval (upper bound)
predict(ourModel, mtcars2[-c(1:30),], interval="p", side="u")
# Model with heteroscedasticity (scale changes with the change of qsec)
ourModel <- alm(mpg~., mtcars2[1:30,], scale=~qsec)
### Artificial data for the other examples
xreg <- cbind(rlaplace(100,10,3),rnorm(100,50,5))
xreg <- cbind(100+0.5*xreg[,1]-0.75*xreg[,2]+rlaplace(100,0,3),xreg,rnorm(100,300,10))
colnames(xreg) <- c("y","x1","x2","Noise")
# An example with Laplace distribution
ourModel <- alm(y~x1+x2+trend, xreg, subset=c(1:80), distribution="dlaplace")
summary(ourModel)
plot(predict(ourModel,xreg[-c(1:80),]))
# And another one with Asymmetric Laplace distribution (quantile regression)
# with optimised alpha
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="dalaplace")
# An example with AR(1) order
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="dnorm", orders=c(1,0,0))
summary(ourModel)
plot(predict(ourModel,xreg[-c(1:80),]))
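### A sketch of a user-defined loss function and of LASSO
# The documentation above states that a custom loss should accept the parameters
# actual, fitted and B; the behaviour may differ between package versions, so treat
# this as an illustration. The 'lambda' parameter for LASSO is an assumption here.
lossMAE <- function(actual, fitted, B){
    mean(abs(actual - fitted))
}
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="dnorm", loss=lossMAE)
summary(ourModel)
# LASSO with an assumed penalty parameter passed via '...'
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="dnorm",
                loss="LASSO", lambda=0.9)
summary(ourModel)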
### Examples with the count data
xreg[,1] <- round(exp(xreg[,1]-70),0)
# Negative Binomial distribution
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="dnbinom")
summary(ourModel)
predict(ourModel,xreg[-c(1:80),],interval="p",side="u")
# Poisson distribution
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="dpois")
summary(ourModel)
predict(ourModel,xreg[-c(1:80),],interval="p",side="u")
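### A sketch of a mixture model with the occurrence part
# occurrence="plogis" handles the zeroes in the count data generated above;
# this assumes that the simulated series indeed contains zeroes, otherwise
# the occurrence part is redundant.
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="dlnorm",
                occurrence="plogis")
summary(ourModel)
predict(ourModel,xreg[-c(1:80),],interval="p",side="u")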
### Examples with binary response variable
xreg[,1] <- round(xreg[,1] / (1 + xreg[,1]),0)
# Logistic distribution (logit regression)
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="plogis")
summary(ourModel)
plot(predict(ourModel,xreg[-c(1:80),],interval="c"))
# Normal distribution (probit regression)
ourModel <- alm(y~x1+x2, xreg, subset=c(1:80), distribution="pnorm")
summary(ourModel)
plot(predict(ourModel,xreg[-c(1:80),],interval="p"))