View source: R/madlibglm.R
The wrapper function for MADlib's generalized linear regression [7], including support for multiple families and link functions. A heteroskedasticity test is implemented for linear regression. One or more columns of the data can be used to separate the data set into multiple groups according to the values of the grouping columns; the requested regression is then applied to each group, which has fixed values of the grouping columns. Multinomial logistic regression is not implemented yet. Categorical variables are supported. The computation is parallelized by MADlib if the connected database is a Greenplum/HAWQ database. The regression can also be computed on a column that contains an array as its value in the data table.
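As a minimal, not-run sketch of the call shape for the grouping feature described above (the connection parameters, table name and column names here are hypothetical):

```r
library(PivotalR)

## Hypothetical connection and table; adjust to your own database
cid <- db.connect(port = 5432, dbname = "madlib", verbose = FALSE)
dat <- db.data.frame("my_table", conn.id = cid)

## One linear fit per distinct value of the grouping column region,
## expressed with "|" in the formula
fit <- madlib.glm(y ~ x | region, data = dat)

db.disconnect(cid, verbose = FALSE)
```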
formula 
An object of class formula, the symbolic description of the model to be fitted. 
data 
An object of db.data.frame class, which points to the data table in the connected database. 
family 
A string (e.g. "logistic") or a family object (e.g. binomial(probit)) which indicates which form of regression to apply. Default value is "gaussian", which fits a linear regression. 
na.action 
A string which indicates what should happen when the data contain NA values. 
control 
A list of extra parameters to be passed to the linear or logistic regressions. For the linear regression, see madlib.lm for the available extra parameter. For logistic regression, one can pass the following extra parameters: method (the optimization method, one of "irls" [3], "cg" [4] or "igd" [5]), max.iter and tolerance. 
... 
Further arguments passed to or from other methods. Currently, no additional parameters can be passed to the linear and logistic regressions. 
See madlib.lm for more details.
For the return value of the linear regression, see madlib.lm for details.
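For instance, a hedged sketch (not run; source_data wraps the abalone table as in the Examples section) of passing extra parameters through control:

```r
## Limit the optimizer to 10 iterations for a logistic fit
fit <- madlib.glm(rings < 10 ~ length + diameter, data = source_data,
                  family = binomial, control = list(max.iter = 10))
```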
For the logistic regression, the returned value is similar to that of the linear regression. If there is no grouping (i.e. no "|" in the formula), the result is a logregr.madlib object. Otherwise, it is a logregr.madlib.grps object, which is just a list of logregr.madlib objects.
If MADlib's generalized linear regression function is used (use.glm=TRUE for family=binomial(logit)), the return value is a glm.madlib object without grouping or a glm.madlib.grps object with grouping.
A logregr.madlib or glm.madlib object is a list which contains the following items:
grouping column(s) 
When there are grouping columns in the formula, the resulting list has multiple items, each of which has the same name as one of the grouping columns. All of these items are vectors of the same length, which is equal to the number of distinct combinations of all the grouping column values. Taken together, the i-th elements of these vectors form one distinct combination of the grouping values. When there is no grouping column in the formula, no such items appear in the resulting list. 
coef 
A numeric matrix, the fitting coefficients. Each row contains the coefficients of the fit to one group of data, so the number of rows is equal to the number of distinct combinations of all the grouping column values. 
log_likelihood 
A numeric array, the log-likelihood of each group's fit. Thus the length of the array is equal to the number of distinct combinations of all the grouping column values. 
std_err 
A numeric matrix, the standard error of each coefficient. The number of rows is equal to the number of groups. 
z_stats, t_stats 
A numeric matrix, the z-statistics or t-statistics for each coefficient. Each row is for the fit to one group of the data. 
p_values 
A numeric matrix, the p-values of the coefficients. 
odds_ratios 
Only for logistic regression. The odds ratios [6] of the coefficients. 
condition_no 
Only for logistic regression. The condition number of the fit, which indicates how numerically stable the result is. 
num_iterations 
An integer array, the number of iterations used by each group's fit. 
grp.cols 
An array of strings. The column names of the grouping columns. 
has.intercept 
A logical, whether the intercept is included in the fitting. 
ind.vars 
An array of strings, all the different terms used as independent variables in the fitting. 
ind.str 
A string. The independent variables in an array format string. 
call 
A language object. The function call that generates this result. 
col.name 
An array of strings. The column names used in the fitting. 
appear 
An array of strings, the same length as the number of independent variables. The strings are used to print a clean result, which is especially useful for factor variables, whose dummy variable names can be very long due to the insertion of a random string to avoid naming conflicts; see as.factor. 
model 
A db.data.frame object, which points to the model table in the database that stores the fitting result. 
terms 
A terms object, extracted from the formula. 
nobs 
The number of observations used to fit the model. 
data 
A db.data.frame object, which wraps the data used in the regression. 
origin.data 
The original db.data.frame object passed into the function, before any processing of factors. 
Note that if grouping is used, and there are multiple logregr.madlib objects in the final result, each one of them contains the same copy of model.
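Since a grouped result is just a list of per-group model objects, the items documented above can be read off each element. A sketch (not run; assumes the source_data table from the Examples section):

```r
## One logistic fit per distinct value of sex (grouping via "|")
fit <- madlib.glm(rings < 10 ~ length + diameter | sex,
                  data = source_data, family = binomial)
length(fit)        # number of distinct sex values
fit[[1]]$coef      # coefficients of the first group's fit
fit[[1]]$grp.cols  # grouping column name(s), here "sex"
```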
See madlib.lm's note for more about the formula format.
For logistic regression, the dependent variable MUST be a logical variable with values TRUE or FALSE.
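For example (a sketch, not run; source_data as in the Examples section), the logical response can be written inline or materialized as a column first; young is a hypothetical column name:

```r
## Inline: the comparison rings < 10 yields a logical response
fit <- madlib.glm(rings < 10 ~ length, data = source_data, family = binomial)

## Or create the logical column first
dat <- source_data
dat$young <- dat$rings < 10
fit <- madlib.glm(young ~ length, data = dat, family = binomial)
```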
Author: Predictive Analytics Team at Pivotal Inc.
Maintainer: Frank McQuillan, Pivotal Inc. [email protected]
[1] Documentation of linear regression in latest MADlib, http://doc.madlib.net/latest/group__grp__linreg.html
[2] Documentation of logistic regression in latest MADlib, http://doc.madlib.net/latest/group__grp__logreg.html
[3] Wikipedia: Iteratively reweighted least squares, http://en.wikipedia.org/wiki/IRLS
[4] Wikipedia: Conjugate gradient method, http://en.wikipedia.org/wiki/Conjugate_gradient_method
[5] Wikipedia: Stochastic gradient descent, http://en.wikipedia.org/wiki/Stochastic_gradient_descent
[6] Wikipedia: Odds ratio, http://en.wikipedia.org/wiki/Odds_ratio
[7] Documentation of generalized linear regression in latest MADlib, http://doc.madlib.net/latest/group__grp__glm.html
madlib.lm, madlib.summary and madlib.arima are MADlib wrapper functions.
as.factor creates categorical variables for fitting.
delete safely deletes the result of this function.
## Not run:
## set up the database connection
## Assume that .port is port number and .dbname is the database name
cid <- db.connect(port = .port, dbname = .dbname, verbose = FALSE)
source_data <- as.db.data.frame(abalone, conn.id = cid, verbose = FALSE)
lk(source_data, 10)

## linear regression conditioned on the sex value, i.e. grouping
fit <- madlib.glm(rings ~ . - id | sex, data = source_data, heteroskedasticity = TRUE)
fit
## logistic regression
## The dependent variable must be a logical variable.
## Here it is rings < 10.
fit <- madlib.glm(rings < 10 ~ . - id - 1, data = source_data, family = binomial)
fit <- madlib.glm(rings < 10 ~ sex + length + diameter,
                  data = source_data, family = "logistic")
## 3rd example
## Fit on an array column: arr is an array built from
## the first two columns of the table
dat <- source_data
dat$arr <- db.array(source_data[, c(1, 2)])
array.data <- as.db.data.frame(dat)

## Fit rings < 10 using every element of arr
## This does not work in R's lm, but works in madlib.glm
fit <- madlib.glm(rings < 10 ~ arr, data = array.data, family = binomial)
fit <- madlib.glm(rings < 10 ~ arr - arr[1:2], data = array.data, family = binomial)
fit <- madlib.glm(rings < 10 ~ arr[1:7] + sex - id, data = array.data,
                  family = binomial)
fit <- madlib.glm(rings < 10 ~ arr - arr[8] + sex - id, data = array.data,
                  family = binomial)
## 4th example
## Stepwise feature selection
start <- madlib.glm(rings < 10 ~ . - id | sex, data = source_data, family = "binomial")
## step(start)
## Examples of using GLM models
fit <- madlib.glm(rings < 10 ~ . - id | sex, data = source_data,
                  family = binomial(probit), control = list(max.iter = 10))
fit <- madlib.glm(rings ~ . - id | sex, data = source_data, family = poisson(log),
                  control = list(max.iter = 10))
fit <- madlib.glm(rings ~ . - id, data = source_data, family = Gamma(inverse),
                  control = list(max.iter = 10))
db.disconnect(cid, verbose = FALSE)
## End(Not run)
