Regression in uwIntroStats

In the uwIntroStats package, we have set out to make regression and analysis easier by:

- combining the common types of regression into one function, so that the user only needs to change which summary measure (functional) is being modeled;
- allowing the user to specify multiple-partial F-tests directly in the regression formula;
- making it straightforward to account for correlated data.

This function is regress(). The basic arguments to this function, which unlock all of its potential, are the functional (the summary measure to be modeled), the formula, the data, and - for correlated data - the id identifying clusters of observations.

We use the concept of a functional to handle our first goal. A functional takes a function as its argument and returns a number - hence the mean is a functional, because it takes a distribution as its argument and returns a single number. The allowed functionals to regress() are

|Functional | Type of Regression | Previous command (package)|
|---|:---|:---|
|"mean" | Linear Regression | lm() (stats - base R) |
|"geometric mean" | Linear Regression on logarithmically transformed Y | lm(), with Y log transformed (stats - base R) |
|"odds" | Logistic Regression | glm(family = binomial) (stats - base R) |
|"rate" | Poisson Regression | glm(family = poisson) (stats - base R) |
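As a quick illustration of the correspondence in this table, the two calls below (written with hypothetical objects y, x, and mydata, which are not part of any package dataset) fit the same Poisson model; the coefficient estimates agree, although regress() reports robust standard errors by default:

## hypothetical count outcome `y`, predictor `x`, and data frame `mydata`
regress("rate", y ~ x, data = mydata)          ## Poisson regression via regress()
glm(y ~ x, family = poisson, data = mydata)    ## the corresponding base R call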

The formula to regress() is the same as a formula given to lm() or any of the other regression commands from base R, survival, or geepack, but with one small addition. To address our second goal of allowing the user to specify multiple-partial F-tests, we have added a special function - U() - which can be added to the formula. The U() function is documented more fully in "User Specified Multiple-Partial F-tests in Regression".

The data argument is exactly the same as that in lm() or any of the other regression commands.

Last, id allows the user to fit a generalized estimating equations (GEE) model while using the same syntax as any of the functionals to regress(). The GEE framework is a useful way to model correlated data, which often comes in the form of repeated measurements.

Linear Regression

As a first example, we run a linear regression of atrophy on age and male, using the mri data. This dataset is included in the uwIntroStats package, and its documentation can be found on Scott Emerson's website (emersonstatistics.com).

## Preparing our R session
library(uwIntroStats)
data(mri)
regress("mean", atrophy ~ age + male, data = mri)

This call automatically prints the coefficients table. First, notice that by default robust standard error estimates (calculated using the sandwich package) are returned, in addition to the naive estimates. The robust estimates are also used to perform the inference - thus the confidence intervals, statistics, and p-values use these estimates of the standard error.

If we did not use robust standard error estimates, then in the case of linear regression we would be assuming that all groups have the same variance. Any inference we made could then reflect differences in the variances rather than differences in the means - and the means are usually what we care about in linear regression.
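For readers who want to see where the robust estimates come from, here is a minimal sketch using the sandwich package directly; the specific estimator type that regress() uses internally is an assumption here (we use "HC0", the classical Huber-White form), so the numbers may differ slightly from the regress() output:

## reproduce Huber-White robust standard errors for the same model by hand
library(sandwich)
fit <- lm(atrophy ~ age + male, data = mri)
robust.se <- sqrt(diag(vcovHC(fit, type = "HC0")))   ## assumed estimator type
cbind(estimate = coef(fit), robust.se = robust.se)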

F-statistics are also displayed by default. This allows us to display multiple-partial F-tests within the coefficients table, and is more in line with the teaching philosophy at the University of Washington.

All of the usual inferential statements apply to our output. Thus we would say:

We analyzed the association between cerebral atrophy, age, and sex by running a linear regression of cerebral atrophy (modeled as a continuous variable) on age (also modeled as a continuous variable) and sex (modeled as a binary variable). We allowed for unequal variances across groups by computing robust standard error estimates using the Huber-White procedure. We calculated 95% confidence intervals and p-values using these same robust standard error estimates.

Based on this linear regression analysis, we estimate that for each one-year increase in age, the mean cerebral atrophy score increases by 0.682 units. Based on a 95% confidence interval, the data suggest that this estimate would not be unreasonable if the true coefficient for age were anywhere in the interval from 0.509 to 0.854. We also estimate that males have an average cerebral atrophy score 5.96 units higher than females. Based on a 95% confidence interval, the data suggest that this estimate would not be unreasonable if the true coefficient for sex were anywhere in the interval from 4.227 to 7.700.

However, in this case we did not adjust for multiple comparisons in calculating the individual p-values. If we wanted to make inferential claims using these, we would have to use a correction. We could also use a multiple-partial F-test to test both coefficients simultaneously.
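For example, using the U() function mentioned above (and described in more detail below), a joint test of the age and sex coefficients could be requested as follows; the group name "joint" is just an illustrative label, and in this small model the test coincides with the overall F-test:

## jointly test the age and male coefficients with a multiple-partial F-test
regress("mean", atrophy ~ U(joint = ~ age + male), data = mri)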

Regression on the Geometric Mean

In normal linear regression, we are comparing the mean of the response variable across groups defined by the predictors. However, if we were to log transform the response, we would be comparing the geometric mean across groups. In regress(), we simply have to use the "geometric mean" functional.
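For example, re-using the variables from the linear regression above (and noting that a geometric mean only makes sense for a strictly positive response), the call would be:

## compare geometric mean atrophy across groups defined by age and sex
regress("geometric mean", atrophy ~ age + male, data = mri)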

Generalized Linear Regression

Using the same function and the same syntax, we can also run generalized linear regression. For example, if we wanted to compare the odds of having diabetes between males and females, we would run a logistic regression.

regress("odds", diabetes ~ male, data = mri)

In all of the generalized linear regression output (and in the output from regression on the geometric mean), by default we see two tables. The Raw Model table displays coefficients and standard errors on the scale of the transformed response, before any back-transformation. Recall that in most generalized linear regression settings we need to back-transform our results to put them on the original scale; this is a consequence of using a link function to model the regression. If you want to suppress printing of this table, set suppress = TRUE.
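For instance, to print only the back-transformed results for the logistic regression above:

## suppress the Raw Model table, keeping only the transformed results
regress("odds", diabetes ~ male, data = mri, suppress = TRUE)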

The Transformed Model table does the back-transform for you. It exponentiates all of the coefficients and the confidence intervals so that they are in the original units.
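In other words, the Transformed Model entries are just the exponentiated Raw Model entries. As a rough check, the sketch below refits the same logistic regression with glm() and exponentiates by hand; it uses Wald intervals, so the robust intervals reported by regress() will differ slightly:

## back-transform logistic regression coefficients into odds ratios by hand
fit <- glm(diabetes ~ male, family = binomial, data = mri)
exp(cbind(OR = coef(fit), confint.default(fit)))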

Proportional hazards regression is no longer supported.

Accounting for Correlated Data

A common way to account for correlated data - for example longitudinal data, where we have multiple measurements on the same subjects over time - is to use the Generalized Estimating Equations (GEE) framework. We chose to use this framework in our regress() function because it gives flexibility and uses common conventions. To make use of this functionality, as we mentioned above, you simply need to specify the id argument in regress().

For an example, we turn to the salary data, again hosted on Scott Emerson's website. More information can be found in the documentation, but these data essentially deal with university-level faculty. In 1995, a survey asked the faculty to recall their salaries from past years. Each faculty member has records dating back either to their starting year at the university or to 1976, whichever is more recent. Thus we have repeated measurements on these individuals, and the variable of interest (salary) is highly correlated across measurements.

One interesting question to ask of these data involves discrimination based on sex. We will not model these data in the most sophisticated way (we leave that to the enterprising reader), but we can regress salary on sex and year, and include their interaction.

salary <- read.table("http://www.emersonstatistics.com/datasets/salary.txt", header = TRUE, stringsAsFactors = FALSE)

## create an indicator variable
salary$female <- ifelse(salary$sex == "F", 1, 0)

Now that the data are read in, we can run the regression. First we run an ordinary linear regression, which does not properly account for the correlated data, and then we run the GEE-based regression.

## linear regression
regress("mean", salary ~ female*year, data = salary)

From this call to regress(), we would think that we had 19792 independent observations, which is quite a lot of faculty to have at a university. If we compare to the results from the GEE-based regression, we will see how far off we are.

## GEE
regress("mean", salary ~ female*year, id = id, data = salary)

Now regress() tells us that we only have 1597 faculty members, which is much more reasonable. Also, it tells us that the "maximum cluster size" - i.e. the largest number of observations on any one id - is 20. This makes sense due to the sampling scheme, where some faculty have been at the university for 20 years by the time the survey was taken.

While our estimates of the coefficients are the same between the two calls to regress(), our standard error estimates and inference are drastically different. When we overestimate how many independent observations we have (as in the naive linear regression), our standard errors are much smaller than they should be, and thus we get much larger F-statistics and smaller p-values. While in this case our conclusions would be the same, since the p-values are of the same order of magnitude in both cases, failing to account for the correlation in these data is still a mistake.
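For comparison, a GEE could also be fit with geepack directly, as sketched below; whether this exactly matches regress() depends on the working correlation structure regress() uses internally, which we assume here to be independence:

## fit a comparable GEE with geepack (assumed independence working correlation)
library(geepack)
salary.sorted <- salary[order(salary$id), ]   ## geeglm expects clusters to be contiguous
geeglm(salary ~ female * year, id = id, data = salary.sorted, corstr = "independence")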

Re-parameterizations of a Variable

There are three special functions in uwIntroStats which allow us to re-parameterize variables: dummy(), polynomial(), and lspline().

Each of these three functions is used to great effect in regress(). Also, each will give a multiple-partial F-test of the entire variable. This allows you to determine if the variable should be included in the model, rather than having only the coefficient estimates.

For example, we can model race as dummy variables to examine the differences in the odds of having diabetes between races. This makes comparisons easier, because modeling race as dummy variables essentially creates indicator variables relative to a reference group.

regress("odds", diabetes ~ dummy(race), data = mri)

First, notice that below the table regress() tells us how the dummy variables were calculated. In this case the reference was 1, corresponding to white in this data set. Next we see the multiple-partial F-test, which is on its own line for dummy(race). The coefficient estimates are nested beneath this line to indicate that these coefficients all come from the same variable (race) but we have modeled them as three variables.

User-specified Multiple-partial F-tests

As we mentioned above, the formula in regress() allows you to specify multiple-partial F-tests. This comes in handy if you want to test a subset of variables all at once in your regression.

For example, say that we are interested in the relationship between atrophy, age, sex, and race. However, we also want to include the sex-age interaction and the sex-race interaction, and we want to model race as dummy variables. Last, we want to determine whether all of the variables involved in the sex-age interaction (male, age, and the interaction) should be in the model, and similarly for the sex-race interaction. We use the U() function to accomplish this goal.

regress("mean", atrophy ~ U(ma = ~male*age) + U(mr = ~male*dummy(race)), data = mri)

We have also made use of functionality in U() which allows us to name the groups for the multiple-partial F-tests. Thus we can see that the F-statistic for simultaneously testing whether male, age, and the interaction male:age are equal to zero is 34.35, with the correct degrees of freedom for the test (3) and a small p-value. We would conclude (without adjusting for multiple comparisons) that (some of) these variables should be in the model. On the other hand, we see that the F-statistic for simultaneously testing whether male, race, and the interaction are equal to zero is 1.04. We would conclude (again without adjusting for multiple comparisons) that we cannot reject the null hypothesis that these are all equal to zero.

We also get individual coefficient estimates for each, where we have again nested the estimates within their corresponding groups defined by calls to U(). Note that we have repeated the line for male since it appears in both groups. Other than that, the coefficients table is the same as it would be if we had run the formula atrophy ~ male*age + male*dummy(race).
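For reference, that equivalent un-grouped call would be:

regress("mean", atrophy ~ male*age + male*dummy(race), data = mri)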


