| savvySh | R Documentation |
This function estimates coefficients in a linear regression model using several shrinkage methods, including Multiplicative Shrinkage, Slab Regression, Linear Shrinkage, and Shrinkage Ridge Regression. Each method yields estimators that balance bias and variance by applying shrinkage to the ordinary least squares (OLS) solution. The shrinkage estimators are computed under different assumptions about the data.
savvySh(x, y, model_class = c("Multiplicative", "Slab", "Linear", "ShrinkageRR"),
v = 1, lambda_vals = NULL, nlambda = 100, folds = 10,
foldid = FALSE, include_Sh = FALSE, exclude = NULL)
x |
A matrix of predictor variables. |
y |
A vector of response values. |
model_class |
A character string specifying the shrinkage model to use. One of "Multiplicative", "Slab", "Linear", or "ShrinkageRR". |
v |
A numeric value controlling the strength of shrinkage for the Slab Regression estimators. The default is 1. |
lambda_vals |
A vector of candidate lambda values for Shrinkage Ridge Regression. If NULL (the default), a sequence of nlambda values is generated automatically. |
nlambda |
The number of lambda values to generate when lambda_vals is NULL. The default is 100. |
folds |
Number of folds for cross-validation in Shrinkage Ridge Regression. The default is 10. |
foldid |
Logical. If TRUE, fold assignments for cross-validation are fixed for reproducibility. The default is FALSE. |
include_Sh |
Logical. If TRUE, the Sh estimator is included in the Multiplicative Shrinkage results. The default is FALSE. |
exclude |
A vector specifying columns to exclude from the predictors. The default is NULL (no columns excluded). |
The Slab and Shrinkage Linear Regression Estimation methodology provides four classes of shrinkage estimators
that reduce variance in the OLS solution by introducing a small, structured bias. These methods handle overfitting,
collinearity, and high-dimensional scenarios by controlling how and where the coefficients are shrunk. Each class offers a distinct strategy
for controlling instability and improving mean squared error (MSE) in linear models, tailored for different modeling contexts specified
in the model_class argument. Note that if the user provides more than one option in model_class, only the first option is used,
and a warning is issued.
Model Classes:
Multiplicative Shrinkage: This class includes three estimators that use the OLS coefficients as a starting point and apply
multiplicative adjustments:
St - Stein estimator, which shrinks all coefficients toward zero by a single global factor. This aims to reduce MSE while keeping the overall bias fairly uniform across coefficients.
DSh - Diagonal Shrinkage, assigning an individual factor to each coefficient based on its variance. This yields more targeted shrinkage than the global approach and often achieves a lower MSE.
Sh - Shrinkage estimator that solves a Sylvester equation for a full (non-diagonal) shrinkage matrix.
It is more flexible but also more computationally demanding. Included only if include_Sh = TRUE.
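The global factor used by the Stein-type estimator can be illustrated with a short sketch. This is not the package's internal code; it assumes the classical James-Stein form of the factor, with the data, the estimate of the noise variance, and the truncation at zero all chosen for illustration.

```r
# Illustrative sketch of a global (Stein-type) multiplicative factor,
# assuming the classical James-Stein form; not the package's internal code.
set.seed(1)
X <- matrix(rnorm(100 * 5), 100, 5)
y <- drop(X %*% rep(1, 5)) + rnorm(100)

fit <- lm.fit(X, y)
beta_ols <- fit$coefficients
sigma2 <- sum(fit$residuals^2) / (nrow(X) - ncol(X))  # residual variance

# One scalar factor shrinks all OLS coefficients toward zero
c_global <- max(0, 1 - (ncol(X) - 2) * sigma2 / sum(beta_ols^2))
beta_st <- c_global * beta_ols
```

The Diagonal Shrinkage (DSh) estimator replaces the single scalar `c_global` with a per-coefficient factor, which is why it can achieve lower MSE than the uniform Stein adjustment.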
Slab Regression applies an adaptive quadratic penalty term to the OLS objective:
SR - Simple Slab Regression, which modifies the OLS objective by
adding a penalty in a fixed direction (often the constant vector). This penalty is controlled by v
and does not require cross-validation. It can be viewed as a special case of the generalized lasso
but focuses on smooth (quadratic) rather than L1 regularization.
GSR - Generalized Slab Regression, extending SR by allowing shrinkage along multiple directions. Typically, these directions correspond to the eigenvectors of the design covariance matrix, effectively shrinking principal components.
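A minimal sketch of the simple slab penalty, assuming it takes the quadratic form min ||y - Xb||^2 + v (d'b)^2 with d a fixed direction (the constant vector, as in the text). Under that assumption the solution has the closed form b = (X'X + v dd')^{-1} X'y; the exact penalty used by the package may differ in scaling.

```r
# Illustrative sketch (assumed form): SR adds a quadratic penalty v * (d'b)^2
# in a fixed direction d, giving the closed-form solution below.
set.seed(1)
X <- matrix(rnorm(100 * 5), 100, 5)
y <- rnorm(100)
v <- 2
d <- rep(1, ncol(X))  # constant direction, as described in the text

# b = (X'X + v * d d')^{-1} X'y
beta_sr <- solve(crossprod(X) + v * tcrossprod(d), crossprod(X, y))
```

GSR replaces the single direction `d` with several directions (typically eigenvectors of the design covariance), so each principal component can be shrunk by its own amount.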
The Linear Shrinkage (LSh) estimator forms a convex combination of the OLS estimator (through the origin) and a target estimator that assumes uncorrelated predictors (diagonal approximation of the covariance). This approach is simpler than a full matrix method and is well-suited for standardized data where the intercept is not needed.
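The convex combination behind LSh can be sketched as follows. The weight `alpha` here is a fixed illustrative value; the package estimates a data-driven weight, and the diagonal target below simply assumes uncorrelated predictors as described above.

```r
# Illustrative sketch of Linear Shrinkage: a convex combination of the OLS
# estimator (through the origin) and a diagonal-covariance target.
# alpha is fixed here for illustration; the package chooses it from the data.
set.seed(1)
X <- scale(matrix(rnorm(100 * 5), 100, 5), center = TRUE, scale = FALSE)
y <- rnorm(100)

beta_ols  <- solve(crossprod(X), crossprod(X, y))
beta_diag <- crossprod(X, y) / colSums(X^2)  # target: uncorrelated predictors

alpha <- 0.7
beta_lsh <- alpha * beta_ols + (1 - alpha) * beta_diag
```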
The Shrinkage Ridge Regression (SRR) estimator extends standard ridge regression by shrinking the design covariance
matrix toward a spherical target (i.e., a diagonal matrix with equal entries). This additional regularization
stabilizes the eigenvalues and yields more robust coefficient estimates, particularly when the predictors lie
close to a low-dimensional subspace.
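The covariance-shrinkage step can be sketched as below. The mixing weight `rho` and the penalty `lambda` are fixed illustrative values (the package selects lambda by cross-validation over `lambda_vals`), and the spherical target is taken to be the mean eigenvalue times the identity.

```r
# Illustrative sketch of SRR: shrink the design covariance toward a spherical
# target before solving the ridge system. rho and lambda are fixed here;
# the package selects lambda by cross-validation.
set.seed(1)
X <- matrix(rnorm(100 * 5), 100, 5)
y <- rnorm(100)

S <- crossprod(X) / nrow(X)                      # design covariance
rho <- 0.3
lambda <- 0.1
target <- (sum(diag(S)) / ncol(S)) * diag(ncol(S))  # mean eigenvalue * I
S_shrunk <- (1 - rho) * S + rho * target

beta_srr <- solve(S_shrunk + lambda * diag(ncol(S)), crossprod(X, y) / nrow(X))
```

Blending `S` with the spherical target pulls extreme eigenvalues toward their mean, which stabilizes the inversion when the predictors are nearly collinear.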
A list containing the following elements:
call |
The matched function call. |
model |
The data frame of the response and predictors used for fitting. |
optimal_lambda |
If model_class = "ShrinkageRR", the optimal lambda value selected by cross-validation. |
model_class |
The selected model class. |
coefficients |
A list of estimated coefficients for each applicable estimator in the selected model class. |
fitted_values |
A list of fitted values for each estimator. |
pred_MSE |
A list of prediction MSEs for each estimator. |
ridge_results (optional) |
A list containing detailed results from the Shrinkage Ridge Regression fit, returned only when model_class = "ShrinkageRR". |
Ziwei Chen, Vali Asimit, Marina Anca Cidota, Jennifer Asimit
Maintainer: Ziwei Chen <ziwei.chen.3@citystgeorges.ac.uk>
Asimit, V., Cidota, M. A., Chen, Z., & Asimit, J. (2025). Slab and Shrinkage Linear Regression Estimation. Retrieved from https://openaccess.city.ac.uk/id/eprint/35005/
# 1. Simple Multiplicative Shrinkage example
set.seed(123)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- rnorm(100)
fit_mult <- savvySh(x, y, model_class = "Multiplicative", include_Sh = TRUE)
print(fit_mult)
# 2. Slab Regression example
fit_slab <- savvySh(x, y, model_class = "Slab", v = 2)
coef(fit_slab, estimator = "GSR")
# 3. Linear Shrinkage (standardized data recommended)
x_centered <- scale(x, center = TRUE, scale = FALSE)
y_centered <- scale(y, center = TRUE, scale = FALSE)
fit_linear <- savvySh(x_centered, y_centered, model_class = "Linear")
# 4. Shrinkage Ridge Regression
fit_srr <- savvySh(x, y, model_class = "ShrinkageRR")
predict(fit_srr, newx = matrix(rnorm(10 * 5), 10, 5), type = "response")