View source: R/ridgeGLMandCo.R
ridgeGLMmultiT | R Documentation
Description

Function that evaluates the multi-targeted ridge estimator of the regression parameter of generalized linear models.

Usage
ridgeGLMmultiT(Y, X, U=matrix(ncol=0, nrow=length(Y)),
lambdas, targetMat, model="linear",
minSuccDiff=10^(-10), maxIter=100)
Arguments

Y: The numeric response vector.
X: The design matrix of the penalized covariates.
U: The design matrix of the unpenalized covariates; defaults to a matrix with zero columns.
lambdas: An all-positive numeric vector of penalty parameters, one per shrinkage target.
targetMat: A matrix with the shrinkage targets of the regression parameter as its columns.
model: A character indicating the model type, either "linear" or "logistic".
minSuccDiff: A positive numeric, the minimum successive difference required for convergence of the iterative estimation.
maxIter: A positive integer, the maximum number of iterations.
Details

This function finds the maximizer of the following penalized loglikelihood: \mathcal{L}(\mathbf{Y}, \mathbf{X}; \boldsymbol{\beta}) - \frac{1}{2} \sum_{k=1}^K \lambda_k \| \boldsymbol{\beta} - \boldsymbol{\beta}_{k,0} \|_2^2, with loglikelihood \mathcal{L}(\mathbf{Y}, \mathbf{X}; \boldsymbol{\beta}), response \mathbf{Y}, design matrix \mathbf{X}, regression parameter \boldsymbol{\beta}, penalty parameters \lambda_k, and the k-th shrinkage target \boldsymbol{\beta}_{k,0}. For more details, see van Wieringen and Binder (2020).
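As a side note, the multi-target penalty above is algebraically equivalent to a single-target ridge penalty with overall penalty parameter \sum_k \lambda_k and the \lambda-weighted average of the targets, up to an additive constant not involving \boldsymbol{\beta}. A minimal R sketch verifying this identity numerically (all object names below are illustrative, not part of the package):

```r
set.seed(1)
K <- 2; p <- 5
lambdas <- c(1, 3)
targets <- matrix(rnorm(p * K), ncol=K)   # one shrinkage target per column
beta    <- rnorm(p)                       # an arbitrary evaluation point

# multi-target penalty: (1/2) * sum_k lambda_k * ||beta - beta_k0||^2
penMulti <- 0.5 * sum(sapply(1:K, function(k) {
  lambdas[k] * sum((beta - targets[, k])^2)
}))

# single-target equivalent: lambda = sum_k lambda_k,
# target = lambda-weighted average of the individual targets
lamSum <- sum(lambdas)
tBar   <- as.vector(targets %*% lambdas) / lamSum
penSingle <- 0.5 * lamSum * sum((beta - tBar)^2)

# beta-independent constant separating the two penalties
const <- 0.5 * (sum(lambdas * colSums(targets^2)) - lamSum * sum(tBar^2))

all.equal(penMulti, penSingle + const)
```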
Value

The ridge estimate of the regression parameter.
Author(s)

W.N. van Wieringen.
References

van Wieringen, W.N., Binder, H. (2020), "Online learning of regression models from a sequence of datasets by penalized estimation", submitted.
Examples

# set the sample size
n <- 50
# set the true parameter
betas <- (c(0:100) - 50) / 20
# generate covariate data
X <- matrix(rnorm(length(betas)*n), nrow=n)
# sample the response
linPred <- tcrossprod(betas, X)[1, ]
probs   <- exp(linPred) / (1 + exp(linPred))
Y       <- rbinom(n, 1, probs)
# set the penalty parameters, one per target
lambdas <- c(1,3)
# estimate the logistic regression parameter
# bHat <- ridgeGLMmultiT(Y, X, lambdas=lambdas, model="logistic",
#                        targetMat=cbind(betas/2, rnorm(length(betas))))
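For the linear model, the maximizer of the penalized loglikelihood has a closed form: (X^T X + \sum_k \lambda_k I)^{-1} (X^T Y + \sum_k \lambda_k \boldsymbol{\beta}_{k,0}). A minimal sketch of this estimator, assuming the usual least-squares scaling of the loglikelihood and no unpenalized covariates (mtRidgeLM is a hypothetical helper for illustration, not part of the package):

```r
# closed-form multi-target ridge estimator for the linear model:
# (X'X + sum(lambdas) * I)^{-1} (X'Y + targetMat %*% lambdas)
mtRidgeLM <- function(Y, X, lambdas, targetMat) {
  lamSum <- sum(lambdas)
  solve(crossprod(X) + lamSum * diag(ncol(X)),
        crossprod(X, Y) + targetMat %*% lambdas)
}

# usage on simulated linear-model data
set.seed(2)
n <- 50; p <- 10
X     <- matrix(rnorm(n * p), nrow=n)
betas <- rnorm(p)
Y     <- X %*% betas + rnorm(n)
targetMat <- cbind(betas / 2, rep(0, p))
bHat <- mtRidgeLM(Y, X, lambdas=c(1, 3), targetMat=targetMat)
```

As the penalty parameters grow, the estimate shrinks towards the lambda-weighted average of the two target columns.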