mlsei    R Documentation

Description

Finds the q x p matrix B that minimizes the multivariate least squares problem

    sum((Y - X %*% t(Z %*% B))^2)

subject to t(A) %*% B[,j] >= b for all j = 1:p. Unique basis functions and constraints are allowed for each column of B.
Usage

mlsei(X, Y, Z, A, b, meq,
      backfit = FALSE, maxit = 1000,
      eps = 1e-10, del = 1e-6,
      XtX = NULL, ZtZ = NULL,
      simplify = TRUE, catchError = FALSE)
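For orientation, a minimal call might look like the sketch below. The data objects, dimensions, and the library(CMLS) call are assumptions for the illustration; the constraint simply requires every coefficient in each column of B to be non-negative (A = diag(m), b = 0), and Z is omitted as in the Examples section.

library(CMLS)

## hypothetical data: n = 50 observations, p = 3 predictors, m = 10 responses
set.seed(0)
X0 <- matrix(rnorm(50 * 3), nrow = 50, ncol = 3)
Y0 <- matrix(rnorm(50 * 10), nrow = 50, ncol = 10)

## non-negativity: t(diag(10)) %*% B[,j] >= rep(0, 10) for every column j
B0 <- mlsei(X = X0, Y = Y0, A = diag(10), b = rep(0, 10))
dim(B0)   # 10 x 3 (q x p, with q = m because Z was omitted)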
Arguments

X            Matrix of dimension n x p.

Y            Matrix of dimension n x m.

Z            Matrix of dimension m x q. Can also be a list of length p (see Note).

A            Constraint matrix of dimension q x r. Can also be a list of length p (see Note).

b            Constraint vector of dimension r x 1. Can also be a list of length p (see Note).

meq          The first meq constraints (columns of A) are equality constraints; the remaining r - meq are inequality constraints. Should be a vector of length p when A and b are lists (see Note).

backfit      Estimate B via a back-fitting algorithm (TRUE) or a closed-form solution (FALSE).

maxit        Maximum number of iterations for the back-fitting algorithm. Ignored if backfit = FALSE.

eps          Convergence tolerance for the back-fitting algorithm. Ignored if backfit = FALSE.

del          Stability tolerance for the back-fitting algorithm. Ignored if backfit = FALSE.

XtX          Crossproduct matrix: XtX = crossprod(X).

ZtZ          Crossproduct matrix: ZtZ = crossprod(Z).

simplify     If TRUE, the output B is returned as a matrix whenever possible (see Note).

catchError   If TRUE, errors from the quadratic programming routine (solve.QP) are caught rather than terminating the function.
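The XtX and ZtZ arguments let the crossproducts be supplied directly, which can save work when mlsei is called repeatedly with the same X (or Z). A minimal sketch, assuming XtX is simply crossprod(X) as described above (data and object names are hypothetical):

library(CMLS)
set.seed(0)
X0 <- matrix(rnorm(50 * 3), nrow = 50, ncol = 3)    # hypothetical n x p design
Y1 <- matrix(rnorm(50 * 10), nrow = 50, ncol = 10)  # hypothetical n x m responses
Y2 <- matrix(rnorm(50 * 10), nrow = 50, ncol = 10)  # second response matrix, same X

XtX0 <- crossprod(X0)   # t(X0) %*% X0, computed once and reused
fit1 <- mlsei(X = X0, Y = Y1, A = diag(10), XtX = XtX0)
fit2 <- mlsei(X = X0, Y = Y2, A = diag(10), XtX = XtX0)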
Details

If backfit = FALSE (default), a closed-form solution is used to estimate B whenever possible. Otherwise a back-fitting algorithm is used, in which the columns of B are updated sequentially until convergence. The back-fitting algorithm is determined to have converged when

    mean((B.new - B.old)^2) < eps * (mean(B.old^2) + del)

where B.old and B.new denote the parameter estimates at iterations t and t + 1 of the back-fitting algorithm.
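In R terms, the stopping rule corresponds to the check sketched below, where B.old and B.new stand for successive parameter estimates inside the back-fitting iterations (the helper function is illustrative, not part of the package):

## illustrative convergence check for the back-fitting iterations
backfit_converged <- function(B.new, B.old, eps = 1e-10, del = 1e-6) {
  mean((B.new - B.old)^2) < eps * (mean(B.old^2) + del)
}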
Value

If Z is a list with q_j = q for all j = 1, ..., p, then B is returned as a q x p matrix when simplify = TRUE, and as a list of length p when simplify = FALSE.

If Z is a list with q_j != q for some j, then B is returned as a list of length p.

Otherwise B is returned as a q x p matrix.
Note

The Z input can also be a list of length p, where Z[[j]] contains an m x q_j matrix. If q_j = q for all j = 1, ..., p and simplify = TRUE, the output B will be a matrix. Otherwise B will be a list of length p, where B[[j]] contains a q_j x 1 vector.
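As an illustration of the list form of Z, the sketch below supplies hypothetical basis matrices with different numbers of columns (q_1 = 4, q_2 = 6), so the returned B is a list of length p regardless of simplify:

library(CMLS)
set.seed(0)
n <- 50; p <- 2; m <- 10
X0 <- matrix(rnorm(n * p), n, p)
Y0 <- matrix(rnorm(n * m), n, m)

## column-specific basis matrices: Z[[j]] is m x q_j, with q_1 = 4 and q_2 = 6
Zlist <- list(matrix(rnorm(m * 4), m, 4),
              matrix(rnorm(m * 6), m, 6))

Bhat <- mlsei(X = X0, Y = Y0, Z = Zlist)
length(Bhat)          # list of length p = 2
sapply(Bhat, length)  # 4 and 6 coefficients, respectively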
The A and b inputs can also be lists of length p, where t(A[[j]]) %*% B[,j] >= b[[j]] for all j = 1, ..., p. If A and b are lists of length p, the meq input should be a vector of length p indicating the number of equality constraints for each element of A.
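A small sketch of column-specific constraints (data and dimensions are again hypothetical): the first column of B is constrained to be non-negative, while the first coefficient of the second column is forced to zero through a single equality constraint, so meq is the vector c(0, 1).

library(CMLS)
set.seed(0)
n <- 50; p <- 2; m <- 10
X0 <- matrix(rnorm(n * p), n, p)
Y0 <- matrix(rnorm(n * m), n, m)

## column 1: m inequality constraints, t(diag(m)) %*% B[,1] >= 0 (non-negativity)
## column 2: one equality constraint, B[1,2] == 0
Alist <- list(diag(m), matrix(c(1, rep(0, m - 1)), ncol = 1))
blist <- list(rep(0, m), 0)
meq   <- c(0, 1)

Bhat <- mlsei(X = X0, Y = Y0, A = Alist, b = blist, meq = meq)
Bhat[1, 2]   # approximately zero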
Author(s)

Nathaniel E. Helwig <helwig@umn.edu>
References

Goldfarb, D., & Idnani, A. (1983). A numerically stable dual method for solving strictly convex quadratic programs. Mathematical Programming, 27, 1-33. doi:10.1007/BF02591962

Helwig, N. E. (in prep). Constrained multivariate least squares in R.

Ten Berge, J. M. F. (1993). Least Squares Optimization in Multivariate Analysis. Volume 25 of M & T Series. DSWO Press, Leiden University. ISBN 9789066950832.

Turlach, B. A., & Weingessel, A. (2019). quadprog: Functions to Solve Quadratic Programming Problems. R package version 1.5-8. https://CRAN.R-project.org/package=quadprog
See Also

cmls calls this function for several of the constraints.
Examples

######***###### GENERATE DATA ######***######
# make X
set.seed(2)
n <- 50
m <- 20
p <- 2
Xmat <- matrix(rnorm(n*p), nrow = n, ncol = p)
# make B (which satisfies all constraints except monotonicity)
x <- seq(0, 1, length.out = m)
Bmat <- rbind(sin(2*pi*x), sin(2*pi*x+pi)) / sqrt(4.75)
struc <- rbind(rep(c(TRUE, FALSE), each = m / 2),
               rep(c(FALSE, TRUE), each = m / 2))
Bmat <- Bmat * struc
# make noisy data
set.seed(1)
Ymat <- Xmat %*% Bmat + rnorm(n*m, sd = 0.25)
######***###### UNCONSTRAINED ######***######
# unconstrained
Bhat.cmls <- cmls(X = Xmat, Y = Ymat, const = "uncons")
Bhat.mlsei <- t(mlsei(X = Xmat, Y = Ymat))
mean((Bhat.cmls - Bhat.mlsei)^2)
# unconstrained and structured (note: cmls is more efficient)
Bhat.cmls <- cmls(X = Xmat, Y = Ymat, const = "uncons", struc = struc)
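# build column-specific constraints: every coefficient flagged FALSE in
# struc[j,] gets an equality constraint (counted in meq[j]) forcing it to
# zero; a zero column of A gives a trivial constraint when nothing is fixed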
Amat <- vector("list", p)
meq <- rep(0, p)
for(j in 1:p){
  meq[j] <- sum(!struc[j,])
  if(meq[j] > 0){
    A <- matrix(0, nrow = m, ncol = meq[j])
    A[!struc[j,],] <- diag(meq[j])
    Amat[[j]] <- A
  } else {
    Amat[[j]] <- matrix(0, nrow = m, ncol = 1)
  }
}
Bhat.mlsei <- t(mlsei(X = Xmat, Y = Ymat, A = Amat, meq = meq))
mean((Bhat.cmls - Bhat.mlsei)^2)
######***###### NON-NEGATIVITY ######***######
# non-negative
Bhat.cmls <- cmls(X = Xmat, Y = Ymat, const = "nonneg")
Bhat.mlsei <- t(mlsei(X = Xmat, Y = Ymat, A = diag(m)))
mean((Bhat.cmls - Bhat.mlsei)^2)
# non-negative and structured (note: cmls is more efficient)
Bhat.cmls <- cmls(X = Xmat, Y = Ymat, const = "nonneg", struc = struc)
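# reorder the identity columns so that the meq[j] equality constraints
# (coefficients forced to zero where struc[j,] is FALSE) come first,
# followed by the non-negativity (>= 0) inequality constraints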
eye <- diag(m)
meq <- rep(0, p)
for(j in 1:p){
  meq[j] <- sum(!struc[j,])
  Amat[[j]] <- eye[, sort(struc[j,], index.return = TRUE)$ix]
}
Bhat.mlsei <- t(mlsei(X = Xmat, Y = Ymat, A = Amat, meq = meq))
mean((Bhat.cmls - Bhat.mlsei)^2)
# see internals of cmls.R for further examples