Given a square matrix *M* of size *n×n*, and a
matrix *x* of size *n×p* (or a vector of length
*n*), evaluate various quadratic forms.

(In the following, *t(x)* denotes the complex conjugate of
the transpose, also known as the Hermitian transpose. This only
matters when considering complex numbers.)

Function

`quad.form(M,x)`

evaluates *t(x) %*% M %*% x* in an efficient manner.

Function

`quad.form.inv(M,x)`

returns *t(x) %*% solve(M) %*% x* using an efficient method that avoids inverting *M*.

Function

`quad.tform(M,x)`

returns *x %*% M %*% t(x)* using `tcrossprod()`, without taking a transpose.

Function

`quad.tform.inv(M,x)`

returns *x %*% solve(M) %*% t(x)*, although a single transpose is needed.

Function

`quad.3form(M,l,r)`

returns *t(l) %*% M %*% r* using nested calls to `crossprod()`. It is no faster than calling `crossprod()` directly, but makes code neater and less error-prone (IMHO).

Function

`quad.3tform(M,l,r)`

returns *l %*% M %*% t(r)* using nested calls to `tcrossprod()`. Again, this is to make for neater code.

Function

`quad.diag(M,x)`

returns the *diagonal* of the (potentially very large) square matrix `quad.form(M,x)` without calculating the off-diagonal elements.

Function

`quad.tdiag(M,x)`

similarly returns the diagonal of `quad.tform(M,x)`.
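The list above can be sketched in base R for the real-valued case. The sizes below are arbitrary illustrative choices, and the `colSums()` line is one way (not necessarily the package's way) to get the diagonal of a quadratic form without computing its off-diagonal elements:

```r
## Base-R sketch (real case) of what the quad.* functions return.
set.seed(1)
M <- crossprod(matrix(rnorm(16), 4, 4))  # symmetric positive-definite, 4x4
x <- matrix(rnorm(8), 4, 2)              # n = 4, p = 2
l <- matrix(rnorm(12), 4, 3)
r <- matrix(rnorm(8), 4, 2)

qf  <- t(x) %*% M %*% x                  # what quad.form(M, x) returns
qfi <- t(x) %*% solve(M) %*% x           # what quad.form.inv(M, x) returns
q3  <- t(l) %*% M %*% r                  # what quad.3form(M, l, r) returns

xt  <- t(x)                              # quad.tform() takes the p-by-n shape
qtf <- xt %*% M %*% t(xt)                # what quad.tform(M, xt) returns

## diag(quad.form(M, x)) without forming the off-diagonal entries:
qd <- colSums(x * (M %*% x))
stopifnot(all.equal(qd, diag(qf)))
```

Note that `qtf` equals `qf` here because `xt` is exactly `t(x)`.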

These functions invoke the following lower-level calls:

Function

`ht(x)`

returns the Hermitian transpose, that is, the complex conjugate of the transpose, sometimes written *x^**.

Function

`cprod(x,y)`

returns *t(x) %*% y*, equivalent to `crossprod(Conj(x),y)`.

Function

`tcprod(x,y)`

returns *x %*% t(y)*, equivalent to `tcrossprod(x,Conj(y))`.

Note again that in the calls above, “transpose” [that is,
*t(x)*] means “Conjugate transpose”, or the Hermitian
transpose.
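The distinction matters because base R's `crossprod()` does not conjugate its complex arguments. A base-R sketch of these conventions, where `ht_sketch()` is a hypothetical stand-in for `ht()` and the matrices are arbitrary examples:

```r
## ht_sketch() is a hypothetical stand-in for ht(): conjugate transpose.
ht_sketch <- function(x) Conj(t(x))

x <- matrix(c(1+1i, 2-1i, 0+2i, 3+0i), 2, 2)
y <- matrix(c(1-2i, 0+1i, 2+0i, 1+1i), 2, 2)

## cprod(x, y) corresponds to ht(x) %*% y:
stopifnot(all.equal(ht_sketch(x) %*% y, crossprod(Conj(x), y)))

## tcprod(x, y) corresponds to x %*% ht(y):
stopifnot(all.equal(x %*% ht_sketch(y), tcrossprod(x, Conj(y))))

## Plain crossprod() does NOT conjugate, so for complex x it differs:
any(crossprod(x, y) != crossprod(Conj(x), y))  # TRUE here
```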

```
quad.form(M, x, chol=FALSE)
quad.form.inv(M, x)
quad.tform(M, x)
quad.3form(M,left,right)
quad.3tform(M,left,right)
quad.tform.inv(M,x)
quad.diag(M,x)
quad.tdiag(M,x)
cprod(x,y)
tcprod(x,y)
ht(x)
```

`M`

Square matrix of size *n×n*

`x,y`

Matrix of size *n×p*, or vector of length *n*

`chol`

Boolean, with `TRUE` meaning that argument `M` is to be interpreted as the lower triangular Cholesky factor of the matrix of interest

`left,right`

In functions `quad.3form()` and `quad.3tform()`, the left and right matrices respectively

The “meat” of `quad.form()` for `chol=FALSE` is just
`crossprod(crossprod(M, x), x)`, and that of
`quad.form.inv()` is `crossprod(x, solve(M, x))`.
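Both identities are easy to check in base R. Note that the first one uses `crossprod(M, x)`, i.e. `t(M) %*% x`, so it relies on `M` being symmetric:

```r
## Verify the two identities above in base R (real, symmetric case).
set.seed(2)
M <- crossprod(matrix(rnorm(25), 5, 5))  # symmetric positive-definite
x <- matrix(rnorm(10), 5, 2)

## t(x) %*% M %*% x == crossprod(crossprod(M, x), x)  (M symmetric)
stopifnot(all.equal(t(x) %*% M %*% x,
                    crossprod(crossprod(M, x), x)))

## t(x) %*% solve(M) %*% x == crossprod(x, solve(M, x))
stopifnot(all.equal(t(x) %*% solve(M) %*% x,
                    crossprod(x, solve(M, x))))
```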

If the Cholesky decomposition of `M` is available, then calling
with `chol=TRUE` and supplying the lower triangular Cholesky factor
(`M.lower` in the example below) should generally be
faster (for large matrices) than calling with `chol=FALSE` and
using `M` directly. The time saving is negligible for matrices
smaller than about *50×50*, even if the overhead of
computing the Cholesky factor is ignored.
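Why the lower triangular factor works can be sketched in base R: if *M = L %*% t(L)* with *L* lower triangular, the quadratic form collapses to a single `crossprod()` of *t(L) %*% x* with itself:

```r
## Base-R sketch of the chol=TRUE idea: with L the lower triangular
## Cholesky factor (M == L %*% t(L)), we have
##   t(x) %*% M %*% x == crossprod(crossprod(L, x))
set.seed(3)
M <- crossprod(matrix(rnorm(36), 6, 6))  # symmetric positive-definite
x <- matrix(rnorm(12), 6, 2)

L  <- t(chol(M))        # chol() returns the upper factor; transpose it
jj <- crossprod(L, x)   # t(L) %*% x
stopifnot(all.equal(crossprod(jj), t(x) %*% M %*% x))
```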

These functions are used extensively in the emulator and
calibrator packages' R code, primarily in the interests of elegant
code, but also speed. For the problems I usually consider, the
speedup (of `quad.form(M,x)` over `t(x) %*% M %*% x`,
say) is marginal at best.

Robin K. S. Hankin

`optimize`

```
jj <- matrix(rnorm(80),20,4)
M <- crossprod(jj,jj)
M.lower <- t(chol(M))
x <- matrix(rnorm(8),4,2)
jj.1 <- t(x) %*% M %*% x
jj.2 <- quad.form(M,x)
jj.3 <- quad.form(M.lower,x,chol=TRUE)
print(jj.1)
print(jj.2)
print(jj.3)
## Make two Hermitian positive-definite matrices:
L <- matrix(c(1,0.1i,-0.1i,1),2,2)
LL <- diag(11)
LL[2,1] <- -(LL[1,2] <- 0.1i)
z <- t(latin.hypercube(11,2,complex=TRUE))
quad.diag(L,z) # elements real because L is HPD
quad.tdiag(LL,z) # ditto
## Now consider accuracy:
quad.form(solve(M),x) - quad.form.inv(M,x) # should be zero
quad.form(M,x) - quad.tform(M,t(x)) # should be zero
quad.diag(M,x) - diag(quad.form(M,x)) # should be zero
diag(quad.form(L,z)) - quad.diag(L,z) # should be zero
diag(quad.tform(LL,z)) - quad.tdiag(LL,z) # should be zero
```
