eigs_sym: Find a few eigenvalues and vectors of a large, sparse Hermitian matrix

View source: R/eigs.R

eigs_sym    R Documentation

Find a few eigenvalues and vectors of a large, sparse Hermitian matrix

Description

Compute a few eigenpairs from a specified region (the largest, the smallest, or the closest to a point) of a symmetric/Hermitian matrix using PRIMME [1]. Generalized symmetric/Hermitian problems are also supported. Only the matrix-vector product of the matrix is required. The method used is usually faster than a direct method (such as eigen) when only a few eigenpairs are sought and the matrix-vector product is cheap. To accelerate convergence, consider using preconditioning and/or educated initial guesses.
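
For instance, a minimal sketch of the call pattern described above (assuming only a small dense symmetric matrix; the Examples section below covers matrix-free operators, preconditioners and generalized problems):

A <- diag(1:10)            # small symmetric test matrix
r <- eigs_sym(A, 2, "SA")  # the two smallest eigenpairs of A
r$values                   # approximately 1 and 2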

Usage

eigs_sym(
  A,
  NEig = 1,
  which = "LA",
  targetShifts = NULL,
  tol = 1e-06,
  x0 = NULL,
  ortho = NULL,
  prec = NULL,
  isreal = NULL,
  B = NULL,
  ...
)

Arguments

A

symmetric/Hermitian matrix or a function with signature f(x) that returns A %*% x.

NEig

number of eigenvalues and vectors to seek.

which

which eigenvalues to find:

"LA"

the largest (rightmost) values;

"SA"

the smallest (leftmost) values;

"LM"

the farthest from targetShifts;

"SM"

the closest to targetShifts;

"CLT"

the closest but left to targetShifts;

"CGT"

the closest but greater than targetShifts;

vector of numbers

the closest values to these points.

targetShifts

return the closest eigenvalues to these points, as indicated by which (see the sketch after this argument list).

tol

the convergence tolerance: \|A x - \lambda x\| \le tol \|A\|.

x0

matrix whose columns are educated guesses of the eigenvectors to find.

ortho

find eigenvectors orthogonal to the space spanned by the columns of this matrix; useful to avoid finding some eigenvalues or to find new solutions.

prec

preconditioner used to accelerate the convergence; usually it is an approximation of the inverse of A - \sigma I if finding the closest eigenvalues to \sigma. If it is a matrix, it is used as prec %*% x; otherwise it is used as prec(x).

isreal

whether A %*% x always returns a real vector and never a complex one.

B

symmetric/Hermitian positive definite matrix or a function with signature f(x) that returns B %*% x. If given, the function returns the eigenpairs of (A,B).

...

other PRIMME options (see details).
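
The sketch below illustrates the which, targetShifts and isreal arguments (it is not part of the package's own examples; it assumes that which = "SM" combined with targetShifts selects the eigenvalues closest to the shift, as described above, and that n supplies the problem dimension when A is a function, as in the Examples further below):

A <- diag(1:50)
# the two eigenvalues closest to 10.2, selected via which/targetShifts
r <- eigs_sym(A, 2, which = "SM", targetShifts = 10.2)
r$values

# the same operator passed as a matrix-free function; isreal avoids
# complex arithmetic and n supplies the problem dimension
Af <- function(x) (1:50) * x
r <- eigs_sym(Af, 2, which = "SM", targetShifts = 10.2, n = 50, isreal = TRUE)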

Details

Optional arguments to pass to the PRIMME eigensolver (see further details at [2]; a usage sketch follows this list):

method

used by the solver, one of:

"DYNAMIC"

switches dynamically between DEFAULT_MIN_TIME and DEFAULT_MIN_MATVECS

"DEFAULT_MIN_TIME"

best method for light matrix-vector product

"DEFAULT_MIN_MATVECS"

best method for heavy matrix-vector product or preconditioner

"Arnoldi"

an Arnoldi method (not implemented efficiently)

"GD"

classical block Generalized Davidson

"GD_plusK"

GD+k, a block GD with recurrence restarting

"GD_Olsen_plusK"

GD+k with approximate Olsen preconditioning

"JD_Olsen_plusK"

GD+k, exact Olsen (two preconditioner applications per step)

"RQI"

Rayleigh Quotient Iteration, also Inverse Iteration if targetShifts is provided

"JDQR"

the original block Jacobi-Davidson method

"JDQMR"

our block JDQMR method (similar to JDCG)

"JDQMR_ETol"

a slight but efficient modification of JDQMR

"STEEPEST_DESCENT"

equivalent to GD(maxBlockSize,2*maxBlockSize)

"LOBPCG_OrthoBasis"

equivalent to GD(neig,3*neig)+neig

"LOBPCG_OrthoBasis_Window"

equivalent to GD(maxBlockSize,3*maxBlockSize)+maxBlockSize when neig>maxBlockSize

aNorm

estimate of the 2-norm of A, used in the convergence test (if not provided, it is estimated as the largest eigenvalue in magnitude seen so far).

maxBlockSize

maximum block size (like in subspace iteration or LOBPCG).

printLevel

message level reporting, from 0 (no output) to 5 (show all).

locking

1, hard locking; 0, soft locking.

maxBasisSize

maximum size of the search subspace.

minRestartSize

minimum number of Ritz vectors to keep when restarting.

maxMatvecs

maximum number of matrix-vector multiplications.

maxit

maximum number of outer iterations.

scheme

the restart scheme (thick restart by default).

maxPrevRetain

number of approximate eigenvectors retained from the previous iteration that are kept after restart.

robustShifts

set to true to avoid stagnation.

maxInnerIterations

maximum number of inner QMR iterations.

LeftQ

use the locked vectors in the left projector.

LeftX

use the approx. eigenvector in the left projector.

RightQ

use the locked vectors in the right projector.

RightX

use the approx. eigenvector in the right projector.

SkewQ

use the preconditioned locked vectors in the right projector.

SkewX

use the preconditioned approximate eigenvector in the right projector.

relTolBase

a legacy option from classical JDQR (its use is not recommended).

iseed

an array of four numbers used as a random seed.
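
As a usage sketch (referenced above), these options are passed through the ... argument as named arguments; the particular values here are illustrative choices, not recommendations:

A <- diag(1:500)
r <- eigs_sym(A, 4, "SA",
              method = "DEFAULT_MIN_MATVECS", # favor fewer matrix-vector products
              maxBlockSize = 2,               # block size, as in LOBPCG-like methods
              maxBasisSize = 20,              # cap on the search subspace size
              printLevel = 0)                 # no reporting
r$stats$numMatvecs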

Value

a list with the following elements:

values

the eigenvalues \lambda_i

vectors

the eigenvectors x_i

rnorms

the residual vector norms \|A x_i - \lambda_i B x_i\| (a verification sketch follows this list).

stats$numMatvecs

number of matrix-vector products performed

stats$numPreconds

number of preconditioner applications performed

stats$elapsedTime

time expended by the eigensolver

stats$timeMatvec

time expended in the matrix-vector products

stats$timePrecond

time expended in applying the preconditioner

stats$timeOrtho

time expended in orthogonalizing

stats$estimateMinEval

estimation of the smallest eigenvalue of A

stats$estimateMaxEval

estimation of the largest eigenvalue of A

stats$estimateANorm

estimation of the norm of A
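
As a sketch of how these fields relate to the convergence tolerance (the by-hand residual below assumes a standard, non-generalized problem, so B is the identity; the check against estimateANorm is only approximate since that norm is itself an estimate):

A <- diag(1:100)
r <- eigs_sym(A, 3, tol = 1e-6)
# reported residual norms should (approximately) satisfy the criterion
all(r$rnorms <= 1e-6 * r$stats$estimateANorm)
# recompute the residual norm of the first eigenpair by hand
sqrt(sum((A %*% r$vectors[, 1] - r$values[1] * r$vectors[, 1])^2))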

References

[1] A. Stathopoulos and J. R. McCombs, PRIMME: PReconditioned Iterative MultiMethod Eigensolver: Methods and Software Description, ACM Transactions on Mathematical Software, Vol. 37, No. 2 (2010), 21:1-21:30.

[2] https://www.cs.wm.edu/~andreas/software/doc/primmec.html#parameters-guide

See Also

eigen for computing all eigenvalues and vectors; svds for computing a few singular values and vectors

Examples

A <- diag(1:10)  # the eigenvalues of this matrix are 1:10 and the
                 # eigenvectors are the columns of diag(10)
r <- eigs_sym(A, 3);
r$values  # the three largest eigenvalues on diag(1:10)
r$vectors # the corresponding approximate eigenvectors
r$rnorms # the corresponding residual norms
r$stats$numMatvecs # total number of matrix-vector products performed

r <- eigs_sym(A, 3, 'SA') # compute the three smallest values

r <- eigs_sym(A, 3, 2.5) # compute the three closest values to 2.5

r <- eigs_sym(A, 3, 2.5, tol=1e-3); # compute the values with
r$rnorms                                    # residual norm <= 1e-3*||A||

B <- diag(rev(1:10));
r <- eigs_sym(A, 3, B=B); # compute the 3 largest eigenpairs of
                          # the generalized problem (A,B)

# Build a Jacobi preconditioner (too convenient for a diagonal matrix!)
# and see how it reduces the number of matrix-vector products
A <- diag(1:1000)   # we use a larger matrix to amplify the difference
P <- diag(diag(A) - 2.5)
eigs_sym(A, 3, 2.5, tol=1e-3)$stats$numMatvecs
eigs_sym(A, 3, 2.5, tol=1e-3, prec=P)$stats$numMatvecs

# Passing A and the preconditioner as functions
Af <- function(x) (1:100) * x; # = diag(1:100) %*% x
Pf <- function(x) x / (1:100 - 2.5); # = solve(diag(1:100 - 2.5), x)
r <- eigs_sym(Af, 3, 2.5, tol=1e-3, prec=Pf, n=100)

# Passing initial guesses
A <- diag(1:1000)   # we use a larger matrix to amplify the difference
x0 <- diag(1,1000,4) + matrix(rnorm(4000), 1000, 4)/100;
eigs_sym(A, 4, "SA", tol=1e-3)$stats$numMatvecs
eigs_sym(A, 4, "SA", tol=1e-3, x0=x0)$stats$numMatvecs

# Passing an orthogonality constraint, in this case, already computed eigenvectors
r <- eigs_sym(A, 4, "SA", tol=1e-3); r$values
eigs_sym(A, 4, "SA", tol=1e-3, ortho=r$vectors)$values

