Function File: [blockVectorX, lambda] = lobpcg (blockVectorX, operatorA)
Function File: [blockVectorX, lambda, failureFlag] = lobpcg (blockVectorX, operatorA)
Function File: [blockVectorX, lambda, failureFlag, lambdaHistory, residualNormsHistory] = lobpcg (blockVectorX, operatorA, operatorB, operatorT, blockVectorY, residualTolerance, maxIterations, verbosityLevel)

Solves Hermitian partial eigenproblems using preconditioning.

The first form outputs the array of the algebraically smallest eigenvalues lambda and the corresponding matrix of orthonormalized eigenvectors blockVectorX of the Hermitian (full or sparse) operator operatorA, using the input matrix blockVectorX as an initial guess, without preconditioning, somewhat similar to:

# for real symmetric operator operatorA
opts.issym  = 1; opts.isreal = 1; K = size (blockVectorX, 2);
[blockVectorX, lambda] = eigs (operatorA, K, 'SR', opts);

# for Hermitian operator operatorA
K = size (blockVectorX, 2);
[blockVectorX, lambda] = eigs (operatorA, K, 'SR');

The second form returns a convergence flag. If failureFlag is 0 then all the eigenvalues converged; otherwise not all converged.

The third form computes the smallest eigenvalues lambda and corresponding eigenvectors blockVectorX of the generalized eigenproblem Ax = lambda Bx, where the Hermitian operators operatorA and operatorB are given as functions, as well as a preconditioner, operatorT. The operators operatorB and operatorT must, in addition, be positive definite. To compute the largest eigenpairs of operatorA, simply apply the code to operatorA multiplied by -1. The code does not involve any matrix factorizations of operatorA and operatorB and therefore preserves, e.g., their sparsity and structure.

residualTolerance and maxIterations control the tolerance and the maximum number of steps, and verbosityLevel = 0, 1, or 2 controls the amount of printed information. lambdaHistory is a matrix recording the eigenvalue approximations at every iteration, and residualNormsHistory is a matrix recording the history of the 2-norms of the residuals.
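
As an illustration of the third form, here is a small sketch of a generalized eigenproblem with all operators supplied as function handles; the test matrices and the names Afun, Bfun, and Tfun are invented purely for this example:

n    = 400;
A    = spdiags ([(2:n+1)', -ones(n, 1), -ones(n, 1)], [0, -1, 1], n, n); % sparse SPD matrix
B    = spdiags (1 + (1:n)' / n, 0, n, n);   % SPD "mass" matrix
D    = spdiags (full (diag (A)), 0, n, n);  % diagonal of A, a simple preconditioner
Afun = @(x) A * x;                          % operatorA as a function handle
Bfun = @(x) B * x;                          % operatorB as a function handle
Tfun = @(x) D \ x;                          % operatorT approximates inv(A)
[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 4), Afun, Bfun, Tfun, [], 1e-6, 100, 0);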

Required input:

  • blockVectorX (class numeric) - initial approximation to eigenvectors, full or sparse matrix n-by-blockSize. blockVectorX must be full rank.
  • operatorA (class numeric, char, or function_handle) - the main operator of the eigenproblem; can be a matrix, a function name, or a function handle.

Optional function input:

  • operatorB (class numeric, char, or function_handle) - the second operator, if solving a generalized eigenproblem, can be a matrix, a function name, or handle; by default if empty, operatorB = I.
  • operatorT (class char or function_handle) - the preconditioner, by default operatorT(blockVectorX) = blockVectorX.

Optional constraints input:

  • blockVectorY (class numeric) - a full or sparse n-by-sizeY matrix of constraints, where sizeY < n. blockVectorY must be full rank. The iterations will be performed in the (operatorB-) orthogonal complement of the column-space of blockVectorY.

Optional scalar input parameters:

  • residualTolerance (class numeric) - tolerance, by default, residualTolerance = n * sqrt (eps)
  • maxIterations - max number of iterations, by default, maxIterations = min (n, 20)
  • verbosityLevel - either 0 (no info), 1, or 2 (with pictures); by default, verbosityLevel = 0.

Required output:

  • blockVectorX and lambda (class numeric) are the blockSize computed eigenpairs, where blockSize = size (blockVectorX, 2) for the full-rank initial guess blockVectorX.

Optional output:

  • failureFlag (class integer) as described above.
  • lambdaHistory (class numeric) as described above.
  • residualNormsHistory (class numeric) as described above.

Functions operatorA(blockVectorX), operatorB(blockVectorX) and operatorT(blockVectorX) must support blockVectorX being a matrix, not just a column vector.

Every iteration involves one application of operatorA and operatorB, and one of operatorT.
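
For example, a minimal matrix-free operator satisfying the block requirement above could be written as follows; the function lap1d_apply (which would live in its own file lap1d_apply.m) is our own illustrative 1-D Laplacian, not part of the package:

function Y = lap1d_apply (X)
  % 1-D Laplacian with Dirichlet boundaries, applied to all columns of X at once
  Y = 2 * X;
  Y(2:end, :)   = Y(2:end, :)   - X(1:end-1, :);
  Y(1:end-1, :) = Y(1:end-1, :) - X(2:end, :);
endfunction

[blockVectorX, lambda] = lobpcg (randn (100, 3), @lap1d_apply, 1e-5, 200, 0);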

Main memory requirements: 6 (9 if operatorB is present) matrices of the same size as blockVectorX, 2 matrices of the same size as blockVectorY (if present), and two square matrices of size 3*blockSize.

In all examples below, we use the Laplacian operator in a 20x20 square with mesh size 1, which can be generated in MATLAB by running:

A = delsq (numgrid ('S', 21));
n = size (A, 1);

or in MATLAB and Octave by:

[~,~,A] = laplacian ([19, 19]);
n = size (A, 1);

Note that laplacian is a function of the specfun octave-forge package.

The following Example:

[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, 1e-5, 50, 2);

attempts to compute the first 8 eigenpairs without preconditioning; not all of them converge within 50 steps, so failureFlag=1.
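
Since the returned blockVectorX is accepted back as an initial guess, one can simply continue the iteration from where it stopped, for instance:

[blockVectorX, lambda, failureFlag] = lobpcg (blockVectorX, A, 1e-5, 50, 2);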

The next Example:

blockVectorY = [];
lambda_all = [];
for j = 1:4
  [blockVectorX, lambda] = lobpcg (randn (n, 2), A, blockVectorY, 1e-5, 200, 2);
  blockVectorY           = [blockVectorY, blockVectorX];
  lambda_all             = [lambda_all; lambda];
  pause;
end

attempts to compute the same 8 eigenpairs by calling the code 4 times with blockSize=2, using orthogonalization to the previously found eigenvectors.

The following Example:

R       = ichol (A, struct('michol', 'on'));
precfun = @(x)R'\(R\x); % R from ichol is lower triangular, so apply (R*R')^{-1}
[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, [], @(x)precfun(x), 1e-5, 60, 2);

computes the same eigenpairs in fewer than 25 steps, so that failureFlag=0, using the preconditioner function precfun defined inline. If precfun is defined as an Octave function in a file, the function handle @(x)precfun(x) can be equivalently replaced by the function name precfun. Running:

[blockVectorX, lambda, failureFlag] = lobpcg (randn (n, 8), A, speye (n), @(x)precfun(x), 1e-5, 50, 2);

produces similar answers, but is somewhat slower and needs more memory, since technically a generalized eigenproblem with B=I is solved here.

The following example for a mostly diagonally dominant sparse matrix A demonstrates different types of preconditioning, compared to the standard use of the main diagonal of A:

clear all; close all;
n       = 1000;
M       = spdiags ([1:n]', 0, n, n);
precfun = @(x)M\x;
A       = M + sprandsym (n, .1);
Xini    = randn (n, 5);
maxiter = 15;
tol     = 1e-5;
[~,~,~,~,rnp] = lobpcg (Xini, A, tol, maxiter, 1);
[~,~,~,~,r]   = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,1), semilogy (r'); hold on;
semilogy (rnp', ':>');
title ('No preconditioning (top)'); axis tight;
M(1,2)  = 2;
precfun = @(x)M\x; % M is no longer symmetric
[~,~,~,~,rns] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,2), semilogy (r'); hold on;
semilogy (rns', '--s');
title ('Nonsymmetric preconditioning (square)'); axis tight;
M(1,2)  = 0;
precfun = @(x)M\(x+10*sin(x)); % nonlinear preconditioning
[~,~,~,~,rnl] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,3),  semilogy (r'); hold on;
semilogy (rnl', '-.*');
title ('Nonlinear preconditioning (star)'); axis tight;
M       = abs (M - 3.5 * speye (n, n));
precfun = @(x)M\x;
[~,~,~,~,rs] = lobpcg (Xini, A, [], @(x)precfun(x), tol, maxiter, 1);
subplot (2,2,4),  semilogy (r'); hold on;
semilogy (rs', '-d');
title ('Selective preconditioning (diamond)'); axis tight;

References

This main function lobpcg is a version of the preconditioned conjugate gradient method (Algorithm 5.1) described in A. V. Knyazev, Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method, SIAM Journal on Scientific Computing 23 (2001), no. 2, pp. 517-541. http://dx.doi.org/10.1137/S1064827500366124

Known bugs/features

  • an excessively small requested tolerance may result in frequent restarts and instability. The code is not written to produce eps-level accuracy; use common sense.
  • the code may be very sensitive to the number of eigenpairs computed if there is a cluster of eigenvalues that is not completely included, cf.
    operatorA = diag ([1 1.99 2:99]);
    [blockVectorX, lambda] = lobpcg (randn (100, 1), operatorA, 1e-10, 80, 2);
    [blockVectorX, lambda] = lobpcg (randn (100, 2), operatorA, 1e-10, 80, 2);
    [blockVectorX, lambda] = lobpcg (randn (100, 3), operatorA, 1e-10, 80, 2);
    

Distribution

The main distribution site: http://math.ucdenver.edu/~aknyazev/

A C version of this code is part of the http://code.google.com/p/blopex/ package and is directly available, e.g., in PETSc and HYPRE.

Package: linear-algebra