June 8, 2023

Iterative solvers are a cornerstone of numerical analysis and scientific computing. They offer a compelling alternative to direct solvers, particularly for large, sparse linear systems. How do these numerical techniques work, and how do you choose the right one for your problem? In this article, we take an in-depth look at one of the most popular iterative solvers: GMRES.

GMRES stands for Generalized Minimal RESidual method and is an iterative solver for linear systems. It was originally proposed by Y. Saad and M. H. Schultz in 1986 and has become one of the most frequently used iterative solvers in numerical linear algebra. GMRES belongs to the family of Krylov subspace methods, a collection of iterative solvers that construct an approximate solution within a subspace generated by the matrix and the initial residual vector.

When solving a linear system Ax = b, where A is a matrix and b is a vector, GMRES constructs a sequence of approximate solutions that converges to the exact solution. At each iteration, GMRES computes an approximation x_k that minimizes the residual norm ||b - Ax_k|| over a Krylov subspace of dimension k. For a zero initial guess, this Krylov subspace is the span of the vectors {b, Ab, A^2b, ..., A^(k-1)b}.
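As a minimal illustration, SciPy's `scipy.sparse.linalg.gmres` applies exactly this idea; the tridiagonal matrix below is a hypothetical test system chosen only so the sketch runs:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# A small sparse, non-symmetric tridiagonal test system (hypothetical example).
n = 100
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# gmres returns the approximate solution and an exit flag (0 = converged).
x, info = gmres(A, b, atol=1e-10)
```

A nonzero `info` signals that the iteration hit its limit before reaching the tolerance, so the flag should always be checked before using `x`.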

GMRES is particularly useful for solving large and sparse linear systems, where direct methods may be impractical due to memory limitations. It can also handle matrices with non-symmetric, indefinite, or poorly conditioned structures, which can cause other iterative solvers to fail.

GMRES is commonly used in various fields, including physics, engineering, and computer science. Because it can efficiently solve large, sparse linear systems arising from finite element discretization, it is particularly useful for partial differential equations, such as those in fluid dynamics or structural analysis. The Arnoldi process underlying GMRES is also the foundation of iterative eigenvalue solvers, and closely related Krylov methods can be used to approximate singular values of a matrix.

In addition to its use in scientific computing, GMRES has also found applications in other areas, such as image processing, signal processing, and optimization. For example, GMRES can be used to solve linear least squares problems, which arise in image and signal processing when fitting a model to noisy data.

One of the main advantages of GMRES is that it minimizes the residual norm over the current Krylov subspace at every iteration, so the residual never increases from one step to the next. As noted above, it also handles large, sparse systems that defeat direct solvers, as well as matrices with non-symmetric, indefinite, or poorly conditioned structures that cause other iterative solvers to fail.

However, GMRES can be computationally expensive, particularly when dealing with ill-conditioned matrices or when many iterations are required for convergence. In exact arithmetic, full GMRES reaches the exact solution of an n x n nonsingular system in at most n iterations, but in practice it may need many iterations to reach the desired accuracy, and the cost and storage of each iteration grow as the Krylov basis grows. Restarted variants, written GMRES(m), cap this cost by restarting after m iterations, at the price of weaker convergence guarantees. This can be a disadvantage when solving large-scale problems, where the computational cost of each iteration is high.

Despite its limitations, GMRES remains one of the most widely used iterative solvers in numerical linear algebra. Its ability to efficiently handle large and sparse linear systems with non-symmetric or indefinite structures has made it an indispensable tool in scientific computing and engineering.

When solving a linear system Ax = b, where A is a square matrix and b is the right-hand side vector, the Krylov subspace generated by A and b is a set of vectors:

- b
- Ab
- A^2b
- ...
- A^(n-1)b

This set of vectors is important because it provides a basis for approximating the solution to the linear system. The Krylov subspace is a subspace of the vector space in which the solution to the linear system resides.

GMRES constructs an approximate solution by finding a vector x that minimizes the residual r = b - Ax in this Krylov subspace. Specifically, it computes a vector x_n that minimizes the Euclidean norm of the residual r_n = b - Ax_n over the subspace spanned by {b, Ab, A^2b, ..., A^(n-1)b}.

This approach is particularly useful when the matrix A is large and sparse, as is often the case in scientific and engineering applications. In such cases, it is not practical to compute the exact solution to the linear system, and an approximate solution that is accurate to within a certain tolerance is sufficient.

To find a vector x_n that minimizes the residual r_n, GMRES uses the Arnoldi iteration. The Arnoldi iteration constructs an orthonormal basis for the Krylov subspace by orthogonalizing the sequence {b, Ab, A^2b, ..., A^(n-1)b}, in practice with the modified Gram-Schmidt process for numerical stability. The result is a set of orthonormal vectors Q = {q_1, q_2, ..., q_n} together with an (n+1) x n upper Hessenberg matrix H_n satisfying AQ_n = Q_{n+1}H_n; H_n is the representation of A restricted to the Krylov subspace.
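A bare-bones sketch of the Arnoldi iteration in NumPy, using modified Gram-Schmidt (the function name and breakdown tolerance are our own choices):

```python
import numpy as np

def arnoldi(A, b, k):
    """Build an orthonormal basis Q (n x (k+1)) of the Krylov subspace
    span{b, Ab, ..., A^k b} and the (k+1) x k upper Hessenberg matrix H
    satisfying A @ Q[:, :k] = Q @ H."""
    n = b.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:           # "lucky" breakdown: the
            break                         # subspace is A-invariant
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H
```

For instance, `Q, H = arnoldi(A, b, 5)` yields six orthonormal columns and a 6 x 5 Hessenberg matrix, with `A @ Q[:, :5]` equal to `Q @ H` up to rounding.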

The Arnoldi iteration is an iterative process, and it can be computationally expensive for large matrices. However, it is a powerful tool for solving large linear systems, and it has been used in a wide range of applications, including computational fluid dynamics, structural analysis, and image processing.

GMRES converges when the residual norm ||r_n||_2 falls below a certain tolerance value. However, determining the optimal stopping criterion can be challenging. One option is to use an absolute or relative residual tolerance, which specifies the maximum acceptable value of ||r_n||_2 or ||r_n||_2 / ||b||_2. Another option is to use a stagnation criterion, which terminates the solver when the norm of the residual does not decrease significantly over a certain number of iterations.
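To make these criteria concrete, here is a sketch using SciPy's `gmres`, whose `callback_type="pr_norm"` option reports the relative (preconditioned) residual norm once per iteration; the matrix is a hypothetical example:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 50
A = diags([-1.0, 2.0, -0.8], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# callback_type="pr_norm" hands the callback the relative (preconditioned)
# residual norm once per iteration, so we can log the convergence history.
history = []
x, info = gmres(A, b, atol=1e-10,
                callback=lambda rn: history.append(rn),
                callback_type="pr_norm")

# An explicit relative-residual stopping test on the returned iterate.
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
converged = rel_res < 1e-4
```

Inspecting `history` is also a simple way to implement a stagnation check: if the last few entries barely decrease, further iterations are unlikely to help without preconditioning.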

In practice, the choice of stopping criterion depends on the specific application and the desired level of accuracy. For example, in some applications, it may be more important to minimize the number of iterations required to achieve a certain level of accuracy, while in others, it may be more important to minimize the error in the solution.

Overall, GMRES is a powerful and widely used algorithm for solving large linear systems. Its ability to construct an approximate solution in a Krylov subspace makes it particularly useful for solving large, sparse systems, which are common in many scientific and engineering applications.

The generalized minimal residual method (GMRES) is an iterative method for solving large, sparse linear systems of equations. It is particularly useful for problems where the matrix is non-symmetric or poorly conditioned. The basic algorithm for GMRES can be summarized as follows:

- Initialize x_0, compute r_0 = b - Ax_0, and set k = 1.
- Construct the orthonormal basis Q_k = {q_1, q_2, ..., q_k} and the (k+1) x k upper Hessenberg matrix H_k using the Arnoldi iteration, which generates an orthonormal basis for the Krylov subspace K_k(A, r_0) = span{r_0, Ar_0, A^2r_0, ..., A^(k-1)r_0}.
- Solve the small, dense least-squares problem min_y ||beta e_1 - H_ky||_2, where beta = ||r_0||_2, typically via a QR factorization of H_k updated incrementally with Givens rotations.
- Update the solution x_k = x_0 + Q_ky.
- Compute the residual r_k = b - Ax_k. (In practice its norm is available as a by-product of the least-squares solve, so this extra matrix-vector product can be skipped.)
- If ||r_k||_2 is below the chosen tolerance or the solver has reached the maximum number of iterations, terminate.
- Otherwise, set k = k + 1 and go to step 2.
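The steps above can be sketched as an unrestarted GMRES in NumPy. For brevity this sketch solves the small least-squares problem with `numpy.linalg.lstsq` instead of an incremental Givens-rotation QR; the function name and tolerances are our own:

```python
import numpy as np

def gmres_sketch(A, b, x0=None, tol=1e-8, maxiter=50):
    """Unrestarted GMRES following the steps above. Production codes
    update a QR factorization of H_k with Givens rotations instead of
    calling lstsq from scratch each iteration."""
    n = b.shape[0]
    if x0 is None:
        x0 = np.zeros(n)
    x = x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, maxiter + 1))        # orthonormal Krylov basis
    H = np.zeros((maxiter + 1, maxiter))  # upper Hessenberg projection
    Q[:, 0] = r0 / beta
    for k in range(maxiter):
        # Arnoldi step: extend the basis of K_{k+1}(A, r0).
        v = A @ Q[:, k]
        for i in range(k + 1):            # modified Gram-Schmidt
            H[i, k] = Q[:, i] @ v
            v -= H[i, k] * Q[:, i]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        # Solve min_y || beta*e1 - H_k y ||_2 (small dense problem).
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = x0 + Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```

Note how the work per iteration grows: step k orthogonalizes against k basis vectors and stores one more column of Q, which is exactly the cost that restarted GMRES(m) bounds.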

One advantage of GMRES over other iterative methods is that it does not require the matrix to be stored explicitly. Instead, it only requires matrix-vector products, which can be computed efficiently using sparse matrix-vector multiplication.
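SciPy makes this matrix-free usage explicit through `LinearOperator`: the sketch below never forms A, only a (hypothetical) matrix-vector product:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# GMRES only ever needs products y = A @ v, so A need not be stored.
# This hypothetical operator applies a non-symmetric tridiagonal stencil.
n = 200

def matvec(v):
    y = 4.0 * v
    y[:-1] -= v[1:]          # superdiagonal entries (-1)
    y[1:] -= 0.5 * v[:-1]    # subdiagonal entries (-0.5): non-symmetric
    return y

A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.ones(n)
x, info = gmres(A, b, atol=1e-10)
```

The same pattern covers operators that exist only as code, such as the action of a PDE discretization or a Jacobian applied via finite differences.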

Preconditioning is a technique used to improve the convergence rate of iterative solvers by transforming the original linear system into an equivalent system that is easier to solve. GMRES can benefit from preconditioning if the matrix A is ill-conditioned or highly non-normal. Common preconditioners include incomplete factorizations (incomplete LU or incomplete Cholesky), multigrid methods, and domain decomposition techniques. Choosing an appropriate preconditioner can significantly reduce the number of iterations required for convergence.
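As a sketch of the idea, SciPy's `spilu` computes an incomplete LU factorization whose approximate solve can be wrapped as a preconditioner (the matrix and drop tolerance are illustrative; for this simple tridiagonal example the incomplete factorization is essentially exact, so the preconditioned solve converges in very few iterations):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Hypothetical sparse non-symmetric system (CSC format, required by spilu).
n = 100
A = diags([-1.0, 3.0, -1.5], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU: drop small entries to keep the factors sparse, then use
# the approximate solve as the preconditioner M ~ A^{-1}.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve, dtype=float)

hist_pc, hist_plain = [], []
x_pc, info_pc = gmres(A, b, M=M, atol=1e-10,
                      callback=lambda rn: hist_pc.append(rn),
                      callback_type="pr_norm")
x_pl, info_pl = gmres(A, b, atol=1e-10,
                      callback=lambda rn: hist_plain.append(rn),
                      callback_type="pr_norm")
```

Comparing the lengths of the two histories shows the effect of the preconditioner directly: fewer iterations to reach the same tolerance.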

One popular preconditioning technique is incomplete factorization, which computes only a subset of the entries of a full LU or Cholesky factorization so that the factors stay sparse. Incomplete Cholesky (IC) applies to symmetric positive definite matrices, while incomplete LU (ILU) is the analogue for the non-symmetric matrices GMRES typically targets. These preconditioners tend to be most effective for matrices with strong diagonal dominance.

Multigrid methods are another popular preconditioning technique that can be used to solve linear systems on a hierarchy of grids with different resolutions. This can be particularly effective for problems with highly oscillatory or localized features.

Domain decomposition methods involve dividing the domain of the problem into smaller subdomains and solving the linear system on each subdomain separately. This can be particularly effective for problems with complex geometries or heterogeneous materials.

Implementing GMRES from scratch can be time-consuming and error-prone. Fortunately, many software packages and libraries include implementations of GMRES and other iterative solvers; popular choices include PETSc and Trilinos. These libraries offer a wide range of functionality, including support for different matrix formats, preconditioning techniques, and stopping criteria. Using a well-established library can save time and effort while ensuring accurate results.

PETSc (the Portable, Extensible Toolkit for Scientific Computation) is a software library for parallel computation that includes a wide range of numerical algorithms, including iterative solvers and preconditioners. It also includes support for a variety of parallel architectures and programming models.

Trilinos is another software library for parallel computation that includes a wide range of numerical algorithms, including iterative solvers and preconditioners. It also includes support for a variety of parallel architectures and programming models, as well as a number of advanced features such as automatic differentiation and optimization.

The Conjugate Gradient (CG) method is another popular iterative solver, used for symmetric positive definite linear systems. Like GMRES, CG constructs an approximate solution from a Krylov subspace. Unlike GMRES, however, CG relies on short recurrences, so its work and storage per iteration stay fixed rather than growing with the iteration count, which makes it the standard choice for this class of problems.

The BiCGStab (Bi-Conjugate Gradient Stabilized) method is another iterative solver commonly used for non-symmetric linear systems. BiCGStab combines the bi-conjugate gradient algorithm with a stabilization step to smooth convergence. Compared to full GMRES, BiCGStab uses short recurrences, so its cost and memory per iteration stay fixed, which can make it more efficient for some matrices; the trade-off is that its residual history can be erratic, whereas GMRES decreases the residual monotonically.

Choosing the right iterative solver for your problem depends on several factors, including the properties of the matrix, the desired accuracy, and the compute resources available. GMRES is a versatile solver that can efficiently solve a wide range of linear systems, but it may not always be the optimal choice. It is important to consider the matrix properties and test different solvers to identify the most efficient and effective method.
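One pragmatic way to compare candidates is simply to run them on a representative system; the sketch below pits SciPy's `gmres` against `bicgstab` on a hypothetical non-symmetric matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, gmres

# Hypothetical non-symmetric, diagonally dominant test system.
n = 100
A = diags([-1.0, 3.0, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x_g, info_g = gmres(A, b, atol=1e-10)
x_b, info_b = bicgstab(A, b, atol=1e-10)

# Compare the true relative residuals achieved by each solver.
res_g = np.linalg.norm(b - A @ x_g) / np.linalg.norm(b)
res_b = np.linalg.norm(b - A @ x_b) / np.linalg.norm(b)
```

On a real problem one would also time each solver and count matrix-vector products, since per-iteration cost differs between the two methods.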

GMRES is a powerful and widely used iterative solver for solving large and sparse linear systems. By constructing an approximate solution through the Krylov subspace and using the Arnoldi iteration, GMRES can efficiently solve a wide range of problems in physics, engineering, and computer science. Preconditioning, software libraries, and the choice of stopping criteria are just some of the key considerations when implementing GMRES. Ultimately, the choice of solver depends on the specific properties of the matrix and the desired accuracy.

*Learn more about how **Collimator's system design solutions** can help you fast-track your development. **Schedule a demo** with one of our engineers today.*