Domain decomposition solvers

This is a quick introduction to domain decomposition solvers for dense/compressed systems, and how they are implemented in Htool-DDM. These techniques were introduced in Hebeker [1] and analysed, for application to boundary integral equations, in papers such as Stephan and Tran [2] and Heuer [3]. One goal of this library is to provide DD solvers with a GenEO coarse space (see Marchand et al. [4], Marchand [5]).

Iterative solvers

Given a matrix \(\mathbf{A}\in \mathbb{C}^{n\times n}\), we want to solve the linear system

\[\mathbf{A}\mathbf{x}=\mathbf{b},\]

where

  • \(\mathbf{b}\) is a given right-hand side stemming from the data of the underlying problem, and,

  • \(\mathbf{x}\) is the unknown to be computed.

There are mainly two strategies to solve a linear system:

  • A direct linear solver, which typically relies on a factorisation of \(\mathbf{A}\).

  • An iterative linear solver, which requires multiple matrix-vector products to reach a given precision.

This library focuses on the second approach because such algorithms are easier to adapt to a distributed-memory context. The issue is that the number of matrix-vector products can be large, making the method less efficient. Thus, a preconditioner \(\mathbf{P}\in \mathbb{C}^{n\times n}\) is often used to solve the alternative linear system

\[\mathbf{P} \mathbf{A}\mathbf{x}= \mathbf{P}\mathbf{b},\]

with the aim of making the preconditioned system \(\mathbf{P} \mathbf{A}\) more amenable to iterative solution.
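As an illustration, the following minimal sketch (independent of Htool-DDM, all names are made up for the example) solves a toy system with SciPy's GMRES, passing a simple Jacobi approximation of \(\mathbf{A}^{-1}\) as the preconditioner.

```python
# Minimal sketch (not Htool-DDM code): preconditioned GMRES on a toy system.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 100
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned toy matrix
b = rng.standard_normal(n)

# Jacobi preconditioner: P approximates A^{-1} with 1/diag(A).
d = np.diag(A)
P = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = gmres(A, b, M=P)
print(info, np.linalg.norm(A @ x - b))  # info == 0 means convergence
```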

Schwarz preconditioners

A specific class of preconditioners stemming from Domain Decomposition Methods (DDM) is that of Schwarz preconditioners. They rely on an overlapping decomposition of the set of unknowns into \(N\) subdomains. Each subdomain defines a local numbering \(\sigma_p:\left\{1,\dots,n_p\right\}\to\left\{1,\dots,n\right\}\), where \(n_p\) is the number of unknowns in the \(p\)th subdomain. The restriction matrix \(\mathbf{R}_p\in \mathbb{R}^{n_p\times n}\) can then be defined as

\[\begin{split}\left(\mathbf{R}_p\right)_{j,k} = \left\{ \begin{aligned} 1 & \quad\text{if }k=\sigma_p(j),\\ 0 & \quad\text{otherwise}. \end{aligned} \right.\end{split}\]

Its transpose \(\mathbf{R}_p^T\) defines the extension by zero, and \(\mathbf{R}_p \mathbf{A}\mathbf{R}_p^T\) is the diagonal block of the original matrix associated with the \(p\)th subdomain. An example of a one-level Schwarz preconditioner can then be defined as

\[\mathbf{P}_{\mathrm{ASM}}= \sum_{p=1}^N \mathbf{R}_p^T(\mathbf{R}_p \mathbf{A}\mathbf{R}_p^T)^{-1} \mathbf{R}_p,\]

which is called an Additive Schwarz Method (ASM) preconditioner.
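The following sketch (illustrative only, not the Htool-DDM API) assembles \(\mathbf{P}_{\mathrm{ASM}}\) explicitly with dense NumPy matrices for a small problem; the matrix and the subdomain index sets are made up for the example.

```python
# Minimal sketch (not Htool-DDM code): explicit assembly of P_ASM.
import numpy as np

def restriction(sigma_p, n):
    """Build R_p in R^{n_p x n} from the local-to-global map sigma_p."""
    n_p = len(sigma_p)
    R = np.zeros((n_p, n))
    R[np.arange(n_p), sigma_p] = 1.0
    return R

n = 6
A = 2.0 * np.eye(n) + 0.1 * np.ones((n, n))  # toy SPD matrix
# Two overlapping subdomains sharing unknowns 2 and 3.
subdomains = [np.array([0, 1, 2, 3]), np.array([2, 3, 4, 5])]

P_ASM = np.zeros((n, n))
for sigma_p in subdomains:
    R_p = restriction(sigma_p, n)
    A_p = R_p @ A @ R_p.T             # local diagonal block
    P_ASM += R_p.T @ np.linalg.inv(A_p) @ R_p

# In practice P_ASM is never formed: only its action on a vector is needed,
# and each local solve can live on its own MPI process.
```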

Note

This preconditioner lends itself well to a distributed-memory context because it relies on solving local problems independently. Typically, each subdomain is associated with one MPI process.

GenEO coarse space

While Schwarz preconditioners, and DD preconditioners in general, improve the iterative solution of linear systems (by lowering the number of required matrix-vector products), their efficiency decreases as the number of subdomains (and thus MPI processes) increases.

To make these preconditioners more robust as \(N\) increases, a recurring technique is to add a small coarse space, whose basis vectors define the columns of \(\mathbf{R}_0^T\), so that \(\mathbf{R}_0 \mathbf{A}\mathbf{R}_0^T\) is a global matrix of small size. The coarse space can then be used additively with \(\mathbf{P}_{\mathrm{ASM}}\):

\[\mathbf{P}_{\mathrm{ASM},2}= \mathbf{R}_0^T(\mathbf{R}_0 \mathbf{A}\mathbf{R}_0^T)^{-1} \mathbf{R}_0 + \sum_{p=1}^N \mathbf{R}_p^T(\mathbf{R}_p \mathbf{A}\mathbf{R}_p^T)^{-1} \mathbf{R}_p.\]

It is also called a two-level Schwarz preconditioner because, as sketched below, we solve

  1. problems local to the subdomains: \(\mathbf{R}_p \mathbf{A}\mathbf{R}_p^T\), and

  2. a global problem \(\mathbf{R}_0 \mathbf{A}\mathbf{R}_0^T\) (but hopefully of small size).
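Here is a minimal sketch of the action of \(\mathbf{P}_{\mathrm{ASM},2}\) on a vector, assuming the restriction matrices are available as dense NumPy arrays (hypothetical helper, not the Htool-DDM API):

```python
# Minimal sketch (not Htool-DDM code): applying the two-level preconditioner.
import numpy as np

def apply_two_level(A, b, subdomain_R, R0):
    """Compute P_{ASM,2} @ b without ever forming the preconditioner."""
    # 1. Independent local solves, one per subdomain (i.e. per MPI process).
    z = np.zeros_like(b)
    for R_p in subdomain_R:
        z += R_p.T @ np.linalg.solve(R_p @ A @ R_p.T, R_p @ b)
    # 2. One small global coarse solve.
    z += R0.T @ np.linalg.solve(R0 @ A @ R0.T, R0 @ b)
    return z
```

In practice the local and coarse blocks are factorized once and the factorizations are reused at every iteration of the solver.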

Various definitions of coarse spaces have been introduced; they usually rely on a coarser discretization of the underlying problem, or on well-chosen local eigenproblems. This library focuses on the latter with the so-called “Generalized Eigenproblems in the Overlap” (GenEO) coarse space, where the local eigenproblems are of the form

\[\mathbf{D}_p \mathbf{R}_p \mathbf{A} \mathbf{R}_p^T \mathbf{D}_p \mathbf{v}_p = \lambda_p \mathbf{B}_p \mathbf{v}_p,\]

where

  • \(\mathbf{D}_p\) defines a partition of unity, and

  • \(\mathbf{B}_p\) is a well-chosen matrix depending on the underlying problem.

We refer to Marchand et al. [4] and Marchand [5], where such a coarse space is analysed in the context of boundary integral equations.
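As a sketch of how such an eigenproblem yields coarse-space vectors, the snippet below uses SciPy's generalized symmetric eigensolver; \(\mathbf{D}_p\), \(\mathbf{B}_p\), the threshold `tau` and the selection criterion are problem-dependent assumptions here, not Htool-DDM's actual choices.

```python
# Minimal sketch (not Htool-DDM code): local GenEO-type eigenproblem.
import numpy as np
from scipy.linalg import eigh

def geneo_vectors(A_p, D_p, B_p, tau):
    """Solve D_p A_p D_p v = lambda B_p v and keep selected eigenvectors."""
    lam, V = eigh(D_p @ A_p @ D_p, B_p)  # eigenvalues in ascending order
    keep = lam < tau                     # selection side depends on the formulation
    # One common choice: this subdomain contributes the columns R_p^T D_p v
    # to R_0^T; here we return the local parts D_p v.
    return D_p @ V[:, keep]
```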

Row-wise distributed operator

So far we have focused on the preconditioner \(\mathbf{P}\), but the operator \(\mathbf{A}\) of the linear system we want to solve also needs to be parallelized. We use a simple row-wise data layout so that applying the preconditioner is relatively easy:

[Figure: hmat_parallelization.svg, illustrating the row-wise parallel data layout of the matrix]

The distribution of the linear system is defined via a partition induced by a level of the target cluster tree.
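As a sketch, a row-wise distributed matrix-vector product can be written with mpi4py as below; each rank owns a contiguous block of rows, and the sizes and names are illustrative assumptions.

```python
# Minimal sketch (not Htool-DDM code): row-wise distributed y = A x.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 8             # global size, assumed divisible by the number of ranks
rows = n // size  # rows owned by this rank
A_local = np.full((rows, n), float(rank + 1))  # this rank's block of rows
x_local = np.ones(rows)                        # this rank's piece of x

# Gather the full vector on every rank, then multiply the local row block.
x = np.empty(n)
comm.Allgather(x_local, x)
y_local = A_local @ x  # this rank's piece of y
print(rank, y_local)
```

Run with e.g. `mpirun -np 2 python matvec.py`. In Htool-DDM, the partition of rows is instead induced by a level of the target cluster tree, as mentioned above.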