Quickstart

Htool-DDM can be used for several use cases: geometric clustering, hierarchical matrix compression, distributed operators, and DDM solvers. Each of them is described below.

Dependencies

Htool-DDM’s Python interface is based on pybind11 and the Htool-DDM C++ header-only library. It requires:

  • a C++ compiler supporting the C++14 standard,

  • BLAS, to perform dense algebraic operations,

  • Python 3,

  • CMake, to build the Python interface,

  • NumPy, for basic linear algebra in Python.

And, depending on the features used:

  • for iterative solvers and DDM preconditioners: an MPI implementation, mpi4py to use MPI from Python, and HPDDM with its dependencies (BLAS, LAPACK),

  • for SVD (re)compression: LAPACK.

Note

The Htool-DDM C++ header-only library, HPDDM, and pybind11 are Git submodules of the Htool-DDM Python interface, so users do not need to install them separately.

Installation

First, you need to clone this repository with its submodules:

git clone --recurse-submodules https://github.com/htool-ddm/htool_python.git && cd htool_python

Then, in the folder of this repository, run:

pip install .

If you need to pass CMake variables, you can use:

CMAKE_ARGS="-DCMAKE_VAR=VALUE1 -DCMAKE_VAR_2=VALUE2" pip install .
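For example, to select a specific C++ compiler (clang++ here, purely as an illustration):

CMAKE_ARGS="-DCMAKE_CXX_COMPILER=clang++" pip install .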

Geometric clustering

The hierarchical clustering of a geometry is stored in a Htool.Cluster and can be created using a Htool.ClusterTreeBuilder. For example:

import Htool

number_of_points = 10000
dimension = 3
number_of_children = 2
# coordinates = ...  3 x 10000 (dimension x number_of_points) column-major numpy array
cluster_builder = Htool.ClusterTreeBuilder()
cluster: Htool.Cluster = cluster_builder.create_cluster_tree(
    coordinates, number_of_children
)

where

  • number_of_children defines the number of children per node in the cluster tree.

  • coordinates stores all the geometric points associated with the evaluation or discretisation of the kernel to be compressed, with one point per column, so that in 2D the underlying memory layout is \((x_0,y_0,x_1,y_1,...)\); see the sketch below.
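For instance, a minimal sketch (with random points, only meant to illustrate the expected layout) could build coordinates as follows:

import numpy as np

dimension = 3
number_of_points = 10000

# One point per column, in Fortran (column-major) order, so that the underlying
# memory reads (x_0, y_0, z_0, x_1, y_1, z_1, ...)
coordinates = np.asfortranarray(np.random.rand(dimension, number_of_points))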

Note

See example

Hierarchical compression

To use the in-house hierarchical compression Htool.HMatrix, Htool-DDM needs two inputs:

  1. The underlying geometry of the kernel to be compressed, used for the Geometric clustering.

  2. A function to generate any coefficient of the matrix to be compressed.

Note

See example

Coefficient generator

A coefficient generator is defined by implementing the interface Htool.VirtualGenerator, in particular Htool.VirtualGenerator.build_submatrix(). For example, a user could define the following generator:

class CustomGenerator(Htool.VirtualGenerator):
    def build_submatrix(self, J, K, mat):
        for j in range(0, len(J)):
            for k in range(0, len(K)):
                mat[j, k] = ... # A(J[j],K[k]) where A is the kernel to be compressed
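As a concrete illustration, here is a sketch of a generator for an inverse-distance kernel; the kernel itself, the constructor taking the points array, and the reuse of the coordinates array from the clustering step are assumptions made for this example:

import numpy as np


class InverseDistanceGenerator(Htool.VirtualGenerator):
    def __init__(self, points):
        super().__init__()
        # points: dimension x number_of_points array, one point per column,
        # here the same array as the one used for the clustering
        self.points = points

    def build_submatrix(self, J, K, mat):
        for j in range(len(J)):
            for k in range(len(K)):
                # A(J[j], K[k]) = 1 / (1 + distance between the two points)
                mat[j, k] = 1.0 / (
                    1.0 + np.linalg.norm(self.points[:, J[j]] - self.points[:, K[k]])
                )


A = InverseDistanceGenerator(coordinates)

The instance A can then be passed to the hierarchical matrix and distributed operator builders used in the next sections.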

Build a hierarchical matrix

To build a Htool.HMatrix, proceed as follows:

# HMatrix parameters
epsilon = 0.01
eta = 10
symmetry = 'S'
UPLO = 'L'

# Generator
# A = ...  # a generator instance, e.g., the CustomGenerator defined above

hmatrix_builder = Htool.HMatrixTreeBuilder(epsilon, eta, symmetry, UPLO)
hmatrix: Htool.HMatrix = hmatrix_builder.build(
    A, cluster, cluster
)

where

  • epsilon is the parameter controlling the relative error when compressing a subblock with a low-rank approximation.

  • eta is the parameter \(\eta\) in the admissibility condition (1).

Use a hierarchical matrix

Basic linear algebra is provided for Htool.HMatrix:

import numpy as np

size = hmatrix.shape[1]
x = np.random.rand(size)
y = hmatrix * x

Distributed operator

A Htool.DistributedOperator mainly consists of a collection of global-to-local and local-to-local operators. Here, local refers to the part of a vector owned by the current MPI process/partition.

Note

See example

Build a distributed operator

To assemble a distributed operator, a geometric clustering first needs to be applied. The partition defined by the target geometric cluster tree is then used to define a row-wise distributed operator (see Row-wise distributed operator). Htool.DefaultApproximationBuilder builds such a row-wise distributed operator Htool.DistributedOperator, in which each block of rows is compressed using a Htool.HMatrix that can be accessed as a public member.

import mpi4py.MPI

default_approximation = Htool.DefaultApproximationBuilder(
    A,
    cluster,
    cluster,
    Htool.HMatrixTreeBuilder(epsilon, eta, "N", "N"),
    mpi4py.MPI.COMM_WORLD,
)

distributed_operator = default_approximation.distributed_operator
local_hmatrix = default_approximation.hmatrix

Use a distributed operator

Basic linear algebra is provided for Htool.DistributedOperator:

# Matrix vector product
nb_cols = hmatrix.shape[1]
x = np.random.rand(nb_cols)
y_1 = distributed_operator * x

# Matrix matrix product
X = np.asfortranarray(np.random.rand(nb_cols, 2))
Y_1 = distributed_operator @ X
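Since the operator is distributed over MPI processes, such a script is typically launched with mpirun, e.g., on four processes (the script name below is hypothetical):

mpirun -np 4 python distributed_operator_example.py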

DDM Solver

To solve the linear system associated with a Htool.DistributedOperator, an iterative solver can be used via a Htool.Solver object, which can be built with a Htool.DDMSolverBuilder. We give here an example of a simple block-Jacobi solver; in particular, we use Htool.DefaultApproximationBuilder.block_diagonal_hmatrix, which corresponds to the \(\mathcal{H}\)-matrix of the diagonal block associated with the local subdomain.

import copy

# Create DDM object
block_diagonal_hmatrix = copy.deepcopy(
    default_approximation.block_diagonal_hmatrix
)  # in-place factorization requires a deep copy

default_solver_builder = Htool.DDMSolverBuilder(
    default_approximation.distributed_operator, block_diagonal_hmatrix
)
solver = default_solver_builder.solver

# Prepare solver
hpddm_args = "-hpddm_compute_residual l2 "
solver.set_hpddm_args(hpddm_args)
solver.facto_one_level() # block_diagonal_hmatrix is factorized in-place

# Solve
x = ...  # unknowns
b = ...  # right-hand sides
solver.solve(x, b)
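As a quick sanity check, here is a sketch reusing only the objects built above: a right-hand side is manufactured from a known vector and the solver is expected to recover it (solve is assumed to fill x in place, as in the snippet above).

import numpy as np

x_ref = np.random.rand(nb_cols)    # manufactured solution, nb_cols as above
b = distributed_operator * x_ref   # corresponding right-hand side
x = np.zeros(nb_cols)              # solution vector, filled by solve
solver.solve(x, b)
print(np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))  # relative error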

Note

We rely on the external library HPDDM for the implementation of efficient iterative solvers. We refer to its cheatsheet, which lists the various options available for the solver.
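For instance, one could select GMRES with a given tolerance and maximum number of iterations before calling solver.solve (a sketch; the option values are arbitrary and should be adapted to the problem at hand):

solver.set_hpddm_args(
    "-hpddm_krylov_method gmres -hpddm_tol 1e-6 -hpddm_max_it 200 -hpddm_verbosity 1"
)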

Note

See example