Parallel Schur-Complement Linear Solver
- class MPISchurComplementLinearSolver(subproblem_solvers: Dict[int, LinearSolverInterface], schur_complement_solver: LinearSolverInterface)
Bases:
LinearSolverInterface
Solve the system Ax = b.
A must be block-bordered-diagonal and symmetric:

    K1                transpose(A1)
          K2          transpose(A2)
                K3    transpose(A3)
    A1    A2    A3    Q

Only the blocks on and below the diagonal (Ki, Ai, and Q) need to be supplied.
- Some assumptions are made on the block matrices provided to do_symbolic_factorization and do_numeric_factorization:
Q must be owned by all processes
Ki and Ai must be owned by the same process, for each i
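The block structure described above can be illustrated with a small dense example. The blocks K1, K2, A1, A2, and Q below are made-up toy data, and NumPy stands in for the distributed MPIBlockMatrix:

```python
import numpy as np

# Hypothetical 2-block illustration of the block-bordered-diagonal
# structure: diagonal blocks K1, K2; border blocks A1, A2; shared
# lower-right block Q.
K1 = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric diagonal block
K2 = np.array([[5.0, 0.0], [0.0, 2.0]])   # symmetric diagonal block
A1 = np.array([[1.0, 0.0]])               # border block coupling K1 to Q
A2 = np.array([[0.0, 1.0]])               # border block coupling K2 to Q
Q = np.array([[6.0]])                     # shared block, owned by all processes

Z = np.zeros((2, 2))
A = np.block([
    [K1, Z,  A1.T],
    [Z,  K2, A2.T],
    [A1, A2, Q],
])
assert np.allclose(A, A.T)  # the assembled matrix is symmetric
```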
- Parameters
- subproblem_solvers: dict
Dictionary mapping block index to linear solver
- schur_complement_solver: LinearSolverInterface
Linear solver to use for factorizing the Schur complement
- do_symbolic_factorization(matrix: MPIBlockMatrix, raise_on_error: bool = True, timer: Optional[HierarchicalTimer] = None) LinearSolverResults
Perform symbolic factorization. This performs symbolic factorization for each diagonal block and collects some information on the structure of the Schur complement for sparse communication in the numeric factorization phase.
- Parameters
- matrix: MPIBlockMatrix
A Pynumero MPIBlockMatrix. This is the A matrix in Ax=b
- raise_on_error: bool
If False, an error will not be raised if an error occurs during symbolic factorization. Instead, the status attribute of the results object will indicate that an error occurred.
- timer: HierarchicalTimer
A timer for profiling.
- Returns
- res: LinearSolverResults
The results object
- do_numeric_factorization(matrix: MPIBlockMatrix, raise_on_error: bool = True, timer: Optional[HierarchicalTimer] = None) LinearSolverResults
- Perform numeric factorization:
perform numeric factorization on each diagonal block
form and communicate the Schur complement
factorize the Schur complement
This method should only be called after do_symbolic_factorization.
- Parameters
- matrix: MPIBlockMatrix
A Pynumero MPIBlockMatrix. This is the A matrix in Ax=b
- raise_on_error: bool
If False, an error will not be raised if an error occurs during numeric factorization. Instead, the status attribute of the results object will indicate that an error occurred.
- timer: HierarchicalTimer
A timer for profiling.
- Returns
- res: LinearSolverResults
The results object
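The Schur complement formed in this phase is S = Q - sum_i Ai * inv(Ki) * transpose(Ai). A serial NumPy sketch of that formula, reusing the toy blocks from the earlier example (the parallel solver instead performs one factorization per process followed by communication):

```python
import numpy as np

# Same hypothetical blocks as the earlier example.
K1 = np.array([[4.0, 1.0], [1.0, 3.0]])
K2 = np.array([[5.0, 0.0], [0.0, 2.0]])
A1 = np.array([[1.0, 0.0]])
A2 = np.array([[0.0, 1.0]])
Q = np.array([[6.0]])

# S = Q - sum_i Ai @ inv(Ki) @ Ai.T, accumulated block by block.
S = Q.copy()
for Ai, Ki in ((A1, K1), (A2, K2)):
    S -= Ai @ np.linalg.solve(Ki, Ai.T)
# S is small and dense; the solver factorizes it with
# schur_complement_solver.
```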
- do_back_solve(rhs, timer=None)
Performs a back solve with the factorized matrix. Should only be called after do_numeric_factorization.
- Parameters
- rhs: MPIBlockVector
The right-hand side (b) in Ax=b
- timer: HierarchicalTimer
A timer for profiling.
- Returns
- result: MPIBlockVector
The solution (x) of Ax=b
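The back solve reduces the right-hand side onto the coupling variables, solves the Schur-complement system, then recovers each subproblem solution independently. A serial NumPy sketch, assuming the same toy blocks as the earlier examples:

```python
import numpy as np

# Same hypothetical blocks and Schur complement as above.
K1 = np.array([[4.0, 1.0], [1.0, 3.0]])
K2 = np.array([[5.0, 0.0], [0.0, 2.0]])
A1 = np.array([[1.0, 0.0]])
A2 = np.array([[0.0, 1.0]])
Q = np.array([[6.0]])
S = Q - A1 @ np.linalg.solve(K1, A1.T) - A2 @ np.linalg.solve(K2, A2.T)

# Toy right-hand side, split into the same block structure.
r1, r2, rQ = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0])

# 1) reduce the rhs onto the coupling variables
rhs_Q = rQ - A1 @ np.linalg.solve(K1, r1) - A2 @ np.linalg.solve(K2, r2)
# 2) solve the Schur-complement system for the coupling block
xQ = np.linalg.solve(S, rhs_Q)
# 3) recover each subproblem solution independently (parallel in practice)
x1 = np.linalg.solve(K1, r1 - A1.T @ xQ)
x2 = np.linalg.solve(K2, r2 - A2.T @ xQ)
```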
- get_inertia()
Get the inertia. Should only be called after do_numeric_factorization.
- Returns
- num_pos: int
The number of positive eigenvalues of A
- num_neg: int
The number of negative eigenvalues of A
- num_zero: int
The number of zero eigenvalues of A
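To illustrate what the three return values mean, here is a sketch computing the inertia of a small dense symmetric matrix directly from its eigenvalues (the matrix below is made-up; the solver itself aggregates inertia from its sub-solvers rather than computing eigenvalues):

```python
import numpy as np

# Toy symmetric matrix whose inertia we count explicitly.
A = np.array([[2.0,  0.0, 1.0],
              [0.0, -3.0, 0.0],
              [1.0,  0.0, 0.0]])
eigs = np.linalg.eigvalsh(A)
num_pos = int(np.sum(eigs > 1e-12))   # positive eigenvalues
num_neg = int(np.sum(eigs < -1e-12))  # negative eigenvalues
num_zero = len(eigs) - num_pos - num_neg
```

Interior-point methods use this count to decide whether the factorized KKT matrix has the correct inertia or needs regularization.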
- increase_memory_allocation(factor)
Increases the memory allocation of each sub-solver. This method should only be called if the results status from do_symbolic_factorization or do_numeric_factorization is LinearSolverStatus.not_enough_memory.
- Parameters
- factor: float
The factor by which to increase memory allocation. Should be greater than 1.
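A typical caller retries the factorization after increasing memory. The sketch below uses simplified stand-ins (the toy FakeSolver, its threshold, and the direct status return value are invented for illustration; the real method returns a results object whose status attribute carries this information):

```python
from enum import Enum

# Minimal stand-ins so the retry pattern is self-contained.
class LinearSolverStatus(Enum):
    successful = 0
    not_enough_memory = 1

class FakeSolver:
    """Toy solver that fails until its memory allocation reaches 4x."""
    def __init__(self):
        self.memory = 1.0
    def increase_memory_allocation(self, factor):
        self.memory *= factor
    def do_numeric_factorization(self, matrix, raise_on_error=False):
        if self.memory >= 4.0:
            return LinearSolverStatus.successful
        return LinearSolverStatus.not_enough_memory

def factorize_with_retries(solver, matrix, max_tries=5, factor=2.0):
    # Retry the factorization, growing memory by `factor` (> 1) each time.
    status = LinearSolverStatus.not_enough_memory
    for _ in range(max_tries):
        status = solver.do_numeric_factorization(matrix, raise_on_error=False)
        if status != LinearSolverStatus.not_enough_memory:
            break
        solver.increase_memory_allocation(factor)
    return status
```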