The Generalized Minimal RESidual (GMRES) method is an iterative method for the numerical solution of a system of linear equations, developed by Yousef Saad and Martin H. Schultz in 1986. The subroutine used in Lagamine comes from the open-source SPARSKIT library.
The equation system is: \[[K]\underline{x}=\underline{F}\]
where $[K]$ is the stiffness matrix, $\underline{x}$ is the displacement vector and $\underline{F}$ is the vector of out-of-balance forces.
One defines the residual $\underline{r}^i$: \[\underline{r}^i =\underline{F}-[K]\underline{x}^i\]
where $\underline{x}^i$ is the approximation of the solution at iteration $i$.
At each iteration $i$, a Krylov subspace $D_i$ is constructed with Arnoldi's method, which produces an orthonormal basis of $D_i$: \[D_i = \operatorname{span}(\underline{p}_1, [K]\underline{p}_1,\ldots,[K]^{i-1}\underline{p}_1) \text{ with } \underline{p}_1=\frac{\underline{r}^0}{\|\underline{r}^0\|_2}\]
At each iteration, one searches for $\underline{x}^i$ in the form: \[\underline{x}^i=\underline{x}^0+[V]^i\underline{y}^i\]
where $[V]^i$ is the array whose columns are the Krylov subspace basis vectors and $\underline{y}^i$ is the vector of size $i$ that minimizes the residual norm: \[\underline{y}^i=\arg\min_{\underline{y}} J(\underline{y}) \text{ with } J(\underline{y})=\|\underline{F}-[K](\underline{x}^0+[V]^i\underline{y})\|\]
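To make these steps concrete, here is a minimal, unrestarted GMRES sketch in Python/NumPy, using dense storage for readability. It is only an illustration of the algorithm described above, not the Lagamine subroutine (which comes from SPARSKIT); the function and variable names are illustrative.

```python
import numpy as np

def gmres_sketch(K, F, x0, eps=1e-8, max_iter=50):
    """Minimal unrestarted GMRES: Arnoldi builds an orthonormal Krylov
    basis V, and y minimizes the residual norm over that subspace."""
    r0 = F - K @ x0                         # initial residual r^0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0                           # x0 is already the solution
    n = K.shape[0]
    V = np.zeros((n, max_iter + 1))         # Krylov basis vectors [V]
    H = np.zeros((max_iter + 1, max_iter))  # Hessenberg matrix from Arnoldi
    V[:, 0] = r0 / beta                     # p_1 = r^0 / ||r^0||_2
    for i in range(max_iter):
        w = K @ V[:, i]                     # expand the subspace with [K]p
        for j in range(i + 1):              # modified Gram-Schmidt
            H[j, i] = V[:, j] @ w
            w -= H[j, i] * V[:, j]
        H[i + 1, i] = np.linalg.norm(w)
        if H[i + 1, i] > 1e-14:             # guard against breakdown
            V[:, i + 1] = w / H[i + 1, i]
        # small least-squares problem: ||beta*e1 - H y|| = ||F - K x^i||
        e1 = np.zeros(i + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:i + 2, :i + 1], e1, rcond=None)
        if np.linalg.norm(e1 - H[:i + 2, :i + 1] @ y) < eps * beta:
            break                           # ||r^i|| < eps * ||r^0||
    return x0 + V[:, :i + 1] @ y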
The method is considered to have converged when the residual norm is sufficiently reduced, according to the following criterion: \[\|\underline{r}^i\|<\varepsilon \|\underline{r}^0\|\]
where $\varepsilon$ is a tolerance coefficient.
If the residual norm falls below an absolute threshold, GMRES also considers that convergence is reached: \[\|\underline{r}^i\|<\text{restol}\]
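Both criteria have direct counterparts in off-the-shelf solvers. As an illustration (SciPy's interface, not Lagamine's), `scipy.sparse.linalg.gmres` exposes a relative tolerance playing the role of $\varepsilon$ and an absolute one playing the role of restol; recent SciPy versions name them `rtol` and `atol` (older releases call the relative one `tol`):

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import gmres

n = 200
# synthetic well-conditioned sparse system for demonstration
K = (sprandom(n, n, density=0.02, random_state=0) + 10 * identity(n)).tocsr()
F = np.ones(n)

# rtol matches ||r^i|| < eps*||r^0||, atol matches ||r^i|| < restol
x, info = gmres(K, F, rtol=1e-8, atol=1e-12, restart=30, maxiter=1000)
print("converged" if info == 0 else f"info = {info}")
print("residual norm:", np.linalg.norm(F - K @ x))
```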
The GMRES method requires the following parameters:
Preconditioning is a technique used to speed up convergence. The convergence rate depends on the distribution and values of the eigenvalues of the stiffness matrix $[K]$.
Suppose that $[M]$ is a symmetric, positive-definite matrix that approximates $[K]$ but is easier to invert. The equation system can then be solved indirectly by solving: \[ [M]^{-1}[K]\underline{x}=[M]^{-1}\underline{F}\]
If the eigenvalues of $[M]^{-1}[K]$ are smaller and better clustered than those of $[K]$, then the number of iterations needed with preconditioning will be lower than the number needed without it.
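This clustering effect can be checked numerically. The sketch below uses a simple Jacobi (diagonal) preconditioner as $[M]$ on a synthetic matrix; it is only an illustration of how the spectrum of $[M]^{-1}[K]$ tightens compared with that of $[K]$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# stiffness-like matrix with widely spread diagonal entries
K = np.diag(np.linspace(1.0, 1e4, n)) + 0.1 * rng.standard_normal((n, n))

# Jacobi (diagonal) preconditioner: a crude but cheap choice of [M]
M = np.diag(np.diag(K))
eig_K = np.linalg.eigvals(K)
eig_MK = np.linalg.eigvals(np.linalg.solve(M, K))

# the preconditioned spectrum clusters near 1; the original is spread out
print("spread of K:      ", eig_K.real.min(), "to", eig_K.real.max())
print("spread of M^-1 K: ", eig_MK.real.min(), "to", eig_MK.real.max())
```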
One of the simplest ways of defining a preconditioner is to perform an incomplete LU factorization of the original matrix $[K]$. This entails a decomposition of the form: \[[K]=[L]_k[U]_k-[R]\]
where $[L]_k$ and $[U]_k$ have the same nonzero structure as the lower and upper parts of $[K]$ respectively, and $[R]$ is the residual error of the factorization; the preconditioner is then $[M]=[L]_k[U]_k$.
The fill-in is controlled by a drop tolerance, which avoids too large a bandwidth for $[L]_k$ and $[U]_k$: an off-diagonal component is kept only if its magnitude is not smaller than the tolerance value.
The preconditioning requires two parameters:
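For illustration only (the names below are SciPy's, not Lagamine's), incomplete LU preconditioners are typically driven by exactly two such parameters, a drop tolerance and a fill level; SciPy's `spilu` calls them `drop_tol` and `fill_factor`, and the resulting factorization can be passed to GMRES as the action of $[M]^{-1}$:

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import LinearOperator, gmres, spilu

n = 200
K = (sprandom(n, n, density=0.02, random_state=0) + 10 * identity(n)).tocsc()
F = np.ones(n)

# incomplete LU factorization [M] = [L]_k [U]_k of [K]:
# drop_tol discards small off-diagonal entries, fill_factor bounds fill-in
ilu = spilu(K, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)   # applies [M]^{-1} to a vector

x, info = gmres(K, F, M=M, rtol=1e-8)          # rtol: recent SciPy naming
print("info:", info, "| residual norm:", np.linalg.norm(F - K @ x))
```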