[Math] Block Gauss-Seidel Iterative Method for Overdetermined Linear Systems

linear-algebra, matrices

I am interested in solving a large linear system with the block Gauss-Seidel iterative method. Suppose I have the following block matrix $A$ for the linear system $Ax=b$:

\begin{bmatrix} B & 0 & C & 0 \\ 0 & C & D & 0 \\ D & 0 & D & E \\ 0 & 0 & 0 & F \end{bmatrix}

where each block is $n \times n$. The right-hand side vector $b$ is the zero vector. So far everything is fine, because $A$ is square. However, I have one more equation:

$e^T x = \sum_i x_i = 1$

where $e$ is a vector of ones. To the best of my knowledge, the Gauss-Seidel method applies only when $A$ is square. In my case the linear system is overdetermined, hence not square. My questions are:

Can I use the Gauss-Seidel method for this problem with some pre-work?
If not, what would you recommend instead?

Thanks.

Best Answer

  • You can compute the least-squares solution, that is, find $\underset{x}{\arg \min} \|A x - b\|$, where $A$ is the non-square matrix $$ \begin{bmatrix} B & 0 & C & 0 \\ 0 & C & D & 0 \\ D & 0 & D & E \\ 0 & 0 & 0 & F\\ e^T & e^T & e^T & e^T \end{bmatrix} .$$ In the overdetermined case its solution is $x = (A^T A)^{-1} A^T b$ (where $b = (\underbrace{0, \ldots, 0}_{4 n}, 1)^T$), which can be obtained by applying the Gauss-Seidel method to the square normal equations $A^T A x = A^T b$; since $A^T A$ is symmetric positive definite whenever $A$ has full column rank, Gauss-Seidel is guaranteed to converge on it. However, $A^T A$ is, in this case, less sparse than $A$.
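A minimal sketch of this route in Python/NumPy. The scalar "blocks" ($n = 1$) and their values are arbitrary placeholders, not taken from the question; they only keep the demo tiny:

```python
import numpy as np

def gauss_seidel(M, rhs, tol=1e-12, max_iter=20000):
    """Pointwise Gauss-Seidel for M x = rhs (M must have a nonzero diagonal)."""
    x = np.zeros(len(rhs))
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(len(rhs)):
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x[i + 1:]
            x[i] = (rhs[i] - s) / M[i, i]
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Hypothetical scalar blocks (n = 1), chosen only so the matrix is invertible.
B, C, D, E, F = 2.0, 3.0, 1.0, 1.0, 2.0
A = np.array([[B, 0, C, 0],
              [0, C, D, 0],
              [D, 0, D, E],
              [0, 0, 0, F],
              [1, 1, 1, 1]])                 # appended constraint row e^T x = 1
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

# Normal equations: A^T A is SPD when A has full column rank,
# which guarantees Gauss-Seidel convergence.
M, rhs = A.T @ A, A.T @ b
x = gauss_seidel(M, rhs)
```

For the real problem one would of course exploit the block sparsity instead of forming dense arrays.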

  • Another method (which yields the same result as above) is to compute the singular value decomposition of $A$ and from it the least-squares solution $x = A^{+} b$, where $A^{+}$ is the Moore-Penrose pseudoinverse.
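A sketch of the SVD route (NumPy; $A$ is built from hypothetical random blocks, which almost surely give it full column rank):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Z = np.zeros((n, n))
B, C, D, E, F = (rng.standard_normal((n, n)) for _ in range(5))  # placeholder blocks
A = np.vstack([np.block([[B, Z, C, Z],
                         [Z, C, D, Z],
                         [D, Z, D, E],
                         [Z, Z, Z, F]]),
               np.ones((1, 4 * n))])         # appended constraint row e^T x = 1
b = np.zeros(4 * n + 1)
b[-1] = 1.0

# Thin SVD A = U diag(s) V^T; the least-squares solution is x = V diag(1/s) U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)                   # assumes every singular value is nonzero

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)  # library cross-check
```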

  • If the original $A$ (without the bottom row) is not full-rank, there is a simpler way to find the solution.

    1. Compute a basis of the null space of the original $A$ (for example, by the inverse iteration method).
    2. For every vector $v_k$ of that basis, divide it by the sum of its entries, giving $w_k = v_k / (e^T v_k)$. If there is only one basis vector (e.g. if the entire problem is known to have a unique solution), $w_1$ is the solution. Otherwise:
    3. The affine hull of the rescaled vectors $w_k$ (the set of $\sum_k \alpha_k w_k$ with $\sum_k \alpha_k = 1$; the $\alpha_k$ need not be non-negative) is the set of solutions.
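The steps above might be sketched as follows (NumPy; a generic rank-deficient matrix stands in for the rank-deficient block system, and the null-space basis comes from the SVD rather than inverse iteration):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 8
# Hypothetical rank-(m-1) matrix standing in for a rank-deficient A:
# the product of an m x (m-1) and an (m-1) x m factor has a 1-D null space.
A = rng.standard_normal((m, m - 1)) @ rng.standard_normal((m - 1, m))

# Step 1: null-space basis from the SVD (right singular vectors
# belonging to the zero singular values).
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10 * s[0]))
null_basis = Vt[rank:].T                 # columns span null(A)

# Step 2: rescale each basis vector so its entries sum to 1
# (assumes no column has entries summing to zero).
W = null_basis / null_basis.sum(axis=0)

# Step 3: here the null space is 1-D, so the single rescaled
# vector is the unique solution.
x = W[:, 0]
```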

Implementation note for the last method: in Matlab, the first step can be done with the null function. However, null computes the SVD of the original $A$ and does not work for sparse matrices, so using eigs to find the eigenvectors corresponding to the eigenvalue $0$ may be faster, especially since $A$ is sparse. Instead of passing $A$ to eigs, one can supply a handle to a function that computes $A^{-1} y$ for a given $y$, backed, for example, by a Gauss-Seidel solver.
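A Python analogue of that eigs trick (SciPy; a path-graph Laplacian serves as a hypothetical sparse singular matrix, since its null space is spanned by the all-ones vector, so the rescaled solution should be the constant vector $e/m$). Shift-invert with a small shift isolates the zero eigenvalue, and SciPy's OPinv parameter plays the role of Matlab's function handle if you want to plug in your own solver:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical sparse singular matrix: the Laplacian of a path graph on
# m nodes.  It is symmetric with a 1-D null space spanned by the ones vector.
m = 50
diag = np.full(m, 2.0)
diag[0] = diag[-1] = 1.0
L = sp.diags([-np.ones(m - 1), diag, -np.ones(m - 1)], [-1, 0, 1], format="csc")

# eigsh in shift-invert mode (the SciPy analogue of Matlab's eigs):
# a small negative shift makes L - sigma*I invertible, and the eigenvalue
# nearest sigma is the zero eigenvalue we want.  Supplying OPinv as a
# LinearOperator would replace the built-in factorization with your own
# (e.g. Gauss-Seidel) solver, mirroring the function-handle trick above.
vals, vecs = spla.eigsh(L, k=1, sigma=-0.01)

v = vecs[:, 0]
x = v / v.sum()                  # rescale so the entries sum to 1
```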