[Math] For which applications are iterative methods particularly suitable for solving linear systems of equations?

linear-algebra, matrices, numerical-linear-algebra, systems-of-equations

Linear systems of equations can be solved either with direct methods, such as the LU decomposition, or with iterative methods. Iterative methods include the Gauss-Seidel method, successive over-relaxation, the Jacobi method, and others.

Iterative methods are computationally less demanding, as each iteration essentially requires only matrix-vector multiplications. However, an iterative approach may fail if the chosen method does not converge, or it may converge only slowly.
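To make this concrete, here is a minimal sketch of the Jacobi iteration in Python/NumPy (my own illustration, not taken from any particular reference, assuming a matrix for which the method converges, e.g. strictly diagonally dominant): each sweep costs essentially one matrix-vector product, and the iteration stops once a chosen tolerance is reached.

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=1000):
    """Jacobi iteration for A x = b (illustrative sketch).

    Each sweep costs essentially one matrix-vector product.
    Convergence is only guaranteed for suitable matrices,
    e.g. strictly diagonally dominant A.
    """
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for k in range(max_iter):
        x_new = (b - R @ x) / D    # one matrix-vector product per sweep
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1    # converged to the requested tolerance
        x = x_new
    return x, max_iter             # may not have converged

# Hypothetical example: strictly diagonally dominant, so Jacobi converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(x, iters)
```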

On the other hand, the direct approach is convenient: one obtains the exact solution (up to rounding errors) without having to worry about convergence or accuracy.

So, what are the applications for which we prefer iterative methods over direct methods?

Edit:
As noted in the comments, iterative methods may be used for large systems of equations where the precision of the solution is not that important. However, I still wonder in which applications these conditions occur.

Best Answer

To answer my own question, I did a short literature review of recent publications on iterative methods for solving linear systems.

Gauss-Seidel method (GS)

  • In https://arxiv.org/pdf/1411.2791.pdf, signal detection in wireless communication systems with many antennas and many users is considered. There, a linear system must be solved to compute a minimum mean square error (MMSE) estimate. The number of variables can be around 2000, which is not dramatically high, but the computation must be very fast.

  • In https://dl.acm.org/citation.cfm?id=2982437, GS is used for physically-based animation of soft bodies, as might appear in video games. The challenge here is the hard real-time constraint: the computation must finish within a few milliseconds. "Linear iterative methods are preferred in these cases since they provide approximate solutions within a given error tolerance and in a short amount of time." In this publication, GS is applied in a parallel fashion. Note that parallelization is a powerful advantage of GS over other iterative methods.

  • https://www.researchgate.net/profile/Matthias_Mueller14/publication/274479082_Unified_Particle_Physics_for_Real-Time_Applications/links/5538d62a0cf247b8587d5a6f.pdf treats the simulation of visual effects in real-time applications. In that work, objects are modeled as collections of particles that interact with each other through, for example, collisions. To simulate the motion of objects, optimization problems are solved to find the minimum change in kinetic energy, and these optimization problems require solving sets of linear equations. GS is particularly suitable because it can be parallelized. Moreover, GS allows a trade-off between simulation accuracy and computation time, regulated by the number of iterations (see the sketch after this list).
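As a rough illustration of why a fixed iteration budget is attractive in real-time settings, here is a minimal sequential Gauss-Seidel sketch in Python/NumPy (my own simplification, assuming a diagonally dominant or symmetric positive definite matrix; the papers above use parallel or constraint-based variants):

```python
import numpy as np

def gauss_seidel(A, b, num_sweeps=10):
    """Plain sequential Gauss-Seidel sweeps for A x = b.

    The fixed number of sweeps directly trades accuracy
    against computation time, which suits hard real-time
    budgets of a few milliseconds.
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(num_sweeps):
        for i in range(n):
            # Newly updated entries x[:i] are used immediately,
            # older entries x[i+1:] come from the previous sweep.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Hypothetical small system; more sweeps give a more accurate result.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b, num_sweeps=5))
print(gauss_seidel(A, b, num_sweeps=50))
```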

Successive Overrelaxation (SOR)

  • https://arxiv.org/pdf/1507.04588.pdf also treats signal detection in wireless communication systems.

  • In http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.8725&rep=rep1&type=pdf, SOR is applied to support vector machine algorithms used for classification. The rough idea of support vector machines is to distinguish between two classes of elements based on their features: the goal is to divide the multidimensional feature space by a plane such that elements of one class lie on one side and elements of the other class on the other side. Computing this plane requires solving an optimization problem that involves a matrix inversion. The number of elements, and hence the size of the problem, can be high, for example, more than 100000 (a minimal SOR sketch follows below).

    To be continued...
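For completeness, here is a minimal SOR sketch in Python/NumPy (my own illustration on a hypothetical small system, not the large SVM setting of the paper): SOR is a Gauss-Seidel sweep with a relaxation parameter omega, where omega = 1 recovers GS and a suitable 1 < omega < 2 can speed up convergence.

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-8, max_iter=1000):
    """Successive over-relaxation for A x = b.

    omega = 1 reduces to Gauss-Seidel; the optimal omega
    is problem-dependent and must be chosen or estimated.
    """
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            # Blend the old value with the Gauss-Seidel update.
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Hypothetical symmetric positive definite example.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iters = sor(A, b)
print(x, iters)
```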
