[Math] How to reverse matrix-vector multiplication

matrices, numerical linear algebra, vectors

I'm using the simple matrix × vector multiplication below to calculate result. Now I wonder: how can I calculate vector if I know matrix and result, so that multiplying matrix by that vector again gives result?

Sorry, I don't know what this kind of multiplication is called; I was never deep into those math topics. I have a program that does my calculation, which I hope lets you understand and classify the multiplication:

// float[] matrix = [4x4] stored as a flat float[16], float[] vector = [4] column vector.
// Computes result = matrix * vector, where the entry in row i and column j of the
// matrix sits at flat index i + j * 4 (the loop variable names below are swapped
// relative to that convention, but the arithmetic is the same).
static float[] Multiply(float[] matrix, float[] vector)
{
    float[] result = new float[vector.Length];
    for (int column = 0; column < 4; column++)
    {
        for (int row = 0; row < 4; row++)
            result[column] += matrix[column + row * 4] * vector[row];
    }
    return result;
}

Update: I found this for inverted matrices, and I now remember that mathematicians don't care much about the complexity of things, but we programmers do. Is there no way to avoid the matrix inversion (to reduce the complexity)?

Solution: I implemented the raw 4×4 matrix inversion and, alternatively, inverted the matrix generation. In the end I got the very same matrices and the very same valid results for my vector. I chose the second path because it reduces the complexity to roughly that of the calculation in my first sentence above.
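
For anyone who lands here later: below is a sketch of what a direct solve (getting vector from matrix and result without forming an explicit inverse) can look like. It is illustrative only, not my exact code; it uses the same flat layout as the multiplication above, i.e. the entry in row i and column j of the matrix at index i + j * 4.

// Sketch: solve matrix * x = result for x by Gaussian elimination with
// partial pivoting, without building an explicit inverse.
static float[] SolveDirect(float[] matrix, float[] result)
{
    float[] a = (float[])matrix.Clone();   // working copy of the matrix
    float[] x = (float[])result.Clone();   // becomes the solution in place

    for (int p = 0; p < 4; p++)
    {
        // Partial pivoting: pick the row with the largest entry in column p.
        int best = p;
        for (int i = p + 1; i < 4; i++)
            if (Math.Abs(a[i + p * 4]) > Math.Abs(a[best + p * 4]))
                best = i;
        if (best != p)
        {
            for (int j = 0; j < 4; j++)
                (a[p + j * 4], a[best + j * 4]) = (a[best + j * 4], a[p + j * 4]);
            (x[p], x[best]) = (x[best], x[p]);
        }

        // Eliminate the entries below the pivot.
        for (int i = p + 1; i < 4; i++)
        {
            float f = a[i + p * 4] / a[p + p * 4];
            for (int j = p; j < 4; j++)
                a[i + j * 4] -= f * a[p + j * 4];
            x[i] -= f * x[p];
        }
    }

    // Back substitution.
    for (int i = 3; i >= 0; i--)
    {
        for (int j = i + 1; j < 4; j++)
            x[i] -= a[i + j * 4] * x[j];
        x[i] /= a[i + i * 4];
    }
    return x;
}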

Thank you for the help!

Best Answer

If one were to list 5 computational problems that are central to computer science, solving the matrix equation $Ax = b$ would be one of them. Here, $A$ is your matrix, $b$ is your result, and $x$ is the vector you are trying to solve for.
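
To make the correspondence concrete, here is a small worked instance (a $2 \times 2$ system with arbitrarily chosen numbers): given $A$ and $b$, you look for the $x$ that $A$ maps onto $b$,

$$\begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 5 \\ 10 \end{pmatrix} \quad\Longrightarrow\quad x = \begin{pmatrix} 1 \\ 3 \end{pmatrix}.$$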

Regardless of which method you decide to use, you should be using a library rather than hand-coding the algorithm. There are two classical methods for solving the linear system $Ax = b$. The first is Gaussian elimination, suitable for small to medium-sized, dense matrices. It is implemented in NumPy as numpy.linalg.solve and, more generally, in LAPACK as gesv. Gaussian elimination, in all its implementations, is exact in exact arithmetic, and so you can never beat the $O(n^3)$ scaling.
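
For instance, in C# (the language of the snippet in the question), the same one-call approach is available through the Math.NET Numerics library; a minimal sketch, assuming the MathNet.Numerics NuGet package is installed:

using System;
using MathNet.Numerics.LinearAlgebra;

class SolveExample
{
    static void Main()
    {
        // A is the known 4x4 matrix, b the known result.
        Matrix<double> A = Matrix<double>.Build.DenseOfArray(new double[,]
        {
            { 4, 0, 0, 0 },
            { 0, 3, 0, 0 },
            { 0, 0, 2, 0 },
            { 0, 0, 0, 1 },
        });
        Vector<double> b = Vector<double>.Build.Dense(new double[] { 8, 9, 4, 5 });

        // Solve A * x = b directly, without forming the inverse of A.
        Vector<double> x = A.Solve(b);
        Console.WriteLine(x);   // prints the solution (2, 3, 2, 5)
    }
}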

The second is the iterative approach, suitable for large, sparse matrices. There are several variations. If $A$ is symmetric and positive-definite, you can use Conjugate Gradients; otherwise, you will want to use GMRES. Actual library implementations are more varied, and you will have to search the internet to find one suitable for your purposes. Iterative methods exploit sparsity to run far faster than Gaussian elimination, at the cost of producing an approximation rather than an exact answer when stopped after finitely many iterations. However, given well-conditioned, sparse matrices, iterative methods achieve superb accuracy.
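
To give a flavour of what an iterative method does, here is a minimal, unpreconditioned Conjugate Gradients sketch; dense storage is used only for brevity (real solvers exploit a sparse matrix format, which is where the speed comes from), and $A$ is assumed symmetric positive-definite:

using System;

static class CgSketch
{
    // Conjugate Gradients for A*x = b, with A stored dense row-major
    // (entry (i, j) at A[i * n + j]). Illustrative, not production code.
    public static double[] Solve(double[] A, double[] b, int n,
                                 double tol = 1e-10, int maxIter = 1000)
    {
        double[] x = new double[n];          // initial guess x = 0
        double[] r = (double[])b.Clone();    // residual r = b - A*x = b
        double[] p = (double[])r.Clone();    // first search direction
        double rsOld = Dot(r, r);

        for (int k = 0; k < maxIter && Math.Sqrt(rsOld) > tol; k++)
        {
            // Ap = A * p (the only place the matrix is touched).
            double[] Ap = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    Ap[i] += A[i * n + j] * p[j];

            double alpha = rsOld / Dot(p, Ap);   // step length along p
            for (int i = 0; i < n; i++)
            {
                x[i] += alpha * p[i];
                r[i] -= alpha * Ap[i];
            }

            double rsNew = Dot(r, r);
            double beta = rsNew / rsOld;         // weight for the new direction
            for (int i = 0; i < n; i++) p[i] = r[i] + beta * p[i];
            rsOld = rsNew;
        }
        return x;
    }

    static double Dot(double[] a, double[] b)
    {
        double s = 0;
        for (int i = 0; i < a.Length; i++) s += a[i] * b[i];
        return s;
    }
}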

There is a third, less common option. If you know something special about the structure of your matrix, you can also look for structured matrix solvers. Given the nature of your question, this is likely not the case for your problem.

The bottom line is that you should search the internet to find a suitable library. If computational efficiency is very important to you, nothing that you come up with is going to beat the performance of subroutines in libraries like LAPACK.
