Assuming the singular value decomposition is known, how does the matrix-vector product of $\mathbf{A}$ ($m \times n$) with a vector $\mathbf{x}$ ($n \times 1$) have $O(m+n)$ complexity? Somewhat related: Efficient low rank matrix-vector multiplication, and this post on using the SVD to approximate matrix-vector multiplication.
The way I break it down, for $\mathbf{Ax} = \mathbf{U \Sigma V}^T\mathbf{x}$:
- the product $\mathbf{L} = \mathbf{V}^T\mathbf{x}$ has $O(n^2)$ complexity
- the product $\mathbf{H} = \mathbf{\Sigma}\mathbf{L}$ has $O(n)$ complexity
- the product $\mathbf{U}\mathbf{H}$ has $O(mn)$ complexity.
Why is the matrix-product complexity of order $O(m+n)$ instead of $O(n^2 + n + mn)$?
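The three-step breakdown above can be checked numerically with a thin SVD (a sketch using NumPy; the sizes $m = 6$, $n = 4$ are illustrative, not from the question):

```python
import numpy as np

m, n = 6, 4                       # illustrative sizes (assumed)
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

# Thin SVD: for m >= n, U is m x n, s holds the n singular values, Vt is n x n
U, s, Vt = np.linalg.svd(A, full_matrices=False)

L = Vt @ x      # O(n^2): n x n matrix times n-vector
H = s * L       # O(n): Sigma is diagonal, so this is elementwise scaling
y = U @ H       # O(mn): m x n matrix times n-vector

print(np.allclose(y, A @ x))  # True
```

With these full-rank factor sizes the total is indeed $O(n^2 + n + mn)$, which is exactly the puzzle posed above.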
Best Answer
If your question has the same conditions as the two posts you referred to, notice that they assume $A$ has rank $r$. Then $U$ is $m\times r$, $\Sigma$ is $r\times r$, $V^T$ is $r\times n$.
So
- $\mathbf{V}^T\mathbf{x}$ costs $O(rn)$ ($r\times n$ matrix times $n$-vector),
- $\mathbf{\Sigma}(\mathbf{V}^T\mathbf{x})$ costs $O(r)$ ($r\times r$ diagonal matrix times $r$-vector),
- $\mathbf{U}(\mathbf{\Sigma V}^T\mathbf{x})$ costs $O(rm)$ ($m\times r$ matrix times $r$-vector).

This gives $O(r(m+n))$, which is $O(m+n)$ if $r$ is fixed. That is the relevant regime when the problem involves many different $A$'s or $x$'s while the ranks of the $A$'s remain bounded by a constant.
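The rank-$r$ case can be sketched the same way: build a rank-$r$ matrix from its factors and multiply using only those factors, never forming $A$'s full product chain (NumPy sketch; the sizes and singular values are illustrative assumptions):

```python
import numpy as np

m, n, r = 2000, 1500, 5           # illustrative sizes; small fixed rank r (assumed)
rng = np.random.default_rng(1)

# Build a rank-r matrix A = U diag(s) V^T from orthonormal factors
U = np.linalg.qr(rng.standard_normal((m, r)))[0]   # m x r
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # n x r
s = np.array([5.0, 4.0, 3.0, 2.0, 1.0])            # r singular values
A = U @ np.diag(s) @ V.T
x = rng.standard_normal(n)

# Multiply using only the factors: O(rn) + O(r) + O(rm) = O(r(m+n))
y = U @ (s * (V.T @ x))

print(np.allclose(y, A @ x))  # True
```

For fixed $r$, the factored product touches $r(m+n+1)$ entries instead of the $mn$ entries a dense $A\mathbf{x}$ would, which is where the $O(m+n)$ claim comes from.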