I'm stuck on a practice problem in which I have to devise an algorithm:
Suppose a sequence of integers $A_n$ is given, where $A_0,\ldots,A_{k-1}< 50$ are known initial terms, and each subsequent $A_i$ is a linear combination of the previous $k$ terms, i.e., $A_i = A_{i-1} b_1 + A_{i-2} b_2 + \dots + A_{i-k} b_k$. We are given $A_0$ through $A_{k-1}$ and the coefficients $b_1$ through $b_k$.
Devise an algorithm that computes $A_n \bmod 50$ in $O(\log n)$ time. You should treat $k$ as a constant, and you may assume that all arithmetic operations involving numbers of $O(\log n)$ bits take constant time.
Here's my thought process so far:
- using repeated squaring to get the $O(\log n)$ factor
- doing something similar to fast matrix powering, where I have a matrix, call it $M$, like \begin{bmatrix}
1 & 1\\
1 & 0
\end{bmatrix}, that multiplies the vector \begin{bmatrix} A_{n}\\ A_{n-1}\end{bmatrix}; to get $A_{n+1}$ you compute $M^n$ times some starting vector. But how do I account for using the $k$ previous terms instead of just two?
I'm mainly just stuck on how to apply repeated squaring and fast matrix powering to this problem. Would love a few pointers!
Best Answer
Keep $k$ consecutive terms as a column vector $$A_n' := \begin{bmatrix} A_{n-k+1} & A_{n-k+2} & \ldots & A_n\end{bmatrix}^T.$$
Then define the $k \times k$ matrix $$ M = \begin{bmatrix} 0&1&0&0&\ldots&0\\\\ 0&0&1&0&\ldots&0\\\\ 0&0&0&1&\ldots&0\\\\ \vdots&&&\vdots&&\vdots\\\\ 0&0&0&0&\ldots&1\\\\ b_k & b_{k-1} & b_{k-2} & b_{k-3} & \ldots & b_1 \\\\ \end{bmatrix} $$ Now $A_{n+1}' = MA_n'$, that is, multiplying by $M$ takes you one step forward (and still keeps the state of $k$ consecutive terms). Thus multiplying by $M^n$ takes you $n$ steps forward. If you have $M^n$, multiplying by it is a unit-cost operation since it has constant size $k \times k$ and its elements are integers smaller than $50$ (recalling that you are calculating everything modulo $50$).
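As a concrete sketch (in Python, with hypothetical function names; `b` holds $b_1,\ldots,b_k$ and the state vector is stored oldest-first, matching $A_n'$ above), constructing $M$ and taking one step might look like:

```python
# Sketch: build the k x k companion matrix M (mod 50) and advance one step.
# Assumes b = [b1, ..., bk] and the state a = [A_{n-k+1}, ..., A_n], oldest first.

MOD = 50

def companion_matrix(b):
    k = len(b)
    M = [[0] * k for _ in range(k)]
    for i in range(k - 1):
        M[i][i + 1] = 1                          # shift rows: row i selects entry i+1
    M[k - 1] = [bi % MOD for bi in reversed(b)]  # last row: b_k, b_{k-1}, ..., b_1
    return M

def step(M, a):
    """One application of M: maps A_n' to A_{n+1}' (mod 50)."""
    k = len(a)
    return [sum(M[i][j] * a[j] for j in range(k)) % MOD for i in range(k)]
```

For example, with $k=2$ and $b_1=b_2=1$ (Fibonacci), `companion_matrix([1, 1])` gives the matrix $\begin{bmatrix}0&1\\1&1\end{bmatrix}$, and repeated calls to `step` walk the sequence forward one term at a time.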
It remains to find $M^n$. Now matrix multiplications $M^aM^b = M^{a+b}$ are also unit-cost operations, when $M^a$ and $M^b$ are known, so you should be able to compute $M^n$ in roughly $O(\log n)$ time. Check out addition chain exponentiation.
What about $k$?
The analysis above rests on the stipulation that $k$ is a constant. What if we include $k$ in the complexity analysis? In particular, what if $k$ is also large?
Multiplication of two $k \times k$ matrices is an $O(k^c)$ operation where in practice $c=3$ or slightly less (theoretically we know some $c < 2.4$ algorithms, and there are speculations that the exponent could be made as small as $2$). So our repeated matrix multiplication is something like $O(k^3 \log n)$. If $k$ is relatively large, this starts to play a role.
Another method would just apply the $k$-term recurrence directly, computing $A_k, A_{k+1}, \ldots, A_n$ one at a time. Each step costs $O(k)$ work, for a total of $O(kn)$. Depending on the relative sizes of $k$ and $n$ this could be faster.
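A minimal sketch of this direct approach (hypothetical Python, same conventions as before: `a0` holds $A_0,\ldots,A_{k-1}$, `b` holds $b_1,\ldots,b_k$):

```python
# Sketch: compute A_n mod 50 by applying the recurrence term by term, O(k*n) total.

MOD = 50

def nth_term_iterative(a0, b, n):
    k = len(b)
    a = [x % MOD for x in a0]
    for i in range(k, n + 1):
        # A_i = A_{i-1} b_1 + A_{i-2} b_2 + ... + A_{i-k} b_k  (mod 50)
        nxt = sum(a[-(j + 1)] * b[j] for j in range(k)) % MOD
        a.append(nxt)
    return a[n]
```

(For clarity this keeps all computed terms; keeping only the last $k$ would reduce memory to $O(k)$ without changing the running time.)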
Whether there is some method between $O(k^c \log n)$ and $O(kn)$, where $c$ is the exponent in our matrix multiplication algorithm, I don't know. It would be very interesting.
At the other extreme, if $k$ is very small, one might be able to exploit the fact that we are working modulo $50$, so the $A_i'$ vector has only $50^k$ possible values. Sooner or later it will repeat itself. If you can find the period $t$, then you could do astronomically large instances fast by reducing $n$ modulo $t$.
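One way to sketch this period-finding idea (hypothetical Python): record each state vector until one repeats, which the pigeonhole principle guarantees within $50^k$ steps, then reduce $n$ into the detected cycle.

```python
# Sketch: the state (A_i, ..., A_{i+k-1}) takes at most 50^k values, so the state
# sequence is eventually periodic. Find the preperiod and period, then reduce n.

MOD = 50

def nth_term_periodic(a0, b, n):
    k = len(b)
    state = tuple(x % MOD for x in a0)   # state_i = (A_i, ..., A_{i+k-1}); this is state_0
    seen = {state: 0}                    # state -> first index where it occurred
    history = [state]
    i = 0
    while True:
        # New term A_{i+k} = b_1 A_{i+k-1} + ... + b_k A_i  (mod 50)
        nxt = sum(state[k - 1 - j] * b[j] for j in range(k)) % MOD
        state = state[1:] + (nxt,)
        i += 1
        if state in seen:
            start = seen[state]          # preperiod length
            period = i - start           # cycle length t
            idx = n if n < i else start + (n - start) % period
            return history[idx][0]       # A_idx is the first entry of state_idx
        seen[state] = i
        history.append(state)
```

This does $O(k)$ work per state for up to $50^k$ states before the cycle closes, so it only pays off when $k$ is very small and $n$ is astronomically large; after the one-time scan, any $n$ costs only a reduction modulo the period.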