[Math] Determinant of a symmetric, sparse matrix

determinant, linear algebra, sparse matrices

I'm trying to calculate the determinant of the $q \times q$ matrix shown here:

$$L(y)=\begin{bmatrix} V(1) & 1 & 0 & \cdots & 0 & 1 \\
1 & V(2) & 1 & \ddots & \ddots & 0 \\
0& 1 & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 1 & 0 \\
0 & \ddots & \ddots & 1 & V(q-1) & 1 \\
1 & 0 & \cdots & 0 & 1 & V(q) \\
\end{bmatrix}$$

where $V(k) = 2 \cos(2 \pi k \frac{p}{q} )$, and $y = \frac{p}{q}$.
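For concreteness, this is how I would build the matrix numerically; a plain NumPy sketch with my own helper name `build_L` (dense, so only a reference for small $q$, not a shortcut):

```python
import numpy as np

def build_L(p, q):
    """Dense q x q matrix L(y) for y = p/q, built directly from the definition.
    Reference only: O(q^2) storage.  Assumes q >= 4 so the corner 1's do not
    overlap the tridiagonal band."""
    V = 2.0 * np.cos(2.0 * np.pi * np.arange(1, q + 1) * p / q)   # V(1)..V(q)
    L = np.diag(V) + np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    L[0, q - 1] = L[q - 1, 0] = 1.0   # the two corner 1's that close the cycle
    return L
```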

(Purpose: I want to calculate a Lyapunov exponent, in order to render Hofstadter's Butterfly using a GLSL shader.)

The matrix is symmetric and sparse, but it is not positive definite, because its diagonal entries are not all positive.

My linear algebra is weak, but I'm trying to follow instructions and learn as I go. According to the Wikipedia article on computing determinants, I can't use Cholesky because the matrix is not positive definite, so LU decomposition looks like the next best bet.
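For reference, this is the generic LU route, sketched with SciPy's `lu_factor` (nothing here exploits the sparsity; it's just the $O(q^3)$ baseline I would compare any shortcut against):

```python
import numpy as np
from scipy.linalg import lu_factor

def det_via_lu(A):
    """det(A) from a dense LU factorization: det = sign(P) * prod(diag(U))."""
    lu, piv = lu_factor(A)                       # P A = L U, L has unit diagonal
    swaps = np.sum(piv != np.arange(len(piv)))   # number of row interchanges
    return (-1.0) ** swaps * np.prod(np.diag(lu))
```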

However, there are multiple algorithms for LU decomposition, and I'm starting to get lost in the weeds. I'm wondering whether there is some particular shortcut I can use because of the sparseness and the highly constrained pattern of zeros and ones in the matrix.

There is obviously a strong "algorithms" component to this question, so I debated whether to post it on cs.stackexchange.com. However, at this point I think my question is more on the math side: what properties does the above matrix have that might make it easy prey for shortcuts?

If it helps, $q$ is likely to be in the range from 4 up to at least 50, and ideally even up to 1000.

Update

If anyone is interested, the implementation of Hofstadter's Butterfly is here. I had promised long ago to post it, so here it is. As for the algorithm, it's been a long time and I don't remember many details, but apparently I ported code from this java applet from Physics by Computer. I can't say I really understand what it's doing, but it doesn't look like it's calculating the determinant of a matrix.

Best Answer

Add an appropriate multiple of row 2 to the last row so as to kill the $(q,1)$-th entry. Then add an appropriate multiple of row 3 to the last row to kill the $(q,2)$-th entry. Continuing in this manner, you can eliminate the first $q-2$ entries of the last row one by one.

Suppose that after the second-to-last step the last row has become $(0,0,\ldots,0,p,\ast,\ast)$ for some number $p$. Then, subtracting $p$ times the $(q-1)$-th row from the last row reduces the last row to the form $(0,0,\ldots,0,0,\alpha,\beta)$.
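In code, the whole series of row additions comes down to a short loop. Here is a Python/NumPy sketch (my own function name, not from the question; it assumes $q\ge4$ so that the corner $1$'s do not overlap the tridiagonal band):

```python
import numpy as np

def reduce_last_row(V):
    """Carry out the row additions described above on the last row of L(y).

    V is the length-q array of diagonal entries V(1)..V(q).  Returns
    (p, alpha, beta): the multiplier of the final step and the two surviving
    entries of the reduced last row (0, ..., 0, alpha, beta).
    Only three entries change per step, so the loop is O(q).  Assumes q >= 4.
    """
    q = len(V)
    row = np.zeros(q)
    row[0] = 1.0               # the corner entry in column 1
    row[q - 2] = 1.0
    row[q - 1] = V[q - 1]
    p = 0.0
    for c in range(q - 2):     # clear columns 1 .. q-2 (1-based) in turn
        m = row[c]             # row c+2 (1-based) has a 1 in this column
        row[c] = 0.0
        row[c + 1] -= m * V[c + 1]
        row[c + 2] -= m
        p = m                  # the multiplier of the final step is the p above
    return p, row[q - 2], row[q - 1]
```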

Now, in principle, we could apply the analogous series of column additions to the last column of the matrix and eliminate its first $q-2$ entries as well. Each of these column additions affects only the last column, so every entry of the last row except the last one is left unchanged. Moreover, since the column operations are merely transposed versions of the row operations, the last column must become $(0,0,\ldots,0,\alpha,\ast)^T$, where $\alpha$ is the same $\alpha$ mentioned before. The bottom-right entry is changed only by the final column addition, so it becomes $\beta-p\alpha$, where $\beta$ and $p$ are again the same numbers mentioned before.

Therefore, the outcome of the series of column operations is completely determined by the outcome of the series of row additions, so in practice we never need to carry out the column operations at all. All we need is to determine $p$, $\alpha$ and $\beta$ in the last two steps of row additions, and $L(y)$ is reduced to $$ M=\begin{bmatrix} V(1) & 1 & 0 & \cdots & 0 & 0 \\ 1 & V(2) & 1 & \ddots & \ddots & 0 \\ 0& 1 & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 1 & 0 \\ 0 & \ddots & \ddots & 1 & V(q-1) & \alpha \\ 0 & 0 & \cdots & 0 & \alpha & \beta-p\alpha \end{bmatrix}. $$ Since only three entries of the last row are updated in each step, the whole reduction takes $O(q)$ operations. Also, as row and column additions do not change the determinant of a matrix, $M$ and $L(y)$ have the same determinant.
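If you want to convince yourself numerically that the reduction preserves the determinant, you can assemble $M$ explicitly and compare (this sketch reuses `build_L` and `reduce_last_row` from above, so it is not standalone; the test values of $p$ and $q$ are arbitrary):

```python
import numpy as np

# Sanity check: det(M) should equal det(L(y)) for y = p_num/q.
p_num, q = 5, 12
V = 2.0 * np.cos(2.0 * np.pi * np.arange(1, q + 1) * p_num / q)
p, alpha, beta = reduce_last_row(V)

M = np.diag(V) + np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
M[q - 2, q - 1] = M[q - 1, q - 2] = alpha      # the off-diagonal alpha
M[q - 1, q - 1] = beta - p * alpha             # the corrected bottom-right entry

print(np.linalg.det(build_L(p_num, q)), np.linalg.det(M))   # should agree
```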

$M$ is tridiagonal, and it is well known that the determinant of a tridiagonal matrix can be evaluated recursively in $O(q)$ operations: denote by $D_k$ the determinant of the trailing $k\times k$ submatrix of $M$ (so that $\det L(y)=\det M=D_q$); then, by Laplace expansion, we have \begin{align} D_1&=\beta-p\alpha,\\ D_2&=V(q-1)(\beta-p\alpha) -\alpha^2,\\ D_k&=V(q+1-k)D_{k-1}-D_{k-2};\quad k\ge3. \end{align} So, in total, $\det L(y)$ can be evaluated in $O(q)$ operations.
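Putting the two stages together, here is a self-contained Python sketch of the full $O(q)$ computation (again with my own function name; the GLSL port is left to the asker):

```python
import numpy as np

def det_L_fast(p_num, q):
    """O(q) determinant of L(y) for y = p_num/q: row reduction to M,
    then the tridiagonal recurrence for D_1, ..., D_q.  Assumes q >= 4."""
    V = 2.0 * np.cos(2.0 * np.pi * np.arange(1, q + 1) * p_num / q)

    # --- row additions: reduce the last row to (0, ..., 0, alpha, beta) ---
    row = np.zeros(q)
    row[0] = 1.0
    row[q - 2] = 1.0
    row[q - 1] = V[q - 1]
    p = 0.0
    for c in range(q - 2):
        m = row[c]
        row[c] = 0.0
        row[c + 1] -= m * V[c + 1]
        row[c + 2] -= m
        p = m
    alpha, beta = row[q - 2], row[q - 1]

    # --- determinants of the trailing submatrices of M ---
    d_prev2 = beta - p * alpha                     # D_1
    d_prev1 = V[q - 2] * d_prev2 - alpha ** 2      # D_2
    for k in range(3, q + 1):                      # D_k = V(q+1-k) D_{k-1} - D_{k-2}
        d_prev2, d_prev1 = d_prev1, V[q - k] * d_prev1 - d_prev2
    return d_prev1                                 # D_q = det L(y)
```

Checking its output against `numpy.linalg.det` of the dense matrix for a few small values of $q$ is an easy way to validate a port.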
