Hey there! If I understand your case correctly, you want to find the eigenvalues of this matrix, which amounts to finding the roots of its characteristic polynomial after evaluating a determinant. So, to build on Robert's idea, we want to use the equation
det(A$-\lambda$ I) = $0$ $~~~$(into which we can plug the given coefficient matrix).
det(A$-\lambda$ I) =
$\left|\begin{array}{ccc}
0-\lambda & 0 & \dfrac{3}{4} \\
1 & 0-\lambda & -\dfrac{9}{4} \\
0 & 1 & \dfrac{1}{4}-\lambda
\end{array} \right| = 0$, where $A$ is your coefficient matrix and $I$ is the identity matrix.
I = $\left[\begin{array}{ccr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array} \right]$
From here we can now find out the eigenvalues of the matrix A as follows:
$\underline {\text{Evaluating the determinant by cofactor expansion along the first column:}}$
= $~-\lambda
\left|\begin{array}{cc}
-\lambda & -\dfrac{9}{4} \\
1 & \dfrac{1}{4}-\lambda
\end{array} \right|
-1
\left|\begin{array}{cc}
0 & \dfrac{3}{4} \\
1 & \dfrac{1}{4}-\lambda
\end{array} \right|
+ 0
\left|\begin{array}{cc}
0 & \dfrac{3}{4} \\
-\lambda & -\dfrac{9}{4}
\end{array} \right|
$
$\Rightarrow ~ -\lambda
\left(\lambda^{2} -\dfrac{1}{4}\lambda + \dfrac{9}{4}\right)
-1\left(0 -\dfrac{3}{4}\right)
+ ~0\left(0 + \dfrac{3}{4}\lambda\right)
$
$\Rightarrow ~$ $-\lambda^3+\dfrac{1}{4}\lambda^2-\dfrac{9}{4}\lambda+\dfrac{3}{4}$, $~$ and we have obtained our characteristic polynomial:
$$P(\lambda)=-\lambda^3+\dfrac{1}{4}\lambda^2-\dfrac{9}{4}\lambda+\dfrac{3}{4}$$
If you need assistance on how to find the characteristic polynomial by evaluating the determinant, here is a reference: Computing Determinants
After solving this polynomial for its roots (eigenvalues) we get the following:
$\lambda \approx 0.329, \quad \lambda \approx -0.040 - 1.508i, \quad \lambda \approx -0.040 + 1.508i$
I believe all the roots except $\lambda \approx 0.329$ form a complex conjugate pair. Can someone else please verify that these are all of the roots of this polynomial and that the values I provided are correct? Thanks. I hope this helps, if this explanation is what you were looking for.
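For anyone who wants to verify numerically, here is a quick cross-check sketched in Python with NumPy (the matrix `A` below is the coefficient matrix from the determinant above): the eigenvalues of $A$ should coincide with the roots of $P(\lambda)$.

```python
import numpy as np

# coefficient matrix A, as written out in the determinant above
A = np.array([
    [0.0, 0.0,  0.75],
    [1.0, 0.0, -2.25],
    [0.0, 1.0,  0.25],
])

# eigenvalues of A
eigvals = np.linalg.eigvals(A)

# roots of P(lambda) = -lambda^3 + (1/4)lambda^2 - (9/4)lambda + 3/4
roots = np.roots([-1.0, 0.25, -2.25, 0.75])

print(np.sort_complex(eigvals))
print(np.sort_complex(roots))
```

Both print one real root near $0.33$ and a complex conjugate pair near $-0.04 \pm 1.51i$, agreeing with the values above.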
Take a favorite iterative procedure for computing eigenvalues; for concreteness, I'll restrict to ... the power iteration for the dominant eigenvalue. Does applying this procedure to $C$ correspond to a natural root-finding algorithm for $p$?
I'll restrict myself to power iteration proper as well, lest this answer get too long. Most of the variants of power iteration translate readily to a corresponding operation on polynomials, which I might discuss later in a separate answer if there's interest. (I've already said something on the QR algorithm in the comments.)
I'll consider the following form of the (upper Hessenberg) Frobenius companion matrix for $p(x)=c_n x^n +\cdots +c_1 x+c_0$:
$$\mathbf C=\begin{pmatrix}-\frac{c_{n-1}}{c_n}&\cdots&-\frac{c_1}{c_n}&-\frac{c_0}{c_n}\\1&&&\\&\ddots&&\\&&1&\end{pmatrix}$$
which is similar to another form you might be accustomed to:
$$\mathbf T\mathbf C\mathbf T^{-1}=\begin{pmatrix}&&&-\frac{c_0}{c_n}\\1&&&-\frac{c_1}{c_n}\\&\ddots&&\vdots\\&&1&-\frac{c_{n-1}}{c_n}\end{pmatrix}$$
with the similarity transformation $\mathbf T$ given by the unit upper triangular Toeplitz matrix
$$\mathbf T=\begin{pmatrix}1&\frac{c_{n-1}}{c_n}&\cdots&\frac{c_1}{c_n}\\&\ddots&\ddots&\vdots\\&&\ddots&\frac{c_{n-1}}{c_n}\\&&&1\end{pmatrix}$$
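As a quick sanity check on this similarity, here is a sketch in Python/NumPy (helper names are my own) that builds $\mathbf C$ and $\mathbf T$ for an example cubic and confirms that $\mathbf T\mathbf C\mathbf T^{-1}$ is the familiar last-column form:

```python
import numpy as np

def hessenberg_companion(c):
    """First-row (upper Hessenberg) companion matrix of
    p(x) = c[n] x^n + ... + c[1] x + c[0], coefficients ascending."""
    n = len(c) - 1
    C = np.zeros((n, n))
    C[0, :] = [-c[n - 1 - j] / c[n] for j in range(n)]
    C[np.arange(1, n), np.arange(n - 1)] = 1.0  # ones on the subdiagonal
    return C

def toeplitz_transform(c):
    """Unit upper triangular Toeplitz matrix T whose j-th
    superdiagonal holds c[n-j]/c[n]."""
    n = len(c) - 1
    T = np.eye(n)
    for j in range(1, n):
        T += np.diag(np.full(n - j, c[n - j] / c[n]), k=j)
    return T

# example: p(x) = 2x^3 + 3x^2 + 5x + 7, so c = [7, 5, 3, 2]
c = [7.0, 5.0, 3.0, 2.0]
C = hessenberg_companion(c)
T = toeplitz_transform(c)
# T C T^{-1} should have -c0/c3, -c1/c3, -c2/c3 in the last column
print(np.round(T @ C @ np.linalg.inv(T), 12))
```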
Now, the key is that there is an intimate relationship between polynomials, difference equations, and the Frobenius companion matrix; to wit, the difference equation
$$c_n u_{k+n}+\dots+c_1 u_{k+1}+c_0 u_k=0$$
has $p(x)$ as its characteristic polynomial, and its solutions $u_k$ take the form
$$u_k=w_1 \mu_1^k+\dots+w_n \mu_n^k$$
where the $\mu_j$ satisfy $p(\mu_j)=0$ with $|\mu_1|\geq \cdots \geq |\mu_n|$, and the $w_j$ are arbitrary constants. For the rest of this answer, I assume that there is only one dominant root $\mu_1$ (thus, $\mu_1$ is necessarily real and simple).
As $k\to \infty$, the ratio $\dfrac{u_{k+1}}{u_k}$ eventually tends to $\mu_1$, with the convergence rate depending on how far away the dominant root is from the other roots. Thus, one method to find the dominant root $\mu_1$ of a polynomial $p(x)$ would consist of iterating the recurrence relation for the $u_k$ (with starting values chosen such that $w_1$ in the expression for $u_k$ is nonzero) and forming successive ratios. This method is called the Bernoulli iteration.
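A minimal sketch of the Bernoulli iteration in Python (the function name and the example polynomial are mine):

```python
def bernoulli_dominant_root(c, u_start, its=60):
    """Bernoulli's method: iterate the linear recurrence
        c[n] u_{k+n} + ... + c[1] u_{k+1} + c[0] u_k = 0
    for p(x) = c[n] x^n + ... + c[0] (coefficients ascending),
    and return the last ratio u_{k+1}/u_k, which tends to the
    dominant root of p."""
    n = len(c) - 1
    u = list(u_start)  # starting values u_0, ..., u_{n-1}
    for _ in range(its):
        # solve the recurrence for the newest term u_{k+n}
        u.append(-sum(c[j] * u[j - n] for j in range(n)) / c[n])
    return u[-1] / u[-2]

# example: p(x) = (x - 3)(x - 1)(x + 1) = x^3 - 3x^2 - x + 3;
# the start (0, 0, 1) makes the dominant component w_1 nonzero
print(bernoulli_dominant_root([3.0, -1.0, -3.0, 1.0], [0.0, 0.0, 1.0]))  # ~ 3.0
```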
The connection with companion matrices can be seen if we note that
$$\begin{pmatrix}u_{k+1}\\u_k\\\vdots\\u_{k-n+2}\end{pmatrix}=\begin{pmatrix}-\frac{c_{n-1}}{c_n}&\cdots&-\frac{c_1}{c_n}&-\frac{c_0}{c_n}\\1&&&\\&\ddots&&\\&&1&\end{pmatrix}\cdot\begin{pmatrix}u_k\\u_{k-1}\\\vdots\\u_{k-n+1}\end{pmatrix}$$
(see for instance this related question); we can thus consider Bernoulli's method as equivalent to repeatedly multiplying some arbitrary starting vector $\mathbf u_0$ with $\mathbf C$ ($\mathbf u_j=\mathbf C\mathbf u_{j-1}$), after which the dominant eigenvalue is estimated as the modified Rayleigh quotient
$$\frac{\mathbf e_1^\top\mathbf C\mathbf u_j}{\mathbf e_1^\top\mathbf u_j}$$
where $\mathbf e_1$ is the first column of the identity matrix.
This leaves the problem of starting up the iteration. The customary way is to derive the starting values from the solutions of the equation
$$\mathbf T\mathbf u_0=\mathbf d$$
where $\mathbf d=(c_1\quad 2c_2\quad \cdots\quad n c_n)^\top$ represents the coefficients of $p^\prime(x)$.
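Putting the startup and the iteration together, here is a hedged Python/NumPy sketch of the whole procedure (helper names mine): form $\mathbf C$ and $\mathbf T$, solve $\mathbf T\mathbf u_0 = \mathbf d$ for the starting vector, iterate $\mathbf u_j = \mathbf C\mathbf u_{j-1}$, and return the modified Rayleigh quotient.

```python
import numpy as np

def dominant_root_via_companion(c, its=40):
    """Power iteration on the Hessenberg companion matrix C of
    p(x) = c[n] x^n + ... + c[1] x + c[0], started from the solution
    of T u0 = d with d = (c_1, 2 c_2, ..., n c_n)^T."""
    n = len(c) - 1
    # first-row (upper Hessenberg) companion form
    C = np.zeros((n, n))
    C[0, :] = [-c[n - 1 - j] / c[n] for j in range(n)]
    C[np.arange(1, n), np.arange(n - 1)] = 1.0
    # unit upper triangular Toeplitz similarity transform T
    T = np.eye(n)
    for j in range(1, n):
        T += np.diag(np.full(n - j, c[n - j] / c[n]), k=j)
    # d holds the coefficients of p'(x)
    d = np.array([(j + 1) * c[j + 1] for j in range(n)], dtype=float)
    u = np.linalg.solve(T, d)
    for _ in range(its):
        u = C @ u
        u = u / np.linalg.norm(u)  # normalize to keep entries finite
    # modified Rayleigh quotient e1^T C u / e1^T u
    return (C @ u)[0] / u[0]

# example: p(x) = (x - 3)(x - 1)(x + 1) = x^3 - 3x^2 - x + 3
print(dominant_root_via_companion([3.0, -1.0, -3.0, 1.0]))  # ~ 3.0
```

Note that this startup vector makes every $w_j$ in the difference-equation solution equal to $1$ (the $u_k$ become the power sums of the roots), so $w_1$ is guaranteed to be nonzero.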
As I mentioned earlier, most of the variants of power iteration, when specialized to the Frobenius companion matrix case, give rise to recursive methods for estimating the roots of a polynomial that directly act on the polynomial's coefficients. In particular, the Jenkins-Traub method, the modern descendant of the Bernoulli iteration, can be thought of either as a recursion on polynomials or on companion matrices. Some of the details are discussed in this paper.
Here's some sundry Mathematica code for playing around with Bernoulli's method:
(* example: polynomial with roots 1, ..., n *)
n = 5; p = Expand[Times @@ (x - Range[n])];

(* upper Hessenberg companion matrix of poly in x *)
cmat[poly_, x_] := Module[
  {cl = CoefficientList[poly, x], n = Exponent[poly, x]},
  SparseArray[{{1, i_} :> -cl[[n - i + 1]]/Last[cl],
    Band[{2, 1}] -> 1}, {n, n}]]

(* starting vector from T.u0 == d, with d the coefficients of p' *)
start = LinearSolve[ToeplitzMatrix[UnitVector[n, 1],
    (Reverse[Rest[#]/Last[#]]) &[CoefficientList[p, x]]],
  CoefficientList[D[p, x], x]];

(* Bernoulli's method: successive ratios from the recurrence *)
With[{prec = 20, its = 20},
 Ratios[Drop[LinearRecurrence[
    -(Reverse[Most[#]/Last[#]]) &[CoefficientList[p, x]],
    Reverse[N[start, prec]], its + n + 1], n - 1]]]

(* equivalent power iteration on the companion matrix *)
With[{prec = 20, its = 20},
 Table[((UnitVector[n, 1].cmat[p, x].#)/(UnitVector[n, 1].#)) &[
   MatrixPower[cmat[p, x], k, N[start, prec]]], {k, 0, its}]]
You can change p here to any polynomial whose largest (in magnitude) root is real (of course, you must then set n to be the degree of p).
Best Answer
Since $p(x)$ is biquadratic, if $\alpha$ is a root, then $-\alpha$ is also a root.
Looking at $M$ you can notice that if you sum the entries of each column, you'll always get $1$. This implies $1$ is an eigenvalue. (Do you know why?).
You have two roots now.
Continue with long division to find the remaining roots.
If you want to use the matrix to find all eigenvalues, recall that $\det (M)$ is the product of all eigenvalues. You can easily compute $\det (M)$ through expansion along the fourth column to find $\det (M)=9$.
Use the first sentence in my answer again to find the other eigenvalues.