These are multiple questions; I will try to answer some of them.
I assume that $a, b > 0$.
An equilibrium is isolated if there exists a small "ball" around it such that it is the only equilibrium in that ball. For example:
$$
\dot{x}_1 = 0
$$
has no isolated equilibrium because no matter how closely you look at an equilibrium, there are always infinitely many other equilibrium points close by. Similarly, the system
$$
\begin{align}
\dot{x}_1 &= x_1 - x_2 \\
\dot{x}_2 &= x_1 - x_2 \\
\end{align}
$$
has no isolated equilibrium: for example $x_1 = x_2 = 1$ is an equilibrium, but so are $x_1 = x_2 = 1.1$, $x_1 = x_2 = 1.01$, $x_1 = x_2 = 1.00000001$, and so on. No matter how closely you look, there are always "more" equilibrium points.
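This can be checked numerically; a minimal sketch (the function name `g` is just for illustration):

```python
import numpy as np

def g(x):
    """Right-hand side of the example system x1' = x1 - x2, x2' = x1 - x2."""
    x1, x2 = x
    return np.array([x1 - x2, x1 - x2])

# Every point with x1 = x2 is an equilibrium, so none of them is isolated.
for c in [1.0, 1.1, 1.01, 1.00000001]:
    assert np.allclose(g(np.array([c, c])), 0.0)
```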
So, to your example: you have
$$
\begin{align}
\dot{x}_1 &= x_2 + x_3 \\
\dot{x}_2 &= -a \sin(x_1) - b x_2 \\
\dot{x}_3 &= -a \sin(x_1) + x_3
\end{align}
$$
To get the equilibrium points you set $\dot{x} = 0$; the first equation then gives $x_2 = -x_3$. Then you get
$$
\begin{align}
\dot{x}_2 &= \dot{x}_3 = 0 \\
\implies -a \sin(x_1) - b x_2 &= -a \sin(x_1) + x_3 \\
\implies -b x_2 &= x_3 \\
\implies -b x_2 &= -x_2
\end{align}
$$
We assumed that $b > 0$, so if $b \neq 1$ the last equation can only be true if $x_2 = 0$, and then $x_3 = 0$ as well. If you insert that, you get
$$
-a \sin(x_1) = 0
$$
so the equilibrium points are $x_1 = k \pi,\ k \in \mathbb{Z},\ x_2 = 0,\ x_3 = 0$. Although that means you have infinitely many equilibrium points, they are all isolated, because the roots of $\sin(x_1) = 0$ are isolated. Conclusion: you can use Lyapunov's method.
If $b = 1$ the system has no isolated equilibrium points.
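A quick numerical sanity check of these equilibrium points; a minimal sketch assuming the illustrative values $a = 1$, $b = 2$ (any $a, b > 0$ with $b \neq 1$ behave the same way):

```python
import numpy as np

def f(x, a=1.0, b=2.0):
    """Right-hand side of the system, with illustrative values a = 1, b = 2."""
    x1, x2, x3 = x
    return np.array([
        x2 + x3,
        -a * np.sin(x1) - b * x2,
        -a * np.sin(x1) + x3,
    ])

# Every point (k*pi, 0, 0) is an equilibrium ...
for k in range(-3, 4):
    assert np.allclose(f(np.array([k * np.pi, 0.0, 0.0])), 0.0)

# ... and a small perturbation in x1 alone already makes the
# vector field nonzero, consistent with the equilibria being isolated.
assert not np.allclose(f(np.array([0.1, 0.0, 0.0])), 0.0)
```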
You can use the indirect method, but it only gives local results, and sometimes it is inconclusive (when you get eigenvalues with zero real part).
You have the function
$$
V(x) = \frac{1}{2} x^T x + a(1 - \cos(x_1))
$$
but it is not a Lyapunov function for your system:
$$
\dot{V}(x) = - b x_2^2 + x_1 x_2 + x_3^2 + x_1 x_3
$$
If $x_3 = 0$ and $x_1 = 2 b x_2$ you get $\dot{V}(x) = b x_2^2$ and that is positive no matter how small you make $x_2$. So this function can't be used.
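The counterexample is easy to verify numerically; a minimal sketch, again with the illustrative values $a = 1$, $b = 2$:

```python
import numpy as np

a, b = 1.0, 2.0  # illustrative values; the argument works for any a, b > 0

def f(x):
    """Right-hand side of the system."""
    x1, x2, x3 = x
    return np.array([x2 + x3,
                     -a * np.sin(x1) - b * x2,
                     -a * np.sin(x1) + x3])

def grad_V(x):
    """Gradient of V(x) = 0.5 * x.x + a * (1 - cos(x1))."""
    x1, x2, x3 = x
    return np.array([x1 + a * np.sin(x1), x2, x3])

def V_dot(x):
    """Derivative of V along trajectories."""
    return grad_V(x) @ f(x)

# Along x3 = 0, x1 = 2*b*x2 the derivative equals b*x2^2 > 0,
# arbitrarily close to the origin (the a*sin terms cancel exactly).
for x2 in [0.1, 0.01, 0.001]:
    x = np.array([2 * b * x2, x2, 0.0])
    assert V_dot(x) > 0
    assert np.isclose(V_dot(x), b * x2**2)
```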
With the help of Mathematica, using the matrix
$$
A = \left(
\begin{array}{cccc}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-k & -k-K & 0 & 0 \\
k & -k & 0 & -\delta \\
\end{array}
\right)
$$
and the command
`P = LyapunovSolve[A, Transpose[A], -IdentityMatrix[4]]`
we obtain
$$
P = \left[
\begin{array}{cccc}
-\frac{\delta ^2 \left(3 k K+k (3 k+2)+K^2+K\right)+K (k+K) (2 k+K+1)}{2 \delta k (k+K) (2 k+K)} & -\frac{\delta ^2 k+\delta ^2 K+K}{4 \delta k^2+2
\delta k K} & -\frac{1}{2} & -\frac{k+1}{2 (k+K)} \\
-\frac{\delta ^2 k+\delta ^2 K+K}{4 \delta k^2+2 \delta k K} & \frac{\delta ^2 (k+K)+K (2 k+K+1)}{2 \delta (k+K) (2 k+K)} & \frac{k+1}{2 (k+K)} &
-\frac{1}{2} \\
-\frac{1}{2} & \frac{k+1}{2 (k+K)} & -\frac{\frac{\delta ^2 \left(2 k^2+2 k K+k+K^2\right)}{k+K}+(k+1) K}{2 \delta k} & \frac{K}{2 \delta } \\
-\frac{k+1}{2 (k+K)} & -\frac{1}{2} & \frac{K}{2 \delta } & \frac{k K+K}{2 \delta k+2 \delta K} \\
\end{array}
\right]
$$
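The same computation can be reproduced numerically, e.g. with SciPy's `solve_continuous_lyapunov`, which solves $AX + XA^T = Q$ and therefore matches the three-argument `LyapunovSolve` call above. A sketch with illustrative parameter values (assumptions; the symbolic result holds for general $k$, $K$, $\delta$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative parameter values (assumptions for this check only).
k, K, delta = 1.0, 2.0, 0.5

A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-k, -k - K, 0.0, 0.0],
              [k, -k, 0.0, -delta]])

# solve_continuous_lyapunov(A, Q) returns X with A @ X + X @ A.T = Q,
# i.e. the same equation as LyapunovSolve[A, Transpose[A], -IdentityMatrix[4]].
P = solve_continuous_lyapunov(A, -np.eye(4))

# The residual of the Lyapunov equation vanishes, and P is symmetric.
assert np.allclose(A @ P + P @ A.T, -np.eye(4))
assert np.allclose(P, P.T)
```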
It is difficult in general to find a solution to this equation. One way would be to use Kronecker calculus, but in this case that would be of no help.
First note that the period of the system is $\pi$, so the stability condition is the existence of positive definite matrix-valued functions $P,Q:[0,k\pi]\to\mathbb{R}^{2\times 2}$, with $P$ differentiable and $Q$ continuous, verifying $P(0)=P(k\pi)$ and $Q(0)=Q(k\pi)$ for some $k\in\mathbb{N}$, $k\ne 0$, such that
$$ \dot{P}(t)+P(t) A(t) + A(t)^T P(t) + Q(t)=0 $$ holds for all $t\in[0,k\pi]$.
This is not analytically solvable in most cases and one often has to rely on algorithms. However, in the present case, one can exploit the nice structure of the matrix $A(t)$ to parametrize all the possible solutions.
To this end, define $$ Z(t)=\begin{bmatrix} \sin(t) & \cos(t)\\ -\cos(t) & \sin(t) \end{bmatrix} $$ and note that $Z(t)^TZ(t)=I$. Define also $$ J=\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix} $$ and observe that $\dot{Z}(t)=Z(t)J^T$.
We now observe that if we use the change of variables $z(t)=Z(t)^Tx(t)$ we get that
$$\dot{z}(t)=\dot{Z}(t)^Tx(t)+Z(t)^T\dot{x}(t)=(\dot{Z}(t)^TZ(t)+Z(t)^TA(t)Z(t))z(t).$$
Using now the fact that $\dot{Z}(t)=Z(t)J^T$, we get that $\dot{Z}(t)^TZ(t)=JZ(t)^TZ(t)=J$ and, so,
$$\dot{z}(t)=J_0z(t)$$
where $$ J_0=J+Z(t)^TA(t)Z(t)=\begin{bmatrix} 0 & 1\\ -1 & -a \end{bmatrix}. $$
So, this shows that by a proper change of variables, one can bring the system into LTI form, for which stability is easy to assess. In fact, all eigenvalues of $J_0$ have negative real part for every $a>0$.
Therefore, there exist some positive definite matrices $P_0$ and $Q_0$ such that
$$J_0^TP_0+P_0J_0+Q_0=0$$
holds.
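This existence claim is easy to check numerically; a minimal sketch with the illustrative value $a = 0.5$ and the choice $Q_0 = I$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

a = 0.5  # illustrative value; any a > 0 works

J0 = np.array([[0.0, 1.0],
               [-1.0, -a]])

# The eigenvalues of J0 all have negative real part for a > 0 ...
assert np.all(np.linalg.eigvals(J0).real < 0)

# ... so for Q0 = I the LTI Lyapunov equation J0^T P0 + P0 J0 = -Q0
# has a positive definite solution P0.
Q0 = np.eye(2)
P0 = solve_continuous_lyapunov(J0.T, -Q0)
assert np.all(np.linalg.eigvalsh(P0) > 0)
```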
From that, it is possible to show that $(P_0,Q_0)$ is a solution of the above Lyapunov equation if and only if $(P(t),Q(t))=(Z(t)P_0Z(t)^T,Z(t)Q_0Z(t)^T)$ is a solution to the time-varying Lyapunov equation.
The proof of this result follows from direct substitution of the expressions and very tedious algebra.
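Instead of the tedious algebra, one can at least verify the substitution numerically. A sketch with the illustrative value $a = 0.5$, reconstructing $A(t) = Z(t)(J_0 - J)Z(t)^T$ from the relation $J_0 = J + Z(t)^TA(t)Z(t)$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

a = 0.5  # illustrative value; any a > 0 works

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
J0 = np.array([[0.0, 1.0], [-1.0, -a]])

def Z(t):
    return np.array([[np.sin(t), np.cos(t)],
                     [-np.cos(t), np.sin(t)]])

def A(t):
    # A(t) recovered from J0 = J + Z(t)^T A(t) Z(t)
    return Z(t) @ (J0 - J) @ Z(t).T

Q0 = np.eye(2)
P0 = solve_continuous_lyapunov(J0.T, -Q0)  # J0^T P0 + P0 J0 = -Q0

def P(t):
    return Z(t) @ P0 @ Z(t).T

def Q(t):
    return Z(t) @ Q0 @ Z(t).T

def P_dot(t):
    # dP/dt follows from dZ/dt = Z J^T
    Zt = Z(t)
    return Zt @ J.T @ P0 @ Zt.T + Zt @ P0 @ J @ Zt.T

# The time-varying Lyapunov equation holds at every sampled t.
for t in np.linspace(0.0, 2 * np.pi, 25):
    residual = P_dot(t) + P(t) @ A(t) + A(t).T @ P(t) + Q(t)
    assert np.allclose(residual, 0.0)
```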
This allows us to state that the following statements are equivalent:

1. $$J_0^TP_0+P_0J_0+Q_0=0$$ holds.
2. $$ \dot{P}(t)+P(t) A(t) + A(t)^T P(t) + Q(t)=0 $$ holds for all $t\in[0,2\pi]$, where $(P(t),Q(t))=(Z(t)P_0Z(t)^T,Z(t)Q_0Z(t)^T)$.