Well, in the $1 \times 1$ case, a matrix is positive semi-definite precisely when its single entry is a non-negative number, and a random variable $X$ has zero variance if and only if it is a.s. constant. (If you don't know what ‘a.s.’ means, you may ignore it throughout this discussion.) Indeed, assuming $\mathbb{E}[X] = 0$ (which is no loss of generality), we have
$$\operatorname{Var} X = \int_{\Omega} X^2 \mathrm{d} \mathbb{P}$$
and since the integrand is non-negative, this is zero if and only if the integrand is a.s. zero, i.e. if and only if $X = 0$ a.s. In the case where $\mathbb{E}[X] \ne 0$, we have (by linearity) $\operatorname{Var} X = 0$ if and only if $X = \mathbb{E}[X]$ a.s.
In general, if you have an $n \times n$ symmetric matrix $V$, there is an orthogonal matrix $Q$ such that $Q V Q^{\sf T}$ is a diagonal matrix $D$, and $V$ is positive semi-definite if and only if the diagonal entries of $D$ are all non-negative. But if $V$ is the covariance matrix of $\mathbf{X}$, then $D$ is the covariance matrix of $Q \mathbf{X}$, and so $V$ is positive semi-definite but not positive definite if and only if some component of $Q \mathbf{X}$ is a.s. constant. This happens if and only if some nontrivial linear combination of the components of $\mathbf{X}$ is a.s. constant, i.e. ‘fully correlated’, to use your phrasing.
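To make the last point concrete, here is a minimal numerical sketch (assuming NumPy; the sample size and the particular dependence $x_3 = 2x_1 - x_2$ are my own illustrative choices). The covariance matrix of linearly dependent variables comes out PSD but singular, and the eigenvector of the zero eigenvalue recovers the a.s. constant linear combination:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three variables, but the third is a fixed linear combination of the
# first two, so 2*x1 - x2 - x3 is identically zero.
x1 = rng.normal(size=10_000)
x2 = rng.normal(size=10_000)
x3 = 2.0 * x1 - x2

V = np.cov(np.vstack([x1, x2, x3]))   # 3x3 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(V)  # eigenvalues in ascending order

print(eigvals)        # all >= 0 up to rounding; the smallest is ~0
print(eigvecs[:, 0])  # proportional to (2, -1, -1): the constant combination
```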
I doubt that this is the best approach since it basically only uses the definition of positive semidefinite (PSD) matrices and requires a fact about the pointwise product of PSD matrices, but maybe it will give you some ideas.
We have
$$
\sin^2(x_i-x_j) = \frac{1}{2} - \frac{1}{2}\cos(2(x_i-x_j))
= \frac{1}{2} - \frac{1}{2}\big[ \cos(2x_i)\cos(2x_j) + \sin(2x_i)\sin(2x_j) \big].
$$
Therefore we can write,
$$
A_{i,j} = e^{-\lambda/2}\exp\left(\frac{\lambda}{2}\cos(2x_i)\cos(2x_j) +\frac{\lambda}{2}\sin(2x_i)\sin(2x_j) \right)
$$
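As a quick sanity check that this rewriting is correct (a sketch assuming NumPy; the value of $\lambda$ and the grid of points are arbitrary choices of mine, with the kernel taken to be $A_{i,j} = \exp(-\lambda \sin^2(x_i - x_j))$ as above):

```python
import numpy as np

lam = 1.7                      # arbitrary lambda > 0
x = np.linspace(0.0, 3.0, 8)   # arbitrary sample points
d = x[:, None] - x[None, :]    # pairwise differences x_i - x_j

A_direct = np.exp(-lam * np.sin(d) ** 2)
A_rewritten = np.exp(-lam / 2) * np.exp(
    (lam / 2) * np.outer(np.cos(2 * x), np.cos(2 * x))
    + (lam / 2) * np.outer(np.sin(2 * x), np.sin(2 * x))
)

print(np.allclose(A_direct, A_rewritten))  # True
```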
Now, note that the matrix,
$$
B_{i,j} = \frac{\lambda}{2}\cos(2x_i)\cos(2x_j) + \frac{\lambda}{2}\sin(2x_i)\sin(2x_j)
$$
is the sum of two rank-1 outer products, namely $\frac{\lambda}{2}\mathbf{u}\mathbf{u}^{\sf T} + \frac{\lambda}{2}\mathbf{v}\mathbf{v}^{\sf T}$ with $u_i = \cos(2x_i)$ and $v_i = \sin(2x_i)$, and is therefore PSD (for $\lambda \ge 0$).
Note: Maybe for your needs it is sufficient to show that the part of the kernel in the exponent is a sum of separable functions, in which case you are done.
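Here is a quick numerical check that $B$ is PSD (same assumed $\lambda$ and grid as in the sketch above; again just an illustration, not part of the proof):

```python
import numpy as np

lam = 1.7
x = np.linspace(0.0, 3.0, 8)
u, v = np.cos(2 * x), np.sin(2 * x)

# B = (lam/2) * (u u^T + v v^T): a sum of two rank-1 outer products
B = (lam / 2) * (np.outer(u, u) + np.outer(v, v))

print(np.linalg.eigvalsh(B))  # at most two nonzero eigenvalues, all >= 0
```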
Using the Taylor expansion of the exponential, we have
$$
A_{i,j}
= e^{-\lambda/2} \sum_{k=0}^{\infty} \frac{1}{k!} \left( \frac{\lambda}{2}\cos(2x_i)\cos(2x_j) + \frac{\lambda}{2}\sin(2x_i)\sin(2x_j) \right)^k
= e^{-\lambda/2} \sum_{k=0}^{\infty} \frac{1}{k!} B_{i,j}^k.
$$
So in matrix form, using $\circ$ to denote the pointwise (Hadamard) product of matrices,
$$
A = e^{-\lambda/2} \left( J + B + \frac{1}{2!} B\circ B + \frac{1}{3!} B\circ B\circ B + \cdots \right),
$$
where $J = \mathbf{1}\mathbf{1}^{\sf T}$ is the all-ones matrix, i.e. the $k = 0$ entrywise power of $B$.
Finally, we claim that this matrix is PSD.
First, note that the pointwise product of two PSD matrices is again PSD (the Schur product theorem). Therefore each of the terms $B\circ B\circ \cdots \circ B$ is PSD, and $J = \mathbf{1}\mathbf{1}^{\sf T}$ is PSD as a rank-1 outer product. The sum of PSD matrices is also PSD, so every partial sum of the series is PSD.
The series converges (which you know it does), and PSD-ness survives the limit: for any vector $\mathbf{z}$, the quantity $\mathbf{z}^{\sf T} A \mathbf{z}$ is the limit of the non-negative numbers $\mathbf{z}^{\sf T} A_m \mathbf{z}$, where $A_m$ denotes the $m$-th partial sum.
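To close the loop numerically (same assumed $\lambda$ and grid as before; truncating the series at 30 terms is ample here since the entries of $B$ are bounded), the entrywise series reproduces $A$, and the spectrum of $A$ is non-negative:

```python
import numpy as np

lam = 1.7
x = np.linspace(0.0, 3.0, 8)
u, v = np.cos(2 * x), np.sin(2 * x)
B = (lam / 2) * (np.outer(u, u) + np.outer(v, v))

# Partial sums of J + B + (B∘B)/2! + ...; J (all-ones) is the k = 0 term.
S = np.ones_like(B)
term = np.ones_like(B)
for k in range(1, 30):
    term = term * B / k  # entrywise: next Hadamard-power term B∘...∘B / k!
    S = S + term

A_series = np.exp(-lam / 2) * S
A_direct = np.exp(-lam * np.sin(x[:, None] - x[None, :]) ** 2)

print(np.allclose(A_series, A_direct))     # True
print(np.linalg.eigvalsh(A_direct).min())  # >= 0 up to rounding
```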
Best Answer
I found a sufficient answer in other posts:
Gaussian kernels for arbitrary metric spaces
Is the exponential of −d a positive definite kernel for a general metric space (X,d)?
For further details on which metric spaces yield PSD kernels of this type, I highly recommend taking a look at the following papers:
Open Problem: Kernel methods on manifolds and metric spaces
Geodesic exponential kernels: When Curvature and Linearity Conflict