This appears to be a uniform distribution over a rectangular region whose dimensions are not specified. Rotate the region about the origin by $-45^{\circ}$ so that its sides become parallel to the coordinate axes.
After this rotation we have two independent random variables, uniformly distributed over $[-a,a]$ and $[-b,b]$ respectively.
The covariance matrix is easy to calculate now:
$$\begin{bmatrix}\frac{a^2}3&0\\
0&\frac{b^2}3\end{bmatrix}.$$
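As a quick sanity check, the sample covariance of two independent uniforms should approach this diagonal matrix; here is a Monte Carlo sketch in NumPy (the values of $a$ and $b$ are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 3.0          # arbitrary side half-lengths, for illustration only
n = 200_000

x = rng.uniform(-a, a, n)
y = rng.uniform(-b, b, n)

cov = np.cov(x, y)       # 2x2 sample covariance matrix
print(cov)
# Theory predicts diag(a^2/3, b^2/3) with vanishing off-diagonal entries.
```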
The eigenvectors are
$$\begin{bmatrix}1\\
0\end{bmatrix} \text{ and } \begin{bmatrix}\ 0\\
1\end{bmatrix}$$
and the corresponding eigenvalues are $$\frac{a^2}3\text{ and } \frac{b^2}3.$$
Using the fact that the ratio of the eigenvalues is $3$, i.e. $b^2=3a^2$, we can write the covariance matrix as
$$\begin{bmatrix}\frac{a^2}3&0\\
0&a^2\end{bmatrix}.$$
But this is the covariance matrix of the rotated experiment. We have to rotate back by $45^{\circ}$. The rotation matrix is
$$\begin{bmatrix}\frac1{\sqrt2}&-\frac1{\sqrt2}\\\frac1{\sqrt2}&\frac1{\sqrt2}\end{bmatrix}.$$
So, the covariance matrix of the original distribution is
$$\begin{bmatrix}\frac1{\sqrt2}&-\frac1{\sqrt2}\\\frac1{\sqrt2}&\frac1{\sqrt2}\end{bmatrix}\begin{bmatrix}\frac{a^2}3&0\\
0&a^2\end{bmatrix}\begin{bmatrix}\frac1{\sqrt2}&\frac1{\sqrt2}\\-\frac1{\sqrt2}&\frac1{\sqrt2}\end{bmatrix}=\frac{a^2}3\begin{bmatrix}2&-1\\-1&2\end{bmatrix}.$$
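The back-rotation of the covariance matrix, $R\,D\,R^{T}$, can be verified numerically; a minimal sketch (the value of $a$ is an arbitrary choice):

```python
import numpy as np

a = 1.5                            # arbitrary scale, for illustration
s = 1 / np.sqrt(2)
R = np.array([[s, -s],
              [s,  s]])            # rotation by +45 degrees
D = np.diag([a**2 / 3, a**2])      # eigenvalues of the axis-aligned covariance

cov = R @ D @ R.T                  # conjugate back to the original frame
expected = (a**2 / 3) * np.array([[2, -1],
                                  [-1, 2]])
print(np.allclose(cov, expected))  # True
```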
$a$ is still unknown. Notice that only one equation was given while there were two unknowns.
Let $S\in M_n(\mathbb R)$ be the symmetric matrix under consideration.
Assume, as a preliminary step, that the eigenvalues of $S$ are pairwise distinct. Picking a unit eigenvector for each eigenvalue gives us an orthogonal, even an orthonormal, system of size $n$ in $\mathbb R^n$, hence an orthonormal basis.
Bundling the chosen eigenvectors as column vectors yields an $n\times n$ matrix, let's call it $O$, and the orthonormality can be expressed as $\,O^T\!O=\mathbb 1_n\,$, which (since the dimension is finite) is equivalent to $\,OO^T=\mathbb 1_n\,$, i.e. $\,O^{\,T}=O^{\,-1}$.
Thus $O$ is an orthogonal matrix.
Recall that orthogonal matrices (preserving orthogonality and norms) are precisely those which transform any orthonormal basis into an(other) orthonormal basis.
By definition of $O$ we have $$SO\,=\,OD\;\iff\; S\,=\,OD\,O^T$$
with $D$ denoting a diagonal matrix containing the eigenvalues in the appropriate order. So $S$ is diagonalisable, and one may say "diagonalisable with respect to an orthonormal basis".
This establishes a notable and most useful property of symmetric matrices.
And it holds in full generality, i.e., after dropping the initial assumption of distinct eigenvalues: within each eigenspace, independently of the others, one can choose an orthonormal basis of that subspace and proceed in the same way (eigenvectors belonging to distinct eigenvalues of a symmetric matrix are automatically orthogonal).
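The argument can be illustrated numerically; a minimal sketch using NumPy's `eigh`, which returns an orthonormal set of eigenvectors for a real symmetric matrix (even in the presence of repeated eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = (A + A.T) / 2                  # an arbitrary real symmetric matrix

evals, O = np.linalg.eigh(S)       # orthonormal eigenvectors as columns of O
D = np.diag(evals)

print(np.allclose(O.T @ O, np.eye(4)))  # O^T O = 1, so O is orthogonal
print(np.allclose(S, O @ D @ O.T))      # S = O D O^T
```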
Best Answer
You know that $M=\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}\in\mathcal{M}_3(\mathbb{R})$ is symmetric, hence of the form $\begin{pmatrix}a&b&c\\b&e&f\\c&f&i\end{pmatrix}$. The condition $M\begin{pmatrix}0\\0\\1\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$ implies $c=f=i=0$, and $M\begin{pmatrix}2\\1\\0\end{pmatrix}=\begin{pmatrix}2\\1\\0\end{pmatrix}$ implies $2a+b=2$ and $2b+e=1$. You also know that $a+e\leq 0$. Then $b=2(1-a)=\frac{1-e}{2}$, so $e=1-4(1-a)$. You finally get $$a+1-4(1-a)\leq0\implies 5a\leq 3\implies a\leq\frac{3}{5},$$ and for any such $a$, taking $b=2(1-a)$, $e=1-4(1-a)$ and $c=f=i=0$ gives a matrix satisfying everything you are looking for.
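The resulting parametrisation can be checked numerically; a minimal sketch with the arbitrary choice $a=0.5$:

```python
import numpy as np

a = 0.5                 # arbitrary choice satisfying the constraint a + e <= 0
b = 2 * (1 - a)         # from 2a + b = 2
e = 1 - 4 * (1 - a)     # from 2b + e = 1
M = np.array([[a,   b,   0.0],
              [b,   e,   0.0],
              [0.0, 0.0, 0.0]])    # c = f = i = 0

print(np.allclose(M, M.T))                    # M is symmetric
print(np.allclose(M @ [0, 0, 1], [0, 0, 0]))  # (0,0,1)^T lies in the kernel
print(np.allclose(M @ [2, 1, 0], [2, 1, 0]))  # (2,1,0)^T is fixed by M
print(a + e <= 0)                             # sign condition on a + e
```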