The diagonalization theorem states that, for example, you can take
$$ A = \left[\begin{matrix} -1 & 0 & 1\\3 & 0 & -3\\1 & 0 & -1\end{matrix}\right]$$
and turn it into a diagonal matrix
$$ V = \left[\begin{matrix} 0 & 0 & 0\\0 & 0 & 0\\0 & 0 & -2\end{matrix}\right] $$
where the diagonal elements of $V$ are the eigenvalues $(0,0,-2)$ of $A$ using
$$V = P^{-1} A P$$
where $P = (v_1 \quad v_2 \quad v_3)$ is invertible. This works only if $A$ has $n = 3$ linearly independent eigenvectors $v_1, v_2, v_3.$ In this case, although $\lambda_1 = \lambda_2 = 0$, you have a non-singular
$$
P =
\left[\begin{matrix} 1 & 0 & -1\\0 & 1 & 3\\1 & 0 & 1\end{matrix}\right]
$$
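As a quick numerical sanity check (a sketch using NumPy; this is not part of the original argument, and any linear-algebra library would do), you can confirm that $P^{-1}AP$ really is $\mathrm{diag}(0, 0, -2)$:

```python
import numpy as np

# The matrix A and the eigenvector matrix P from the example above.
A = np.array([[-1., 0.,  1.],
              [ 3., 0., -3.],
              [ 1., 0., -1.]])
P = np.array([[1., 0., -1.],
              [0., 1.,  3.],
              [1., 0.,  1.]])

V = np.linalg.inv(P) @ A @ P   # should be diag(0, 0, -2)
```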
To decide whether this is possible, first find all eigenvectors, form $P$, and check whether $P$ is non-singular (equivalently, whether $v_1, v_2, v_3$ are linearly independent). For the first matrix, however,
$$P
=
\left[\begin{matrix} 1/4 & 1 & 0\\1/2 & 1 & 0\\1 & 1 & 0\end{matrix}\right]
$$
which is singular.
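A quick sketch of the singularity check (again using NumPy as an illustrative assumption): the zero third column forces $\det P = 0$, so $P^{-1}$ does not exist.

```python
import numpy as np

# P from above: its third column is zero, so the columns are dependent.
P = np.array([[0.25, 1., 0.],
              [0.50, 1., 0.],
              [1.00, 1., 0.]])

det_P = np.linalg.det(P)            # 0, confirming P is singular
rank_P = np.linalg.matrix_rank(P)   # 2 < 3
```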
For ease of formatting and explanation, I'll be doing everything for the $5 \times 5$ example. However, the same trick works for any $n \times n$ anti-diagonal matrix (though slightly differently for even $n$).
Suppose
$$
A =
\begin{pmatrix}0&0&0&0&a_{15}\\0&0&0&a_{24}&0\\0&0&a_{33}&0&0\\0&a_{42}&0&0&0\\a_{51}&0&0&0&0 \end{pmatrix}
$$
Here's a neat trick: we note that
$$
A^2 = \pmatrix{
a_{15}a_{51}&&&&\\
&a_{24}a_{42}&&&\\
&&(a_{33})^2&&\\
&&&a_{24}a_{42}&\\
&&&&a_{15}a_{51}\\
}
$$
So, the eigenvalues of $A^2$ are precisely $\{a_{15}a_{51}, a_{24}a_{42}, (a_{33})^2\}$.
Now, note that if $\lambda$ is an eigenvalue of $A$, then $\lambda^2$ must be an eigenvalue of $A^2$. This gives you six candidates for the eigenvalues of $A$.
In fact, with more thorough analysis, we can guarantee that the eigenvalues will be precisely $\lambda = \pm \sqrt{a_{i,(n+1-i)}a_{(n+1-i),i}}$ for $i = 1,\dots,\lfloor n/2\rfloor$ and, for odd $n$, $\lambda = a_{(n+1)/2,(n+1)/2}$.
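A small numerical sketch of this claim (the specific entries below are arbitrary choices, not from the original problem):

```python
import numpy as np

# Build a sample 5x5 anti-diagonal matrix with arbitrary entries.
a15, a24, a33, a42, a51 = 2., 3., 5., 12., 8.
A = np.zeros((5, 5))
A[0, 4], A[1, 3], A[2, 2], A[3, 1], A[4, 0] = a15, a24, a33, a42, a51

A2 = A @ A   # diagonal, as computed above

# Expected eigenvalues: +/- sqrt(a15*a51), +/- sqrt(a24*a42), and a33.
expected = np.sort([np.sqrt(a15*a51), -np.sqrt(a15*a51),
                    np.sqrt(a24*a42), -np.sqrt(a24*a42), a33])
eigs = np.sort(np.linalg.eigvals(A).real)
```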
Proof that this is the case: Let $e_1,\dots,e_n$ denote the standard basis vectors. Let $S_{ij}$ denote the span of the vectors $e_i$ and $e_j$.
Note that $S_{i,(n+1-i)}$ is invariant under $A$ for $i = 1,\dots,\lfloor n/2\rfloor$. We may then consider the restriction $A_{i,(n+1-i)}: S_{i,(n+1-i)} \to S_{i,(n+1-i)}$, which can be represented by the matrix
$$
\pmatrix{0 & a_{i,(n+1-i)}\\a_{(n+1-i),i} & 0}
$$
It suffices to find the eigenvalues of this transformation.
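Writing $p$ and $q$ for the two off-diagonal entries of this block, a one-line computation makes those eigenvalues explicit:
$$
\det\pmatrix{-\lambda & p\\ q & -\lambda} = \lambda^2 - pq = 0
\quad\Longrightarrow\quad
\lambda = \pm\sqrt{pq}.
$$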
For the case of an odd $n$, it suffices to note that $a_{(n+1)/2,(n+1)/2}$ lies on the diagonal with zeros in the rest of its row and column, so it is itself an eigenvalue, with eigenvector $e_{(n+1)/2}$.
Another explanation: define the permutation matrix
$S = \pmatrix{e_1 & e_{n} & e_2 & e_{n-1} & \cdots}$
Noting that $S$ is orthogonal (i.e. $S^{-1} = S^{T}$), we find that
$$
S^{-1}AS =
\pmatrix{
0&a_{1,n}\\
a_{n,1}&0\\
&&0&a_{2,n-1}\\
&&a_{n-1,2}&0\\
&&&&\ddots
}
$$
This matrix is similar to $A$ and therefore has the same eigenvalues. Moreover, it is block diagonal, so its eigenvalues are just the eigenvalues of the $2 \times 2$ blocks (together with the lone diagonal entry when $n$ is odd).
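Here is a sketch of this change of basis in NumPy (the anti-diagonal entries are arbitrary choices; with the column convention above, $S$ is a permutation matrix and $S^{T}AS$ comes out block diagonal):

```python
import numpy as np

# Sample 5x5 anti-diagonal matrix with arbitrary entries.
A = np.zeros((5, 5))
for i, v in enumerate([2., 3., 5., 12., 8.]):
    A[i, 4 - i] = v

# S has columns e_1, e_5, e_2, e_4, e_3 (0-indexed below).
S = np.eye(5)[:, [0, 4, 1, 3, 2]]
B = S.T @ A @ S   # similar to A since S is orthogonal; block diagonal
```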
Best Answer
If you make $A$ symmetric, then $A$ would certainly be diagonalizable; that is, you make $a_{1n} = a_{n1}, a_{2,n-1} = a_{n-1,2}, \cdots.$ In fact, it is enough for the matrix $A$ to be normal, i.e. $AA^* = A^*A,$ where $A^*$ is the conjugate transpose of $A,$ for it to be diagonalizable. If you look at a $3 \times 3$ anti-diagonal matrix $A$ with anti-diagonal entries $a, b, c,$ you see that $AA^* = \operatorname{diag}(|a|^2, |b|^2, |c|^2)$ and $A^*A = \operatorname{diag}(|c|^2, |b|^2, |a|^2).$ Now you only need $|a| = |c|$ for $A$ to be normal, hence diagonalizable over $\mathbb{C},$ which is less stringent than $a = c.$
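A small numerical illustration of this normality criterion (the complex entries below are arbitrary choices, picked so that $|a| = |c| = 5$ in the first matrix and not in the second):

```python
import numpy as np

# Anti-diagonal 3x3 matrix with |a| == |c|: normal.
a, b, c = 3 + 4j, 2.0, 5j
A = np.array([[0, 0, a],
              [0, b, 0],
              [c, 0, 0]])
is_normal = np.allclose(A @ A.conj().T, A.conj().T @ A)

# Replace a by 1: now |a| != |c|, and normality fails.
A2 = np.array([[0, 0, 1.0],
               [0, b, 0],
               [c, 0, 0]])
is_normal_2 = np.allclose(A2 @ A2.conj().T, A2.conj().T @ A2)
```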
See if you can verify that $|a_{1n}| = |a_{n1}|, |a_{2,n-1}| = |a_{n-1,2}|, \cdots$ is necessary and sufficient for $A$ to be normal, and hence sufficient for $A$ to be diagonalizable. (More generally, such an $A$ is diagonalizable exactly when, in each anti-diagonal pair, the two entries are either both zero or both nonzero.)
Edit:
Hint for part (b). I will use $e_1 = (1,0,0,0,0,0,0)^T, \cdots$ for the standard basis vectors in $R^7.$
Here we have $n = 7$ and $Ae_1 = 49e_7,\ Ae_2 = 36e_6, \cdots, Ae_6 = 4e_2,\ Ae_7 = e_1.$ Observe the pairing $(Ae_1, Ae_7), (Ae_2, Ae_6), \cdots$ and the singleton $Ae_4 = 16e_4.$ The last one already tells you that $16$ is an eigenvalue and that a corresponding eigenvector is $e_4.$ The rest make up three $2 \times 2$ blocks.
Can you find the eigenvalues and the corresponding eigenvectors for each of the three pairs?
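If you want to check your answer numerically, here is a sketch (NumPy, 0-indexed; the matrix is reconstructed from $Ae_1 = 49e_7, \dots, Ae_7 = e_1$ above):

```python
import numpy as np

# Column j of A is vals[j] times the mirrored basis vector e_{7-j}.
A = np.zeros((7, 7))
for j, v in enumerate([49., 36., 25., 16., 9., 4., 1.]):
    A[6 - j, j] = v

# The 2x2 blocks pair (1, 49), (4, 36), (9, 25), giving eigenvalues
# +/- 7, +/- 12, +/- 15; the diagonal singleton contributes 16.
eigs = np.sort(np.linalg.eigvals(A).real)
```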