I will try to answer (2); you can think about (1) afterwards.
Define $U := \{u \in V: J(u,v) = 0 \text{ for all } v \in V \}$. Since $J$ is non-degenerate, $U = \{0\}$. So, if $V$ is not trivial, there exists an element $e_1 \in V \setminus \{0\}$ such that $J(e_1,f_1) \neq 0$ for some $f_1 \in V$. Up to rescaling, you can assume $J(e_1,f_1) = 1$. Call $W := \text{Span}(e_1,f_1)$ and define
$$W^J := \{u \in V: J(u,w) = 0 \text{ for any }w \in W\}.$$ Let us take a look at $W \cap W^J$. If $v \in W \cap W^J$, then $v = ae_1+bf_1$ for some scalars $a,b$, and $J(v,e_1) = 0 = J(v,f_1)$. But then $J(v,e_1)=J(ae_1+bf_1,e_1) = -b=0$ and similarly $a=0$. So $W \cap W^J = \{0\}$. Now let $v$ be any vector in $V$. If $J(v,e_1) = -a$ and $J(v,f_1) = b$, then you can write
$$v = be_1+af_1+v-be_1-af_1.$$
You have that $be_1+af_1 \in W$ and
\begin{align}
J(v-be_1-af_1,e_1) & = J(v,e_1)+a = -a+a=0\\
J(v-be_1-af_1,f_1) & = J(v,f_1)-b = b-b = 0.
\end{align}
This tells you that any vector $v \in V$ can be written as a sum of a vector in $W$, that is $be_1+af_1$, and a vector in $W^J$, namely $v-be_1-af_1$. Consequently $V = W \oplus W^J$. If $W^J = \{0\}$, then you are done and $J$ can be written in the basis $\{e_1,f_1\}$ as
$$
\left(
\begin{matrix}
J(e_1,e_1) & J(e_1,f_1) \\
J(f_1,e_1) & J(f_1,f_1)
\end{matrix}
\right) =
\left(
\begin{matrix}
0 & 1 \\
-1 & 0
\end{matrix}
\right).
$$
Otherwise, note that the restriction of $J$ to $W^J$ is again non-degenerate: if $u \in W^J$ paired trivially with all of $W^J$, then, since $V = W \oplus W^J$ and $J(u,w) = 0$ for all $w \in W$, it would pair trivially with all of $V$, forcing $u = 0$. So choose $e_2 \neq 0$ in $W^J$ and repeat the process, obtaining $f_2 \in W^J$ such that $J(e_2,f_2)=1$. Going on, you will find a basis $\{e_1,e_2,\dots,e_n,f_1,f_2,\dots,f_n\}$ of $V$ such that
$$J(e_i,e_j) = 0, \quad J(e_i,f_k) = \delta_{ik}, \quad J(f_i,f_j) = 0,$$ where $\delta_{ij}$ denotes the Kronecker delta. The process ends after $n$ steps, as $\dim V < \infty$. Notice that the presence of $J$ forces $\dim V = 2n$, i.e. the dimension of $V$ is even.
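The recursive construction above is effectively an algorithm ("symplectic Gram–Schmidt"). Here is a minimal sketch of it in code, not part of the original argument; the function name `symplectic_basis` and the use of a generic random skew-symmetric matrix are my own illustration choices.

```python
import numpy as np

def symplectic_basis(J, tol=1e-9):
    """Follow the recursion in the proof: pick e, find f with J(e, f) != 0,
    rescale so J(e, f) = 1, project the remaining spanning vectors onto
    W^J = Span(e, f)^perp, and repeat."""
    n = J.shape[0]
    vecs = [np.eye(n)[:, i] for i in range(n)]   # a spanning set of V
    es, fs = [], []
    while vecs:
        e = vecs.pop(0)
        idx = next((i for i, w in enumerate(vecs) if abs(e @ J @ w) > tol), None)
        if idx is None:
            continue                      # e pairs trivially with the rest
        f = vecs.pop(idx)
        f = f / (e @ J @ f)               # rescale so that J(e, f) = 1
        # v -> v - J(v, f) e + J(v, e) f  is the projection onto W^J
        vecs = [w - (w @ J @ f) * e + (w @ J @ e) * f for w in vecs]
        es.append(e)
        fs.append(f)
    return es, fs

# demo: a generic (hence non-degenerate) skew-symmetric 4 x 4 matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
J = A - A.T
es, fs = symplectic_basis(J)
k = len(es)
M = np.column_stack(es + fs)              # basis e_1,...,e_k, f_1,...,f_k
std = np.block([[np.zeros((k, k)), np.eye(k)],
                [-np.eye(k), np.zeros((k, k))]])
```

In the computed basis, `M.T @ J @ M` is the block matrix $\left(\begin{smallmatrix} 0 & I \\ -I & 0 \end{smallmatrix}\right)$, i.e. exactly the relations $J(e_i,e_j)=0$, $J(e_i,f_k)=\delta_{ik}$, $J(f_i,f_j)=0$ above.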
FYI: a non-degenerate skew-symmetric bilinear form $J$ like this is usually called a symplectic form on $V$, and $(V,J)$ is then called a symplectic vector space.
Suppose $g$ is semi-simple and decomposes as an orthogonal sum $g = r \oplus s$. Since $g$ is semi-simple, its Killing form $B$ is non-degenerate, so for every non-zero $y \in s$ there exists $z \in g$ such that $B(y,z) \neq 0$. Write $z = u + v$ with $u \in r$ and $v \in s$. Since $r$ is orthogonal to $s$, we deduce that $B(y,z) = B(y,u+v) = B(y,v) \neq 0$. This implies that the restriction of $B$ to $s$ is non-degenerate.
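As a concrete sanity check of the non-degeneracy used above, one can compute the Killing form of $\mathfrak{sl}_2(\mathbb C)$ in the standard basis $(h,e,f)$ and verify that its Gram matrix is invertible. This is my own illustration, not part of the answer's argument.

```python
import numpy as np

# standard basis of sl_2: h = diag(1, -1), e upper, f lower triangular
h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [h, e, f]

def coords(m):
    # a traceless 2x2 matrix m decomposes as m = a*h + b*e + c*f
    return np.array([m[0, 0], m[0, 1], m[1, 0]])

def ad(x):
    # matrix of ad(x) = [x, -] in the basis (h, e, f)
    return np.column_stack([coords(x @ b - b @ x) for b in basis])

# Killing form B(x, y) = tr(ad(x) ad(y)) as a Gram matrix in (h, e, f)
B = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])
# B = [[8, 0, 0], [0, 0, 4], [0, 4, 0]], det(B) = -128 != 0
```

The nonzero determinant is the non-degeneracy of $B$; the same computation restricted to a basis of an ideal $s$ checks the restricted form.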
Best Answer
As noted in a comment, the statement is actually false as stated. However, if we leave out the restriction to traceless matrices, it becomes almost true:
Proposition: Let $n \in \mathbb N \setminus \{2\}$ and $B \in M_{n\times n}(\mathbb C)$ a symmetric matrix. Then
$$so(B) := \{X \in M_{n \times n}(\mathbb C) : XB + BX^T =0 \}$$
(with its natural vector space structure and Lie bracket given by matrix commutator) is a semisimple Lie algebra if and only if the bilinear form corresponding to $B$ is non-degenerate. (For $n=2$, $so(B)$ is never semisimple.)
Proof sketch / hints:
Straightforward check that the space is a Lie algebra.
Look at two extreme cases: $B=0$ (the zero matrix) and $B =Id_n$. -- In the first case, the condition imposed in the definition of $so(B)$ is empty, so that $so(0)$ is the full Lie algebra of all $n \times n$-matrices, usually called $\mathfrak{gl}_n(\mathbb C)$. This is easily seen to have non-trivial centre and hence is not semisimple. -- In the second case, the Lie algebra is what is usually called $$\mathfrak{so}_n(\mathbb C) := \{X \in M_{n \times n}(\mathbb C): X^T = -X\}$$ and can be shown to be semisimple for all $n \neq 2$ (actually simple for $n=3$ and $n\ge 5$; whereas for $n=2$, it is the one-dimensional abelian Lie algebra). To show this might be the hardest part of the proof, but is of course possible:
A computational proof, along the lines of the proof that $\mathfrak{sl}(3,F)$ is simple, might just get a bit intricate.
An explicit computation of the root spaces along the lines of https://math.mit.edu/classes/18.745/Notes/Lecture_15_Notes.pdf is maybe the best from a theoretical viewpoint, but here one really needs to know what one is looking for (and e.g. chooses $B$ as the matrix with $1$'s on the antidiagonal instead, so that one can "see" a Cartan subalgebra easily).
Finally, user orangeskid brings up a nice approach in a comment, which relies on some Lie theory including a compactness argument to show that for all $n$, $\mathfrak{so}_n(\color{red}{\mathbb R})$ (hence also $\mathfrak{so}_n(\mathbb C)$) is reductive. Then, it's relatively easy to show that for $n \neq 2$, its centre is trivial, so we have semisimplicity (and even kind of "explained" the exception at $n=2$).
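To make the two extreme cases above tangible, here is a small numerical check (my own sketch, not from the answer) that computes $\dim so(B)$ as the nullity of the linear map $X \mapsto XB + BX^T$ on $n \times n$ matrices:

```python
import numpy as np

def so_B_dim(B):
    """Dimension of so(B) = {X : X B + B X^T = 0}, computed as the
    nullity of the linear map X |-> X B + B X^T on n x n matrices."""
    n = B.shape[0]
    cols = []
    for k in range(n * n):
        X = np.zeros((n, n))
        X[k // n, k % n] = 1.0            # k-th elementary matrix E_ij
        cols.append((X @ B + B @ X.T).ravel())
    L = np.column_stack(cols)             # (n^2) x (n^2) matrix of the map
    return n * n - np.linalg.matrix_rank(L)

# B = 0: the condition is empty, so so(0) = gl_3 has dimension 9.
# B = Id: the condition is X + X^T = 0, i.e. skew-symmetric matrices,
#         of dimension n(n-1)/2; for n = 2 this is one-dimensional.
```

The value for `so_B_dim(np.eye(2))` reflects the $n=2$ exception: $so(Id_2)$ is one-dimensional, hence abelian and not semisimple.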