A bilinear form $b$ can be written as the sum of a symmetric and a skew-symmetric part.
Let $c:V\times V\to \mathbb{R}: (v,w) \mapsto \tfrac12 ( b(v,w) + b(w,v) )$ and $d:V\times V \to \mathbb{R}: (v,w) \mapsto \tfrac12 ( b(v,w) - b(w,v) )$.
Then $b(v,w) = c(v,w) + d(v,w)$, so if an automorphism preserves both $c$ and $d$, then it preserves $b$. By definition of $c$, if $f$ preserves $b$, it preserves $c$ (and similarly for $d$).
So the automorphism group of $b$ is the intersection of the orthogonal group of $c$ with the symplectic group of $d$.
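The decomposition is easy to verify numerically. A minimal sketch in Python, where the matrix `B` representing $b$ in some basis and the helper names are my own illustrative choices:

```python
# Split a bilinear form (given by its Gram matrix B) into a symmetric
# part C and a skew-symmetric part D, and check that B = C + D.

def transpose(M):
    return [list(row) for row in zip(*M)]

def add(M, N):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(M, N)]

def scale(M, t):
    return [[t * a for a in row] for row in M]

B = [[1, 2], [5, 3]]                              # an arbitrary bilinear form on R^2
C = scale(add(B, transpose(B)), 0.5)              # symmetric part: (B + B^T)/2
D = scale(add(B, scale(transpose(B), -1)), 0.5)   # skew part: (B - B^T)/2

assert C == transpose(C)             # C is symmetric
assert D == scale(transpose(D), -1)  # D is skew-symmetric
assert add(C, D) == B                # B = C + D
```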
I found this satisfying at the time, but when I tried to actually understand the intersection, I found it very difficult to compute. I was interested in finite fields and wanted to know things like "how many elements does it have?", and the answer seemed to vary a lot depending on how the groups intersected. I don't recall ever coming to a full understanding of it, but perhaps these are well-known subgroups of the orthogonal and symplectic groups.
As Travis Willse writes in a comment, let $V$ be a finite-dimensional $K$-vector space and $b: V \times V \rightarrow K$ a non-degenerate bilinear form, either symmetric or skew-symmetric (and for safety, let's assume $\operatorname{char}(K)=0$). Choose a basis $e_1, \dots, e_m$ of $V$ and let $S$ be the matrix with entries $S_{ij} := b(e_i, e_j)$. Note that $S$ is (skew-)symmetric iff $b$ is.
Note that if $v = \sum v_i e_i$, $w = \sum w_i e_i$, then $b(v,w) = \begin{pmatrix} v_1 & \cdots & v_m \end{pmatrix} \cdot S \cdot \begin{pmatrix} w_1 \\ \vdots \\ w_m \end{pmatrix}$ (where $\cdot$ denotes ordinary matrix multiplication), which, with a little abuse of notation, we can write as $$b(v,w) = v^T S w.$$
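As a quick sanity check, this formula is easy to evaluate directly. A minimal sketch in Python, where the concrete $S$, $v$, $w$ are my own choices:

```python
# Evaluate b(v, w) = v^T S w for the Gram matrix S_ij = b(e_i, e_j),
# using plain Python lists as coordinate vectors.

def bilinear(S, v, w):
    # v^T S w = sum over i, j of v_i * S[i][j] * w_j
    return sum(v[i] * S[i][j] * w[j]
               for i in range(len(v)) for j in range(len(w)))

S = [[0, 1], [-1, 0]]        # the standard skew-symmetric form on K^2
v, w = [2, 3], [5, 7]

assert bilinear(S, v, w) == 2*7 - 3*5           # v_1 w_2 - v_2 w_1 = -1
assert bilinear(S, w, v) == -bilinear(S, v, w)  # skew-symmetry of b
```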
Further, everyone learned in their first linear algebra course how our choice of basis induces an isomorphism
$$\mathrm{End}_K(V) \simeq M_m(K), \qquad f \mapsto M.$$
Now if $f \in End_K(V)$ satisfies $b(f(v), w) = - b(v, f(w))$ for all $v,w$, then equivalently, for the matrix $M$ corresponding to $f$,
$$(Mv)^T S w = -v^TS (Mw) \\
v^TM^T S w = -v^T S M w$$
for all $v,w \in V$ which by the non-degeneracy of the form implies (and is implied by)
$$M^T S = -SM$$
and of course it doesn't matter on what side you put the minus.
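The equivalence above can be checked on a concrete example. A small sketch, where the matrices $S$ and $M$ are my own choices:

```python
# Check that the matrix condition M^T S = -S M agrees with the defining
# identity b(Mv, w) = -b(v, Mw) on a handful of sample vectors.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def neg(A):
    return [[-a for a in row] for row in A]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def b(S, v, w):
    return sum(v[i] * S[i][j] * w[j]
               for i in range(len(v)) for j in range(len(w)))

S = [[0, 1], [-1, 0]]        # symplectic form on K^2
M = [[1, 0], [0, -1]]        # a candidate element of the Lie algebra

assert matmul(transpose(M), S) == neg(matmul(S, M))   # M^T S = -S M
for v in ([1, 0], [0, 1], [2, 3]):
    for w in ([1, 0], [0, 1], [5, -1]):
        assert b(S, matvec(M, v), w) == -b(S, v, matvec(M, w))
```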
Note that the same bilinear form $b$ can give two different matrices $S_1, S_2$ here if we choose different bases. In that case, $S_1$ and $S_2$ will be congruent, i.e. there will exist an invertible matrix $P$ such that $S_2 = P^T S_1 P$. The matrix Lie algebras then look different, but conjugation with that same base-change matrix $P$ (now using the inverse instead of the transpose) shows that those seemingly different matrix Lie algebras are isomorphic.
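The base-change claim can be sketched numerically: if $S_2 = P^T S_1 P$ and $M^T S_1 = -S_1 M$, then $N = P^{-1} M P$ satisfies $N^T S_2 = -S_2 N$. A small check, where all concrete matrices are my own choices (and $P$ is picked so its inverse is easy to write down):

```python
# Verify that congruence of Gram matrices corresponds to conjugation
# of the matrix Lie algebras.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def neg(A):
    return [[-a for a in row] for row in A]

S1 = [[0, 1], [-1, 0]]
M  = [[1, 0], [0, -1]]       # satisfies M^T S1 = -S1 M
P     = [[1, 1], [0, 1]]     # base-change matrix
P_inv = [[1, -1], [0, 1]]    # its inverse

S2 = matmul(transpose(P), matmul(S1, P))   # congruent Gram matrix
N  = matmul(P_inv, matmul(M, P))           # conjugated endomorphism

assert matmul(transpose(M), S1) == neg(matmul(S1, M))
assert matmul(transpose(N), S2) == neg(matmul(S2, N))
```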
Actually, one can also scale the bilinear form / the matrix $S$ by any non-zero scalar and get the same matrix Lie algebra.
In general, these two are the only things we can do to keep the Lie algebras isomorphic.
As Travis also points out, the matrices which you write down as $S$ there over a general field give the so-called split forms of these Lie algebras. Over algebraically closed fields, there are no others anyway, as for example any $2n \times 2n$-symmetric matrix over $\mathbb C$ is congruent to a scaled version of your $S$.
That is not true over other fields though! E.g. over the real numbers, remember that Sylvester's inertia theorem gives us several different non-congruent symmetric bilinear forms. Your $S$ for $m=2n$ and the orthogonal group is congruent to a matrix of index $n$, i.e. to one with $n$ $1$'s and $n$ $-1$'s on the diagonal. This cannot be brought via congruence and/or scaling to the matrix $S' = I_{2n}$. Instead, this $S'$ gives a genuinely different Lie algebra, namely the classical "compact" one usually called $\mathfrak{so}_{2n}$, corresponding to $b$ being the standard scalar product; written out as matrices, this is just the Lie algebra of skew-symmetric real matrices. (And I stress again: over $\mathbb R$, that is not isomorphic to the one you get with your $S$.)
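This description of the compact form is easy to check: with $S' = I$, the condition $M^T S' = -S' M$ says exactly that $M$ is skew-symmetric, and such matrices are closed under the commutator. A small sketch, with two sample matrices of my own choosing:

```python
# Check that the commutator of two skew-symmetric matrices is again
# skew-symmetric, i.e. that so_3 (with S' = I) closes as a Lie algebra.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def sub(A, B):
    return [[a - b for a, b in zip(r, s)] for r, s in zip(A, B)]

def neg(A):
    return [[-a for a in row] for row in A]

# two arbitrary skew-symmetric 3x3 matrices
A = [[0, 1, 0], [-1, 0, 2], [0, -2, 0]]
B = [[0, 0, 3], [0, 0, 0], [-3, 0, 0]]

C = sub(matmul(A, B), matmul(B, A))   # the commutator [A, B] = AB - BA
assert transpose(A) == neg(A) and transpose(B) == neg(B)
assert transpose(C) == neg(C)         # [A, B] is again skew-symmetric
```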
Further reading:
What is this Lie group and does it have interesting properties?
What is the set of all matrices satisfying $\mathfrak{so}(n)$ definition?
Root space decomposition of $C_n=\mathfrak{sp}(2n,F)$ and further links in there.
Best Answer
I will try to answer (2). You can try to think about (1) afterwards.
Define $U := \{u \in V: J(u,v) = 0 \text{ for all } v \in V \}$. Since $J$ is non-degenerate, $U = \{0\}$. So, if $V$ is not trivial, there exists an element $e_1 \in V \setminus \{0\}$ such that $J(e_1,f_1) \neq 0$ for some $f_1 \in V$. Up to rescaling, you can assume $J(e_1,f_1) = 1$. Call $W := \text{Span}(e_1,f_1)$ and define $$W^J := \{u \in V: J(u,w) = 0 \text{ for all }w \in W\}.$$

Let us take a look at $W \cap W^J$. If $v \in W \cap W^J$, then $v = ae_1+bf_1$ for some $a,b$, and $J(v,e_1) = 0 = J(v,f_1)$. But then $J(v,e_1)=J(ae_1+bf_1,e_1) = -b=0$ and similarly $a=0$. So $W \cap W^J = \{0\}$.

Let now $v$ be any vector in $V$. If $J(v,e_1) = -a$ and $J(v,f_1) = b$, then you can write $$v = be_1+af_1+v-be_1-af_1.$$ You have that $be_1+af_1 \in W$ and \begin{align} J(v-be_1-af_1,e_1) & = J(v,e_1)+a = -a+a=0\\ J(v-be_1-af_1,f_1) & = J(v,f_1)-b = b-b = 0. \end{align} This tells you that any vector $v \in V$ can be written as the sum of a vector in $W$, namely $be_1+af_1$, and a vector in $W^J$, namely $v-be_1-af_1$. Consequently $V = W \oplus W^J$.

If $W^J = \{0\}$, then you are done and $J$ can be written in the basis $\{e_1,f_1\}$ as $$ \left( \begin{matrix} J(e_1,e_1) & J(e_1,f_1) \\ J(f_1,e_1) & J(f_1,f_1) \end{matrix} \right) = \left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right). $$ Otherwise, choose $e_2 \neq 0$ in $W^J$ and repeat the process, getting $f_2$ such that $J(e_2,f_2)=1$. Continuing in this way, you will find a basis $\{e_1,e_2,\dots,e_n,f_1,f_2,\dots,f_n\}$ of $V$ such that $$J(e_i,e_j) = 0, \quad J(e_i,f_k) = \delta_{ik}, \quad J(f_i,f_j) = 0,$$ where $\delta_{ik}$ denotes the Kronecker delta. The process ends after finitely many steps, as $\dim V < \infty$. Notice that the presence of $J$ forces $\dim V = 2n$, i.e. the dimension of $V$ is even.
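The inductive construction in the proof can be turned directly into an algorithm. A sketch over $\mathbb{Q}$ using exact `Fraction` arithmetic, where the sample Gram matrix and all helper names are my own:

```python
# Build a symplectic basis {e_1,...,e_n, f_1,...,f_n} for a
# non-degenerate skew-symmetric form J given by its Gram matrix S,
# following the proof: pick e, find f with J(e,f) = 1, project the
# remaining vectors into W^J, and recurse.

from fractions import Fraction

def J(S, v, w):
    return sum(v[i] * S[i][j] * w[j]
               for i in range(len(v)) for j in range(len(w)))

def symplectic_basis(S):
    n = len(S)
    Sf = [[Fraction(x) for x in row] for row in S]
    pool = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    es, fs = [], []
    while pool:
        e = pool.pop(0)
        f = next(v for v in pool if J(Sf, e, v) != 0)  # exists by non-degeneracy
        pool.remove(f)
        c = J(Sf, e, f)
        f = [x / c for x in f]                         # rescale so J(e, f) = 1
        # replace every remaining v by its component in W^J, W = span(e, f):
        # v  ->  v + J(v, e) f - J(v, f) e
        new_pool = []
        for v in pool:
            a, b = J(Sf, v, e), J(Sf, v, f)
            new_pool.append([vi + a * fi - b * ei
                             for vi, ei, fi in zip(v, e, f)])
        pool = new_pool
        es.append(e)
        fs.append(f)
    return es, fs

S = [[0, 2, 0, 0],
     [-2, 0, 0, 1],
     [0, 0, 0, 3],
     [0, -1, -3, 0]]          # a non-degenerate skew form on Q^4

es, fs = symplectic_basis(S)
for i in range(len(es)):
    for k in range(len(fs)):
        assert J(S, es[i], fs[k]) == (1 if i == k else 0)
        assert J(S, es[i], es[k]) == 0
        assert J(S, fs[i], fs[k]) == 0
```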
FYI: a non-degenerate skew-symmetric bilinear form $J$ like this is generally called a symplectic form on $V$, and $(V,J)$ is then called a symplectic space.