As long as $S$ is symmetric and positive definite, the group of linear maps preserving the inner product induced by $S$ will be isomorphic to $O(n)$ (and so in particular will always have the same Lie algebra). This is because given any inner product you can find an orthonormal basis, and with respect to this basis $S$ is just the identity matrix.
The reason I know for choosing $S$ to be one of the matrices above is that it makes the root space decomposition of the Lie algebra much easier. For example, when choosing a Cartan subalgebra of a matrix Lie algebra, it is convenient for it to consist only of diagonal matrices. This doesn't work for the usual definition of $\mathfrak{so}(n)$, but it does if you choose $S$ appropriately.
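As a quick numerical illustration (my addition, not part of the original answer, using NumPy over $\mathbb R$): with the split choice $S=\pmatrix{0&I_n\\I_n&0}$, the diagonal matrices $\operatorname{diag}(t_1,\dots,t_n,-t_1,\dots,-t_n)$ satisfy the defining condition $M^TS=-SM$, whereas with $S=I$ a diagonal matrix would have to equal its own negative.

```python
import numpy as np

# With the split S = [[0, I_n], [I_n, 0]], diagonal matrices of the shape
# diag(t_1, ..., t_n, -t_1, ..., -t_n) satisfy M^T S = -S M, giving a
# Cartan subalgebra of diagonal matrices.  With S = I this is impossible:
# a diagonal M with M^T = -M must be zero.
n = 2
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.eye(n),        np.zeros((n, n))]])

t = np.array([1.0, 2.0])
M = np.diag(np.concatenate([t, -t]))

assert np.allclose(M.T @ S, -S @ M)
```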
As Travis Willse writes in a comment, let $V$ be a finite-dimensional $K$-vector space and $b: V \times V \rightarrow K$ a non-degenerate bilinear form, either symmetric or skew-symmetric (and for safety, let's assume $\operatorname{char}(K)=0$). Choose a basis $e_1, \dots, e_m$ of $V$ and let $S$ be the matrix with entries $S_{ij} :=b(e_i, e_j)$. Note that $S$ is (skew-)symmetric iff $b$ is.
Note that if $v = \sum v_i e_i, w = \sum w_i e_i$ then $b(v,w) = \pmatrix{v_1 \cdots v_m} \cdot S \cdot \pmatrix{w_1 \\ \vdots \\ w_m}$ (where $\cdot$ denotes ordinary matrix multiplication), which, with a little abuse of notation, we can write as $$b(v,w) = v^T S w.$$
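To make the formula $b(v,w)=v^TSw$ concrete, here is a small NumPy check (my addition; the split symmetric $S$ is just one example choice):

```python
import numpy as np

# Example: the split symmetric form on K^4, given by S = [[0, I_2], [I_2, 0]].
n = 2
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.eye(n),        np.zeros((n, n))]])

v = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([5.0, 6.0, 7.0, 8.0])

# b(v, w) = v^T S w, computed by ordinary matrix multiplication
b_vw = v @ S @ w
b_wv = w @ S @ v

# S is symmetric, so b is symmetric:
assert np.isclose(b_vw, b_wv)
print(b_vw)  # prints 62.0
```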
Further, everyone learned in their first Linear Algebra course how our choice of basis induces an isomorphism
$$End_K(V) \simeq M_m(K) \\
f \mapsto M.$$
Now if $f \in End_K(V)$ satisfies $b(f(v), w) = - b(v, f(w))$ for all $v,w$, then equivalently, for the matrix $M$ corresponding to $f$,
$$(Mv)^T S w = -v^TS (Mw) \\
v^TM^T S w = -v^T S M w$$
for all $v,w \in V$ which by the non-degeneracy of the form implies (and is implied by)
$$M^T S = -SM$$
and of course it doesn't matter on what side you put the minus.
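Here is a numerical sanity check of the condition $M^TS=-SM$ (my addition): for the skew form on $K^2$ it recovers the well-known coincidence $\mathfrak{sp}(2,K)=\mathfrak{sl}(2,K)$.

```python
import numpy as np

# The skew form on K^2: S = [[0, 1], [-1, 0]].
S = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])

# Any trace-zero matrix M = [[a, b], [c, -a]] satisfies M^T S = -S M,
# so sp(2, K) consists exactly of the trace-zero 2x2 matrices, i.e. sl(2, K).
a, b, c = 1.0, 2.0, 3.0
M = np.array([[a,  b],
              [c, -a]])

assert np.allclose(M.T @ S, -S @ M)
```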
Note that the same bilinear form $b$ can give rise to two different matrices $S_1,S_2$ here if we choose different bases. In that case, $S_1$ and $S_2$ will be congruent, i.e. there will exist an invertible matrix $P$ such that $S_2 = P^TS_1 P$. The matrix Lie algebras then look different, but of course conjugation with that same base change matrix $P$ (now using the inverse instead of the transpose) shows that those seemingly different matrix Lie algebras are isomorphic.
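This congruence/conjugation correspondence can be checked numerically (my addition; the specific $S_1$, $M$, and random $P$ are just illustrative choices): if $M^TS_1=-S_1M$ and $S_2=P^TS_1P$, then $N=P^{-1}MP$ satisfies $N^TS_2=-S_2N$.

```python
import numpy as np

rng = np.random.default_rng(0)

# S1 = split symmetric form on R^2; M = diag(t, -t) lies in its Lie algebra.
S1 = np.array([[0.0, 1.0],
               [1.0, 0.0]])
M = np.diag([2.0, -2.0])
assert np.allclose(M.T @ S1, -S1 @ M)

P = rng.normal(size=(2, 2))       # a generic (hence invertible) base change
S2 = P.T @ S1 @ P                 # congruent matrix of the same form
N = np.linalg.inv(P) @ M @ P      # conjugate of M

# N lies in the Lie algebra defined by S2:
assert np.allclose(N.T @ S2, -S2 @ N)
```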
Actually, one can also scale the bilinear form / the matrix $S$ by any non-zero scalar and get the same matrix Lie algebra.
In general, these two are the only things we can do to keep the Lie algebras isomorphic.
As Travis also points out, the matrices which you write down as $S$ there over a general field give the so-called split forms of these Lie algebras. Over algebraically closed fields, there are no others anyway, as for example any invertible symmetric $2n \times 2n$ matrix over $\mathbb C$ is congruent to your $S$ (up to scaling).
That is not true over other fields though! E.g. over the real numbers, remember that Sylvester's law of inertia gives us several different non-congruent symmetric bilinear forms. Your $S$ for $m=2n$ and the orthogonal group is congruent to a matrix of signature $(n,n)$, i.e. to one with $n$ $1$'s and $n$ $-1$'s on the diagonal. This cannot be brought via congruence and/or scaling to the matrix $S'= I_{2n}$. Instead, this $S'$ gives a genuinely different Lie algebra, namely the classical "compact" one usually called $\mathfrak{so}_{2n}$, belonging to $b$ being the standard scalar product; when written out as matrices, this is just the Lie algebra of skew-symmetric real matrices. (And I stress again: over $\mathbb R$, that is not isomorphic to the one you get with your $S$.)
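Both points can be seen numerically (my addition, not part of the original answer): with $S'=I_m$ the defining condition degenerates to plain skew-symmetry, and the split $S$ really does have signature $(n,n)$, so over $\mathbb R$ it cannot be congruent to the identity.

```python
import numpy as np

# With S' = I_m, the condition M^T S' = -S' M reduces to M^T = -M, so the
# "compact" so(m) consists of the real skew-symmetric matrices.
M = np.array([[ 0.0,  1.0, -2.0],
              [-1.0,  0.0,  3.0],
              [ 2.0, -3.0,  0.0]])
assert np.allclose(M.T @ np.eye(3), -np.eye(3) @ M)   # same as M^T = -M

# By contrast, the split S = [[0, I_n], [I_n, 0]] has eigenvalues +1 and -1,
# n of each, i.e. signature (n, n) -- so over R it is not congruent to I_{2n}.
n = 2
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.eye(n),        np.zeros((n, n))]])
assert np.allclose(np.sort(np.linalg.eigvalsh(S)), [-1, -1, 1, 1])
```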
Further reading:
What is this Lie group and does it have interesting properties?
What is the set of all matrices satisfying $\mathfrak{so}(n)$ definition?
Root space decomposition of $C_n=\mathfrak{sp}(2n,F)$ and further links in there.
Best Answer
To expand the comments by user WE Tutorial School into an answer:
First of all, regarding your comment, for the orthogonal Lie algebra (which happens to be also the special orthogonal Lie algebra), it is not true that it exists only in odd dimension. Rather, it is just so that in even dimension $2n$, one uses the form given by the matrix $\pmatrix{0&I_n\\I_n&0}$, whereas in odd dimension $2n+1$, one uses the one given by $\pmatrix{1&0&0\\0&0&I_n\\0&I_n&0}$.
To be more precise, one could also replace these matrices $S$ by any matrices congruent to them, because that gives the same symmetric bilinear form, just expressed in a different basis. In particular, over an algebraically closed field, where all non-degenerate symmetric forms are equivalent via base change, you could also just work with the identity matrix $S=I_m$ in any dimension $m$. Over other fields (like $\mathbb R$) though, one gets as many non-isomorphic Lie algebras here as there are non-equivalent symmetric bilinear forms, i.e. non-congruent symmetric matrices. For example over $\mathbb R$, if one takes for $S$ the identity matrix, one gets the "compact" form of the Lie algebras instead, which differs from the ones above (for $m\ge 3$ at least). The above $S$, with its slightly different definition in odd and even dimensions, instead gives the split forms of the (special) orthogonal Lie algebras.
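That over $\mathbb C$ every non-degenerate symmetric matrix is congruent to the identity can even be checked numerically (my addition; the construction via eigendecomposition and complex square roots is one standard way to produce the base change):

```python
import numpy as np

# Diagonalize the real split form S = U diag(d) U^T (U orthogonal, d = +-1),
# then rescale each eigenvector by a complex square root: the resulting
# complex base change P satisfies P^T S P = I.
n = 2
S = np.block([[np.zeros((n, n)), np.eye(n)],
              [np.eye(n),        np.zeros((n, n))]])

d, U = np.linalg.eigh(S)                          # d is approx [-1, -1, 1, 1]
P = U @ np.diag(1 / np.sqrt(d.astype(complex)))   # complex sqrt for d < 0

# Congruence over C uses the plain transpose, not the conjugate transpose:
assert np.allclose(P.T @ S @ P, np.eye(2 * n))
```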
The reason one often chooses those matrices, even over an algebraically closed field where at first $S=I_m$ looks like a more generic choice, is that if one writes out the matrices that make up the Lie algebra with respect to those parity-dependent matrices $S$, it's relatively easy to "see" a nice Cartan subalgebra, root spaces etc. This matches the fact that the root systems look different in the odd- and even-dimensional cases: for even $m=2n$, one gets a root system of type $D_n$, whereas for odd $m=2n+1$, one gets a root system of type $B_n$. For example, here I recently worked with that for the case $n=2, m=2n+1=5$.
Now, to your actual question about the symplectic Lie algebra: why does it not work in odd dimension? It is a plain fact of linear algebra that on odd-dimensional spaces, every skew-symmetric bilinear form is degenerate. Note that the $S$ written down above for the orthogonal case had to be symmetric, $S^t=S$, and in both the odd- and even-dimensional case the matrix given there satisfies that. But here we would need one with $S^t=-S$. In even dimension $m=2n$, $S=\pmatrix{0&I_n\\-I_n&0}$ does the job (and again gives a good matrix representation of the elements, eventually leading to the root system $C_n$; again, over non-algebraically closed fields there are in general other forms as well, and this one is just the "split" form). But in odd dimension, we cannot imitate the above trick: the attempt $\pmatrix{1&0&0\\0&0&I_n\\0&-I_n&0}$ (or $\pmatrix{a&0&0\\0&0&I_n\\0&-I_n&0}$ for any $a\neq 0$) is not skew-symmetric, whereas $\pmatrix{0&0&0\\0&0&I_n\\0&-I_n&0}$ of course is degenerate. And as said above, one shows in linear algebra that no choice of matrix in odd dimension gives what one would want. See the link given by WE Tutorial School, https://math.stackexchange.com/a/3629615/96384, or any good introduction to bilinear forms.
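The degeneracy in odd dimension is easy to verify numerically (my addition): for skew-symmetric $A$ in odd dimension $m$, $\det(A) = \det(A^T) = \det(-A) = (-1)^m\det(A) = -\det(A)$, so $\det(A)=0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Any skew-symmetric matrix in odd dimension is singular:
# det(A) = det(A^T) = det(-A) = (-1)^m det(A) = -det(A), hence det(A) = 0.
m = 5
B = rng.normal(size=(m, m))
A = B - B.T                       # a "random" skew-symmetric matrix

assert np.allclose(A, -A.T)
assert abs(np.linalg.det(A)) < 1e-9
```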