The intuition behind Scott sentences is that they give recipes for back-and-forth arguments. An excellent exposition is given in Marker's book Model Theory: An Introduction, Section 2.4. The section starts with Cantor's back-and-forth proof that any two countable dense linear orders without endpoints are isomorphic, introduces Ehrenfeucht–Fraïssé games, and builds up to Scott's isomorphism theorem.
Your recent questions indicate that you're interested in the concept of $\omega$-homogeneity, which is also closely related to the back-and-forth idea. I think you'll learn a lot more from reading Marker's Section 2.4 thoroughly than anything I could write in this answer. But I'll try to give the basic idea.
In Knight's presentation, we start with "complete" formulas $\varphi_{\overline{a}}(\overline{x})$, which you should think of as completely describing the behavior of the tuple $\overline{a}$. More on how to get these formulas below. But first, let's imagine we have structures $M$ and $N$ which satisfy all the same sentences $\rho_{\overline{a}}$. We want to show $M\cong N$ by a back-and-forth argument.
OK, so we want to build up an isomorphism between the two structures element by element. Pick some $a_1\in M$. $\rho_\varnothing$ gives a list of complete formulas in one variable and says "every element satisfies one of the formulas in this list, and every formula in the list is satisfied by some element". So $a_1$ satisfies some complete formula $\varphi_1(x)$ on the list, and that same complete formula is satisfied by some element $b_1\in N$. We start our isomorphism by mapping $a_1\mapsto b_1$.
Next, we pick an element $b_2\in N$. The sentence $\rho_{b_1}$ gives a list of complete formulas in two variables which extend $\varphi_1(x)$ and says "if $x$ satisfies $\varphi_1(x)$, then every pair $xy$ extending $x$ satisfies one of the formulas in this list, and every formula in the list is satisfied by some pair extending $x$." So the pair $b_1,b_2$ satisfies some complete formula $\varphi_2(x,y)$ on the list, and there is some $a_2\in M$ such that $a_1,a_2$ satisfies that same complete formula. We extend our isomorphism by mapping $a_2\mapsto b_2$.
Continuing this way (back and forth) to handle all of the elements in some enumerations of the countable structures $M$ and $N$, we arrive at an isomorphism.
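Cantor's dense-linear-order case (the one Marker's section opens with) is concrete enough to run in code. Here is a toy Python sketch of the back-and-forth construction, using $\mathbb{Q}$ for both structures; the function names and enumerations are mine, for illustration only.

```python
from fractions import Fraction

def extend(pairs, a):
    """Given a finite order-preserving map `pairs` = [(a_i, b_i), ...]
    and a new point `a` on the source side, find a point b on the
    target side sitting in the same order position. Density and the
    lack of endpoints guarantee such a b exists in Q."""
    for (x, b) in pairs:
        if x == a:
            return b                           # already mapped
    below = [b for (x, b) in pairs if x < a]
    above = [b for (x, b) in pairs if x > a]
    if below and above:
        return (max(below) + min(above)) / 2   # density: pick a midpoint
    if below:
        return max(below) + 1                  # no greatest element
    if above:
        return min(above) - 1                  # no least element
    return Fraction(0)                         # first point: anything works

def back_and_forth(A, B, n):
    """Run n rounds of Cantor's back-and-forth between enumerations
    A and B of two countable dense linear orders without endpoints,
    returning the finite partial isomorphism built so far."""
    pairs = []
    for i in range(n):
        a = A[i]                               # "forth": handle A[i]
        if all(x != a for (x, _) in pairs):
            pairs.append((a, extend(pairs, a)))
        b = B[i]                               # "back": handle B[i]
        if all(y != b for (_, y) in pairs):
            inv = [(y, x) for (x, y) in pairs]
            pairs.append((extend(inv, b), b))
    return sorted(pairs)
```

Because every element of each enumeration is eventually handled, the union of the stages is a total isomorphism; the code just exhibits a finite stage.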
Now the question remains how to come up with the "complete" formulas $\varphi_{\overline{a}}$, for $\overline{a}$ in $M$. As a first approximation, we can take the conjunction of all atomic and negated atomic formulas satisfied by $\overline{a}$. This captures the quantifier-free type of $\overline{a}$. But knowing the quantifier-free type of $\overline{a}$ does not tell us the possible quantifier-free types of extensions $\overline{a}b$ of the tuple. So we consider formulas of the form $$\Bigl(\forall y\, \bigvee_i \psi_i(x,y)\Bigr) \land \Bigl(\bigwedge_i \exists y\, \psi_i(x,y)\Bigr)$$ where the $\psi_i$ are complete quantifier-free formulas. Let's call these formulas $1$-extension formulas. Now we can take the conjunction of all $1$-extension formulas satisfied by a tuple. That tells us about all of the complete quantifier-free types of extensions of the tuple, but not yet about all of the complete $1$-extension types of extensions of the tuple!
In other words, $1$-extension formulas let us start with a partial isomorphism between finite tuples and extend by one more element - but no more! That's not enough to build a total isomorphism by back-and-forth.
So this leads us to define $2$-extension formulas, which look just like $1$-extension formulas, but now the $\psi_i$ are conjunctions of $1$-extension formulas. Satisfying the same $2$-extension formulas means that we can extend a back-and-forth argument by two steps, but no more. Continuing, we can define $n$-extension formulas for all finite $n$. Satisfying the same $n$-extension formulas for all finite $n$ ensures we can continue a back-and-forth argument any finite number of steps, but not necessarily countably many steps.
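Written as a transfinite recursion, the hierarchy just described looks as follows (the notation is schematic and mine, not exactly Knight's):

```latex
% Stage 0: the quantifier-free description of the tuple \bar{a} in M.
\varphi^{0}_{\bar{a}}(\bar{x}) \;=\; \bigwedge \bigl\{\, \theta(\bar{x}) :
    \theta \text{ atomic or negated atomic},\ M \models \theta(\bar{a}) \,\bigr\}

% Successor stage: the extension-formula pattern from the text.
\varphi^{\alpha+1}_{\bar{a}}(\bar{x}) \;=\;
    \Bigl(\forall y \bigvee_{b \in M} \varphi^{\alpha}_{\bar{a}b}(\bar{x},y)\Bigr)
    \;\land\;
    \Bigl(\bigwedge_{b \in M} \exists y\, \varphi^{\alpha}_{\bar{a}b}(\bar{x},y)\Bigr)

% Limit stage: conjoin all earlier stages.
\varphi^{\lambda}_{\bar{a}}(\bar{x}) \;=\; \bigwedge_{\alpha<\lambda} \varphi^{\alpha}_{\bar{a}}(\bar{x})
```

Since $M$ is countable, each stage involves only countable conjunctions and disjunctions, so these are honest $\mathcal{L}_{\omega_1,\omega}$-formulas.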
Scott's idea was to iterate this construction transfinitely, defining $\alpha$-extension formulas for all countable ordinals $\alpha$.
Now if we assume that $M$ is a countable structure, it has only countably many tuples, so it won't take all $\aleph_1$-many countable ordinals to distinguish all the tuples in $M$. In other words, there is some countable ordinal $\alpha$ (the Scott rank of $M$) such that when trying to understand the behavior of tuples from $M$, it suffices to consider all the $\alpha$-extension formulas satisfied by the tuple. The countable conjunction of all of these is the "complete formula" referred to in the explanation of the Scott sentence above, and these formulas allow back-and-forth arguments to go "all the way up".
Assume ZFC. Then $\beta_{\mathrm{SOL}}>\beta_{\mathcal{L}_{\infty,\omega_1}}$.
Lemma: For each limit ordinal $\beta$ there is a $\Sigma_1$ surjection $F_\beta:[\beta]^{<\omega}\to L_\beta$, uniformly $\Sigma_1^{L_\beta}$ without parameters.
In fact, we can restrict the domain of $F_\beta$ to be sets of limit ordinals and integers.
(This is a standard fact.)
Claim 1: $\beta_{\mathrm{SOL}}\geq\beta_{\mathcal{L}_{\infty,\omega_1}}$.
Suppose otherwise. Then there is a finite set of limit ordinals $S\subseteq(\beta_{\mathrm{SOL}}+1)$ with $\beta_{\mathrm{SOL}}\in S$ and such that for each $\beta\in S$ there is a $\Sigma_1$ formula $\varphi$ such that $\varphi^{L_\beta}(S\cap\beta,\cdot)$ defines a theory
witnessing that $\beta<\beta_{\mathcal{L}_{\infty,\omega_1}}$. (Let $\beta_0=\beta_{\mathrm{SOL}}$. Let $S_0\subseteq\beta_0$ be a finite set of limit ordinals such that there is a $\Sigma_1$ formula $\varphi_0$
such that $\varphi_0(S_0,\cdot)$ works over $L_{\beta_0}$. Let $\beta_1=\max(S_0)$. Let $S_1\subseteq\beta_1$ be a finite set of limits and $\varphi_1$ be such that $\varphi_1((S_0\cup S_1)\cap\beta_1,\cdot)$ works over $L_{\beta_1}$. Let $\beta_2=\max((S_0\cup S_1)\cap\beta_1)$. Etc. Note that $\beta_0>\beta_1>\beta_2>\ldots$, so the process eventually stops with $S_n=\emptyset$. Set $S=S_0\cup S_1\cup\ldots\cup S_{n-1}$.)
Let $S=\{\beta_0>\beta_1>\ldots>\beta_n\}$ and let $\varphi_k$ be such that
$\varphi_k(S\cap\beta_k,\cdot)$ works over $L_{\beta_k}$ for each $k\leq n$.
Using the formulas $\varphi_0,\ldots,\varphi_n$, we can write down a second-order statement whose unique model $M$ is $V_{\beta_0+\omega}$. For there is easily a statement whose models are exactly those of the form $V_{\beta+\omega}$ with $\beta$ infinite (up to isomorphism). So assert this, and also that $M$ thinks that there is a sequence $\gamma_0>\ldots>\gamma_n$ of limit ordinals such that for each $k\leq n$, letting $T_k=\{\psi\bigm|L_{\gamma_k}\models\varphi_k(\{\gamma_{k+1},\ldots,\gamma_n\},\psi)\}$, then $T_k$ is not satisfiable, but for each $x\in L_{\gamma_k}$, $T_k\cap x$ is satisfiable.
Note that $V_{\beta+\omega}$ (with $\beta$ infinite) is always correct about satisfiability for $\mathcal{L}_{\infty,\omega_1}$, as a Löwenheim–Skolem argument shows. (The basic point is to consider hulls closed under $\omega$-sequences, and coding these to get them in $V_{\beta+\omega}$. Having enough of these uses some Choice, though.) So let $M=V_{\beta+\omega}$ be a model of these things. Then $\gamma_n=\beta_n$. For let $T^{\gamma_n}_n$ and $T_n^{\beta_n}$ be the interpretations of $\varphi_n(\emptyset,\cdot)$ over $L_{\gamma_n}$ and $L_{\beta_n}$ respectively. Then because $\varphi_n$ is $\Sigma_1$ and no parameter is used,
if $\gamma_n\leq\beta_n$ then $T_n^{\gamma_n}\subseteq T_n^{\beta_n}$,
and if $\beta_n\leq\gamma_n$ it is vice versa. And if $\gamma_n<\beta_n$ then $T_n^{\gamma_n}\in L_{\beta_n}$, and if $\beta_n<\gamma_n$ it is vice versa. But then the satisfiability/non-satisfiability of the various fragments (with correctness of $M$ regarding this) implies $\beta_n=\gamma_n$. Proceeding by induction, at stage $k$ we know $\gamma_{k+1}=\beta_{k+1},\ldots,\gamma_n=\beta_n$, so $T_k^{\beta_k}$ and $T_k^{\gamma_k}$ are defined from the same parameter, and so we can argue just as in the case $k=n$. So in particular we get $\gamma_0=\beta_0=\beta_{\mathrm{SOL}}$.
Now let $\psi$ be the (second order) statement used above to describe $V_{\beta_0+\omega}$. We now define a second order theory $T$ over $L_{\beta_0}$, with constant symbols $\alpha$ for each $\alpha<\beta_0$, plus two further constant symbols $\dot{\xi},\dot{\beta}$: $T$ asserts $\psi$ + "$\dot{\xi},\dot{\beta}$ are ordinals" + "$\alpha$ is an ordinal" for each $\alpha<\beta_0$ + "$\alpha_0<\alpha_1<\dot{\xi}<\dot{\beta}$" for each $\alpha_0<\alpha_1<\beta_0$ + "$\dot{\beta}$ is the ordinal $\gamma_0$ specified by $\psi$". If $x\in L_{\beta_0}$ then $T\cap x$ is satisfiable, as $V_{\beta_0+\omega}$ models it (the point being that we interpret $\dot{\beta}$ as $\beta_0$ and $\dot{\xi}$ as some ordinal $<\beta_0$), but $T$ is not satisfiable (because there is no room left to interpret $\dot{\xi}$). It is also $\Sigma_1^{L_{\beta_0}}$.
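Spelled out, writing $\underline{\alpha}$ for the constant naming $\alpha$, the theory is:

```latex
T \;=\; \{\psi\}
  \;\cup\; \{\text{``}\dot{\xi}, \dot{\beta}\text{ are ordinals''}\}
  \;\cup\; \{\text{``}\underline{\alpha}\text{ is an ordinal''} : \alpha < \beta_0\}
  \;\cup\; \{\underline{\alpha_0} < \underline{\alpha_1} < \dot{\xi} < \dot{\beta} : \alpha_0 < \alpha_1 < \beta_0\}
  \;\cup\; \{\text{``}\dot{\beta}\text{ is the ordinal }\gamma_0\text{ specified by }\psi\text{''}\}
```

Roughly: any model of all of $T$ would embed $\beta_0$ order-preservingly below $\dot{\xi}<\dot{\beta}=\beta_0$, which is impossible; but any fragment $T\cap x$ with $x\in L_{\beta_0}$ mentions only boundedly many of the constants $\underline{\alpha}$, so $V_{\beta_0+\omega}$ can satisfy it by interpreting $\dot{\xi}$ above the bound.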
This contradicts that $\beta_0=\beta_{\mathrm{SOL}}$.
Claim 2: $\beta_{\mathrm{SOL}}\neq\beta_{\mathcal{L}_{\infty,\omega_1}}$.
Since $\mathcal{L}_{\infty,\omega_1}$-satisfiability is absolute to $V_{\xi+\omega}$, note that there is a SOL-statement $\psi$ such that $V_{\beta+\omega}$ is the unique model of $\psi$, where $\beta=\beta_{\mathcal{L}_{\infty,\omega_1}}$.
But then we can use the same trick building a theory $\psi$ + "slightly too many ordinals $<\beta$" as at the end of the proof of Claim 1, for a contradiction.
It would be interesting to know whether one can get rid of the appeals to AC.
There are various things we can say.
Right off the bat, though, there's a serious type problem. When we talk about decidability without further elaboration, we're referring to sets of natural numbers, or things which can "naturally" be coded as such (e.g. finite strings of rationals, formulas in the language of set theory, etc.). However, there are continuum-many $\mathcal{L}_{\omega_1,\omega}$-sentences, so this is immediately going to break down.
There are two obvious responses to this:
Restrict attention to countable fragments of $\mathcal{L}_{\omega_1,\omega}$.
Use some sort of generalized computability theory to handle the situation.
And in fact, as a third approach, we can merge the two: use generalized computability theory to study arbitrary countable fragments.
Each of these three approaches yields interesting results, and in fact it's a bad idea to separate them too much (as indicated by the third). There's too much to put into a single answer, but I'll mention two specific topics which have driven lots of research and are surprisingly accessible. Note that I've backed away from decidability per se; the issues here are so foundational that it's really worth stepping all the way back and seeing how "complexity in $\mathcal{L}_{\omega_1,\omega}$" is developed in the first place.
Overall, I recommend the paper Barwise: infinitary logic and admissible sets as a starting point. While as the title suggests this focuses on the impact of one researcher in particular, it covers a lot of ground - and Barwise really was the most important figure in computability-theoretic infinitary logic, in my opinion. It's also extremely readable.
(The paper Barwise: abstract model theory and generalized quantifiers is also excellent and probably of interest, but less directly related. Ash and Knight's book meanwhile is extremely lovely and relevant, but is really limited to the particular case of $\mathcal{L}_{\omega_1,\omega}\cap L_{\omega_1^{CK}}$, which is not the whole story.)
Scott sentences
Probably the most striking thing about $\mathcal{L}_{\omega_1,\omega}$ right off the bat is Scott's isomorphism theorem, which says that every countable structure (in a countable language) can be "pinned down" by a single $\mathcal{L}_{\omega_1,\omega}$-sentence. Precisely: for every countable structure $\mathfrak{A}$ in a countable language, there is an $\mathcal{L}_{\omega_1,\omega}$-sentence $\sigma_{\mathfrak{A}}$ whose countable models are exactly the isomorphic copies of $\mathfrak{A}$.
Since countable structures are perfect for computability theory, we immediately run into a bunch of natural questions about the difficulty of finding a "Scott sentence" for a given structure. The first big theorem here is that this can force us beyond the computable: there is a computable structure with no computable Scott sentence.
(Note that "computable $\mathcal{L}_{\omega_1,\omega}$-sentence" does make sense - $\mathcal{L}_{\omega_1,\omega}$-sentences are naturally coded by individual reals, or more clearly labelled well-founded trees, and we can talk about the computability of those codes.) The simplest example of such a thing is the Harrison order - see here.
It turns out that the question of measuring the "Scott complexity" of a computable (or general) structure is surprisingly nuanced - see e.g. here or here. I don't want to go into that here, though.
Fragments and proof systems
Building on work of López-Escobar, who looked at the whole of $\mathcal{L}_{\omega_1,\omega}$, Barwise introduced the notion of a countable fragment of infinitary logic and showed that countable fragments have extremely nice properties. Specifically, suppose that $\mathbb{A}$ is a countable admissible set, that is, a countable transitive set satisfying $\mathsf{KP}$. Let $\mathcal{L}_{\mathbb{A}}=\mathcal{L}_{\infty,\omega}\cap\mathbb{A}$; that's not a typo, since $\mathbb{A}$ might not realize that each of its elements is countable. Then $\mathcal{L}_{\mathbb{A}}$ satisfies analogues of various properties of first-order logic, like the compactness and completeness theorems - now the Barwise compactness and Barwise completeness theorems. This lets us develop, for example, an analogue of Gödel's incompleteness theorem for fragments of infinitary logic (see also this old MO question of mine).
I'm glossing over and simplifying a lot here; Barwise's wonderful book is the go-to for all this. But the point is that with a reasonable proof theory in place, "Gödel-style" complexity analyses can be carried out smoothly.
Note, however, that the relevant notion of computability is not classical computability; rather, we look at admissible recursion theory (or computability on admissible sets). The idea is that "finite" is replaced with "$\in\mathbb{A}$" and "c.e." is replaced with "$\Sigma_1$ over $\mathbb{A}$" - think about the special case $\mathbb{A}=L_\omega$ to see why this makes sense. This was initially hinted at by Kreisel and Kripke in the context of levels of $L$ specifically, the key breakthrough being that, "one level up," the right analogy is $\Pi^1_1$ = c.e. and $\Delta^1_1$ = finite (these correspond to "$\Sigma_1$-over-$L_{\omega_1^{CK}}$ subset of $\omega$" and "subset of $\omega$ which is an element of $L_{\omega_1^{CK}}$," respectively, where $\omega_1^{CK}$ is the first admissible ordinal $>\omega$).
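The analogy from the paragraph above, restricted to subsets of $\omega$ and the special case $\mathbb{A}=L_{\omega_1^{CK}}$, can be tabulated:

```latex
\begin{array}{ll}
\textbf{classical notion} & \textbf{analogue one level up} \\
\hline
\text{finite} & \Delta^1_1 \;\;(=\text{ subset of }\omega\text{ in }L_{\omega_1^{CK}}) \\
\text{c.e.} & \Pi^1_1 \;\;(=\ \Sigma_1\text{-over-}L_{\omega_1^{CK}}\text{ subset of }\omega) \\
\end{array}
```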
The standard source on higher recursion theory is Sacks' book, although it really focuses on levels of $L$ (until it gets to $E$-recursion, which is a very different type of thing anyways). Computability on arbitrary admissible sets can actually be quite weird; an admissible set need not have a simply-definable well-ordering of itself, so we can't "index Turing machines" anymore in general.