I think the way to think about trigonometric substitutions is to remember the fundamental result that
$$\sin^2a + \cos^2a = 1$$
for any angle $a$. So, in your example, you have a square-root
$$\sqrt{x^2 - 4}$$
Imagine for a moment that the 4 were a 1 and that the two terms were reversed, like so:
$$\sqrt{1 - x^2}$$
Then the substitution $x = \sin\theta$ (with $\theta \in [-\pi/2, \pi/2]$, so that $\cos\theta \ge 0$) would result in something simple, without the square-root, namely
$$\sqrt{1 - x^2} = \sqrt{1 - \sin^2\theta} = \sqrt{\cos^2\theta} = \cos\theta$$
That's the gist behind trig substitutions.
Now, in your case in particular, things aren't so simple so let's work it out step by step. First, imagine the 4 is a 1:
$$\sqrt{x^2 - 1}$$
We can't use $x = \sin\theta$ or $x = \cos\theta$ here, because we'd get the square-root of a negative quantity. For instance, using $x = \sin\theta$,
$$\sqrt{x^2 - 1} = \sqrt{\sin^2\theta - 1} = \sqrt{-\cos^2\theta}$$
So how can we use the fundamental relation at the top of this post to get something like $x^2 - 1$? Well, divide both sides by $\cos^2a$ and write it as follows:
$$\frac{\sin^2a}{\cos^2a} + \frac{\cos^2a}{\cos^2a} = \frac{1}{\cos^2a} \quad \rightarrow \quad \tan^2a = \frac{1}{\cos^2a} - 1 = \sec^2a - 1$$
Now it's obvious that $x = \sec\theta$ (with $\theta \in [0, \pi/2)$, so that $\tan\theta \ge 0$) gives us
$$\sqrt{x^2 - 1} = \sqrt{\sec^2\theta - 1} = \sqrt{\tan^2\theta} = \tan\theta$$
and we once again got rid of the square-root. The only thing that remains is the 4. We can deal with that by scaling. Choose $x = 2\sec\theta$, instead, so
$$\sqrt{x^2 - 4} = \sqrt{4\sec^2\theta - 4} = \sqrt{4(\sec^2\theta - 1)} = \sqrt{4\tan^2\theta} = 2\tan\theta$$
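If you want to convince yourself numerically, here's a small sketch (my own addition, standard library only) spot-checking that $x = 2\sec\theta$ really does collapse the square-root for $\theta \in (0, \pi/2)$:

```python
import math

# Spot-check: with x = 2*sec(theta) and theta in (0, pi/2),
# sqrt(x^2 - 4) should equal 2*tan(theta).
for theta in (0.3, 0.7, 1.2):
    x = 2 / math.cos(theta)      # sec(theta) = 1/cos(theta)
    lhs = math.sqrt(x**2 - 4)
    rhs = 2 * math.tan(theta)
    assert abs(lhs - rhs) < 1e-9
```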
So, to summarise, all you really need to remember is the fundamental relation at the top of this post and "massage" it as necessary to get rid of any square-roots that are in your way.
The two main forms that you'll use are:
$$\sin^2a + \cos^2a = 1$$
and
$$\tan^2a + 1 = \sec^2a$$
which, as we've seen, are really one and the same identity.
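Both forms are easy to spot-check numerically; a quick sketch (my addition, standard library only):

```python
import math

# Spot-check both Pythagorean identities at a few angles
# (avoiding odd multiples of pi/2, where tan and sec are undefined).
for a in (0.2, 1.0, 2.5, 4.0):
    assert abs(math.sin(a)**2 + math.cos(a)**2 - 1) < 1e-12
    # sec(a) = 1/cos(a), so sec^2(a) = 1/cos(a)^2
    assert abs(math.tan(a)**2 + 1 - 1 / math.cos(a)**2) < 1e-9
```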
The substitution rule/change of variables theorem says the following:
Suppose $f:[a,b]\to\Bbb{R}$ is continuous, and $u:[\alpha,\beta]\to [a,b]$ is differentiable with Riemann-integrable derivative (or at this point if you don't like remembering various hypotheses, just assume everything is smooth). Then,
\begin{align}
\int_{\alpha}^{\beta}f(u(x))\cdot u'(x)\,dx &= \int_{u(\alpha)}^{u(\beta)}f(t)\,dt
\end{align}
"substitute $t=u(x)$"
If you state the theorem like this, there is no need at all for any injectivity assumptions on $u$; the equality follows immediately from the fundamental theorem of calculus and the chain rule (if $F$ is a primitive of $f$, then the LHS and RHS are both equal to $F(u(\beta))-F(u(\alpha))$). The problem is that people often don't carefully specify the two functions $f$ and $u$; what ends up happening is that they misapply the theorem and then impose extra conditions like injectivity (which of course doesn't hurt, but doesn't really address the issue).
In your example, let $f(t)=t^2$ and $u(x)=\sin x$ (we can define these functions on all of $\Bbb{R}$, so there are no domain issues here at all, and all the compositions make sense, etc.). Then,
\begin{align}
\int_0^{2\pi}\sin^2x\cdot \cos x\,dx &=\int_0^{2\pi}f(u(x))\cdot u'(x)\,dx\\
&=\int_{u(0)}^{u(2\pi)}f(t)\,dt\\
&=\int_0^0t^2\,dt\\
&= 0.
\end{align}
This really is a direct application of the theorem; I'm not sure why you say it is erroneous.
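To back this up numerically (my own check, not part of the argument), a crude midpoint-rule approximation of the integral does come out to $0$:

```python
import math

# Midpoint-rule approximation of the integral of sin^2(x)*cos(x) over [0, 2*pi].
n = 100_000
h = 2 * math.pi / n
total = h * sum(math.sin((k + 0.5) * h)**2 * math.cos((k + 0.5) * h)
                for k in range(n))

# The value should be 0, matching F(u(2*pi)) - F(u(0)) = F(0) - F(0).
assert abs(total) < 1e-6
```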
Of course, a corollary of the theorem I wrote above is the following:
Suppose $g:[\alpha,\beta]\to\Bbb{R}$ is continuous and $v:[\alpha,\beta]\to [a,b]$ is $C^1$ with $C^1$ inverse. Then,
\begin{align}
\int_{\alpha}^{\beta}g(x)\,dx &= \int_{v(\alpha)}^{v(\beta)}g(v^{-1}(t))\cdot (v^{-1})'(t)\,dt\\
&=\int_{v(\alpha)}^{v(\beta)}g(v^{-1}(t))\cdot \frac{1}{v'(v^{-1}(t))}\,dt
\end{align}
"substitute $x=v^{-1}(t)$"
The "advantage" of this formula is that the LHS involves only $g$, i.e. without any change of variables; all the stuff involving the change of variables is moved to the other side of the equation (so every instance of $v$ appears only on the RHS). Compare this to my first formula, where we made no injectivity assumption, and as a result $u$ appears on both the LHS and the RHS of the equation. The added hypothesis of injectivity is the price we pay if we want to isolate everything to one side.
Sometimes in computations, this second form of the theorem (which is really a special case of the one above) is more useful, which is why people may sometimes insist that injectivity is a must.
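Here's a hypothetical concrete instance of this corollary (my own choice of functions, purely for illustration): take $g(x) = x^2$ on $[0,1]$ and the injective substitution $v(x) = e^x$, so $v^{-1}(t) = \ln t$ and $(v^{-1})'(t) = 1/t$; both sides should equal $1/3$:

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

def g(x):
    return x**2

# LHS: integral of g over [0, 1]  (exact value 1/3).
lhs = midpoint(g, 0.0, 1.0)

# RHS: integral of g(v^{-1}(t)) * (v^{-1})'(t) over [v(0), v(1)] = [1, e],
# with v(x) = e^x, v^{-1}(t) = ln(t), (v^{-1})'(t) = 1/t.
rhs = midpoint(lambda t: g(math.log(t)) / t, 1.0, math.e)

assert abs(lhs - 1/3) < 1e-6
assert abs(lhs - rhs) < 1e-6
```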
For the second integral, note that the substitution $u=\sqrt{x^2+a^2}$ implies: $$ u\ge 0 \quad \mbox{and}\quad x=\pm\sqrt{u^2-a^2} $$ so:
$$ dx=\frac{u\,du}{\sqrt{u^2-a^2}} \quad\mbox{for}\quad x \ge 0 $$
$$ dx=\frac{u\,du}{-\sqrt{u^2-a^2}} \quad\mbox{for}\quad x < 0 $$
and the integral splits into two parts, $\int_{-b}^{0} + \int_{0}^{b}$. This gives the correct result.
We have an analogous situation for the first integral with the substitution $$ u=\sin x \qquad \cos x=\pm \sqrt{1-u^2} $$
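To see the sign issue concretely, here's a hypothetical example (my own, not from the question): $\int_0^\pi \cos^2 x\,dx = \pi/2$. Substituting $u = \sin x$ and blindly taking $\cos x = +\sqrt{1-u^2}$ on all of $[0,\pi]$ would give $\int_0^0 \sqrt{1-u^2}\,du = 0$, which is wrong; splitting at $\pi/2$, where $\cos x$ changes sign, recovers the right answer:

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# Direct value of the integral of cos^2(x) over [0, pi]  (exact value pi/2).
direct = midpoint(lambda x: math.cos(x)**2, 0.0, math.pi)

# Split substitution u = sin(x), using cos^2(x) dx = cos(x) du:
#   on [0, pi/2],  u runs 0 -> 1 with cos x = +sqrt(1 - u^2);
#   on [pi/2, pi], u runs 1 -> 0 with cos x = -sqrt(1 - u^2),
#   and flipping the limits cancels the sign, giving the same piece again.
split = 2 * midpoint(lambda u: math.sqrt(1 - u**2), 0.0, 1.0)

assert abs(direct - math.pi / 2) < 1e-5
assert abs(direct - split) < 1e-5
```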