In general, if you have a recurrence relation of the form
$$
a_0, \cdots, a_{n-1} \text{ arbitrary}, \quad a_m = \sum_{i=1}^{n} c_i a_{m-i} \text{ for } m \ge n, \quad c_i \in R
$$
(where $R$ is some ring, commutative and unital let's say), the set of sequences $(a_m)$ satisfying that recurrence relation is a free $R$-module of rank $n$; that's simply because there is exactly one solution for each choice of initial parameters.
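To see that linearity concretely, here is a minimal Python sketch (the helper name `solve` is mine, not from the discussion): it extends arbitrary initial data by such a recurrence and checks that solutions add term by term.

```python
def solve(init, coeffs, length):
    """Extend the initial values by a_m = c_1*a_{m-1} + ... + c_n*a_{m-n}."""
    a = list(init)
    while len(a) < length:
        a.append(sum(c * a[-i] for i, c in enumerate(coeffs, start=1)))
    return a

# Solutions form a module: the solution for summed initial data is the
# term-by-term sum of the individual solutions.
u, v, c = [1, 2], [5, -1], [3, -2]
s_sum = solve([a + b for a, b in zip(u, v)], c, 8)
assert s_sum == [x + y for x, y in zip(solve(u, c, 8), solve(v, c, 8))]
```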
I'm saying this because you're using the same recurrence relation for both sequences $T_n$ and $U_n$; the difference is that you're working over the ring $\mathcal R$ of (let's say) analytic functions in one real variable $x$ (polynomials and trigonometric functions are analytic). By analytic I mean admitting a Taylor expansion that converges everywhere.
For initial parameters $a_0, a_1 \in \mathcal R$, let $C(a_0,a_1)_n$ denote the corresponding solution to your recurrence relation, so that $C(a_0,a_1)_n$ is characterized by
$$
C(a_0,a_1)_0 = a_0, \quad C(a_0,a_1)_1 = a_1, \quad C(a_0,a_1)_n = 2x\, C(a_0,a_1)_{n-1} - C(a_0,a_1)_{n-2} \text{ for } n \ge 2.
$$
Notice that for $a_0,a_1,b_0,b_1 \in \mathcal R$, we have
$$
C(a_0,a_1)_n + C(b_0,b_1)_n = C(a_0 + b_0, a_1 + b_1)_n
$$
and for $\lambda \in \mathcal R$,
$$
C(\lambda a_0, \lambda a_1)_n = \lambda C(a_0,a_1)_n
$$
by $\mathcal R$-linearity of the condition imposed on $C$ (this is what I meant by $\mathcal R$-linearity; I do not mean $\mathbb R$-linearity!).
We have
$$
C(1,x)_n = T_n, \qquad C(1,2x)_n = U_n.
$$
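These two seedings are easy to sanity-check numerically, using the trigonometric closed forms $T_n(\cos t)=\cos nt$ and $U_n(\cos t)=\sin((n+1)t)/\sin t$ as references (a sketch; the helper `C` is my own name for the recurrence):

```python
import math

# Sketch: run the recurrence from seeds (1, x) and (1, 2*x) and compare
# against T_n(cos t) = cos(n*t) and U_n(cos t) = sin((n+1)*t)/sin(t).

def C(a0, a1, n, x):
    """n-th term of a_k = 2*x*a_{k-1} - a_{k-2} with a_0, a_1 given."""
    prev, cur = a0, a1
    for _ in range(n):
        prev, cur = cur, 2 * x * cur - prev
    return prev

t = 0.7
x = math.cos(t)
for n in range(8):
    assert abs(C(1, x, n, x) - math.cos(n * t)) < 1e-9                          # T_n
    assert abs(C(1, 2 * x, n, x) - math.sin((n + 1) * t) / math.sin(t)) < 1e-9  # U_n
```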
If $f,g \in \mathcal R$ are arbitrary, the $\mathcal R$-linearity shows that
$$
C(f,g)_n = f C(1,0)_n + g C(0,1)_n,
$$
so it suffices to compute only those.
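The decomposition itself can be spot-checked at a sample point; in the sketch below (the helper name `C` is mine) `f` and `g` stand for the values $f(x), g(x)$ of two functions at that point.

```python
# Sketch: compare C(f, g)_n with f*C(1,0)_n + g*C(0,1)_n numerically.

def C(a0, a1, n, x):
    """n-th term of a_k = 2*x*a_{k-1} - a_{k-2} with a_0, a_1 given."""
    prev, cur = a0, a1
    for _ in range(n):
        prev, cur = cur, 2 * x * cur - prev
    return prev

x, f, g = 0.3, 1.7, -2.5   # arbitrary sample values
for n in range(10):
    assert abs(C(f, g, n, x) - (f * C(1, 0, n, x) + g * C(0, 1, n, x))) < 1e-9
```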
We see that as vectors in $\mathcal R^{\oplus 2}$,
$$
2(1,x) - (1,2x) = (1,0)
$$
so
$$
C(1,0)_n = 2 T_n - U_n.
$$
Now let us have a look at $C(0,1)_n$. We have
$$
a_0 = 0 \\
a_1 = 1 \\
a_2 = 2x(1) - 0 = 2x
$$
and the recurrence continues afterwards exactly like $U_n$, since it satisfies the same recurrence with the same initial conditions, just shifted by one index. That means
$$
\forall n \ge 1, \quad C(0,1)_n = U_{n-1}, \quad C(0,1)_0 = 0.
$$
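Both identities can be spot-checked at a sample point $x = \cos t$ (a sketch; `C`, `T`, `U` are my own helper names, with $T$ and $U$ evaluated via their trig closed forms):

```python
import math

# Sketch: check C(1,0)_n = 2*T_n - U_n and, for n >= 1, C(0,1)_n = U_{n-1}
# at a sample x = cos(t).

def C(a0, a1, n, x):
    """n-th term of a_k = 2*x*a_{k-1} - a_{k-2} with a_0, a_1 given."""
    prev, cur = a0, a1
    for _ in range(n):
        prev, cur = cur, 2 * x * cur - prev
    return prev

t = 0.4
x = math.cos(t)
def T(n): return math.cos(n * t)
def U(n): return math.sin((n + 1) * t) / math.sin(t)

assert C(0, 1, 0, x) == 0          # the n = 0 case is defined separately
for n in range(1, 9):
    assert abs(C(1, 0, n, x) - (2 * T(n) - U(n))) < 1e-9
    assert abs(C(0, 1, n, x) - U(n - 1)) < 1e-9
```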
Therefore, if $f, g \in \mathcal R$ are arbitrary analytic functions, you will have
$$
C(f,g)_n = f C(1,0)_n + g C(0,1)_n = f(2 T_n - U_n) + g U_{n-1}.
$$
For instance, if $f=1$ and $g=x$, you have
$$
T_n = C(1,x)_n = (2 T_n - U_n) + x U_{n-1}.
$$
(for $n=0$ we check directly: $T_0 = C(1,0)_0 + x\, C(0,1)_0 = 1 + 0 = 1$; the general formula requires $n \ge 1$ because $C(0,1)_n = U_{n-1}$ only holds there, $C(0,1)_0$ being defined separately).
Why is that true? Let's put the $U$'s on one side and the $T$'s on the other:
$$
U_n - x U_{n-1} = T_n
$$
This is true for $n=1$:
$$
U_1 - x U_0 = 2x - x(1) = x = T_1
$$
and for $n=2$:
$$
U_2 - x U_1 = (4x^2-1) - x(2x) = 2x^2 - 1 = T_2
$$
and just because you might be worried about the appearance of $n-3$ in what follows, it's also true for $n=3$:
$$
U_3 - x U_2 \\
= (2x(4x^2-1) - 2x) - x(4x^2 - 1) \\
= 4x^3 - 3x \\
= 2x(2x^2-1) - x \\
= T_3
$$
and if we substitute in the recurrence relation:
$$
U_n - x U_{n-1} \\
= (2x U_{n-1} - U_{n-2}) - x(2x U_{n-2} - U_{n-3}) \\
= 2x(U_{n-1} - x U_{n-2}) - (U_{n-2} - x U_{n-3}) \\
= 2x T_{n-1} - T_{n-2} \\
= T_n.
$$
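The identity $U_n - x U_{n-1} = T_n$ can also be checked numerically at a sample point, again using the closed forms $T_n(\cos t)=\cos nt$ and $U_n(\cos t)=\sin((n+1)t)/\sin t$ (a sketch with my own helper names):

```python
import math

# Sketch: verify U_n - x*U_{n-1} = T_n at x = cos(t).
t = 1.1
x = math.cos(t)
def T(n): return math.cos(n * t)
def U(n): return math.sin((n + 1) * t) / math.sin(t)

for n in range(1, 12):
    assert abs(U(n) - x * U(n - 1) - T(n)) < 1e-9
```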
So yeah, I didn't do any sorcery apparently. Note that by re-arranging the previous equation:
$$
T_n = (2 T_n - U_n) + x U_{n-1}.
$$
You get
$$
U_n \\
= (2T_n - T_n) + x U_{n-1} \\
= T_n + x U_{n-1} \\
= ((2 T_n - U_n) + x U_{n-1}) + x U_{n-1} \\
= (2 T_n - U_n) + 2x U_{n-1} \\
= (1) C(1,0)_n + 2x C(0,1)_n.
$$
I guess in some sense $T_n$ and $U_n$ were considered fundamental to this system since $U_n = C(0,1)_{n+1}$ and $C(0,1)_0 = 0$, so people just dismissed the "zero" part of $C(0,1)$.
That's what I have to say. I guess there could be more fundamental stuff to discuss about this, but I'm out for now!
Hope that helps,
Best Answer
$n=m=0 \Rightarrow \int T_{n}(x)\,T_{m}(x)\,{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}=\int \frac{\mathrm{d}x}{\sqrt{1-x^2}}=\arcsin x$
$\int \frac{\mathrm{d}x}{\sqrt{1-x^2}}=\arcsin x \Rightarrow \int_{-1}^{1} T_{n}(x)\,T_{m}(x)\,{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}=\arcsin 1-\arcsin(-1)=\pi$
$x=\cos \theta$ (so $\theta$ runs from $\pi$ to $0$ as $x$ runs from $-1$ to $1$) and $n=m\neq 0 \Rightarrow \int T_{n}(x)\,T_{m}(x)\,{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}=-\int\cos^2 n\theta \,\mathrm{d}\theta$
$-\int\cos^2 n\theta\,\mathrm{d}\theta=-\left(\frac \theta2+\frac{\sin 2n\theta}{4n}\right) \Rightarrow \int_{-1}^{1} T_{n}(x)\,T_{m}(x)\,{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}=\frac\pi 2$
Let $m\neq n$:
$x=\cos\theta \Rightarrow \mathrm{d}x=-\sin\theta \,\mathrm{d}\theta$
$x=\cos\theta \Rightarrow T_m(x)=\cos m\theta$
$\int T_{n}(x)\,T_{m}(x)\,{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}=-\int \cos m\theta \cos n\theta{d\theta}$
$\int \cos m\theta \cos n\theta\,\mathrm{d}\theta=\frac{\sin(m-n)\theta}{2(m-n)}+\frac{\sin(m+n)\theta}{2(m+n)} \Rightarrow \int_{-1}^{1} T_{n}(x)\,T_{m}(x)\,{\frac {\mathrm {d} x}{\sqrt {1-x^{2}}}}=0$ (since $\sin k\pi = 0$ for every integer $k$)
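All three orthogonality cases can be spot-checked numerically: substituting $x=\cos\theta$ turns the weighted integral into $\int_0^\pi \cos n\theta\cos m\theta\,\mathrm d\theta$, which a simple midpoint rule approximates well (a sketch; the helper name is mine).

```python
import math

# Sketch: midpoint-rule approximation of the Chebyshev weighted inner
# product, written as an integral over theta in [0, pi].

def chebyshev_inner(n, m, steps=100_000):
    h = math.pi / steps
    return h * sum(math.cos(n * (k + 0.5) * h) * math.cos(m * (k + 0.5) * h)
                   for k in range(steps))

assert abs(chebyshev_inner(0, 0) - math.pi) < 1e-6       # n = m = 0
assert abs(chebyshev_inner(3, 3) - math.pi / 2) < 1e-6   # n = m != 0
assert abs(chebyshev_inner(2, 5)) < 1e-6                 # n != m
```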