It is known that the roots of the Chebyshev polynomials of the second kind, denoted by $U_n(x)$, lie in the interval $(-1,1)$ and are simple (of multiplicity one). I have noticed (by checking low values of $n$) that the roots of $U_n(x)+U_{n-1}(x)$ also lie in $(-1,1)$; I also noticed that the roots of $(1-x)U_n(x)+U_{n-1}(x)$ lie in $(-2,2)$. But I don't have any idea how to prove this in general. I wonder, first, whether these claims are true, and how I can start proving them?
[Math] Roots of the Chebyshev polynomials of the second kind.
analysis, chebyshev-polynomials, orthogonal-polynomials, roots
Related Solutions
To resolve the second question, note first that the Legendre polynomials are odd functions for odd order (so $0$ is then a root of the polynomial), and even functions for even order. Thus, with regard to solubility in terms of radicals, you should be able to derive (possibly complicated!) radical expressions at least up until $P_9(x)$. To use that as an example, note that
$$\frac{P_9(\sqrt{x})}{\sqrt{x}}$$
is a quartic; thus, one can use the quartic formula to derive explicit expressions for its roots, and from those you can easily derive the roots of $P_9(x)$.
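The substitution can be checked concretely. Here is a minimal numerical sketch using numpy's `legendre` helpers (numerical rather than symbolic, so it finds the quartic's roots in floating point rather than as radicals; the variable names are my own):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Monomial coefficients of P_9, low to high
c9 = L.leg2poly([0.0] * 9 + [1.0])
# P_9 is odd, so even-power coefficients vanish and P_9(x) = x * q(x^2)
q = c9[1::2]                      # quartic in y = x^2, low to high
y_roots = np.roots(q[::-1])      # np.roots wants high-to-low order
x_pos = np.sqrt(np.real(y_roots))
roots_p9 = np.sort(np.concatenate([[0.0], x_pos, -x_pos]))
nodes, _ = L.leggauss(9)         # reference values, ascending
print(np.max(np.abs(roots_p9 - nodes)))
```

The nine roots of $P_9$ are recovered as $0$ and $\pm\sqrt{y}$ over the four roots $y$ of the quartic.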
$P_{10}(x)$ is where your trouble starts. If we take a look at the polynomial
$$P_{10}(\sqrt{x})$$
we have a quintic to contend with. I'll skip the relatively tedious details, but you can verify that its Galois group is not a solvable group, and thus the solution cannot be expressed in terms of radicals (you can use theta or hypergeometric functions, though).
So, not much hope on the symbolic front. On the numeric front, things are much easier. The slickest way of getting accurate values of the roots of the Legendre polynomial is to use the Jacobi matrix in my previous answer. Since there exist stable and efficient algorithms (e.g. QR algorithm or divide-and-conquer) for the symmetric eigenproblem (in LAPACK, for instance), and these can be configured to return only eigenvalues, you have a good way of generating good approximate values of Legendre polynomial roots. (In the context of Gaussian quadrature, where the roots of orthogonal polynomials play a pivotal role, the scheme is referred to as the Golub-Welsch algorithm.)
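A minimal sketch of the eigenvalue half of this scheme, assuming the standard symmetrized three-term recurrence for Legendre polynomials (off-diagonal entries $b_k = k/\sqrt{4k^2-1}$, zero diagonal); the function name is illustrative:

```python
import numpy as np

def legendre_roots_jacobi(n):
    """Roots of P_n as eigenvalues of the symmetric Jacobi matrix
    (the eigenvalue half of the Golub-Welsch algorithm)."""
    k = np.arange(1, n)
    # Off-diagonal entries from the symmetrized three-term recurrence
    b = k / np.sqrt(4.0 * k**2 - 1.0)
    J = np.diag(b, 1) + np.diag(b, -1)   # diagonal is zero for Legendre
    return np.linalg.eigvalsh(J)          # sorted ascending

roots = legendre_roots_jacobi(10)
nodes, _ = np.polynomial.legendre.leggauss(10)  # also ascending
print(np.max(np.abs(roots - nodes)))            # machine-precision agreement
```

Note that `eigvalsh` computes eigenvalues only, which is exactly the "eigenvalues only" setting mentioned above.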
Alternatively, as I mentioned in the comments, there exist asymptotic approximations for the roots, which can then be polished with a few applications of Newton-Raphson. One such asymptotic approximation is due to Francesco Tricomi. Letting $\xi_{n,k}$ be the $k$-th root of $P_n(x)$, ordered in decreasing order, we have
$$\xi_{n,k}\approx\left(1-\frac1{8n^2}+\frac1{8n^3}\right)\cos\left(\pi\frac{4k-1}{4n+2}\right)$$
where terms of order $O(n^{-4})$ and beyond are omitted. Other asymptotic approximations due to Luigi Gatteschi use roots of Bessel functions, but I won't say more about those.
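The "asymptotic guess plus Newton polish" idea can be sketched as follows, using numpy's Legendre module to evaluate $P_n$ and $P_n'$ (the function name and the choice of three Newton steps are my own):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_roots_tricomi(n, newton_steps=3):
    """Approximate the n roots of P_n with Tricomi's formula,
    then polish each with a few Newton-Raphson steps."""
    k = np.arange(1, n + 1)
    # Tricomi's asymptotic initial guesses, ordered decreasingly
    x = (1 - 1/(8*n**2) + 1/(8*n**3)) * np.cos(np.pi * (4*k - 1) / (4*n + 2))
    c = np.zeros(n + 1)
    c[n] = 1.0                      # P_n in the Legendre basis
    dc = L.legder(c)                # coefficients of P_n'
    for _ in range(newton_steps):
        x = x - L.legval(x, c) / L.legval(x, dc)
    return x

roots = legendre_roots_tricomi(20)
nodes, _ = L.leggauss(20)           # reference values, ascending order
print(np.max(np.abs(np.sort(roots) - nodes)))
```

Since the initial guesses are already accurate to $O(n^{-4})$, quadratic convergence of Newton's method brings them to machine precision in very few steps.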
Evaluating polynomials of arbitrarily large degree in a Chebyshev basis is practical, and provably numerically stable, using a barycentric interpolation formula. In this case, extended precision isn't needed, even for order 1,000,000 polynomials. See the first section of this paper and the references, or here (Myth #2) for more details. I'll summarize briefly. Let's say you have a polynomial $f$ in a Chebyshev basis, and you know its values at the Chebyshev nodes $$ f_j = f(x_j) $$ $$ x_j = \cos\left(\frac{\pi j}{N}\right), \;\; 0\leq j\leq N $$
Then for any $x$ which isn't one of the Chebyshev nodes $x_j$, we have $$ f(x) = \frac{\displaystyle \sum_{j=0}^N \frac{w_j}{x-x_j}f_j}{\displaystyle \sum_{j=0}^N \frac{w_j}{x-x_j}}, $$ where $$ w_j = \left\{ \begin{array}{cc} (-1)^j/2, & j=0\text{ or }j=N,\\ (-1)^j, & \text{otherwise} \end{array} \right. $$
I believe a naive implementation of the above for various values of $x$ provides a stable algorithm that does not suffer the numerical difficulties encountered when trying to sum up the Chebyshev polynomials directly. The key is to work with the representation of the function by its values, not by its coefficients in a Chebyshev basis.
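A naive implementation of the barycentric formula above might look like this (a sketch; the scalar-argument helper `bary_eval` is my own naming, and the node-coincidence check simply returns the stored value):

```python
import numpy as np

def bary_eval(fvals, x):
    """Evaluate the degree-N interpolant through the Chebyshev nodes
    x_j = cos(pi*j/N), with values fvals[j], via the barycentric formula."""
    N = len(fvals) - 1
    xj = np.cos(np.pi * np.arange(N + 1) / N)
    w = (-1.0) ** np.arange(N + 1)
    w[0] *= 0.5
    w[-1] *= 0.5                     # halved weights at the endpoints
    diff = x - xj
    hit = (diff == 0.0)
    if hit.any():                    # x coincides with a node: return f_j
        return fvals[np.argmax(hit)]
    tmp = w / diff
    return np.dot(tmp, fvals) / np.sum(tmp)

# Example: sample T_5 (degree-5 Chebyshev polynomial) at 11 nodes;
# the interpolant then reproduces it exactly (up to rounding)
N = 10
xj = np.cos(np.pi * np.arange(N + 1) / N)
fvals = np.cos(5 * np.arccos(xj))
print(bary_eval(fvals, 0.3), np.cos(5 * np.arccos(0.3)))
```

The two printed values agree because a degree-$\le N$ interpolant through $N+1$ nodes reproduces any polynomial of degree $\le N$ exactly.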
Best Answer
For your first case:
Since $U_n(x) =\frac{\sin((n+1)t)}{\sin(t)} $ where $x = \cos(t) $,
$\begin{array}\\ U_n(x)+U_{n-1}(x) &=\frac{\sin((n+1)t)}{\sin(t)}+\frac{\sin(nt)}{\sin(t)}\\ &=\frac{\sin((n+1)t)+\sin(nt)}{\sin(t)}\\ &=\frac{2\sin((n+1/2)t)\cos(t/2)}{\sin(t)}\\ \end{array} $
and for $t \in (0,\pi)$ this vanishes exactly when $\sin((n+1/2)t) = 0$ (note that $\cos(t/2) \neq 0$ there), i.e. when $(n+1/2)t = k\pi$ for some integer $k$, or $t = \frac{k\pi}{n+1/2}$ with $1 \le k \le n$.
Since $\cos$ is injective on $(0,\pi)$, the values $x_k = \cos\frac{k\pi}{n+1/2}$ are $n$ distinct roots in $(-1,1)$, and that is all of them, since $U_n(x)+U_{n-1}(x)$ has degree $n$.
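The closed form can be sanity-checked numerically. The sketch below builds the coefficients of $U_n$ from the standard recurrence $U_{k+1}(x) = 2xU_k(x) - U_{k-1}(x)$ (the helper `cheb_u` is my own) and compares the roots of $U_n + U_{n-1}$ against $\cos\frac{k\pi}{n+1/2}$:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def cheb_u(n):
    """Monomial coefficients (low to high) of U_n,
    via U_{k+1} = 2x U_k - U_{k-1}."""
    u_prev, u = np.array([1.0]), np.array([0.0, 2.0])  # U_0, U_1
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, P.polysub(2.0 * P.polymulx(u), u_prev)
    return u

n = 7
coeffs = P.polyadd(cheb_u(n), cheb_u(n - 1))     # U_n + U_{n-1}
numeric = np.sort(P.polyroots(coeffs).real)
k = np.arange(1, n + 1)
closed = np.sort(np.cos(k * np.pi / (n + 0.5)))  # roots from the answer
print(np.max(np.abs(numeric - closed)))
```

All $n$ numerically computed roots match the closed form, and all lie in $(-1,1)$ as the question conjectured.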