Induction to prove Linear Independence

induction, linear-algebra

This is a problem from Shilov's Linear Algebra that I have been struggling with (p. 56, Q5).

Problem Statement

Show that the functions $t^{r_1},t^{r_2},\dots,t^{r_k}$ are linearly independent in the space $K(a,b)$, where $0<a<b$ and $r_1,r_2,\dots,r_k$ are distinct real numbers.

Attempt

Following the hint given, I have assumed a linear dependence of the form

$$\alpha_1t^{r_1}+\alpha_2t^{r_2}+\cdots+\alpha_kt^{r_k}=0.$$

Dividing through by $t^{r_1}$ (which is nonzero, since $t\in(a,b)$ and $a>0$) gives

$$\alpha_1+\alpha_2\frac{t^{r_2}}{t^{r_1}}+\cdots+\alpha_k\frac{t^{r_k}}{t^{r_1}}=0.$$

Now differentiating with respect to $t$:
$$\alpha_2(r_2-r_1)t^{r_2-r_1-1}+\cdots+\alpha_k(r_k-r_1)t^{r_k-r_1-1}=0$$
From here I run into problems.
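
As a sanity check on this step, here is a small SymPy sketch with arbitrary distinct exponents of my own choosing (not the book's), confirming that the $\alpha_1$ term disappears after dividing and differentiating:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a1, a2, a3 = sp.symbols('a1 a2 a3')
r = [sp.Rational(1, 2), sp.Rational(3, 4), sp.Rational(7, 3)]  # arbitrary distinct exponents

s = a1*t**r[0] + a2*t**r[1] + a3*t**r[2]       # alpha_1 t^{r_1} + alpha_2 t^{r_2} + alpha_3 t^{r_3}
reduced = sp.diff(sp.expand(s / t**r[0]), t)   # divide by t^{r_1}, then differentiate
print(reduced)  # the a1 term is gone; only k - 1 = 2 power functions remain
```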

Letting $k=1$ to prove the base case for the induction, it is clear that the final term vanishes. However, I don't see how the case $k=1$ follows from there. Is it because of the assumption at the beginning?

Furthermore, as I am quite unfamiliar with induction, I do not see what use it brings here. The only thing I can think of is that the induction step turns out to be false, giving us a contradiction and thus proving independence.

Summary of Questions

  1. Is the $k=1$ case true due to the initial assumption of linear dependence?
  2. What is the need for differentiation?
  3. Is induction the only way to show linear independence here? If not, can you provide a link or resource where another method is used?

Best Answer

  1. For $\ k=1\ $, you have just one function, $\ t^{r_1}\ $. Since it is not the zero function on $\ (a,b)\ $, the set $\ \big\{t^{r_1}\big\}\ $ is linearly independent.

  2. As Kavi Rama Murthy's answer shows, differentiation isn't needed. It does, however, provide you with another way of attacking the problem. Your induction hypothesis should be that $\ t^{r_1}, t^{r_2},\dots, t^{r_{k-1}}\ $ are linearly independent for any set of $\ k-1\ $ distinct real numbers $\ r_1,r_2,\dots,r_{k-1}\ $. Since $\ r_2-r_1-1,r_3-r_1-1,\dots,r_k-r_1-1\ $ are a set of $\ k-1\ $ distinct real numbers, your induction hypothesis implies that $\ t^{r_2-r_1-1},t^{r_3-r_1-1},\dots,t^{r_k-r_1-1}\ $ are linearly independent. Thus, from the equation $$ \alpha_2(r_2-r_1)t^{r_2-r_1-1}+\cdots+\alpha_k(r_k-r_1)t^{r_k-r_1-1}=0 $$ you can deduce that $\ (r_i-r_1)\alpha_i=0\ $ for $\ i\ge2\ $, and hence that $\ \alpha_i=0\ $, because $\ r_i-r_1\ne0\ $. Now substituting $\ \alpha_i=0\ $ for $\ i\ge2\ $ back into your original equation $$ \alpha_1t^{r_1}+\alpha_2t^{r_2}+\dots+\alpha_kt^{r_k}=0 $$ gives you $\ \alpha_1t^{r_1}=0\ $, and so $\ \alpha_1=0\ $ as well. This completes the induction step of deducing the linear independence of $\ t^{r_1}, t^{r_2},\dots,t^{r_k}\ $ from the induction hypothesis. (A small SymPy sketch of this reduction appears after this list.)

  3. There's at least one way you can avoid using induction directly, at least for distinct rational $\ r_1,r_2,\dots,r_k\ $; the construction is given below, and a numerical illustration of it appears after this list. I suspect you could use the denseness of the rationals in the reals to parlay this into a proof for arbitrary distinct real $\ r_1,r_2,\dots,r_k\ $, but since the details seem to get fairly messy, I haven't tried to produce one. In any case, whether all the subsidiary results you need to appeal to can themselves be proved without induction is rather moot, and since the proof itself is fairly circuitous, it's not clear to me what you would gain by avoiding the fairly simple inductive proofs already available to you.

    If $\ r_i\ $ are distinct rational numbers for $\ i=1,2,\dots,k\ $, and $$ \alpha_1t^{r_1}+\alpha_2t^{r_2}+\dots+\alpha_kt^{r_k}=0\ , $$ let $\ r_m=\min\limits_{1\le i\le k}r_i\ $ and $\ q_i=r_i-r_m\ $. Then $\ q_1,q_2,\dots,q_k\ $ are distinct non-negative rationals, and on dividing the above equation through by $\ t^{r_m}\ $, we obtain $$ \alpha_1t^{q_1}+\alpha_2t^{q_2}+\dots+\alpha_kt^{q_k}=0\ . $$ Let $\ q_i=\frac{n_i}{d_i}\ $, where $\ n_i\ $ is a non-negative integer and $\ d_i\ $ a positive integer, let $\ d=d_1d_2\dots d_k\ $ and $\ n=1+\max\limits_{1\le i\le k} dq_i\ $ (note that the $\ dq_i\ $ are distinct non-negative integers, so each $\ dq_i\le n-1\ $), and let $\ x_1,x_2,\dots,x_n\ $ be a set of distinct members of the interval $\ [a,b]\ $. Substituting $\ t=x_i\ $ in the above equation, we get $$ \alpha_1x_i^{q_1}+\alpha_2x_i^{q_2}+\dots+\alpha_kx_i^{q_k}=0 $$ for all $\ i=1,2,\dots,n\ $. Putting $\ y_i=x_i^{\frac{1}{d}}\ $, we can write this as $$ \pmatrix{0\\0\\\vdots\\0}=\pmatrix{y_1^{dq_1}&y_1^{dq_2}&\dots&y_1^{dq_k}\\ y_2^{dq_1}&y_2^{dq_2}&\dots&y_2^{dq_k}\\ \vdots&\vdots&\ddots&\vdots\\ y_n^{dq_1}&y_n^{dq_2}&\dots&y_n^{dq_k}}\pmatrix{\alpha_1\\\alpha_2\\\vdots\\\alpha_k}\ . $$ Every column of the matrix on the right of this equation is a column of the $\ n\times n\ $ Vandermonde matrix $$ \pmatrix{1&y_1&y_1^2&\dots&y_1^{n-1}\\ 1&y_2&y_2^2&\dots&y_2^{n-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&y_n&y_n^2&\dots&y_n^{n-1}} $$ which has determinant $\ \prod\limits_{1\le i<j\le n}\big(y_j-y_i\big)\ $. Since $\ y_i\ne y_j\ $ for $\ i\ne j\ $, this determinant is non-zero, so the Vandermonde matrix is non-singular and must have linearly independent columns. It follows that the unique solution of the above linear equations is $$ \alpha_1=\alpha_2=\dots=\alpha_k=0\ , $$ and therefore $\ t^{r_1},t^{r_2},\dots,t^{r_k}\ $ are linearly independent.
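
To see the divide-and-differentiate reduction of point 2 in action, here is a minimal SymPy sketch for $k=3$; the exponents are arbitrary distinct rationals of my own choosing, and the loop simply repeats the reduction until a single term remains:

```python
import sympy as sp

alphas = sp.symbols('a1:4')                     # alpha_1, alpha_2, alpha_3
exps = [sp.Rational(1, 2), sp.Rational(3, 4), sp.Rational(7, 3)]  # arbitrary distinct r_i
terms = list(zip(alphas, exps))                 # (coefficient, exponent) pairs of sum a_i*t^{r_i}

while len(terms) > 1:
    (c0, r0), rest = terms[0], terms[1:]
    # dividing by t^{r0} turns the first term into the constant c0, which the
    # derivative kills; each remaining c*t^r becomes c*(r - r0)*t^(r - r0 - 1)
    terms = [(c * (r - r0), r - r0 - 1) for c, r in rest]

print(terms)  # [(209*a3/72, 7/12)]: a nonzero multiple of a3, forcing a3 = 0;
              # back-substituting then kills a2 and finally a1
```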
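
Likewise, here is a quick NumPy check of the Vandermonde construction in point 3; the exponents $q_i$, the denominator $d$, and the interval $[a,b]=[1,2]$ are all illustrative choices of mine:

```python
import numpy as np

q = np.array([0.0, 0.5, 0.75])        # q_i = r_i - r_m: distinct non-negative rationals
d = 4                                 # a common denominator of the q_i
n = int(1 + max(d * q))               # n = 1 + max_i d*q_i sample points suffice
x = np.linspace(1.0, 2.0, n)          # distinct points in [a, b] = [1, 2]
y = x ** (1.0 / d)                    # y_i = x_i^(1/d)

M = y[:, None] ** (d * q)[None, :]    # entry (i, j) is y_i^(d*q_j)
print(np.linalg.matrix_rank(M))       # 3: full column rank, so only alpha = 0 solves M @ alpha = 0
```

Since the columns of `M` are columns of a nonsingular Vandermonde matrix, full column rank is guaranteed; the rank computation merely confirms it numerically.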