Rudin Theorem 3.4(b)

Tags: sequences-and-series, solution-verification

Rudin left this proof as an exercise, so I would appreciate it if someone could look over my attempt.

(b) Suppose $\{x_n\}$, $\{y_n\}$ are sequences in $\mathbb{R}^k$, $\{\beta_n\}$ is a sequence of real numbers, and $x_n \to x$, $y_n \to y$, $\beta_n \to \beta$. Then
\begin{align*}
\lim\limits_{n \to \infty} (x_n + y_n) & = x + y \\
\lim\limits_{n \to \infty} x_n \cdot y_n & = x \cdot y \\
\lim\limits_{n \to \infty} \beta_n x_n & = \beta x.
\end{align*}

Here is my attempt. I'm going to stick to the notation used by Rudin.

Let $x_n = (a_{1,n}, \ldots, a_{k,n})$, $y_n = (b_{1,n}, \ldots, b_{k,n})$, $x = (a_1, \ldots, a_k)$, $y = (b_1, \ldots, b_k)$ (the $\beta_n$ are just real numbers, so they need no component notation). Then for any $n$, we have:
$$
x_n + y_n = (a_{1,n} + b_{1,n}, \ldots, a_{k,n} + b_{k,n})
$$

Since $x_n \to x$, by part (a), the sequence converges component-wise, so $a_{j,n} \to a_j$ for each $j$. Since $y_n \to y$, we have $b_{j,n} \to b_j$ for each $j$. So for each $j$, $a_{j,n} + b_{j,n} \to a_j + b_j$ by Theorem 3.3(a) (sum of convergent sequences). So, by part (a) again, since $x_n + y_n$ converges componentwise, we have
$$
x_n + y_n \to (a_1 + b_1, \ldots, a_k + b_k) = (a_1, \ldots, a_k) + (b_1, \ldots, b_k) = x + y.
$$
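
(For reference, part (a) is the statement that a sequence in $\mathbb{R}^k$ converges if and only if it converges component-wise, which comes down to the inequalities
$$
|a_{j,n} - a_j| \le |x_n - x| \le \sum\limits_{i=1}^k |a_{i,n} - a_i|,
$$
the left inequality giving one direction and the right one the other.)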

Moving to the second limit. For any $n$, we have
$$
x_n \cdot y_n = (a_{1,n}, \ldots, a_{k,n}) \cdot (b_{1,n}, \ldots, b_{k,n}) = \sum\limits_{i=1}^k a_{i,n} b_{i,n}.
$$

Again, as $x_n \to x$, we have $a_{j,n} \to a_j$, and as $y_n \to y$, we have $b_{j,n} \to b_j$. By Theorem 3.3, products of convergent sequences converge, so for each $j$, $a_{j,n} b_{j,n} \to a_j b_j$. By Theorem 3.3 again and induction on $k$, we have
\begin{align*}
\sum\limits_{i=1}^k a_{i,n} b_{i,n} \to \sum\limits_{i=1}^k a_i b_i = (a_1, \ldots, a_k) \cdot (b_1, \ldots, b_k) = x \cdot y.
\end{align*}
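
To spell out the induction on $k$: if the claim holds for sums of $m$ terms, then one more application of Theorem 3.3(a) gives
$$
\sum\limits_{i=1}^{m+1} a_{i,n} b_{i,n} = \sum\limits_{i=1}^{m} a_{i,n} b_{i,n} + a_{m+1,n} b_{m+1,n} \to \sum\limits_{i=1}^{m} a_i b_i + a_{m+1} b_{m+1} = \sum\limits_{i=1}^{m+1} a_i b_i.
$$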

Moving now to the third limit. For any $n$, we have
$$
\beta_n x_n = (\beta_n a_{1,n}, \ldots, \beta_{n} a_{k,n}).
$$

By Theorem 3.3, for each $j$, we have $\beta_n a_{j,n} \to \beta a_j$ (product of convergent sequences). So $\beta_n x_n$ converges component-wise, and hence, by part (a) again, we have
$$
\beta_n x_n \to (\beta a_1, \ldots, \beta a_k) = \beta (a_1, \ldots, a_k) = \beta x.
$$

How do these look?

Best Answer

These all look fine. The proofs are long because you have relied on the facts of convergence from $\mathbb R^1$, as well as finite-dimensionality, but the proofs are correct. These proofs wouldn't work, however, in infinite-dimensional normed vector spaces, which you will undoubtedly soon study if you continue in analysis.

The relevant idea here that simplifies all the proofs, and generalizes to infinite dimensions, is that the vector space structure is compatible with the distance. The Euclidean spaces are all inner product spaces with induced norm $|x|=\sqrt{x\cdot x}$, so the only necessary data is the vector space structure and the norm (or inner product as necessary). For example,

  1. Proof of the first limit:

$$|(x_n+y_n)-(x+y)| \le |x_n-x| + |y_n-y| \to 0.$$

  2. Proof of the second limit: \begin{align*} |x_n\cdot y_n - x\cdot y| &= |(x_n-x)\cdot y_n + x\cdot (y_n -y)|\\ &\le |(x_n-x)\cdot y_n| + |x\cdot(y_n-y)|\\ &\le |x_n-x|\,|y_n| + |x|\,|y_n-y| \to 0, \end{align*} where the last inequality is Cauchy-Schwarz; see the note after this list for why the first term still tends to $0$. (A small comment: this idea of adding and subtracting a term crops up all the time when dealing with "products", including the dot product and, more generally, inner products. It doesn't have a name as far as I know, but let's call it the bilinear trick.)

  3. Proof of the last limit (a product is involved, so think about the bilinear trick): \begin{align*} |\beta_nx_n - \beta x| &= |(\beta_n-\beta)x_n + \beta(x_n-x)| \\ &\le |\beta_n-\beta|\,|x_n| + |\beta|\,|x_n-x| \to 0. \end{align*}
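
One detail worth making explicit in the last two estimates: a convergent sequence is bounded, so there is an $M$ with $|y_n| \le M$ for all $n$, and hence
$$
|x_n - x|\,|y_n| \le M\,|x_n - x| \to 0;
$$
the same observation handles the factor $|x_n|$ in the third limit.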
