Topology induced by Euclidean metric is the same as product topology

general-topology, proof-explanation

I have seen this theorem in Munkres, where he proves that the topologies induced by the Euclidean metric and the square metric on $\mathbb{R}^n$, and the product topology, are all the same.
I know there is an answer to this question on here with a slightly different approach, but I was hoping someone could explain a few things about this particular one to me.

[Image: excerpt of Munkres' proof, containing the inequality $\rho(x,y) \le d(x,y) \le \sqrt{n}\,\rho(x,y)$ and the ball inclusions $B_d(x,\epsilon) \subset B_{\rho}(x,\epsilon)$ and $B_{\rho}(x,\epsilon/\sqrt{n}) \subset B_d(x,\epsilon)$.]

  1. I am having trouble verifying the inequality for myself; any hints?

  2. Why is $B_d(x,\epsilon) \subset B_{\rho}(x,\epsilon)$?

And also the second inclusion: $B_{\rho}(x,\epsilon/\sqrt{n}) \subset B_d(x,\epsilon)$.

Without looking, I would have written the first inclusion as $B_{\rho}(x,\epsilon) \subset B_d(x,\epsilon)$, since $\rho(x,y) \leq d(x,y)$, so I expected any ball defined by the metric $\rho$ to be smaller than a ball defined by $d$, and then we could conclude that the topology induced by $\rho$ is finer. But this seems to go in the opposite direction to me.

Best Answer

Let us recall that $$\rho(\mathbf{x}, \mathbf{y}) = \max(|x_1 - y_1|, \ldots,|x_n -y_n|)$$ and $$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}$$

As to 1., let $\mathbf{x}, \mathbf{y}$ be arbitrary elements of $\mathbb{R}^n$. We then have that $\rho(\mathbf{x}, \mathbf{y}) = |x_m - y_m|$ for some $m \in \{1,\ldots,n\}$ (the maximum is attained at some coordinate), and thus $\rho(\mathbf{x}, \mathbf{y})^2 = |x_m - y_m|^2 = (x_m - y_m)^2$ (as $|a|^2 = a^2$ for all $a \in \mathbb{R}$), and so

$$\rho(\mathbf{x}, \mathbf{y})^2 \le \sum_{i=1}^n (x_i- y_i)^2 = d(\mathbf{x}, \mathbf{y})^2$$ because we just add some extra nonnegative numbers $(x_j-y_j)^2$ for $j \neq m$. As both sides are $\ge 0$, we can take square roots and get $\rho(\mathbf{x}, \mathbf{y}) \le d(\mathbf{x}, \mathbf{y})$, which is the first part of the inequality.

On the other hand, for each $j \in \{1,\ldots,n\}$ we have (because $m$ is where the maximal absolute difference is attained, and because $a \le b$ implies $a^2 \le b^2$ for nonnegative numbers) that $(x_j - y_j)^2 = |x_j - y_j|^2 \le |x_m - y_m|^2 = \rho(\mathbf{x}, \mathbf{y})^2$, and thus, using this upper bound for each of the $n$ terms in the sum:

$$d(\mathbf{x}, \mathbf{y})^2 = \sum_{j=1}^n (x_j - y_j)^2 \le n\rho(\mathbf{x}, \mathbf{y})^2$$

and again we take the square root on both sides to see that $d(\mathbf{x}, \mathbf{y}) \le \sqrt{n}\,\rho(\mathbf{x}, \mathbf{y})$, which is the second part of the inequality.
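As a quick numerical sanity check of both parts of the inequality $\rho \le d \le \sqrt{n}\,\rho$ (not a substitute for the argument above), here is a minimal sketch, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

for _ in range(1000):
    x, y = rng.normal(size=n), rng.normal(size=n)
    rho = np.max(np.abs(x - y))           # square (max) metric
    d = np.sqrt(np.sum((x - y) ** 2))     # Euclidean metric
    # check rho <= d <= sqrt(n) * rho, up to floating-point tolerance
    assert rho <= d + 1e-12
    assert d <= np.sqrt(n) * rho + 1e-12
```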

As to 2., we want to see that $$B_d(\mathbf{x}, \varepsilon) \subseteq B_{\rho}(\mathbf{x},\varepsilon)$$

So let $\mathbf{y} \in B_d(\mathbf{x}, \varepsilon)$. This means exactly that $d(\mathbf{x},\mathbf{y}) < \varepsilon$, and thus we conclude by the first inequality that:

$$\rho(\mathbf{x},\mathbf{y}) \le d(\mathbf{x},\mathbf{y}) < \varepsilon$$

which again means $\mathbf{y} \in B_\rho(\mathbf{x}, \varepsilon)$. Hence the inclusion.
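Purely as an illustration of this inclusion (again assuming NumPy), the sketch below samples candidate points in $\mathbb{R}^3$, keeps those that land in $B_d(\mathbf{x},\varepsilon)$, and confirms each of them also lies in $B_\rho(\mathbf{x},\varepsilon)$; the sample size and radius are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 3, 1.0
x = rng.normal(size=n)

checked = 0
while checked < 1000:
    y = x + rng.uniform(-2 * eps, 2 * eps, size=n)  # candidate point near x
    if np.linalg.norm(y - x) < eps:                 # y lies in B_d(x, eps)
        assert np.max(np.abs(y - x)) < eps          # ... so y lies in B_rho(x, eps)
        checked += 1
```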

To see that $$B_\rho(\mathbf{x}, \frac{\varepsilon}{\sqrt{n}}) \subseteq B_d(\mathbf{x}, \varepsilon)$$

again pick an arbitrary $\mathbf{y} \in B_\rho(\mathbf{x}, \frac{\varepsilon}{\sqrt{n}})$ so that $\rho(\mathbf{x},\mathbf{y}) < \frac{\varepsilon}{\sqrt{n}}$. Using the second inequality we have

$$d(\mathbf{x}, \mathbf{y}) \le \sqrt{n}\rho(\mathbf{x}, \mathbf{y}) < \sqrt{n} \frac{\varepsilon}{\sqrt{n}} = \varepsilon$$ so that $\mathbf{y} \in B_d(\mathbf{x}, \varepsilon)$ as required.
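Similarly, a minimal sketch for the second inclusion (same assumptions as before): sample points of $B_\rho(\mathbf{x}, \varepsilon/\sqrt{n})$ and confirm they lie in $B_d(\mathbf{x}, \varepsilon)$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps = 3, 1.0
x = rng.normal(size=n)
r = eps / np.sqrt(n)                      # radius of the rho-ball

for _ in range(1000):
    y = x + rng.uniform(-r, r, size=n)    # rho(x, y) < r, so y in B_rho(x, eps/sqrt(n))
    assert np.linalg.norm(y - x) < eps    # then y in B_d(x, eps)
```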