We need to show that (c) implies (a).
Suppose that every infinite subset of $E$ has a limit point in $E$. We show that then $E$ is both closed and bounded.
Suppose, if possible, that the set $E$ is unbounded. Then there exists a point $x_1 \in E$ such that $\vert x_1 \vert >1$, for otherwise the set $E$ would be contained in the closed unit ball about the origin in $\mathbb{R}^k$.
Using the same reasoning, we can find a point $x_2 \in E$ such that $$\vert x_2 \vert > 1 + \max \left( 2, \vert x_1 \vert \right).$$
Having chosen the point $x_{n-1}$ (where $n \geq 3$), we can choose a point $x_n \in E$ such that $$\vert x_n \vert > 1 + \max \left( n , \vert x_1 \vert, \ldots, \vert x_{n-1} \vert \right).$$ Otherwise, the set $E$ would be contained in a closed ball of radius equal to $1 + \max \left(n, \vert x_1 \vert, \ldots, \vert x_{n-1} \vert \right)$ and centered at the origin.
We have thus inductively chosen a sequence $\{x_n \}_{n \in \mathbb{N}}$ of distinct points of $E$ such that, for each $n \in \mathbb{N}$, we have $\vert x_n \vert > n$ and $\vert x_n \vert > \vert x_i \vert$ for all $i \in \{\ 1, \ldots, n-1 \ \}$.
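For concreteness, here is a hypothetical illustration (not part of the proof) of how the construction might run when $E = \mathbb{Z}$, viewed as an unbounded subset of $\mathbb{R}^1$: $$x_1 = 2, \qquad x_2 = 4 > 1 + \max \left( 2, \vert x_1 \vert \right) = 3, \qquad x_3 = 6 > 1 + \max \left( 3, \vert x_1 \vert, \vert x_2 \vert \right) = 5, \qquad \ldots$$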
Let us define the set $S$ as $$S \colon= \{ \ x_n \ \colon \ n \in \mathbb{N} \ \}.$$
This set $S$ is an infinite subset of $E$. We show that this set $S$ has no limit points in $\mathbb{R}^k$ and hence no limit points in $E$.
For any $m, n \in \mathbb{N}$ such that $n > m$, we have $$\vert x_n - x_m \vert \geq \vert x_n \vert - \vert x_m \vert > 1 + \max \left( n, \vert x_1 \vert, \ldots, \vert x_{n-1} \vert \right) - \vert x_m \vert \geq 1,$$ since $\vert x_m \vert \leq \max \left( n, \vert x_1 \vert, \ldots, \vert x_{n-1} \vert \right)$ whenever $m < n$.
Thus it follows that, for any $m, n \in \mathbb{N}$ such that $n \neq m$, the inequality $\vert x_n - x_m \vert > 1$ holds.
So if some point $x \in \mathbb{R}^k$ were a limit point of $S$, then every neighborhood of $x$ would contain infinitely many points of $S$; in particular, there would be infinitely many values of $n \in \mathbb{N}$ such that $$\vert x_n - x \vert < \frac 1 4,$$ and for any two distinct such points $x_m$ and $x_n$ of $S$, we would have $$\vert x_m - x_n \vert \leq \vert x_m - x \vert + \vert x_n - x \vert < \frac 1 4 + \frac 1 4 = \frac 1 2,$$
which contradicts what we have shown above about the distance between any two distinct points of $S$.
So the set $S$, though an infinite subset of $E$, fails to have a limit point in $\mathbb{R}^k$ and hence in $E$.
Therefore, the set $E$ must be bounded.
Next, suppose that $E$ is not closed. Then $E$ has a limit point $x_0 \in \mathbb{R}^k - E$. Since $x_0$ is a limit point of $E$, every neighborhood of $x_0$ contains a point of $E$ distinct from the point $x_0$ itself (in fact infinitely many points of $E$).
Thus, there is a point $x_1 \in E$ such that $$0 < \vert x_1 - x_0 \vert < \frac 1 2.$$
Again there is a point $x_2 \in E$ such that $$0 < \vert x_2 - x_0 \vert < \min \left( \vert x_1 - x_0 \vert, \frac 1 3 \right).$$
Assuming that the point $x_{n-1}$ (where $n \geq 3$) has been chosen, we can choose a point $x_n \in E$ such that $$0 < \vert x_n - x_0 \vert < \min \left( \vert x_1 - x_0 \vert, \ldots, \vert x_{n-1} - x_0 \vert, \frac{1}{n+1} \right).$$
Thus we have recursively defined a sequence $\{x_n \}_{n\in\mathbb{N}}$ of points of $E$ for which $x_n \neq x_m$ for all $m, n \in \mathbb{N}$ such that $m \neq n$ and also $$0 < \vert x_n - x_0 \vert < \frac 1 n \ \mbox{ for all } \ n \in \mathbb{N}.$$
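To spell out why the sequence has the two stated properties: for any $m, n \in \mathbb{N}$ with $m < n$, the construction gives $$0 < \vert x_n - x_0 \vert < \min \left( \vert x_1 - x_0 \vert, \ldots, \vert x_{n-1} - x_0 \vert, \frac{1}{n+1} \right) \leq \vert x_m - x_0 \vert,$$ so $\vert x_n - x_0 \vert < \vert x_m - x_0 \vert$, which forces $x_n \neq x_m$; and also $\vert x_n - x_0 \vert < \frac{1}{n+1} < \frac 1 n$ for every $n \in \mathbb{N}$ (the case $n = 1$ being $\vert x_1 - x_0 \vert < \frac 1 2 < 1$).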
Let us define the set $S$ as follows:
$$S \colon= \left\{ \ x_n \ \colon \ n \in \mathbb{N} \ \right\}.$$
This set $S$ is an infinite subset of $E$.
We show that $x_0$ is the only limit point of $S$. That is, we show that $x_0$ is a limit point of $S$ but no other point $y$ of $\mathbb{R}^k$ can be a limit point of $S$.
Let $\delta$ be any positive real number. Then, by the archimedean property of $\mathbb{R}$, we can find $n_\delta \in \mathbb{N}$ such that $$n_\delta > \frac 1 \delta,$$ and so, for all $n \in \mathbb{N}$ such that $n \geq n_\delta$, we have $$0 < \vert x_n - x_0 \vert < \frac{1}{n+1} < \frac{1}{n_\delta} < \delta,$$
which implies that $x_0$ is indeed a limit point of $S$.
Now if $y \in \mathbb{R}^k$ and $y \neq x_0$, then $\vert y - x_0 \vert > 0$. So we can find a positive integer $N$ such that $$N > \frac{2}{\vert y -x_0 \vert}.$$
So, for every $n\in \mathbb{N}$ such that $n \geq N$, we have
$$ 0 < \vert x_n - x_0 \vert < \frac{1}{n} \leq \frac 1 N < \frac{\vert y - x_0 \vert}{2}$$
and hence, for every $n\in \mathbb{N}$ such that $n \geq N$, we have
$$\vert x_n - y \vert \geq \vert y - x_0 \vert - \vert x_n - x_0 \vert > \vert y - x_0 \vert - \frac{\vert y - x_0 \vert}{2} = \frac{\vert y - x_0 \vert}{2} > \frac{\vert y - x_0 \vert}{3}. $$
So if we take a positive real number $\epsilon$ such that
$$0 < \epsilon < \frac{1}{2} \min \left( \vert x_1 - y \vert, \ldots, \vert x_N - y \vert, \frac{\vert y - x_0 \vert}{3} \right),$$
where the minimum is taken only over the positive quantities among these (at most one of the distances $\vert x_i - y \vert$ can vanish, namely when $y$ itself is a point of $S$, since the $x_i$ are distinct), then no point of the set $S$, other than possibly $y$ itself, lies in the neighborhood of $y$ of radius $\epsilon$; that is,
$$S \cap \left( N_\epsilon (y) - \{ y \} \right) = \emptyset,$$
which implies that the point $y$ cannot be a limit point of the set $S$.
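To make the last step explicit, split into cases according to $n$. If $n \leq N$ and $x_n \neq y$, then the choice of $\epsilon$ gives $\epsilon < \frac{1}{2} \vert x_n - y \vert$, so $x_n \not\in N_\epsilon (y)$. If $n > N$, then by the inequality established above, $$\vert x_n - y \vert > \frac{\vert y - x_0 \vert}{3} > 2 \epsilon > \epsilon,$$ so again $x_n \not\in N_\epsilon (y)$.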
But $y$ was any point of $\mathbb{R}^k$ other than the point $x_0$. Therefore, $x_0$ is the only limit point of the set $S$.
But $x_0 \not\in E$ by our assumption. Thus we have found an infinite subset $S$ of $E$ such that no point of $E$ is a limit point of $S$: the only limit point of $S$, namely $x_0$ (which is also a limit point of the set $E$), does not belong to $E$.
So if every infinite subset of the set $E \subset \mathbb{R}^k$ has a limit point in $E$, then the set $E$ must be both closed and bounded.
Best Answer
Here is my interpretation of how Rudin argued the first point in your question (to contradict the case that $y^{n} < x$).
The idea is to find a $y^*$ that would lie in the "gap" of numbers created if $y$ were indeed less than the "true $\sqrt[n]{x}$". (Here $y$ refers to $y=\sup E$ where $E=\{t:t^n < x, t\in\mathbb{R^+}\}$ as defined by Rudin.)
In other words, we are trying to construct such a $y^*$ that has the two properties $(y^*)^n < x$ and $y^* > y$. The first property says that $y^* \in E$, the set which $y$ bounds above. The second property contradicts the fact that $y$ is an upper bound of $E$, since $y^* \not\le y$ and $y^* \in E$.
To construct $y^*$ we need to find a suitable $h$ such that $y^*=y+h$.
Intuitively, one would like to let $h$ be a positive quantity less than the difference $\sqrt[n]{x} - y$. However, we have not yet shown the existence of $\sqrt[n]{x}$ so this complicates our approach.
One alternative is to look at the difference $x - y^n$, which is a valid expression at this point in the proof. Graphically, this approach can be thought of as picking the position of $y+h$ on the horizontal axis based on the function value $(y+h)^n$ on the vertical axis.
In other words, instead of specifying $h$ directly we will try to specify $h$ in terms of what $y+h$ maps to under exponentiation by $n$.
Hence, we look for an $h$ such that $(y+h)^n - y^n < x - y^n$.
(Again, think of comparing function values of the curve $f(t) = t^n$ on a graph. The left-hand side $(y+h)^n - y^n$ is some positive quantity smaller than the height of the vertical "gap" assumed to exist between $y^n$ and $x$ on the graph.)
(Since $f(t) = t^n$ is a strictly increasing function (for positive $t$), inequalities are preserved, so the hypothetical horizontal "gap" (in which we were originally interested) will correspond with a vertical "gap" after this transformation. Intuitively, (pretending $\sqrt[n]{x}$ is defined) we could write, $$ \sqrt[n]{x} - y > 0 \iff \sqrt[n]{x} > y \iff f(\sqrt[n]{x}) > f(y) \iff \left(\sqrt[n]{x}\right)^n > y^n \iff x > y^n \iff x - y^n > 0 $$ with the following justifications for each of the double arrows (in order from left to right): (1) addition/subtraction by $y$, (2) $f$ is strictly increasing, (3) definition of $f$, (4) definition of $\sqrt[n]{x}$, (5) addition/subtraction by $y^n$.)
Using the observation Rudin made$^1$ we can inject another expression into the inequality to get:
$(y+h)^n - y^n < hn(y+h)^{n-1} < x - y^n$
(This "injection" is okay for now because we have technically not defined $h$ yet, but instead are still working backwards to specify $h$.)
The trouble with this inequality is that the middle expression still contains $h$ inside a binomial power, which makes it hard to isolate $h$ algebraically. I believe Rudin makes the convenient assumption at this point that $h$ is small (in other words, $0 < h < 1$) in order to complete his definition of $h$.
We can now inject another expression into the inequality to yield:
$(y+h)^n - y^n < hn(y+h)^{n-1} < hn(y+1)^{n-1} < x - y^n$
Isolating just the rightmost two expressions and rearranging gives the final inequality:
$h < \frac{x - y^n}{n(y+1)^{n-1}}$
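Putting the pieces together: since $y^n < x$ is the case being contradicted, we have $x - y^n > 0$, so such an $h$ exists (e.g. any $h$ with $0 < h < \min \left( 1, \frac{x - y^n}{n(y+1)^{n-1}} \right)$), and for it the whole chain closes up: $$(y+h)^n - y^n < hn(y+h)^{n-1} < hn(y+1)^{n-1} < x - y^n,$$ so $(y+h)^n < x$. Thus $y^* = y + h$ satisfies both desired properties, $y^* \in E$ and $y^* > y$, contradicting the fact that $y$ is an upper bound of $E$.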
End Note
$b^n - a^n = (b-a)(b^{n-1} + ab^{n-2} + \cdots + a^{n-2}b + a^{n-1})$
so
$b^n - a^n < (b-a)n(b^{n-1})$
in the case $0 < a < b$.
(Rudin let $b = y+h$ and $a=y$ in the proof.)
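For completeness, here is a quick check of the end-note inequality (a standard telescoping argument, not spelled out by Rudin). Writing the identity as $$b^n - a^n = (b-a) \sum_{j=0}^{n-1} b^{n-1-j} a^j,$$ the product on the right expands to $\sum_{j=0}^{n-1} b^{n-j} a^j - \sum_{j=0}^{n-1} b^{n-1-j} a^{j+1}$, which telescopes to leave exactly $b^n - a^n$. When $0 < a < b$ (and $n \geq 2$), each of the $n$ terms $b^{n-1-j} a^j$ is at most $b^{n-1}$, with strict inequality for $j \geq 1$, so $$\sum_{j=0}^{n-1} b^{n-1-j} a^j < n b^{n-1} \quad \Longrightarrow \quad b^n - a^n < (b-a) n b^{n-1}.$$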