The idea is simple: Find upper and lower bounds for
$$X := \sqrt{\mathrm{round}(x^2)}$$
and show that $\mathrm{round}(X) = x$.
Let $\mathrm{ulp}(x)$ denote the unit of least precision at $x$
and let $E(x)$ and $M(x)$ denote the exponent and mantissa of $x$, i.e.,
$$x = M(x) \cdot 2^{E(x)}$$
with $1 \le M(x) < 2$ and $E(x) \in \mathbb Z$. Define
$$\Delta(x) = \frac{\mathrm{ulp}(x)}x = \frac{\mu \cdot 2^{E(x)}}x = \frac\mu{M(x)}$$
where $\mu=2^{-52}$ is the machine epsilon.
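These quantities are easy to compute for IEEE 754 doubles. A small sketch (assuming Python ≥ 3.9 for `math.ulp`; the helper names are my own):

```python
import math

MU = 2.0 ** -52  # machine epsilon for IEEE 754 double precision

def mantissa_exponent(x):
    """Return (M(x), E(x)) with x = M * 2**E and 1 <= M < 2, for x > 0."""
    m, e = math.frexp(x)  # frexp gives x = m * 2**e with 0.5 <= m < 1
    return 2.0 * m, e - 1

def delta(x):
    """Relative spacing Delta(x) = ulp(x)/x = mu/M(x), for x > 0."""
    return math.ulp(x) / x

M, E = mantissa_exponent(6.0)  # 6.0 = 1.5 * 2**2
assert (M, E) == (1.5, 2)
assert math.isclose(delta(6.0), MU / M, rel_tol=1e-15)
```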
Expressing the rounding function by its relative error leads to
$$X = \sqrt{(1+\epsilon) \cdot x^2} = \sqrt{(1+\epsilon)} \cdot x
< \big( 1+\frac\epsilon2 \big) \cdot x$$
We know that $|\epsilon| \le \frac12\Delta(x^2)$ and get (ignoring the trivial cases $x = 0$ and $\epsilon = 0$, in which $X = x$ holds exactly)
$$\frac Xx < 1 + \frac{\Delta(x^2)}4 = 1 + \frac\mu{4 M(x^2)}$$
By examining $M(x)$ and $M(x^2)$, e.g. over the interval $[1, 4]$,
it can easily be shown that $\frac{M(x)}{M(x^2)} \le \sqrt2$, which gives us
$$\frac Xx < 1 + \frac{\mu\sqrt2}{4 M(x)}$$
and therefore
$$X < x + \frac{\sqrt2}4 \frac{\mu}{M(x)} \cdot x < x + \frac12 \mathrm{ulp}(x)$$
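The inequality $\frac{M(x)}{M(x^2)} \le \sqrt2$ used above can be spot-checked numerically (a sanity check, not a proof; the grid and helper are my own):

```python
import math

def mantissa(x):
    """Normalized mantissa M(x) in [1, 2), for x > 0."""
    m, _ = math.frexp(x)  # x = m * 2**e with 0.5 <= m < 1
    return 2.0 * m

# Sample x over [1, 4); since M(x)/M(x^2) is unchanged when x is
# scaled by a power of 2, this interval covers all x > 0.
worst = max(mantissa(x) / mantissa(x * x)
            for x in (1.0 + 3.0 * k / 100_000 for k in range(100_000)))
assert worst <= math.sqrt(2)  # maximum is approached at x = sqrt(2)
```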
Analogously we get the corresponding lower bound. Instead of
$$\sqrt{(1+\epsilon)} < \big( 1+\frac\epsilon2 \big)$$
we use the estimate
$$\sqrt{(1-\epsilon)} > \big( 1 - (1+\epsilon) \cdot \frac\epsilon2 \big)$$
(valid for $0 < \epsilon < 1$), which suffices, since we used a very generous estimate ($\sqrt2/4 < \frac12$) in the last step.
Because $|X-x|$ is smaller than $\frac12 \mathrm{ulp}(x)$, $x$ is the double
closest to $X$; therefore $\mathrm{round}(X)$ must equal $x$, q.e.d.
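The result is easy to test empirically: in IEEE 754 arithmetic, `x * x` computes $\mathrm{round}(x^2)$ and `math.sqrt` is correctly rounded, so `math.sqrt(x * x)` is exactly $\mathrm{round}(X)$. A quick randomized check (evidence, not a substitute for the proof):

```python
import math
import random

random.seed(12345)
for _ in range(100_000):
    # Any positive normal double works; sampling [1, 4) is enough
    # because the statement is invariant under scaling x by powers of 2.
    x = random.uniform(1.0, 4.0)
    assert math.sqrt(x * x) == x, x
```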
In step $(1)$ of my formula I defined an equation for the relative error of $1.0 \times \beta^e + 1/2\ \text{ulp}$. This equation was incorrect because I did not multiply the $1/2\ \text{ulp}$ term by $\beta^e$, which is necessary because the absolute value of $1/2\ \text{ulp}$ scales with $e$. Multiplying through allows $\beta^e$ to be factored out as before, so that the relative error is no longer a function of the exponent $e$ used to represent the floating-point number approximating a real number.
\begin{align}
\frac{\frac{\beta}{2}\beta^{-p} \times \beta^e}{\beta^e} & =
\frac{\beta}{2}\beta^{-p} &&
\text{Maximum relative error as defined by references} \\
\frac{\frac{\beta}{2}\beta^{-p} \times \beta^e}{\beta^e + \frac{\beta}{2}\beta^{-p}} & =
\frac{1}{\beta^{-e} + 2\beta^{p-1}} &&
\text{Incorrect definition of relative error from question} \\
\frac{\frac{\beta}{2}\beta^{-p} \times \beta^e}{\beta^e + \frac{\beta}{2}\beta^{-p}\beta^e} & =
\frac{\frac 12 \beta^{1-p} \times \beta^e}{\left(\frac 12 \beta^{1-p} + 1\right) \beta^e} &&
\text{Correct relative error of $1.0 \times \beta^e + 1/2$ ulp} \\
& = \frac{\frac 12 \beta^{1-p}}{\frac 12 \beta^{1-p} + 1} \\
& = \frac{1}{1 + 2\beta^{p-1}} \\
& = \frac{1}{1 + \frac{2}{\beta} \beta^{p}} \\
& \approx \frac{\beta}{2} \beta^{-p} && \text{for $\beta^p \gg 1$.}
\end{align}
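The algebra above can be verified in exact rational arithmetic, e.g. for $\beta = 2$, $p = 53$ (IEEE 754 double precision); the variable names here are my own:

```python
from fractions import Fraction

beta, p = 2, 53  # IEEE 754 double: base 2, 53-bit significand

half_ulp = Fraction(beta, 2) * Fraction(beta) ** (-p)  # (beta/2) * beta^-p

# Correct relative error of 1.0 * beta^e + 1/2 ulp: the beta^e factor
# cancels, leaving half_ulp / (1 + half_ulp) = 1 / (1 + 2 beta^(p-1)).
rel_err = half_ulp / (1 + half_ulp)
assert rel_err == Fraction(1, 1 + 2 * beta ** (p - 1))

# For beta^p >> 1 this is indistinguishable from (beta/2) * beta^-p:
assert abs(rel_err - half_ulp) / half_ulp < Fraction(1, beta ** p)
```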
The usual rule is that you should round at every stage of the computation. This is supposed to model how computers do floating-point math, but with a coarser set of representable numbers that reduces the labor and makes the problem more obvious. Computers round at every stage because they must store each intermediate value in memory in the defined floating-point format. In that case $1+\frac 1x=1$ for both of your inputs, and I agree with you that it is strange you were asked to do the computation twice when both give $0$. If I were writing the problem, the first input would be $2$ or something like that, where the computation comes out reasonably well; the second input would be something large like the one you got, showing that you get $0$.
You should not have rounded $0.000166667$ down to $0.00$ already, because you can represent it as $1.67\cdot 10^{-4}$; that is how floating point works. It only disappears when you add it to $1$.
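Python's `decimal` module makes it easy to replay this in a toy 3-significant-digit format (I am assuming the input $x = 6000$ and round-half-even here; adjust to match your problem's conventions):

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

ctx = Context(prec=3, rounding=ROUND_HALF_EVEN)  # 3 significant digits

x = Decimal(6000)
inv = ctx.divide(Decimal(1), x)   # 1/x is representable: 1.67e-4
assert inv == Decimal("1.67E-4")

total = ctx.add(Decimal(1), inv)  # ...but it vanishes when added to 1
assert total == Decimal("1.00")
```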
As a nit, I would show all the numbers with two decimal places, so your $6\cdot 10^3$s should be $6.00\cdot 10^3$s. I think that is in the spirit of the problem.