The idea is simple: Find upper and lower bounds for
$$X := \sqrt{\mathrm{round}(x^2)}$$
and show that $\mathrm{round}(X) = x$.
Let $\mathrm{ulp}(x)$ denote the unit of least precision at $x$
and let $E(x)$ and $M(x)$ denote the exponent and mantissa of $x$, i.e.,
$$x = M(x) \cdot 2^{E(x)}$$
with $1 \le M(x) < 2$ and $E(x) \in \mathbb Z$. Define
$$\Delta(x) = \frac{\mathrm{ulp}(x)}x = \frac{\mu \cdot 2^{E(x)}}x = \frac\mu{M(x)}$$
where $\mu=2^{-52}$ is the machine epsilon.
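These definitions can be checked numerically. A minimal sketch in Python (assuming Python 3.9+, which provides `math.ulp`; the variable names mirror the symbols above) verifying $\mathrm{ulp}(x) = \mu \cdot 2^{E(x)}$ and $\Delta(x) = \mu/M(x)$:

```python
import math

# Check ulp(x) = mu * 2**E(x) and Delta(x) = ulp(x)/x = mu / M(x)
# for an arbitrary positive double x (mu = 2**-52, machine epsilon).
mu = 2.0 ** -52
x = 3.7
m, e = math.frexp(x)           # x = m * 2**e with 0.5 <= m < 1
M, E = 2.0 * m, e - 1          # renormalize so x = M * 2**E, 1 <= M < 2
assert math.ulp(x) == mu * 2.0 ** E
assert math.isclose(math.ulp(x) / x, mu / M)
```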
Expressing the rounding function by its relative error leads to
$$X = \sqrt{(1+\epsilon) \cdot x^2} = \sqrt{(1+\epsilon)} \cdot x
< \big( 1+\frac\epsilon2 \big) \cdot x$$
We know that $|\epsilon| \le \frac12\Delta(x^2)$ and get (ignoring the trivial case $x=0$)
$$\frac Xx < 1 + \frac{\Delta(x^2)}4 = 1 + \frac\mu{4 M(x^2)}$$
By observing $M(x)$ and $M(x^2)$, e.g. over the interval $[1, 4]$,
it can easily be shown that $\frac{M(x)}{M(x^2)} \le \sqrt2$, which gives us
$$\frac Xx < 1 + \frac{\mu\sqrt2}{4 M(x)}$$
and therefore
$$X < x + \frac{\sqrt2}4 \frac{\mu}{M(x)} \cdot x < x + \frac12 \mathrm{ulp}(x)$$
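The key mantissa bound $\frac{M(x)}{M(x^2)} \le \sqrt2$ can be spot-checked numerically. A small sketch (the grid over $[1,4]$ is of course only evidence, not a proof):

```python
import math

def mantissa(x):
    """Return M(x) in [1, 2), where x = M(x) * 2**E(x)."""
    m, _ = math.frexp(x)       # 0.5 <= m < 1
    return 2.0 * m

# Scan [1, 4] on a grid and confirm M(x)/M(x^2) never exceeds sqrt(2).
worst = 0.0
for i in range(300_001):
    x = 1.0 + 3.0 * i / 300_000
    worst = max(worst, mantissa(x) / mantissa(x * x))
assert worst <= math.sqrt(2) * (1 + 1e-12)
```

The maximum ratio is attained near $x = \sqrt2$ and $x = 2\sqrt2$, where the squaring crosses a power of two.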
The corresponding lower bound follows analogously; instead of
$$\sqrt{(1+\epsilon)} < \big( 1+\frac\epsilon2 \big)$$
we use the estimate
$$\sqrt{(1-\epsilon)} > \big( 1 - (1+\epsilon) \cdot \frac\epsilon2 \big)$$
which suffices, since the last step already used the very generous estimate $\sqrt2/4 < \frac12$.
Since $|X-x|$ is smaller than $\frac12 \mathrm{ulp}(x)$, $x$ is the double
closest to $X$, and therefore $\mathrm{round}(X) = x$, q.e.d.
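The result can be spot-checked empirically. In double precision, `math.sqrt(x * x)` computes exactly $\mathrm{round}(\sqrt{\mathrm{round}(x^2)})$ (IEEE 754 requires a correctly rounded square root), so it should reproduce `x` whenever $x^2$ neither overflows nor underflows:

```python
import math
import random

# Empirical check of round(sqrt(round(x^2))) == x for random doubles
# in a range where x*x cannot over- or underflow.
random.seed(0)
for _ in range(100_000):
    x = random.uniform(1.0, 4.0)
    assert math.sqrt(x * x) == x
```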
Denote by $[p,q]$ the set of integers $k$ with $p\leq k\leq q$. Then $e\in[0,62]$, $\>m\in[0,511]$. The set $R$ of representable real numbers is therefore given by
$$R=\left\{\pm\left(1+[0,511]\cdot 2^{-9}\right)2^{[0,62]-31}\right\}\cup\{0\}\ .$$
The smallest positive number in $R$ is $2^{-31}$, then we have $512$ jumps of size $2^{-40}$, bringing us to $2^{-30}$, and so on in jumps of ever doubling size, until we reach $2^{31}$. Then come $511$ jumps of size $2^{-9}\cdot 2^{31}=2^{22}$, bringing us to $(2-2^{-9})2^{31}$. The latter is the largest representable number in this system.
It follows that the largest occurring difference between numbers in $R$ is $2^{22}$.
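The extreme values and the largest gap of this toy format can be confirmed directly (a sketch; the bias of $31$ and field widths are as described above):

```python
# Toy format: mantissa field m in [0, 511], exponent field e in [0, 62],
# bias 31, i.e. value = (1 + m/512) * 2**(e - 31).
smallest = (1 + 0 / 512) * 2.0 ** (0 - 31)
largest = (1 + 511 / 512) * 2.0 ** (62 - 31)
assert smallest == 2.0 ** -31
assert largest == (2 - 2.0 ** -9) * 2.0 ** 31
# Adjacent numbers at the top exponent differ by 2^-9 * 2^31 = 2^22,
# the largest gap occurring anywhere in R.
assert 2.0 ** -9 * 2.0 ** 31 == 2.0 ** 22
```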
Best Answer
The information you need is all in the Wikipedia pages on floating point arithmetic and IEEE 754. I agree with the comment suggesting that you may do better on a computer stackexchange site, but let me sketch a few hints about the mathematics that may help you understand the Wikipedia pages.
The basic mathematical idea is that a number $x$ with sign bit $s \in \{0, 1\}$, mantissa $c \in \Bbb{N}$ and exponent $q \in \Bbb{Z}$ in base $2$ represents the number
$$x = (-1)^s \times c \times 2^q$$
(The Wikipedia pages also refer to the mantissa $c$ as the "significand" or "coefficient".) For each defined precision, there are rules for the allowable ranges of $c$ and $q$.
If you want to add to $x$ another IEEE 754 number $x'$, say represented by $s'$, $c'$ and $q'$, you first adjust the representations so the exponent is the smaller of $q$ and $q'$. E.g., if $q \le q'$, you represent $x'$ with exponent $q$ using the identity:
$$x' = (-1)^{s'} \times c' \times 2^{q'} = (-1)^{s'} \times (2^{q'-q}c') \times 2^q$$
Once you have made the exponents the same, you can meaningfully add or subtract the adjusted mantissas and calculate the sign of the result according to the sign bits $s$ and $s'$, giving say:
$$x + x' = (-1)^{s''} \times c'' \times 2^q$$
E.g., if $s = 0$, $s' = 1$ and $c < 2^{q'-q}c'$, this will be $(-1)^1 \times (2^{q'-q}c' - c) \times 2^q$.
You then "normalise" the result, i.e., you round and scale to make it conform to the rules for the required precision (if possible). With care, you can arrange for the intermediate results in all these calculations to involve just one or two additional bits.
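The align-then-add procedure above can be sketched with integer mantissas. This toy version (names are illustrative; rounding and normalisation are deliberately omitted) shows just the exponent alignment and signed mantissa arithmetic:

```python
# Toy sketch of IEEE-style addition: align both operands to the smaller
# exponent, then add/subtract the integer mantissas with their signs.
def fp_add(s, c, q, s2, c2, q2):
    if q > q2:                           # ensure q <= q2
        s, c, q, s2, c2, q2 = s2, c2, q2, s, c, q
    c2 <<= (q2 - q)                      # x' = (-1)^s' * (2^(q'-q) c') * 2^q
    v = (-1) ** s * c + (-1) ** s2 * c2  # signed sum of aligned mantissas
    return (1 if v < 0 else 0), abs(v), q

# Example: 3 * 2^0 + (-(1 * 2^1)) = 3 - 2 = 1, i.e. (s'', c'', q) = (0, 1, 0)
s, c, q = fp_add(0, 3, 0, 1, 1, 1)
assert (-1) ** s * c * 2 ** q == 1
```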
As you will find from the Wikipedia pages cited above, there are various special cases and optimisations in the representation. E.g., as you seem to know, the exponent is represented by adding a bias value dependent on the precision ($15$ in the case of half precision.) If a number can be normalised, you can infer that the top bit of the mantissa is $1$ and the representation omits it. Denormalised numbers are represented with the exponent bits all $0$ and do include all bits of the mantissa (this applies to one of the numbers in your example).
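Those representation rules for half precision can be made concrete by decoding a 16-bit pattern by hand. A sketch (bias $15$, $10$ mantissa bits; the special exponent value $31$, used for infinities and NaNs, is omitted):

```python
# Decode an IEEE 754 half-precision bit pattern (sign:1, exp:5, mant:10).
def decode_half(bits):
    s = bits >> 15
    e = (bits >> 10) & 0x1F
    m = bits & 0x3FF
    if e == 0:                           # denormal: no implicit leading 1
        return (-1) ** s * (m / 1024) * 2.0 ** (1 - 15)
    return (-1) ** s * (1 + m / 1024) * 2.0 ** (e - 15)  # implicit 1

assert decode_half(0x3C00) == 1.0            # e = 15 (bias), m = 0
assert decode_half(0xC000) == -2.0           # sign bit set, e = 16
assert decode_half(0x0001) == 2.0 ** -24     # smallest positive denormal
```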