As for your first question, one should either define "standard" or acknowledge that a "canonical model" has gradually been established. As a comment indicated, the way you use IRWLS does at least appear to be fairly standard.
As for your second question, "contraction mapping in probability" could be linked (however informally) to the convergence of "recursive stochastic algorithms". From what I have read, there is a huge literature on the subject, mainly in Engineering. In Economics we use a tiny bit of it, especially the seminal work of Lennart Ljung (the first paper being Ljung, 1977), who showed that the convergence (or not) of a recursive stochastic algorithm can be determined by the stability (or not) of a related ordinary differential equation.
(what follows has been re-worked after a fruitful discussion with the OP in the comments)
Convergence
I will use as reference Saber Elaydi, "An Introduction to Difference Equations", 3rd ed., 2005.
The analysis is conditional on some given data sample, so the $x$'s are treated as fixed.
The first-order condition for the minimization of the objective function, rearranged as a recursion in $m$,
$$m(k+1) = \sum_{i=1}^{N} v_i[m(k)] x_i, \;\; v_i[m(k)] \equiv \frac{w_i[m(k)]}{ \sum_{i=1}^{N} w_i[m(k)]} \qquad [1]$$
has a fixed point (the argmin of the objective function).
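For concreteness, here is a minimal Python sketch of iteration $[1]$ (the function name, the tolerance, and the guard against a zero residual are mine, not part of the question); it assumes the weights take the usual M-estimation form $w_i(m) = \rho'(|x_i-m|)/|x_i-m|$, which is also the form used further below.

```python
import numpy as np

def irls_location(x, rho_prime, m0, tol=1e-10, max_iter=1000):
    """Fixed-point iteration [1]: m(k+1) = sum_i v_i[m(k)] x_i, where
    v_i are the normalized weights w_i / sum_j w_j and
    w_i(m) = rho'(|x_i - m|) / |x_i - m| (an assumption of this sketch)."""
    m = m0
    for _ in range(max_iter):
        r = np.abs(x - m)
        r_safe = np.maximum(r, 1e-12)   # guard against division by zero at r = 0
        w = rho_prime(r_safe) / r_safe  # w_i(m(k))
        v = w / w.sum()                 # normalized weights v_i[m(k)] from [1]
        m_new = np.dot(v, x)            # next iterate: a weighted mean of the x_i
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```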
By Theorem 1.13, pp. 27-28 of Elaydi, if the first derivative with respect to $m$ of the RHS of $[1]$, evaluated at the fixed point $m^*$ (denote it $A'(m^*)$), is smaller than unity in absolute value, then $m^*$ is asymptotically stable (AS). Moreover, by Theorem 4.3, p. 179, this also implies that the fixed point is uniformly AS (UAS).
"Asymptotically stable" means that for some range of values around the fixed point, a neighborhood $(m^* \pm \gamma)$, not necessarily small in size, the fixed point is attractive , and so if the algorithm gives values in this neighborhood, it will converge. The property being "uniform", means that the boundary of this neighborhood, and hence its size, is independent of the initial value of the algorithm. The fixed point becomes globally UAS, if $\gamma = \infty$.
So in our case, if we prove that
$$|A'(m^*)|\equiv \left|\sum_{i=1}^{N} \frac{\partial v_i(m^*)}{\partial m}x_i\right| <1 \qquad [2]$$
we will have proven the UAS property, but not global convergence. Then we can try to establish either that the neighborhood of attraction is in fact the whole real line, or that the specific starting value the OP uses, as mentioned in the comments (and as is standard in IRLS methodology), namely the sample mean $\bar x$ of the $x$'s, always belongs to the neighborhood of attraction of the fixed point.
We calculate the derivative
$$\frac{\partial v_i(m^*)}{\partial m} = \frac {\frac{\partial w_i(m^*)}{\partial m}\sum_{i=1}^{N} w_i(m^*)-w_i(m^*)\sum_{i=1}^{N}\frac{\partial w_i(m^*)}{\partial m}}{\left(\sum_{i=1}^{N} w_i(m^*)\right)^2}$$
$$=\frac 1{\sum_{i=1}^{N} w_i(m^*)}\cdot\left[\frac{\partial w_i(m^*)}{\partial m}-v_i(m^*)\sum_{i=1}^{N}\frac{\partial w_i(m^*)}{\partial m}\right]$$
Then
$$A'(m^*) = \frac 1{\sum_{i=1}^{N} w_i(m^*)}\cdot\left[\sum_{i=1}^{N}\frac{\partial w_i(m^*)}{\partial m}x_i-\left(\sum_{i=1}^{N}\frac{\partial w_i(m^*)}{\partial m}\right)\sum_{i=1}^{N}v_i(m^*)x_i\right]$$
$$=\frac 1{\sum_{i=1}^{N} w_i(m^*)}\cdot\left[\sum_{i=1}^{N}\frac{\partial w_i(m^*)}{\partial m}x_i-\left(\sum_{i=1}^{N}\frac{\partial w_i(m^*)}{\partial m}\right)m^*\right]$$
and
$$|A'(m^*)| <1 \Leftrightarrow \left|\sum_{i=1}^{N}\frac{\partial w_i(m^*)}{\partial m}(x_i-m^*)\right| < \left|\sum_{i=1}^{N} w_i(m^*)\right| \qquad [3]$$
We have
$$\begin{align}\frac{\partial w_i(m^*)}{\partial m} &= \frac{-\rho''(|x_i-m^*|)\cdot \frac {x_i-m^*}{|x_i-m^*|}\cdot|x_i-m^*|+\frac {x_i-m^*}{|x_i-m^*|}\rho'(|x_i-m^*|)}{|x_i-m^*|^2} \\
&=\frac {x_i-m^*}{|x_i-m^*|^3}\rho'(|x_i-m^*|) - \rho''(|x_i-m^*|)\cdot \frac {x_i-m^*}{|x_i-m^*|^2} \\
&=\frac {x_i-m^*}{|x_i-m^*|^2}\cdot \left[\frac {\rho'(|x_i-m^*|)}{|x_i-m^*|}-\rho''(|x_i-m^*|)\right]\\
&=\frac {x_i-m^*}{|x_i-m^*|^2}\cdot \left[w_i(m^*)-\rho''(|x_i-m^*|)\right]
\end{align}$$
Inserting this into $[3]$ we have
$$\left|\sum_{i=1}^{N}\frac {x_i-m^*}{|x_i-m^*|^2}\cdot \left[w_i(m^*)-\rho''(|x_i-m^*|)\right](x_i-m^*)\right| < \left|\sum_{i=1}^{N} w_i(m^*)\right|$$
$$\Rightarrow \left|\sum_{i=1}^{N}w_i(m^*)-\sum_{i=1}^{N}\rho''(|x_i-m^*|)\right| < \left|\sum_{i=1}^{N} w_i(m^*)\right| \qquad [4]$$
This is the condition that must be satisfied for the fixed point to be UAS. Since in our case the penalty function is convex, the sums involved are positive. So condition $[4]$ is equivalent to
$$\sum_{i=1}^{N}\rho''(|x_i-m^*|) < 2\sum_{i=1}^{N}w_i(m^*) \qquad [5]$$
If $\rho(|x_i-m|)$ is Huber's loss function, we have a quadratic ($q$) and a linear ($l$) branch,
$$\rho(|x_i-m|)=\begin{cases} (1/2)|x_i- m|^2 & |x_i-m|\leq \delta \\ \delta\big(|x_i-m|-\delta/2\big) & |x_i-m|> \delta\end{cases}$$
and
$$\rho'(|x_i-m|)=\begin{cases} |x_i- m| & |x_i-m|\leq \delta \\ \delta & |x_i-m|> \delta\end{cases}$$
$$\rho''(|x_i-m|)=\begin{cases} 1 & |x_i-m|\leq \delta \\ 0 & |x_i-m|> \delta\end{cases}$$
so the weights are
$$\begin{cases} w_{i,q}(m) =1 & |x_i-m|\leq \delta \\ w_{i,l}(m) =\dfrac {\delta}{|x_i-m|} <1 & |x_i-m|> \delta\end{cases}$$
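The same piecewise expressions in Python, usable with the sketch above; the function names and the default value of $\delta$ (1.345 is just a common choice, not something the OP specified) are illustrative assumptions.

```python
import numpy as np

def huber_rho_prime(r, delta=1.345):
    # rho'(|x_i - m|): r on the quadratic branch, the constant delta on the linear branch
    return np.where(r <= delta, r, delta)

def huber_rho_second(r, delta=1.345):
    # rho''(|x_i - m|): 1 on the quadratic branch, 0 on the linear branch
    return np.where(r <= delta, 1.0, 0.0)

def huber_weights(r, delta=1.345):
    # w_i(m) = rho'(r)/r: 1 on the quadratic branch, delta/r < 1 on the linear branch
    return np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
```

With these, `irls_location(x, huber_rho_prime, m0=x.mean())` runs the scheme discussed here, starting from the sample mean.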
Since we do not know how many of the $|x_i-m^*|$'s fall in the quadratic branch and how many in the linear one, we decompose condition $[5]$ as (with $N_q + N_l = N$)
$$\sum_{i=1}^{N_q}\rho_q''+\sum_{i=1}^{N_l}\rho_l'' < 2\left[\sum_{i=1}^{N_q}w_{i,q} +\sum_{i=1}^{N_l}w_{i,l}\right]$$
$$\Rightarrow N_q + 0 < 2\left[N_q +\sum_{i=1}^{N_l}w_{i,l}\right] \Rightarrow 0 < N_q+2\sum_{i=1}^{N_l}w_{i,l}$$
which holds. So for the Huber loss function the fixed point of the algorithm is uniformly asymptotically stable, irrespective of the $x$'s. We note that the first derivative is smaller than unity in absolute value for any $m$, not just the fixed point.
What remains is either to prove that the UAS property is also global, or to show that the starting value $m(0) = \bar x$ belongs to the neighborhood of attraction of $m^*$.
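As a quick numerical sanity check (an illustration, not a proof; the data below are made up), one can run iteration $[1]$ with the Huber weights starting from $\bar x$ and compare a finite-difference derivative of the map $A(m)=\sum_i v_i(m)x_i$ at the converged point with the closed form $A'(m^*) = \big(\sum_i w_i(m^*)-\sum_i\rho''(|x_i-m^*|)\big)\big/\sum_i w_i(m^*)$ obtained above.

```python
import numpy as np

np.random.seed(0)
x = np.concatenate([np.random.normal(0, 1, 50), np.random.normal(8, 1, 5)])  # toy sample with outliers
delta = 1.345

def A(m):
    """The map m -> sum_i v_i(m) x_i from [1], with Huber weights."""
    r = np.abs(x - m)
    w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
    return np.dot(w / w.sum(), x)

# iterate from the sample mean, as in the OP's setup
m = x.mean()
for _ in range(500):
    m_new = A(m)
    if abs(m_new - m) < 1e-12:
        break
    m = m_new

# finite-difference derivative of A at the (numerical) fixed point
h = 1e-6
fd = (A(m + h) - A(m - h)) / (2 * h)

# closed form A'(m*) = (sum w_i - sum rho''_i) / sum w_i derived above
r = np.abs(x - m)
w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
closed = (w.sum() - np.sum(r <= delta)) / w.sum()

print(m, fd, closed)  # the two derivatives should agree and lie strictly inside (-1, 1)
```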
Best Answer
In an expression like
$$ \beta^{new}\leftarrow \text{argmin}_{b}(\textbf{z}-\textbf{X}b)^T\textbf{W}(\textbf{z}-\textbf{X}b) $$
the point is that the output, $\beta^{new}$, is the result of considering all possible $b\in \mathbb{R}^p$, or whatever other space you are optimizing over. That's why there's no superscript: the argument of the optimization is a dummy variable, just like the variable of integration in an integral (and I'm deliberately writing $b$ rather than $\beta$ to emphasize that it is a dummy variable, not the target parameter).
The overall procedure involves getting a $\beta^{(t)}$, computing the "response" for the WLS, and then solving the WLS problem for $\beta^{(t+1)}$; as you know, we can use derivatives to get a nice closed-form solution for the optimal $\hat \beta$ of this problem. Thus $\beta^{old}$, which is fixed, enters the vector $\textbf{z}$ in the WLS computation and then leads to $\beta^{new}$. That's the "iteration" part: we use our current solution to create a new response vector. The WLS part is then solving for the new $\hat \beta$ vector. We keep doing this until there is no "significant" change.
Remember that the WLS procedure doesn't know that it is being used iteratively; as far as it is concerned, it is presented with an $X$, $y$, and $W$ and then outputs
$$ \hat{\beta} = (X^T W X)^{-1} X^T W y $$ just as it would in any other instance. We are simply being clever with our choice of $y$ and $W$ and iterating.
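In code, the WLS step is just that formula applied to whatever inputs it is handed; a minimal sketch (the function name is mine, and it uses a linear solve rather than an explicit inverse, which is standard numerical practice but the same expression).

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: beta_hat = (X^T W X)^{-1} X^T W y with W = diag(w).
    The solver neither knows nor cares that its inputs were built from a previous beta."""
    XtW = X.T * w                        # X^T @ diag(w), via broadcasting
    return np.linalg.solve(XtW @ X, XtW @ y)
```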
Update: We can derive the solution to the WLS problem without using any component-wise derivatives. Note that if $Y \sim \mathcal N(X\beta, I)$ then $W^{1/2}Y \sim \mathcal N(W^{1/2}X\beta, W)$; the WLS criterion is just the ordinary least-squares criterion for the transformed response $W^{1/2}Y$ and design $W^{1/2}X$, so the usual matrix derivative gives $$ \frac{\text d}{\text d\beta}\|W^{1/2}Y - W^{1/2}X\beta\|^2 = -2X^TWY + 2X^TWX\beta. $$
Setting the derivative equal to 0 and solving we obtain
$$ \hat{\beta} = (X^TWX)^{-1} X^TWY. $$
Thus for any inputs $W$, $X$, and $Y$ (provided $W$ is positive definite and $X$ has full column rank) we get our optimal $\hat{\beta}$. It doesn't matter what these inputs are. So what we do is use our $\beta^{old}$ to create our $Y$ vector and then plug that into this formula, which outputs the optimal $\hat \beta$ for the given inputs. The whole point of the WLS procedure is to solve for $\hat \beta$; it does not, in and of itself, require plugging in a $\hat \beta$.
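Putting the pieces together, the outer loop just calls that same solver repeatedly; here `make_working_response` and `make_weights` are placeholders for whatever the particular method prescribes for building $\textbf{z}$ and $W$ from $\beta^{old}$ (they are assumptions of this sketch, not formulas from the answer).

```python
import numpy as np

def irls(X, y, make_working_response, make_weights, beta0, tol=1e-8, max_iter=100):
    """Generic IRLS skeleton: build (z, w) from the current beta, solve the WLS
    problem in closed form, and repeat until the change is negligible."""
    beta = beta0
    for _ in range(max_iter):
        z = make_working_response(X, y, beta)   # the "response" built from beta_old
        w = make_weights(X, y, beta)            # the weights built from beta_old
        XtW = X.T * w
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```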