Let $V$ be a finite-dimensional vector space over a field $\Bbb F$ with $\dim V=n$, and let $V'$ be its dual space. For a subspace $S\subseteq V$, write $S^0=\{f\in V':f(s)=0\ \forall s\in S\}$ for its annihilator. The identity $\dim V=\dim S+\dim S^0$ can be proved in two ways:
$\mathbf {Proof \;1}:$ Let $B_S=\{s_1,\dots,s_k\}$ be a basis for the subspace $S$, so $\dim S=k$. Extend $B_S$ to a basis of $V$, say $B_V=\{s_1,\dots,s_k,s_{k+1},\dots,s_n\}$, and let $B'_V=\{f_1,\dots,f_n\}$ be the dual basis of $B_V$, where $f_i(s_j)=\delta_{ij}$, the Kronecker delta.
My claim is that $\{f_{k+1},\dots,f_n\}$ is a basis of $S^0$. First note that if $f\in V'$ then $f=a_1f_1+\dots+a_nf_n$ for some $a_1,\dots,a_n \in \Bbb F$, and $f(s_j)=\sum_i a_if_i(s_j)=a_j$, so $f=f(s_1)f_1+\dots+f(s_n)f_n$. Now if $i\in \{k+1,\dots,n\}$ and $j \in \{1,\dots,k\}$ then $f_i(s_j)=0$; hence for $s\in S$, writing $s=b_1s_1+\dots+b_ks_k$ with $b_1,\dots,b_k\in \Bbb F$, we get $f_i(s)=0$ for all $i \in \{k+1,\dots,n\}$. Thus $f_{k+1},\dots,f_n \in S^0$, and since $S^0$ is a subspace, $span\{f_{k+1},\dots,f_n\}\subseteq S^0$. Conversely, suppose $f\in S^0$. Then $f(s_1)=\dots=f(s_k)=0$ since $s_1,\dots,s_k \in S$, so writing $f=f(s_1)f_1+\dots+f(s_n)f_n$, the first $k$ terms vanish, i.e. $f=f(s_{k+1})f_{k+1}+\dots+f(s_n)f_n$. So $f\in span\{f_{k+1},\dots,f_n\}$, giving $S^0\subseteq span\{f_{k+1},\dots,f_n\}$. Hence $S^0=span\{f_{k+1},\dots,f_n\}$, and $\{f_{k+1},\dots,f_n\}$ is linearly independent, being a subset of a basis. So $\dim S^0=n-k=\dim V-\dim S$, i.e. $\dim V=\dim S + \dim S^0$. $\lhd$
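As a quick numerical sanity check (not part of the proof): identifying $V=\Bbb R^n$ and functionals with coordinate vectors, a functional annihilates $S$ exactly when it lies in the null space of the matrix whose rows span $S$, so $\dim S^0 = n - \operatorname{rank}$. A minimal sketch, with an arbitrary generic subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
# Rows of S span a k-dimensional subspace of R^n (generic Gaussian rows, so rank k)
S = rng.standard_normal((k, n))

# A functional f, written as a coordinate vector, annihilates the subspace
# iff S @ f = 0, so S^0 is the null space of S and dim S^0 = n - rank(S)
rank = np.linalg.matrix_rank(S)
dim_annihilator = n - rank
```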
$\mathbf {Proof \;2}:$ Let $i:S\rightarrow V$ be the natural inclusion map, i.e. $i(s)=s$ for all $s \in S$, and consider its dual map $i':V'\rightarrow S'$ defined by $i'(\phi)=\phi\circ i$. Then $i'$ is a linear map from $V'$ to $S'$, and by rank–nullity, $\dim range(i')+\dim null(i')=\dim V'=\dim V$. Now my first claim is,
$\it{Claim \;1}:$ $null(i')=S^0.$
$\it{Pf}:$ $\phi \in null(i')\Leftrightarrow i'(\phi)=0\Leftrightarrow \phi\circ i=0\Leftrightarrow(\phi\circ i)(s)=0\ \forall s\in S\Leftrightarrow\phi(i(s))=0\ \forall s\in S$ $\Leftrightarrow\phi(s)=0\ \forall s\in S\Leftrightarrow \phi \in S^0\,\bullet$
Now we have $\dim range(i')+\dim S^0=\dim V$, so it remains to compute $\dim range(i')$. Since $range(i')\subseteq S'$ and $\dim S'=\dim S$, it suffices to show that $i'$ is surjective, i.e. $range(i')=S'$. Indeed,
$\it{Claim\;2}:$ $range(i')=S'$.
$\it{Pf}:$ Let $\phi \in S'$, so $\phi:S\rightarrow \Bbb F$. Extend $\phi$ to a functional $\psi:V\rightarrow \Bbb F$ with $\psi(s)=\phi(s)$ for all $s\in S$ (for instance, extend a basis of $S$ to a basis of $V$ and set $\psi=0$ on the added basis vectors). Then $(\psi\circ i)(s)=\psi(i(s))=\psi(s)=\phi(s)$ for all $s \in S$, so $i'(\psi)=\psi\circ i=\phi$. Hence $range(i')=S'\,\bullet$
Finally, $\dim S'+\dim S^0=\dim V$, and since $\dim S'=\dim S$, we conclude $\dim S+\dim S^0=\dim V.\,\lhd$
Here's a simple idea you could try with very little extra effort. Your GIF shows the iteration oscillating back and forth, a phenomenon that also occurs in classical gradient descent when the problem is ill-conditioned. A very popular and powerful way to alleviate this kind of problem is momentum, which basically consists of averaging over previous iterates.
So instead of throwing away all the previous iterates, you can do something like
$$ x_{k+1} = (1-\beta)g(x_{k}) + \beta x_k$$
Note that when $\beta=0$ we recover the standard fixed point iteration. Consider a simple fixed point problem like $x=\cos(x)$, which exhibits this oscillatory phenomenon. Then, starting from the same seed, here are the residuals $|x_*-x_k|$ for different values of $\beta$:
$$
\small\begin{array}{lllllll}
k & \beta=0 & \beta=0.1 &\beta=0.2 &\beta=0.3 &\beta=0.4 &\beta=0.5
\\\hline 0 & 5.45787 &5.45787 &5.45787 &5.45787 &5.45787 &5.45787
\\1 & 0.2572 & 0.777267 & 1.29733 & 1.8174 & 2.33747 & 2.85754
\\2 & 0.19566 & 0.538475 & 0.690985 & 0.555697 & 0.107195 & 0.610102
\\3 & 0.116858 & 0.162927 & 0.0696096 & 0.00419339 & 0.00218156 & 0.0454083
\\4 & 0.0835784 & 0.0908543 & 0.0249916 & 0.000723828 & 8.0351e-06 & 0.0070347
\\5 & 0.053654 & 0.0431759 & 0.00828335 & 0.000124022 & 3.34983e-08 & 0.0011389
\\6 & 0.0371882 & 0.0224696 & 0.00282738 & 2.12772e-05 & 1.39595e-10 & 0.000185622
\\7 & 0.0245336 & 0.0112062 & 0.000955803 & 3.64953e-06 & 5.81757e-13 & 3.02859e-05
\\8 & 0.0167469 & 0.00571477 & 0.000324182 & 6.26001e-07 & 2.44249e-15 & 4.94232e-06
\\9 & 0.0111768 & 0.00288222 & 0.000109831 & 1.07377e-07 & 1.11022e-16 & 8.06552e-07 \end{array}
$$
A well-chosen momentum can speed up convergence tremendously! A variant of this idea specific to fixed point iterations appears to be known as Anderson acceleration.
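The damped iteration above takes only a few lines of Python; this is a sketch (the seed and iteration count are illustrative, not the exact ones used for the table):

```python
from math import cos

def fixed_point(g, x0, beta=0.0, iters=20):
    """Damped fixed-point iteration: x_{k+1} = (1 - beta) * g(x_k) + beta * x_k."""
    x = x0
    for _ in range(iters):
        x = (1 - beta) * g(x) + beta * x
    return x

# Solve x = cos(x); the fixed point is x* ~ 0.739085.
# With beta = 0.4 the iteration converges in a handful of steps,
# while the plain iteration (beta = 0) still oscillates its way in slowly.
x_damped = fixed_point(cos, 6.0, beta=0.4, iters=20)
x_plain = fixed_point(cos, 6.0, beta=0.0, iters=20)
```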
Best Answer
They are using the inverse of the Jacobian of the system. We have
$$F(x_1, x_2) = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} 6 x_1 & 8 x_2 \\ -24 x_1^2& 3 x_2^2 \end{bmatrix}$$
Using the given starting point $(x_1(0), x_2(0)) = (-0.5, 0.25)$, we have
$$F(-0.5, 0.25) = \begin{bmatrix} -3. & 2. \\ -6. & 0.1875 \end{bmatrix} \implies F^{-1}(-0.5, 0.25) =\begin{bmatrix} 0.0163934 & -0.174863 \\ 0.52459 & -0.262295 \end{bmatrix}$$
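A quick way to verify that inverse numerically (a sketch with NumPy, using the partial derivatives from the matrix above):

```python
import numpy as np

def jacobian(x1, x2):
    # Jacobian of the system, with entries as given above
    return np.array([[6.0 * x1,      8.0 * x2],
                     [-24.0 * x1**2, 3.0 * x2**2]])

J = jacobian(-0.5, 0.25)
J_inv = np.linalg.inv(J)
# J     = [[-3, 2], [-6, 0.1875]]
# J_inv ~ [[0.0163934, -0.174863], [0.52459, -0.262295]]
```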