Since you keep $b$ less than $a/2$, the quotient at each step is at least $2$ (except possibly the first step).
So instead of the Fibonacci numbers, you need the sequence defined by $A_n=2A_{n-1}+A_{n-2}$ with $A_0=0$, $A_1=1$ (these are the Pell numbers). This modified algorithm then needs $n$ steps to compute $\gcd(A_n+A_{n-1},A_n)$ (not $\gcd(A_n,A_{n-1})$: only the first step may have quotient $1$, and the worst case uses it).
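For concreteness, here is a minimal Python sketch, assuming the modification is the least-absolute-remainder variant that replaces the remainder $r=a\bmod b$ by $\min(r,b-r)$, so the next remainder is at most $b/2$; the names `modified_gcd_steps` and `pell` are introduced here just for illustration. The loop at the end checks that $\gcd(A_n+A_{n-1},A_n)$ takes exactly $n$ steps:

```python
def modified_gcd_steps(a, b):
    """Least-absolute-remainder Euclid: replace r = a mod b by
    min(r, b - r), so the next remainder is at most b/2.
    Returns (gcd, number of division steps)."""
    steps = 0
    while b:
        r = a % b
        a, b = b, min(r, b - r)
        steps += 1
    return a, steps

def pell(n):
    """A_0 = 0, A_1 = 1, A_k = 2*A_{k-1} + A_{k-2} (Pell numbers)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 2 * b + a
    return a

# gcd(A_n + A_{n-1}, A_n) should take exactly n steps:
for n in range(1, 20):
    g, steps = modified_gcd_steps(pell(n) + pell(n - 1), pell(n))
    assert g == 1 and steps == n
```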
Just as one proves that the Fibonacci numbers $F_n$ make the original algorithm run for the most steps, we can prove that $A_n$ makes this modified algorithm run for the most steps: show by induction on $n$ that if $a>2b$ and computing $\gcd(a,b)$ needs $n$ steps, then $a\geq A_{n+1}$ and $b\geq A_n$.
The case $n=0$ is clearly true. If $n>0$, apply the induction hypothesis to the pair after one step: it gives $b\geq A_n$, and the next remainder is at least $A_{n-1}$. There are two cases for $a\bmod b$:
1) $a\bmod b\geq b/2$: since $a>2b$ the quotient is at least $2$, so $a\geq 2b+a\bmod b\geq 2A_n+A_n/2\geq 2A_n+A_{n-1}=A_{n+1}$ (using $A_n\geq 2A_{n-1}$, immediate from the recurrence).
2) $a\bmod b<b/2$: then the next remainder is $a\bmod b$ itself, so $a-2b\geq a\bmod b\geq A_{n-1}$, and $a\geq 2A_n+A_{n-1}=A_{n+1}$.
Since $A_n\sim c(1+\sqrt2)^n$, the algorithm runs in $O(\log_{1+\sqrt2}(\min(a,b)))$ steps.
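One can also confirm the minimality by brute force. A small search (reusing the hypothetical helpers above) checks that the smallest pair needing exactly $n$ steps is $(A_n+A_{n-1},A_n)$:

```python
def smallest_pair_needing(n, limit=120):
    """First (a, b) with b < a whose computation takes exactly n steps."""
    for a in range(2, limit):
        for b in range(1, a):
            if modified_gcd_steps(a, b)[1] == n:
                return a, b

for n in range(2, 7):
    assert smallest_pair_needing(n) == (pell(n) + pell(n - 1), pell(n))
```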
Best Answer
The main point is that each division actually takes something like $O(n(\deg f-\deg g))$ operations if we are trying to divide $f$ by $g$ and $\deg f>\deg g$.
This is what is happening in the Euclidean algorithm, so if the divisions you are doing along the way are $a_i/a_{i+1}$ for $i=0,\dotsc,m$, then the complexity is of the order of $$n\cdot \left( (\deg a_0-\deg a_1)+(\deg a_1-\deg a_2)+\dotsc + (\deg a_{m}-\deg a_{m+1}) \right)$$ which is a telescoping series that gives you $O(n(\deg a_0-\deg a_{m+1}))=O(n^2)$ in the end.
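To see the telescoping concretely, here is a rough Python sketch that counts field operations for schoolbook division over a finite field (the prime `P` and the names `poly_divmod`, `poly_gcd_ops` are made up for illustration). Each cancellation of a leading term costs about $\deg g$ operations, a division of $f$ by $g$ costs about $(\deg f-\deg g+1)\deg g$, and on random dense inputs the total grows roughly like $n^2$:

```python
import random

P = 10**9 + 7   # a prime; we work over the field GF(P)

def trim(p):
    """Drop trailing zero coefficients (the highest-degree end)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_divmod(f, g):
    """Schoolbook division of f by g over GF(P); coefficients are listed
    from the constant term up.  Returns (remainder, field-op count)."""
    f, g = trim(f[:]), trim(g[:])
    inv = pow(g[-1], -1, P)          # invert the leading coefficient once
    ops = 0
    while len(f) >= len(g):
        c = f[-1] * inv % P          # cancel the leading term of f ...
        shift = len(f) - len(g)
        for i, gc in enumerate(g):   # ... at a cost of about deg g ops
            f[shift + i] = (f[shift + i] - c * gc) % P
        ops += 2 * len(g) + 1
        trim(f)
    return f, ops

def poly_gcd_ops(f, g):
    """Euclidean algorithm on polynomials, totalling the division costs."""
    total = 0
    while g:
        r, ops = poly_divmod(f, g)
        total += ops
        f, g = g, r
    return f, total

for n in (16, 32, 64):
    f = [random.randrange(1, P) for _ in range(n + 1)]   # deg f = n
    g = [random.randrange(1, P) for _ in range(n)]       # deg g = n - 1
    _, total = poly_gcd_ops(f, g)
    print(n, total)   # the total roughly quadruples as n doubles
```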