To address some preliminaries, let $T(a,b)$ be the number of steps taken by the Euclidean algorithm, which repeatedly evaluates $\gcd(a,b)=\gcd(b,a\bmod b)$ until $b=0$, assuming $a\geq b$. Also, let $h=\log_{10}b$ be (roughly) the number of digits in $b$. (Note that by counting steps we ignore the question of the time complexity of the $\bmod$ operation itself. If we assume it is $O(1)$, then all of the following also applies to the time complexity of the algorithm.)
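To pin down the counting convention, here is a minimal Python sketch (the name `gcd_steps` is my own):

```python
def gcd_steps(a, b):
    """Euclidean algorithm on a >= b >= 0; returns (gcd, number of mod steps)."""
    steps = 0
    while b != 0:
        a, b = b, a % b  # one step: gcd(a, b) = gcd(b, a mod b)
        steps += 1
    return a, steps
```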
In the worst case, as you have stated, $a=F_{n+1}$ and $b=F_n$, where $F_n$ is the Fibonacci sequence, since the algorithm computes $\gcd(F_{n+1},F_n)=\gcd(F_n,F_{n-1})$ until it reaches $n=0$, so $T(F_{n+1},F_n)=\Theta(n)$ and $T(a,F_n)=O(n)$. Since $F_n=\Theta(\varphi^n)$, this implies that $T(a,b)=O(\log_\varphi b)$. Note that $h\approx\log_{10}b$, and since $\log_bx={\log x\over\log b}$, we have $\log_bx=O(\log x)$ for any fixed base $b$, so the worst case for Euclid's algorithm is $O(\log_\varphi b)=O(h)=O(\log b)$.
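As a quick sanity check of the worst case, consecutive Fibonacci numbers take roughly one step per index; a sketch reusing `gcd_steps` from above (`fib_pair` is a hypothetical helper):

```python
def fib_pair(n):
    """Return the consecutive Fibonacci numbers (F(n+1), F(n))."""
    a, b = 1, 0  # (F(1), F(0))
    for _ in range(n):
        a, b = a + b, a
    return a, b

for n in (10, 20, 30):
    a, b = fib_pair(n)
    print(n, gcd_steps(a, b)[1])  # about n steps, matching Theta(log_phi b)
```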
The average case requires a bit more care, as it depends on the probability distribution of the inputs. To calculate it precisely, we need to fix such a distribution. If $a$ is fixed and $b$ is chosen uniformly from $\mathbb Z\cap[0,a)$, then the average number of steps $T(a)$ is
$$T(a)=-\frac12+\frac{6\log2}{\pi^2}\left(4\gamma-\frac{24}{\pi^2}\zeta'(2)+3\log2-2\right)+{12\over\pi^2}\log2\log a+O(a^{-1/6+\epsilon}),$$
or, for less accuracy, $T(a)={12\over\pi^2}\log2\log a+O(1)$. (Source: Wikipedia)
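One can sanity-check the leading term empirically; a sketch reusing the hypothetical `gcd_steps` from above:

```python
import math

a = 10**5
avg = sum(gcd_steps(a, b)[1] for b in range(a)) / a
print(avg)                                          # empirical average T(a)
print(12 * math.log(2) / math.pi**2 * math.log(a))  # leading term; the two
                                                    # differ by the O(1)
                                                    # constant in the full formula
```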
In the best case, $a=b$ or $b=0$ or some other similarly convenient case occurs, and the algorithm terminates in a single step. Thus, $T(a,a)=O(1)$.
If we are working on a computer using 32-bit or 64-bit arithmetic, as is common, then the individual $\bmod$ operations are in fact constant-time, so these bounds are correct. If, however, we are doing arbitrary-precision calculations, then in order to estimate the actual time complexity of the algorithm, we need to use the fact that $\bmod$ has time complexity $O(\log a\log b)$. In this case, all of the "work" is done in the first step, and the rest of the computation also costs $O(\log a\log b)$, so the total is $O(\log a\log b)$. I want to stress, though, that this only applies when the numbers are so big that you actually need arbitrary precision to handle them.
(This underscores the difference between the mathematician's big-O and the programmer's big-O notation: in the first case, you want the bound to be true $\forall n$, even for those $n$ which are so absurdly large that no one can ever write them down or store them in memory, whereas in the second case, programmers are primarily interested in $n\in[0,2^{64}]$, and that's a liberal estimate. For them, it's more important to see the "leading contribution" to the time complexity, and for the Euclidean algorithm, the smaller number drives the difficulty of the calculation, by and large.)
What does $\sup(s_n)$ mean?
In this context, I believe they mean $\sup_{k\geq n}(s_k)$, so that the expression for the upper limit is $\lim_{n\rightarrow \infty} \sup_{k\geq n}(s_k)$. Note that you can have some numbers in your sequence larger than the upper limit. In fact, every number in the sequence can be larger than the upper limit: take the sequence $\{\frac{1}{n}\}$. Its upper limit is $0$, but every number in the sequence is larger than $0$. Another important property is that a sequence converges in $\mathbb{R}$ if and only if the upper and lower limits are in $\mathbb{R}$ and coincide.
Edit: The supremum of the set $\{\frac{1}{n}\}$ is $1$, but the upper limit is not $1$. To compute the upper limit, the idea is this. Take your sequence $\{s_n\}$ and form a new sequence, call it $\{t_n\}$ (whose terms may be infinite). Define $t_1$ to be the sup of $\{s_1,s_2,s_3,\ldots\}$, $t_2$ to be the sup of $\{s_2,s_3,s_4,\ldots\}$, $t_3$ to be the sup of $\{s_3,s_4,s_5,\ldots\}$, and so on. Now check that $\{t_n\}$ is a decreasing sequence (you are throwing away potentially larger terms). Therefore it has a limit (maybe infinite). This limit is the upper limit.
Edit 2: Now for the lower limit. Start with the given sequence $\{s_n\}$. The lower limit is $\lim_{n\rightarrow \infty}\inf_{k\geq n}(s_k)$. Let's unravel this definition as we did for the upper limit. Define a new sequence $\{t_n\}$ (possibly including $-\infty$) as follows. Define $t_1$ to be the inf of $\{s_1,s_2,s_3,\ldots\}$, $t_2$ to be the inf of $\{s_2,s_3,s_4,\ldots\}$, $t_3$ to be the inf of $\{s_3,s_4,s_5,\ldots\}$, and so on. Now check that $\{t_n\}$ is an increasing sequence, and therefore has a limit (possibly infinite). This limit is called the lower limit.
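To make both constructions concrete, here is a small Python sketch (the example sequence and names are my own) that computes a few tail values $t_n$ for $s_k=(-1)^k(1+1/k)$, whose upper and lower limits are $1$ and $-1$:

```python
# s_k = (-1)^k (1 + 1/k) for k = 1..10000; upper limit 1, lower limit -1
s = [(-1) ** k * (1 + 1 / k) for k in range(1, 10001)]

for n in (1, 10, 100, 1000):
    tail = s[n - 1:]  # s_n, s_{n+1}, ... (finite proxy for the infinite tail)
    # the sup-sequence decreases toward 1; the inf-sequence increases toward -1
    print(n, max(tail), min(tail))
```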
Best Answer
For $a\in[1,e]$ we have $\ln(a)\in [0,1]$, so $x_n(a)=\exp^{(n)}(a)$ is a sequence going to infinity such that $T(x_n(a))$ can be any number in $[0,1]$, depending on the initial value of $a$.
Now you asked this for integers, but $y_n(a)=\lceil x_n(a)\rceil$ is an integer sequence such that $x_n(a)\le y_n(a)<x_n(a)+1$.
And since $\ln$ and its iterates are strictly increasing functions, we have (for $n\gg 1$)
$T(x_n(a))\le T(y_n(a))\lt T(x_n(a)+1)$
Edit: this is true provided the same $i$ is attained in $\log^*$ for both, but the following shows that it is.
But $\ln(x_n(a)+1)=\ln(x_n(a))+\ln(1+1/x_n(a))=\ln(x_n(a))+x_n(a)^{-1}+o(x_n(a)^{-1})$
And when we iterate, $\ln(\ln(x_n(a)+1))=\ln(\ln(x_n(a)))+\bigl(x_n(a)\ln(x_n(a))\bigr)^{-1}+o\bigl(\bigl(x_n(a)\ln(x_n(a))\bigr)^{-1}\bigr)$.
But by construction $\ln(x_n(a))\ge 1$, so we can bound this roughly:
$\ln(\ln(x_n(a)+1))\le\ln(\ln(x_n(a)))+O(x_n(a)^{-1})$, and the same bound persists as we iterate again.
All that is to say that $T(x_n(a)+1)\sim T(x_n(a))$.
Thus $\lim\limits_{n\to\infty} T(y_n(a))=\lim\limits_{n\to\infty} T(x_n(a))=\ln(a)$.
Edit: I realize that I have worked with a different definition of $\log^*$.
In my mind it was $\log^*(x)=\max\{i\in\mathbb N\mid \ln^{(i)}(x)\ge 1\}$; the proof can be adapted to your definition, I think.
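For what it's worth, that definition translates directly into code; a minimal sketch (the function name `log_star` is mine, and it assumes $x\ge 1$ so that $i=0$ qualifies):

```python
import math

def log_star(x):
    """max{ i in N : ln^(i)(x) >= 1 }, assuming x >= 1 (so i = 0 always qualifies)."""
    i = 0
    while math.log(x) >= 1:  # one more iterate of ln still lands at or above 1
        x = math.log(x)
        i += 1
    return i

# For x_n(a) = exp^(n)(a) with a in [1, e), log_star(x_n(a)) recovers n:
print(log_star(math.exp(math.exp(1.5))))  # x_2(1.5) -> 2
```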
By your definition $T(n)<0$ while $f(n)\ge 0$, so $f$ can only be an upper bound for $T$.