I'd like to know the space complexity of storing bit string representations of the numbers in the factorial sequence (as in a memoized factorial function). Each number $f_i=i!$ for $i=0,\dots,n$ takes about $\log_2 f_i$ bits, but how fast does that grow with $i$? Even better would be to know the sum of that sequence, or (more reasonably) a good bound on it.
[Math] The growth rate of the logarithm of the factorial sequence
computational-complexity, factorial
Related Solutions
Problems are not classified according to the size of their output in bits, but rather according to the size of their input. The size of the input is the size of the problem, which is the size we care about when defining standard complexity classes. Problems in P take time bounded by a polynomial function of the problem size, problems in PSPACE take space bounded by a polynomial function of the problem size, problems in E take time bounded by an exponential function of the problem size, and so on. If the size of the output is exponential in the size of the input (which, in this case, would be the initial set), then it is clear that the problem must take at least exponential time, since merely writing down the output already takes that long.
If you wish to define your own classification of problems (POUT-TIME and POUT-SPACE or something) in terms of the size of the output, you are welcome to, but this is not how standard complexity classes are defined. Your friend is correct.
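As a generic illustration (my example; not necessarily the exact problem from the original question), consider computing the power set of an initial set: the input has $n$ elements, but the output has $2^n$ entries, so merely writing the output already takes exponential time:

```python
from itertools import chain, combinations

def power_set(items):
    """Yield every subset of items: 2^len(items) subsets in total."""
    return chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )

subsets = list(power_set([1, 2, 3]))
print(len(subsets))  # 8 == 2^3: output size is exponential in input size
```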
To address some preliminaries, let $T(a,b)$ be the number of steps taken by the Euclidean algorithm, which repeatedly evaluates $\gcd(a,b)=\gcd(b,a\bmod b)$ until $b=0$, assuming $a\geq b$. Also, let $h=\log_{10}b$ be the number of digits in $b$ (give or take). (Note that by counting steps, we are ignoring the time complexity of the $\bmod$ operation itself; if we assume it is $O(1)$, then all of the following also applies to the time complexity of the algorithm.)
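For concreteness, here is a minimal Python sketch of $T(a,b)$ (the function name `euclid_steps` is my own). It counts steps rather than measuring time, matching the convention above of treating each $\bmod$ as $O(1)$:

```python
def euclid_steps(a, b):
    """Return (gcd(a, b), number of mod steps taken), assuming a >= b >= 0."""
    steps = 0
    while b != 0:
        a, b = b, a % b  # one step: gcd(a, b) = gcd(b, a mod b)
        steps += 1
    return a, steps
```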
In the worst case, as you have stated, $a=F_{n+1}$ and $b=F_n$, where $F_n$ is the $n$th Fibonacci number, since the algorithm will calculate $\gcd(F_{n+1},F_n)=\gcd(F_n,F_{n-1})$ until it gets to $n=0$, so $T(F_{n+1},F_n)=\Theta(n)$ and $T(a,F_n)=O(n)$. Since $F_n=\Theta(\varphi^n)$, this implies that $T(a,b)=O(\log_\varphi b)$. Note that $h\approx\log_{10}b$, and $\log_c x={\log x\over\log c}$ implies $\log_c x=O(\log x)$ for any fixed base $c$, so the worst case for Euclid's algorithm is $O(\log_\varphi b)=O(h)=O(\log b)$.
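Using the sketch above (again, my own illustration), the worst case is easy to observe: for consecutive Fibonacci inputs, the step count grows linearly in $n$, i.e. logarithmically in $b=F_n$:

```python
def fib(n):
    """Return the nth Fibonacci number (F_0 = 0, F_1 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(5, 30, 5):
    _, steps = euclid_steps(fib(n + 1), fib(n))
    print(n, steps)  # steps == n - 1 here: Theta(n), i.e. Theta(log b)
```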
The average case requires a bit more care, as it depends on the underlying probability distribution. To calculate it precisely, we need to fix a distribution: if $a$ is fixed and $b$ is chosen uniformly from $\mathbb Z\cap[0,a)$, then the average number of steps $T(a)$ is
$$T(a)=-\frac12+\frac{6\log 2}{\pi^2}\left(4\gamma-\frac{24}{\pi^2}\zeta'(2)+3\log 2-2\right)+\frac{12}{\pi^2}\log 2\log a+O(a^{-1/6+\epsilon}),$$
or, for less accuracy, $T(a)={12\over\pi^2}\log2\log a+O(1)$. (Source: Wikipedia)
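As a quick empirical sanity check of the leading term, using the step counter above (the choice of $a$ and the brute-force averaging are mine, not from the source):

```python
import math

a = 100_000
avg = sum(euclid_steps(a, b)[1] for b in range(a)) / a
leading = (12 / math.pi**2) * math.log(2) * math.log(a)
print(avg, leading)  # the gap approaches the constant term above (about 1.467)
```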
In the best case, $a=b$, or $b=0$, or some other similarly convenient case holds, and the algorithm terminates in at most one step. Thus, $T(a,a)=O(1)$.
If we are working on a computer using 32-bit or 64-bit calculations, as is common, then the individual $\bmod$ operations are in fact constant-time, so these bounds are correct. If, however, we are doing arbitrary-precision calculations, then to estimate the actual time complexity of the algorithm, we need to use the fact that $\bmod$ has time complexity $O(\log a\log b)$. In this case, all of the "work" is done in the first step, and the rest of the computation is also $O(\log a\log b)$ in total, so the total is $O(\log a\log b)$. I want to stress, though, that this only applies if the numbers are so big that you need arbitrary precision to handle them.
(This underscores the difference between the mathematician's big-O and the programmer's big-O notation: in the first case, you want the bound to be true $\forall n$, even those $n$ which are so absurdly large that no one can ever write them down or store them in memory, whereas in the second case, programmers are primarily interested in $n\in[0,2^{64}]$, and that's a liberal estimate. For them, it's more important to see the "leading contribution" to the time complexity, and for the Euclidean algorithm, the smaller number drives the difficulty of the calculation by and large.)
Best Answer
Note that $n! = \prod_{i=1}^{n} i \leq \prod_{i=1}^{n} n = n^{n}$. So $\log(n!) \leq \log(n^{n}) = n \log(n)$, and so $\log(n!) = O(n \log n)$. This bound is tight: $n! \geq (n/2)^{n/2}$ (the top half of the factors are each at least $n/2$), so $\log(n!) \geq \frac{n}{2}\log\frac{n}{2}$ and hence $\log(n!) = \Theta(n \log n)$.
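To tie this back to the original question about a memoized table of $0!,1!,\dots,n!$: summing the bound gives $\sum_{i=0}^{n}\log_2(i!)=O(n^2\log n)$ total bits, and by the lower bound above this is in fact $\Theta(n^2\log n)$. A minimal Python check (the function name `total_bits` is mine):

```python
import math

def total_bits(n):
    """Exact number of bits needed to store 0!, 1!, ..., n! in binary."""
    f, total = 1, 0
    for i in range(n + 1):
        if i > 0:
            f *= i               # f == i!
        total += f.bit_length()  # bits in the binary representation of i!
    return total

n = 1000
bound = sum(i * math.log2(i) for i in range(2, n + 1))  # sum of the i*log2(i) bound terms
print(total_bits(n), bound)  # the exact total sits below the bound; both are Theta(n^2 log n)
```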