The short answer is that there are no implications.
As Scott Aaronson said on his blog, the fact that even a polynomial-time algorithm for GI would have no implications for complexity classes is one reason some people conjectured that GI might be in P. He continued:
> In fact, the main implication for complexity theory, is that we’d have to stop using GI as our flagship example of a problem that has statistical zero-knowledge proof protocols and dozens of other amazing properties, despite not being known to be in P, which would make all those properties trivial. :-) It’s similar to how the main implication of PRIMES in P, was that we had to stop using primality as our flagship example of a problem that was known to be in randomized polynomial time but not deterministic polynomial time. Still, I managed to adapt in my undergrad class, by talking about the randomized test for primality, and then treating the fact that it’s actually in P as just a further amazing wrinkle in the story. And I’d be happy to adapt similarly in the case of GI…
Similarly, there are no implications for quantum computing.
As for the prospects for further improvement, the best reference as of this writing may be Jeremy Kun's notes on Babai's talk. One key point is that Babai's algorithm relies on a strategy he calls "split-or-Johnson", and he can show that this strategy alone is not enough to put GI in P.
UPDATE: Babai has posted a preprint to the arXiv.
UPDATE: Babai's talk at the Institute for Advanced Study is available online. IMO this talk is much better than the very first talk he gave at Chicago; he's had time to prepare slides and greatly clarify the presentation.
Johnson graphs do not cause difficulty for existing programs. Actually they are rather easy: nauty can handle them up to tens of millions of vertices, and so can other programs such as Traces and Bliss.
The difficulty that Babai refers to is more theoretical. The Johnson graph $J(v,t)$, which has $n=\binom vt$ vertices, has the property that $\Theta(n^{1/t})$ vertices need to be fixed before partition refinement produces a partition in which every cell has size at most $cn$ for some constant $c<1$; moreover, its automorphism group has no nice recursive structure such as a wreath product. Babai says that a major part of his breakthrough was to realise that only the Johnson graphs (and closely related graphs) have these undesirable properties. With that knowledge, special processing takes care of them. Previously, it could have been that other families of graphs existed with similar properties.
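For concreteness, here is a small Python sketch (my own illustration, not code from nauty or the other programs mentioned) that builds $J(v,t)$ and checks the counts just stated:

```python
from itertools import combinations

def johnson_graph(v, t):
    """Build the Johnson graph J(v, t): vertices are the t-element
    subsets of {0, ..., v-1}, adjacent when they share t-1 elements."""
    vertices = [frozenset(s) for s in combinations(range(v), t)]
    edges = {frozenset((a, b))
             for a, b in combinations(vertices, 2)
             if len(a & b) == t - 1}
    return vertices, edges

# J(5, 2) has binom(5, 2) = 10 vertices and is regular of degree
# t * (v - t) = 2 * 3 = 6 (it is the complement of the Petersen graph).
verts, edges = johnson_graph(5, 2)
assert len(verts) == 10
assert all(sum(1 for e in edges if u in e) == 6 for u in verts)
```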
I think this description is reasonably close. I recently listened to Babai speaking on his algorithm for 5 hours, but some parts remain unclear to me.
Btw, the automorphism group of $J(v,t)$ for $t\lt v/2$ is $S_v$ acting on $t$-sets. Programs find these graphs easy because of this rich group structure.
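As a quick sanity check of that claim, continuing the sketch above: every permutation of the $v$ points maps $t$-sets to $t$-sets and preserves intersection sizes, hence induces an automorphism of $J(v,t)$.

```python
from itertools import permutations

def relabel(perm, vertex):
    """Apply a point permutation (a tuple mapping i -> perm[i]) to a t-set."""
    return frozenset(perm[x] for x in vertex)

# Every permutation of the 5 points maps 2-sets to 2-sets and preserves
# intersection sizes, so it induces an automorphism of J(5, 2).
for perm in permutations(range(5)):
    image = {frozenset(relabel(perm, u) for u in e) for e in edges}
    assert image == edges
```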
About the "supplementary problem": fixing $O(\log n)$ vertices helps very little. It is necessary to match $\Theta(n^{1/t})$ vertices in each graph before elementary methods will provide much further information about isomorphisms. It's hard to define "elementary methods", but they include all the obvious things like classifying vertices according to their adjacencies with the fixed vertices and, more generally, fixed-dimensional Weisfeiler-Lehman refinement.
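To make "elementary methods" concrete, here is a minimal sketch of 1-dimensional Weisfeiler-Lehman (colour) refinement with individualized vertices, run on the $J(5,2)$ graph built above; the function and its name are my own illustration. Even on this toy instance, fixing a vertex splits the graph into only three cells, with the largest cell still containing most of the vertices:

```python
def color_refinement(vertices, edges, fixed=()):
    """1-dimensional Weisfeiler-Lehman refinement with individualization.

    Each vertex in `fixed` starts in its own singleton colour class and
    all other vertices start in one class; then each vertex's colour is
    repeatedly replaced by (old colour, sorted multiset of neighbour
    colours) until the partition is stable.
    """
    nbrs = {u: [w for e in edges if u in e for w in e if w != u]
            for u in vertices}
    colour = {u: fixed.index(u) + 1 if u in fixed else 0 for u in vertices}
    while True:
        sig = {u: (colour[u], tuple(sorted(colour[w] for w in nbrs[u])))
               for u in vertices}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {u: ids[sig[u]] for u in vertices}
        if new == colour:
            return colour
        colour = new

# J(5, 2) is vertex-transitive, so with nothing fixed the refinement
# cannot split the single colour class at all.
assert len(set(color_refinement(verts, edges).values())) == 1
# Fixing one vertex separates only it, its neighbours and its
# non-neighbours: three cells, the largest still of size 6 out of 10.
assert len(set(color_refinement(verts, edges, (verts[0],)).values())) == 3
```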
Babai's statement that you quote is easy to misunderstand. It isn't the Johnson graphs themselves that were a scourge to theorists. It's more that the Johnson graphs showed that certain difficult structural properties (mentioned above) are possible, and nobody knew a general method for handling those properties. Once Babai showed that only the Johnson graphs caused that structure, the difficulty vanished.
Babai apparently retracted some parts of his proof; he now claims a running time of $O(\exp(n^c))$ for some small $c$ (say, $c=0.01$), but not for all $c>0$. See http://people.cs.uchicago.edu/~laci/update.html