The only rigorous bound I am aware of is due to Gonzalo Navarro*
$$c\geq 1-{\rm e}/\sqrt{\sigma},$$
for an alphabet of $\sigma$ characters. Obviously, for a binary alphabet ($\sigma=2$) this bound is vacuous. Navarro also mentions a large-$\sigma$ conjecture $c=1-1/\sqrt{\sigma}$, which for the binary alphabet would give $c\approx 0.2929$, quite close to your numerical finding.
*G. Navarro, A guided tour to approximate string matching, ACM Computing Surveys 33 (2001), 31–88.
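If I am reading the setup correctly, $c$ here is the limiting normalized edit (Levenshtein) distance between two independent uniformly random length-$n$ strings over a $\sigma$-letter alphabet. For anyone who wants to reproduce the numerics, here is a minimal Monte Carlo sketch (the function `levenshtein` and the parameters `n`, `trials` are mine, not from the question); at these modest sizes the estimate only lands in the rough vicinity of the conjectured $1-1/\sqrt{2}\approx 0.2929$:

```python
import random

def levenshtein(s, t):
    # Standard dynamic-programming edit distance
    # (unit-cost insertions, deletions, and substitutions).
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,               # delete a
                           cur[j - 1] + 1,            # insert b
                           prev[j - 1] + (a != b)))   # match / substitute
        prev = cur
    return prev[-1]

# Hypothetical experiment: estimate c = E[d(s,t)]/n for random strings
# over an alphabet of sigma characters.
n, trials, sigma = 500, 10, 2
est = sum(levenshtein([random.randrange(sigma) for _ in range(n)],
                      [random.randrange(sigma) for _ in range(n)])
          for _ in range(trials)) / (trials * n)
print(est)  # compare with the conjectured 1 - 1/sqrt(2) ~ 0.2929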
My initial answer was wrong. Instead of deleting it completely, I have left it at the bottom of the post. The notation I use now differs slightly from that of my original post.
As before, I give only a partial answer, related to the max eigenvalue question and the limsup. Recall that the upper tail estimate for the max eigenvalue is
$$P(\lambda_1(n) > 2\sqrt{n}+tn^{-1/6})\sim e^{-Ct^{3/2}}$$
(see e.g. Ledoux's reviews or
http://math.univ-lyon1.fr/~aubrun/recherche/smalldeviations/smalldeviations.pdf).
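As a numerical sanity check of this scale, here is a short sketch (all names are mine) under the normalization $E|H_{ij}|^2=1$, for which the spectral edge sits at $2\sqrt{n}$; the centered, rescaled values $(\lambda_1(n)-2\sqrt{n})\,n^{1/6}$ should stay $O(1)$ in $n$, with mean roughly the Tracy-Widom GUE mean (about $-1.77$):

```python
import numpy as np

rng = np.random.default_rng(0)

def gue_top_eig(n):
    # GUE normalized so that E|H_ij|^2 = 1; the spectral edge is then 2*sqrt(n).
    a = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    h = (a + a.conj().T) / np.sqrt(2)
    return np.linalg.eigvalsh(h)[-1]   # eigvalsh sorts ascending; take the max

for n in [100, 400, 900]:
    x = [(gue_top_eig(n) - 2 * np.sqrt(n)) * n ** (1 / 6) for _ in range(20)]
    print(n, np.mean(x), np.std(x))    # O(1) in n: Tracy-Widom fluctuations
```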
Now, the question is at what scale the recentered and rescaled maximal eigenvalues start behaving like independent random variables.
In my original post I essentially claimed this starts happening at scale roughly $n$. This is wrong: the correct scale is given by the so-called GUE minor process (described by Baryshnikov in 2001). Relevant to Gil's problem is the scaling limit computed by Forrester and Nagao: http://arxiv.org/pdf/0801.0100.pdf. They show that the correct scale is $n^{2/3}$; that is, the (centered and rescaled) maximal eigenvalues of GUE($n$) and GUE($m$) decorrelate if $m-n\gg n^{2/3}$ and are strongly correlated if $m-n\ll n^{2/3}$. Using that, and Borel-Cantelli, one gets the correct scale. The limsup can be taken over the sequence $a_n=n^3$ instead of $n$. Going along that sequence, you get from Borel-Cantelli that $\lambda_1(a_n)-2\sqrt{a_n}$ cannot exceed $t_{a_n}a_n^{-1/6}$ infinitely often as long as $$\sum_n e^{-C t_{a_n}^{3/2}}<\infty.$$ This will happen as soon as $t_{a_n}^{3/2}>C'\log n$, i.e.
$t_{a_n}\sim C'' (\log n)^{2/3}$. Unravelling the definitions (note that $\log a_n = 3\log n$, so the two differ only by a constant factor), this will give you an upper bound on the limsup that is $t_n\sim (\log n)^{2/3}$; that is, this will show that the function $F(n)$ is at most $C (\log n)^{2/3}/n^{1/6}$. The lower bound on the limsup will follow a similar pattern, using the asymptotic independence stated above, and will give the same function $F(n)$.
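For anyone who wants to poke at the minor-process picture numerically: the principal corners $H[:n,:n]$ of a single large GUE matrix realize the coupled family GUE($1$), GUE($2$), $\ldots$, so one can watch the centered, rescaled top eigenvalues move as $n$ grows. A single realization, as in the sketch below, only hints at the $n^{2/3}$ scale; actually seeing the decorrelation requires averaging correlations over many draws of $H$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
a = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
H = (a + a.conj().T) / np.sqrt(2)   # one big GUE; the corners H[:n,:n] are the minors

def rescaled_top(n):
    # centered and rescaled top eigenvalue of the n-th minor
    return (np.linalg.eigvalsh(H[:n, :n])[-1] - 2 * np.sqrt(n)) * n ** (1 / 6)

n = 400
step = int(n ** (2 / 3))            # ~54
for m in (n + 5, n + step, n + 8 * step):
    # close for m - n << n^{2/3}, (nearly) independent for m - n >> n^{2/3}
    print(m - n, rescaled_top(n), rescaled_top(m))
```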
One can also compute constants (for the a.s. convergence) in the same manner.
Filling in the details of the above is beyond a MathOverflow answer; I may return to it in detail later.
I speculate that the liminf will use the lower tail estimates, which are of the form $e^{-Ct^{3}}$. Working through the same argument (now $t^3\sim\log n$, so $t\sim(\log n)^{1/3}$) would then give $(\log n)^{1/3}/n^{1/6}$ as the correct scaling.
OLD (WRONG) POST:
The following is a partial answer with some hints on how to complete it; I may revisit it later with more details. The answer below refers to the max eigenvalue question, to the limsup, and does not attempt to get the constants, only the scale.
First, the upper bound: recall that the tail estimate for the max eigenvalue is
$$P(\lambda_1(n) > 2\sqrt{n}+t\sqrt{n})\sim e^{-Cnt^{3/2}}$$
(see e.g. Ledoux's reviews or
http://math.univ-lyon1.fr/~aubrun/recherche/smalldeviations/smalldeviations.pdf). Also recall (a kindly neighbor next door, A. Dembo, pointed out the usefulness of this observation in the current context) that the sequence $\lambda_1(n)$ is monotone in $n$. So, the limsup can be taken over a sequence $a_n=(1+\epsilon)^n$ instead of $n$. If you go over that sequence, you get from Borel-Cantelli that you cannot exceed $t_{a_n}\sqrt{a_n}$ infinitely often as long as $$\sum_n e^{-C t_{a_n}^{3/2} a_n}<\infty.$$ This will happen as soon as $a_nt_{a_n}^{3/2}>C'\log n$, i.e.
$t_{a_n}\sim C'' (\log n/a_n)^{2/3}$. Unravelling the definitions,
this will give you an upper bound on the limsup that is $t_n\sim
(\log \log n /n)^{2/3}$; that is, this will show that the function $F(n)$
is at most $C (\log \log n)^{2/3}/n^{1/6}$.
(Remark to @Carlo Beenakker: the difference with TSAW is that the tail estimates there are proportional to $t^3$, not $t^{3/2}$; this will be in line with the liminf computation in the RM case, where I expect the same discrepancy in the exponents of the $\log \log n$ as you noted, see below.)
For the lower bound (on the limsup), you need to take a sequence that
is sparse enough so that you have approximate independence.
Here is a leap of faith that would require a separate proof: if
you take an $N\times N$ GUE and resample the top $\epsilon N\times \epsilon N$
entries, the maximum eigenvalue does not change much if $\epsilon$ is small.
I believe this follows from delocalization results for the eigenvectors
of GUE, but have not checked details. If that is true, then for an exponentially
growing sequence $a_n=C^n$, the sequence $\lambda_1(a_n)$
will behave like an i.i.d. sequence, to which
you can now again apply the independent (second) Borel-Cantelli lemma to match the previous upper bound.
I speculate that the liminf will use the lower tail estimates, which are of the
form $e^{-C(nt^{3/2})^2}$. Working with the same argument would then give
$(\log \log n/n^2)^{1/3}$ as the correct scaling.
Best Answer
The purpose of this answer is to use the second moment method to make rigorous the heuristic argument of Michael Lugo. (Here is why his argument is only heuristic: If $N$ is a nonnegative integer random variable, such as the number of length-$r$ increasing consecutive sequences in a random permutation of $\{1,2,\ldots,n\}$, knowing that $E[N] \gg 1$ does not imply that $N$ is positive with high probability, because the large expectation could be caused by a rare event in which $N$ is very large.)
Theorem: The expected length of the longest increasing block in a random permutation of $\{1,2,\ldots,n\}$ is $r_0(n) + O(1)$ as $n \to \infty$, where $r_0(n)$ is the smallest positive integer such that $r_0(n)!>n$. ("Block" means consecutive subsequence $a_{i+1},a_{i+2},\ldots,a_{i+r}$ for some $i$ and $r$, with no conditions on the relative sizes of the $a_i$.)
Note: As Michael pointed out, $r_0(n)$ is of order $(\log n)/(\log \log n)$.
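Before the proof, a quick simulation (helper names are mine) comparing the average length of the longest increasing block with $r_0(n)$; per the theorem, the last two printed columns should differ by $O(1)$:

```python
import random

def r0(n):
    # smallest positive integer r with r! > n
    r, f = 1, 1
    while f <= n:
        r += 1
        f *= r
    return r

def longest_increasing_block(a):
    # length of the longest run a[i] < a[i+1] < ... of consecutive entries
    best = cur = 1
    for x, y in zip(a, a[1:]):
        cur = cur + 1 if x < y else 1
        best = max(best, cur)
    return best

for n in (10**3, 10**4, 10**5):
    trials = 50
    avg = sum(longest_increasing_block(random.sample(range(n), n))
              for _ in range(trials)) / trials
    print(n, r0(n), avg)
```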
Proof of theorem: Let $P_r$ be the probability that there exists an increasing block of length at least $r$. The expected length of the longest increasing block is then $\sum_{r \ge 0} r(P_r-P_{r+1}) = \sum_{r \ge 1} P_r$. We will bound the latter sum from above and below.
Upper bound: The probability $P_r$ is at most the expected number of increasing blocks of length $r$, which is exactly $(n-r+1)/r!$, since for each of the $n-r+1$ values of $i$ in $\{0,\ldots,n-r\}$ the probability that $a_{i+1},\ldots,a_{i+r}$ are in increasing order is $1/r!$. Thus $P_r \le n/r!$. By comparison with a geometric series of ratio $1/2$, we have $\sum_{r > r_0(n)} P_r \le \sum_{r > r_0(n)} n/r! \le n/r_0(n)! \le 1$. On the other hand $\sum_{1 \le r \le r_0(n)} P_r \le \sum_{1 \le r \le r_0(n)} 1 \le r_0(n)$, so $\sum_{r \ge 1} P_r \le r_0(n) + 1$.
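A numeric sanity check of this upper bound (a sketch; the cutoff $100$ in the sums is arbitrary and more than large enough here):

```python
import math

# Upper bound: P_r <= min(1, n/r!), and the argument above shows the
# resulting sum is at most r0(n) + 1.
for n in (10**3, 10**6, 10**9):
    r0 = next(r for r in range(1, 101) if math.factorial(r) > n)
    bound = sum(min(1.0, n / math.factorial(r)) for r in range(1, 101))
    print(n, r0, bound, bound <= r0 + 1)
```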
Lower bound: Here we need the second moment method. For $i \in \{1,\ldots,n-r\}$, let $Z_i$ be $1$ if $a_{i+1}<a_{i+2}<\ldots<a_{i+r}$ and $a_i>a_{i+1}$, and $0$ otherwise. (The added condition $a_i>a_{i+1}$ is a trick to reduce the positive correlation between nearby $Z_i$.) The probability that $a_{i+1}<a_{i+2}<\ldots<a_{i+r}$ is $1/r!$, and the probability that this holds while $a_i>a_{i+1}$ fails is $1/(r+1)!$, so $E[Z_i]=1/r! - 1/(r+1)!$. Let $Z=\sum_{i=1}^{n-r} Z_i$, so $$E[Z]=(n-r)(1/r! - 1/(r+1)!).$$

Next we compute the second moment $E[Z^2]$ by expanding $Z^2$. If $i=j$, then $E[Z_i Z_j] = E[Z_i] = 1/r! - 1/(r+1)!$; summing this over $i$ gives at most $n/r!$. If $0<|i-j|<r$, then $E[Z_i Z_j]=0$ since the inequalities are incompatible. If $|i-j|=r$, then $E[Z_i Z_j] \le 1/r!^2$ (the latter is the probability if we drop the added condition in the definition of $Z_i$ and $Z_j$). If $|i-j|>r$, then $Z_i$ and $Z_j$ are independent, so $E[Z_i Z_j] = (1/r! - 1/(r+1)!)^2$. Summing over all $i$ and $j$ shows that $$E[Z^2] \le \frac{n}{r!} + E[Z]^2 \le \left(1 + O(r!/n) \right) E[Z]^2.$$

The second moment method gives the second inequality in $$P_r \ge \operatorname{Prob}(Z \ge 1) \ge \frac{E[Z]^2}{E[Z^2]} \ge 1 - O(r!/n).$$ If $r \le r_0(n)-2$, then $r! \le (r_0(n)-1)!/(r_0(n)-1)$, so $r!/n \le 1/(r_0(n)-1)$, so $P_r \ge 1 - O(1/r_0(n))$. Thus $$\sum_{r \ge 1} P_r \ge \sum_{r=1}^{r_0(n)-2} \left( 1 - O(1/r_0(n)) \right) = r_0(n) - O(1).$$
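Finally, an empirical check of the quantities in this lower-bound proof (variable and function names are mine; a sketch, not part of the argument): for $r$ a bit below $r_0(n)$, the empirical mean of $Z$ should match $(n-r)(1/r!-1/(r+1)!)$, and the empirical $\operatorname{Prob}(Z \ge 1)$ should dominate $E[Z]^2/E[Z^2]$.

```python
import math, random

def z_stat(a, r):
    # Z = #{i : a[i] > a[i+1] < a[i+2] < ... < a[i+r]}  (0-based indexing)
    n = len(a)
    return sum(1 for i in range(n - r)
               if a[i] > a[i + 1]
               and all(a[i + k] < a[i + k + 1] for k in range(1, r)))

n, r, trials = 3000, 6, 400   # r_0(3000) = 7, so r = 6 is just below it
vals = [z_stat(random.sample(range(n), n), r) for _ in range(trials)]
m1 = sum(vals) / trials
m2 = sum(v * v for v in vals) / trials
ez = (n - r) * (1 / math.factorial(r) - 1 / math.factorial(r + 1))
print(m1, ez)                                            # empirical vs exact E[Z]
print(sum(v >= 1 for v in vals) / trials, ez * ez / m2)  # P(Z>=1) vs E[Z]^2/E[Z^2]
```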