1) I believe it is possible to obtain $P(M_n<a\ | \tau>n)$, where $\tau:=\inf\{n\ge 1: S_n\le 0\}$, in a more or less explicit way only in the case of the simple random walk. These computations can be found in Billingsley, Convergence of Probability Measures, Chapter 2.11. For large $n$ these computations become universal, as you can apply the FCLT; see below.
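For the simple symmetric walk you can at least sanity-check this conditional probability by simulation. A minimal Monte Carlo sketch (function name and parameters are mine, purely illustrative; it is not the exact reflection-principle computation from Billingsley):

```python
import random

def estimate_conditioned_max(n, a, trials=50_000, seed=0):
    """Monte Carlo estimate of P(M_n < a | tau > n) for the simple
    symmetric random walk, where tau = inf{k >= 1 : S_k <= 0} and
    M_n = max_{k <= n} S_k."""
    rng = random.Random(seed)
    survived = 0   # paths with tau > n
    below = 0      # of those, paths with M_n < a
    for _ in range(trials):
        s, m, alive = 0, 0, True
        for _ in range(n):
            s += 1 if rng.random() < 0.5 else -1
            if s <= 0:           # walk hit 0 or below: tau <= n
                alive = False
                break
            m = max(m, s)
        if alive:
            survived += 1
            if m < a:
                below += 1
    return below / survived if survived else float("nan")

print(estimate_conditioned_max(100, 15))
```

With $a$ of order $\sqrt n$ (here $15 \approx 1.5\sqrt{100}$) the estimate sits strictly between 0 and 1, matching regime a) below.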
2) Generally, one can use Brownian motion approximations, as was suggested by fedja. Mathematical justification is given in the references below. Before going into this, I will say a few words about conditioning random walks to stay positive.
There are two ways to condition a random walk to be positive. The first way is to condition on $\{\tau>n\}$, where $\tau:=\inf\{n\ge 1: S_n\le 0\}$. The second way is to ''condition'' on $\{\tau=\infty\}$. Here you cannot use direct conditioning, since $\mathbf P(\tau=\infty)=0$ (for a zero-mean, finite-variance random walk). So the way to proceed is to use Doob's $h$-transform. As a result one obtains a Markov chain (a Lamperti Markov chain, in fact) which is a discrete-time analogue of the Bessel process.
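For the simple symmetric walk this $h$-transform is explicit: $h(x)=x$ is harmonic for the walk killed at $0$, and the transformed chain jumps from $x$ to $x+1$ with probability $(x+1)/(2x)$ and to $x-1$ with probability $(x-1)/(2x)$. A small sketch of this chain (function names are mine):

```python
import random

def bessel3_like_step(x, rng):
    """One step of the simple symmetric random walk conditioned (via the
    Doob h-transform with h(x) = x) to stay positive: from x >= 1 jump
    to x+1 with probability (x+1)/(2x), to x-1 with probability
    (x-1)/(2x).  This is a discrete analogue of the 3-dimensional
    Bessel process."""
    return x + 1 if rng.random() < (x + 1) / (2 * x) else x - 1

def sample_path(n, x0=1, seed=0):
    """Sample n steps of the h-transformed chain started at x0 >= 1."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        path.append(bessel3_like_step(path[-1], rng))
    return path

path = sample_path(1000)
print(min(path))  # always >= 1: from x = 1 the chain moves up with probability 1
```

Note the transition probabilities sum to $1$ and the state $0$ is never reached, which is exactly the effect of conditioning on $\{\tau=\infty\}$.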
Now I can move on to approximations as $n\to\infty$.
a) When $a$ is of order $\sqrt n$, you need to use the corresponding Functional Central Limit Theorem (FCLT); a functional limit theorem is needed because the maximum depends on the whole trajectory of the random walk.
The way to use functional limit theorems is described in Billingsley's book cited above; you can see an example for the unconditioned random walk in Chapter 2.11.
For the first type of conditioning FCLT is proved in Iglehart, 1974, Functional Central Limit Theorems for Random Walks Conditioned to Stay Positive (doi:10.1214/aop/1176996607), see also Bolthausen, 1976, On a Functional Central Limit Theorem for Random Walks Conditioned to Stay Positive. Conditioning of the second type was considered in Bertoin and Doney (1994), On Conditioning a Random Walk to Stay Nonnegative (doi:10.1214/aop/1176988497). FCLT for the second type of conditioning can be found in Caravenna, Chaumont (2008), Invariance principles for random walks conditioned to stay positive (doi:10.1214/07-AIHP119). Finally, in higher dimensions you can use FCLT from http://arxiv.org/abs/1110.1254.
b) When $a$ is much larger than $\sqrt n$, then $\mathbf P(M_n<a\ |\ \tau>n)\to 1$.
c) When $a\ll\sqrt n$ you are dealing with a small-deviation probability.
If you write $\mathbf P(M_n<a\ |\ \tau>n)=\mathbf P(M_n<a,\ \tau>n)/\mathbf P(\tau>n)$, then you can use the approximation $\mathbf P(\tau>n)\sim Cn^{-1/2}$, $n\to\infty$, together with
$$
\mathbf P(\tau>n, M_n<a)=\mathbf P(\mbox{random walk $S_k$ stays in $(0,a)$ for all } 0<k\le n).
$$
Now, if $a\sim An^{1/2-\delta}$ for some $\delta\in(0,1/2)$, you can split this trajectory into $n/a^2$ parts of length $a^2$ and use independence to obtain the estimate
$$
\mathbf P(\tau>n, M_n<a)\le \left(\mathbf P(\mbox{random walk $S_k$ stays in (0,2a) for all } 0<k\le a^2)\right)^{n/a^2}
$$
Now, the probability inside the parentheses is on the right scale, and you can apply the FCLT to get
$$
\mathbf P(\mbox{random walk $S_k$ stays in (0,2a) for all } 0<k\le a^2)\to C(A)\in(0,1).
$$
This results in the following upper bound
$$
\mathbf P(\tau>n, M_n<a) \le C(A)^{n/a^2}\approx C(A)^{n^{2\delta}}.
$$
As $C(A)$ is in $(0,1)$, this probability is quite small.
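This decay is easy to probe numerically for the simple symmetric walk. The sketch below (names and parameters are mine, chosen so the events are not too rare for Monte Carlo) estimates $\mathbf P(\tau>n,\ M_n<a)$ directly:

```python
import random

def stay_in_strip(n, a, trials=100_000, seed=1):
    """Monte Carlo estimate of P(tau > n, M_n < a) for the simple
    symmetric random walk: the probability that S_k stays strictly
    between 0 and a for all 0 < k <= n."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s, ok = 0, True
        for _ in range(n):
            s += 1 if rng.random() < 0.5 else -1
            if s <= 0 or s >= a:   # left the strip (0, a)
                ok = False
                break
        if ok:
            hits += 1
    return hits / trials

p1 = stay_in_strip(50, 10)
p2 = stay_in_strip(100, 10)
print(p1, p2)
```

Doubling $n$ at fixed $a$ roughly squares the estimate, which is consistent with the $C(A)^{n/a^2}$ behaviour of the upper bound.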
Best Answer
Let us divide the (time) interval $[0,n]$ into $n/t$ subintervals of length $t$. Let us call the $k$th interval good if, during that interval, the random walk spends at least $t/5$ units of time to the left of $x_k-\sqrt{t}$ and at least $t/5$ units of time to the right of $x_k+\sqrt{t}$ (here, $x_k=X_{(k-1)t}$ is the position of the walk at the beginning of the $k$th time interval). Clearly, the events ($\{k\mbox{th interval is good}\}$, $k=1,\ldots,n/t$) are independent, and there is $\alpha>0$ such that the probability that an interval is good is at least $\alpha$, uniformly in $t$ (you can prove this formally by, e.g., comparing the random walk to Brownian motion).
Next, a good interval contributes at least $\frac{t}{5}\times (\sqrt{t})^2 = \frac{t^2}{5}$ to the sum. Moreover, a Chernoff bound for the binomial distribution implies that, with probability at least $1-\exp(-c' \frac{n}{t})$, at least $\frac{\alpha}{2}\times\frac{n}{t}$ intervals will be good. So, with at least the above probability, the sum will be at least $\frac{\alpha}{2}\times\frac{n}{t}\times \frac{t^2}{5} = \frac{\alpha t}{10n}n^2$. Now take, e.g., $t=n/\log^2 n$ (or $t=n/(C\log n)$ for large $C$) to get the statement you are aiming at.
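The uniform lower bound $\alpha$ can also be probed numerically for the simple symmetric walk. A hedged sketch (the function names and the Monte Carlo wrapper are mine) checking the good-interval event on one length-$t$ segment, measured relative to its starting point:

```python
import random

def is_good(steps):
    """steps: list of +/-1 increments of one length-t segment of the
    walk, relative to its starting point.  The segment is 'good' if the
    walk spends at least t/5 steps below -sqrt(t) and at least t/5
    steps above +sqrt(t)."""
    t = len(steps)
    r = t ** 0.5
    s = 0
    below = above = 0
    for dx in steps:
        s += dx
        if s < -r:
            below += 1
        elif s > r:
            above += 1
    return below >= t / 5 and above >= t / 5

def good_probability(t, trials=20_000, seed=2):
    """Monte Carlo estimate of the probability that a length-t segment
    of the simple symmetric random walk is good."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        steps = [1 if rng.random() < 0.5 else -1 for _ in range(t)]
        hits += is_good(steps)
    return hits / trials

print(good_probability(400))
```

By Donsker's theorem the estimate stabilizes as $t\to\infty$ (the event converges to the analogous occupation-time event for Brownian motion on $[0,1]$), which is the uniform-in-$t$ lower bound $\alpha$ used above.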