Here is an argument that proves the conjecture. More generally, it shows that if $X_1,\dots,X_n$ is a "generic" sequence of positive real numbers, and we form a sum by permuting the terms randomly and putting random signs on the terms (uniform distribution over all possibilities), then the probability that all partial sums are positive is given by $P_n$ as expressed in the OP. By "generic" I mean that no nonempty signed subsequence sums to zero.
First consider a special case where the statement clearly holds: Assume that $X_1\gg X_2\gg X_3\gg\dots\gg X_n$ in the sense that the sign of any partial sum depends only on the sign of its dominating term. A valid signed permutation (one where all partial sums are positive) of $n$ terms must then be formed by inserting the term $X_n$ into a valid signed permutation of $X_1,\dots,X_{n-1}$. Conversely, if we insert $X_n$ into a valid signed permutation of $X_1,\dots,X_{n-1}$, the only way to obtain a non-valid signed permutation is to insert $X_n$ first and with a negative sign. Therefore the probability that a signed permutation of $n$ terms is valid is $$\frac12\cdot \frac34\cdot\frac56\cdots\frac{2n-1}{2n},$$ as required.
Next, let us move the point $(X_1,\dots,X_n)$ in $\mathbb{R}^n$ and see how the probability of a valid signed permutation might change. We need only consider what happens when we pass through a hyperplane on which some signed subsequence equals zero. Instead of introducing double indices, let me take a generic example: Suppose we pass through the hyperplane where the sum $X_3+X_5+X_6-X_2-X_9$ changes sign. At the moment the sign change occurs, the only signed permutations that go from valid to invalid, or the other way, are those where this particular sum occurs as the first five terms (in some order). Now, for every signed permutation that goes from valid to invalid, there is a corresponding one that goes from invalid to valid, obtained by reversing those first five terms and changing their signs. For instance, if $$(X_5+X_6-X_9+X_3-X_2) + \dots$$ goes from valid to invalid, then $$(X_2-X_3+X_9-X_6-X_5) + \dots$$ will go from invalid to valid.
The conclusion is that the probability of a signed permutation being valid (having all partial sums positive) will never change, and will therefore always be equal to the product given in the OP.
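Since the number of signed permutations is finite, this invariance can be checked by brute force for small $n$. Here is a minimal sketch in Python (function names are mine), enumerating all $2^n\,n!$ signed permutations of a generic sequence and comparing with the product formula; note that $(7,6,5,3)$ is generic but not of the dominated kind treated in the special case above:

```python
from fractions import Fraction
from itertools import permutations, product

def valid_fraction(xs):
    """Exact fraction of the 2^n * n! signed permutations of xs
    whose partial sums are all positive (brute-force enumeration)."""
    n = len(xs)
    total = valid = 0
    for perm in permutations(xs):
        for signs in product((1, -1), repeat=n):
            s, ok = 0, True
            for eps, x in zip(signs, perm):
                s += eps * x
                if s <= 0:
                    ok = False
                    break
            total += 1
            valid += ok
    return Fraction(valid, total)

def product_formula(n):
    """P_n = (1/2)(3/4)(5/6)...((2n-1)/(2n))."""
    p = Fraction(1)
    for k in range(1, n + 1):
        p *= Fraction(2 * k - 1, 2 * k)
    return p

# Two generic sequences: no nonempty signed subsequence sums to zero.
# (8,4,2,1) is dominated; (7,6,5,3) is not, yet the fraction is the same.
print(valid_fraction([8, 4, 2, 1]), product_formula(4))   # -> 35/128 35/128
print(valid_fraction([7, 6, 5, 3]))                       # -> 35/128
```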
UPDATE:
Instead of looking at what happens when we cross one of these hyperplanes, here is another way of seeing that $P_n$ is independent of $X=(X_1,\dots,X_n)$, which has the additional feature of showing that $$P_0P_n + P_1P_{n-1}+\cdots + P_nP_0 = 1.$$
Suppose that $X_1,\dots,X_n$ are generic as described above. Let $P_n(X)$ denote the probability that a random signed permutation will have all partial sums positive, and suppose for induction on $n$ that we have shown that $P_k(X) = P_k$ is independent of $X$ for $k=1,\dots,n-1$.
Now let $Y_1,\dots, Y_n$ be a random signed permutation of $X$, and consider the distribution of the $k$ for which the partial sum $Y_1+\dots+Y_k$ is minimized. I claim that the probability that this occurs for a particular $k$ is $P_kP_{n-k}$, since it means that all consecutive sums of the form $Y_{k+1}+Y_{k+2}+\dots+Y_m$ for $m>k$ must be positive, while similarly all sums of the form $Y_m + Y_{m+1}+\dots+Y_k$ for $m\leq k$ must be negative.
Since generically the minimum is unique, we have $$P_0P_n(X) + P_1P_{n-1}+\cdots + P_n(X)P_0 = 1,$$ which shows that $P_n(X)$ is independent of $X$.
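The convolution identity can be verified in exact arithmetic for small $n$; a quick sketch (Python, names mine):

```python
from fractions import Fraction

def P(n):
    """P_n = (1/2)(3/4)...((2n-1)/(2n)), with P_0 = 1."""
    p = Fraction(1)
    for k in range(1, n + 1):
        p *= Fraction(2 * k - 1, 2 * k)
    return p

# Check P_0 P_n + P_1 P_{n-1} + ... + P_n P_0 = 1 exactly.
for n in range(1, 10):
    conv = sum(P(k) * P(n - k) for k in range(n + 1))
    assert conv == 1, (n, conv)
print("identity holds for n = 1..9")
```

Equivalently, since $P_n=\binom{2n}{n}4^{-n}$, the identity says that the square of the binomial series $\sum_n P_n x^n=(1-x)^{-1/2}$ is $(1-x)^{-1}$, which is the power-series route mentioned below.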
I like this very much, since the equation $P_0P_n + P_1P_{n-1}+\cdots + P_nP_0 = 1$ is crucial for my proof of the Wallis product formula for pi in the December 2007 Monthly, http://www.math.chalmers.se/~wastlund/monthly.pdf. See also Don Knuth's 2010 Christmas tree lecture "Why pi?" under "Computer musings" at http://scpd.stanford.edu/knuth/index.jsp.
I did a proof by induction and some algebraic manipulation in the Monthly paper, and Knuth used power series and the binomial theorem some 20-25 minutes into his talk, but the argument motivated by this question is much nicer!
1) I believe it is possible to obtain $\mathbf P(M_n<a\ |\ \tau>n)$, where $\tau:=\inf\{k\ge 1: S_k\le 0\}$, in a more or less explicit way only in the case of the simple random walk. These computations can be found in Billingsley, Convergence of Probability Measures, Chapter 2.11. For large $n$ the computations become universal, as you can apply the FCLT; see below.
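For the simple random walk the quantities involved can also be computed exactly by dynamic programming over the walk's current position. A minimal sketch (Python, names mine; $M_n$ is taken to be the running maximum of the walk):

```python
from fractions import Fraction

def stay_prob(n, a=None):
    """P(S_k > 0 for all 0 < k <= n), additionally requiring S_k < a
    when a is given, for the simple +/-1 random walk started at 0."""
    half = Fraction(1, 2)
    f = {0: Fraction(1)}          # f[s] = P(survived so far, current position s)
    for _ in range(n):
        g = {}
        for s, p in f.items():
            for t in (s + 1, s - 1):
                if t <= 0 or (a is not None and t >= a):
                    continue      # walk killed: hit 0 or reached level a
                g[t] = g.get(t, Fraction(0)) + p * half
        f = g
    return sum(f.values())

# P(M_n < a | tau > n) = P(tau > n, M_n < a) / P(tau > n)
n, a = 20, 5
print(stay_prob(n, a) / stay_prob(n))
```

For even $n$ one can check against the classical identity $\mathbf P(\tau>2m)=\tfrac12\binom{2m}{m}4^{-m}$ for the unconstrained case.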
2) Generally, one can use Brownian motion approximations, as was suggested by fedja. Mathematical justification is given in the references below. Before going into this, I will say a few words about conditioning random walks to stay positive.
There are two ways to condition a random walk to be positive. The first is to condition on $\{\tau>n\}$, where $\tau:=\inf\{k\ge 1: S_k\le 0\}$. The second is to ''condition'' on $\{\tau=\infty\}$. Here you cannot condition directly, as $\mathbf P(\tau=\infty)=0$ (for a zero-mean, finite-variance random walk). The way to proceed is to use Doob's $h$-transform. As a result one obtains a Markov chain (a Lamperti Markov chain, in fact) which is a discrete-time analogue of the Bessel process.
Now I can move on to approximations as $n\to\infty$.
a) When $a$ is of order $\sqrt n$, you need to use the corresponding functional central limit theorem (FCLT); an ordinary CLT is not enough, since the maximum depends on the whole trajectory of the random walk.
The way to use functional limit theorems is described in Billingsley's book cited above; see Chapter 2.11 for an example with the unconditioned random walk.
For the first type of conditioning FCLT is proved in Iglehart, 1974, Functional Central Limit Theorems for Random Walks Conditioned to Stay Positive (doi:10.1214/aop/1176996607), see also Bolthausen, 1976, On a Functional Central Limit Theorem for Random Walks Conditioned to Stay Positive. Conditioning of the second type was considered in Bertoin and Doney (1994), On Conditioning a Random Walk to Stay Nonnegative (doi:10.1214/aop/1176988497). FCLT for the second type of conditioning can be found in Caravenna, Chaumont (2008), Invariance principles for random walks conditioned to stay positive (doi:10.1214/07-AIHP119). Finally, in higher dimensions you can use FCLT from http://arxiv.org/abs/1110.1254.
b) When $a$ is much larger than $\sqrt n$, we have $\mathbf P(M_n<a\ |\ \tau>n)\to 1$.
c) When $a\ll\sqrt n$, you are dealing with small deviation probabilities.
If you write $\mathbf P(M_n<a\ |\ \tau>n)=\mathbf P(M_n<a,\ \tau>n)/\mathbf P(\tau>n)$, then the denominator is covered by the classical estimate $\mathbf P(\tau>n)\sim Cn^{-1/2}$ as $n\to\infty$, while the numerator is
$$
\mathbf P(\tau>n,\ M_n<a)=\mathbf P(\mbox{the random walk $S_k$ stays in $(0,a)$ for all } 0<k\le n).
$$
Now, if $a\sim An^{1/2-\delta}$ for some $\delta\in(0,1/2)$, you can split the trajectory into $n/a^2$ pieces of length $a^2$ and use independence of the increments to obtain the estimate
$$
\mathbf P(\tau>n, M_n<a)\le \left(\mathbf P(\mbox{random walk $S_k$ stays in (0,2a) for all } 0<k\le a^2)\right)^{n/a^2}
$$
Now, the probability inside the parentheses is on the right scale, and you can apply the FCLT to get
$$
\mathbf P(\mbox{random walk $S_k$ stays in (0,2a) for all } 0<k\le a^2)\to C(A)\in(0,1).
$$
This results in the following upper bound
$$
\mathbf P(\tau>n, M_n<a) \le C(A)^{n/a^2}\approx C(A)^{n^{2\delta}}.
$$
As $C(A)\in(0,1)$, this probability is quite small.
Best Answer
Looking at fedja's comment that you mentioned, we can formalize your description as follows: $$\rho_p=g_p(0),$$ where $g_p$ is the pdf of $$T_p:=\sum_{i=1}^p V_i=2S_p-p,$$ $S_p:=\sum_{i=1}^p U_i$, the $U_i$'s are i.i.d. random variables (r.v.'s) uniformly distributed on $[0,1]$, and $V_i:=2U_i-1$, so that the $V_i$'s are i.i.d. r.v.'s uniformly distributed on $[-1,1]$.
Note that $g_p(t)=\frac12\,f_p(\frac{p+t}2)$ for real $t$, where $f_p$ is the pdf of $S_p$. So, by the Irwin–Hall formula, $$\rho_p=g_p(0)=\frac12\,f_p\Big(\frac p2\Big) =\frac1{2(p-1)!}\sum_{k=0}^{\lfloor p/2\rfloor}(-1)^{k}\binom pk\Big(\frac p2-k\Big)^{p-1}.$$
In particular, the respective values of $\rho_p$ for $p=1,2,3,4,5$ are $\frac{1}{2},\frac{1}{2},\frac{3}{8},\frac{1}{3},\frac{115}{384}$.
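As a sanity check, these values can be reproduced in exact rational arithmetic by evaluating the Irwin–Hall density at $p/2$ (a small sketch; `rho` is my name for $\rho_p$):

```python
from fractions import Fraction
from math import comb, factorial

def rho(p):
    """rho_p = (1/2) f_p(p/2), with f_p the Irwin-Hall density of
    the sum of p i.i.d. uniforms on [0, 1]."""
    s = sum((-1) ** k * comb(p, k) * Fraction(p - 2 * k, 2) ** (p - 1)
            for k in range(p // 2 + 1))
    return s / (2 * factorial(p - 1))

print([rho(p) for p in range(1, 6)])
# -> [Fraction(1, 2), Fraction(1, 2), Fraction(3, 8), Fraction(1, 3), Fraction(115, 384)]
```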