I can explain a lot of what you are seeing.
(1) If all the roots of $f(x)$ are negative real numbers, then the coefficients of $f$ are always log concave. Proof: Let $f(x) = \prod (x+\lambda_i)$ with $\lambda_i > 0$. Then the coefficients of $f$ are the elementary symmetric polynomials $e_k(\lambda_1, \lambda_2, \ldots, \lambda_n)$, and the elementary symmetric polynomials obey Newton's inequalities $\left( e_k/\binom{n}{k} \right)^2 \geq \left( e_{k-1}/\binom{n}{k-1} \right)\left( e_{k+1}/\binom{n}{k+1} \right)$, which (together with log concavity of the binomial coefficients) imply $e_k^2 \geq e_{k-1} e_{k+1}$.
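A quick numeric illustration (my own sketch, not part of the argument): for random positive $\lambda_i$, the coefficient sequence satisfies both Newton's inequalities and plain log concavity.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 8
lam = rng.uniform(0.1, 10.0, size=n)   # roots of f are at -lam[i] < 0

# Coefficients of f(x) = prod_i (x + lam[i]); e[k] = e_k(lam).
e = np.array([1.0])
for l in lam:
    e = np.convolve(e, [1.0, l])

# Plain log concavity: e_k^2 >= e_{k-1} e_{k+1}.
log_concave = all(e[k]**2 >= e[k-1]*e[k+1] for k in range(1, n))

# Newton's inequalities (stronger): p_k^2 >= p_{k-1} p_{k+1}, p_k = e_k / C(n,k).
p = [e[k]/math.comb(n, k) for k in range(n + 1)]
newton = all(p[k]**2 >= p[k-1]*p[k+1] for k in range(1, n))

print(log_concave, newton)
```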
(2) Let $s(a)$ and $t(a)$ be concave functions. I don't know how to make the bounds in what I'm about to say rigorous, so I'll just say everything in the approximate sense. Suppose that $f_n(x) = \sum_{k=0}^n f_{k,n} x^k$ is a family of polynomials with $f_{k,n} \approx e^{n \cdot s(k/n)}$ and that $g_n(x)$ is a similar family of polynomials with $g_{k,n} \approx e^{n \cdot t(k/n)}$. Set $h_n(x) = f_n(x) g_n(x)$. Then the coefficient of $x^{2k}$ in $h_n$ is
$$\sum_{\ell=0}^{2k} f_{\ell,n} g_{2k-\ell,n} \approx \sum_{\ell=0}^{2k} e^{n \cdot ( s(\ell/n) + t((2k-\ell)/n) )}$$
$$ \approx \exp\left( n \cdot \max_{\ell} \left( s(\ell/n) + t((2k-\ell)/n) \right) \right) \approx \exp\left( n \max_{0 \leq a \leq 2k/n} \left( s(a) + t(2k/n - a) \right) \right) $$
If $s$ and $t$ are concave, then the function
$$u(b) = \max_{0 \leq a \leq 2b} \left( s(a) + t(2b-a) \right)$$
is also concave. I think that's what you're seeing when you take "fixed proportions of zeros randomly chosen in certain fixed intervals": There is some limit curve for zeroes chosen uniformly in one interval, and choosing multiple intervals combines them by the above rule.
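Here is a small numeric illustration of this combination rule (an illustrative sketch with arbitrary concave choices of $s$ and $t$; on a uniform grid, the discrete sup-convolution $u_i = \max_{j+k=i}(s_j + t_k)$ plays the role of $u(b)$):

```python
import numpy as np

N = 200
a = np.linspace(0.0, 1.0, N + 1)

# Two concave profiles sampled on a uniform grid (illustrative choices).
s = -(a - 0.3)**2
t = np.log1p(a) - a**2

# Discrete sup-convolution: u_i = max over j + k = i of (s_j + t_k).
u = np.full(2*N + 1, -np.inf)
for j in range(N + 1):
    for k in range(N + 1):
        u[j + k] = max(u[j + k], s[j] + t[k])

# The sup-convolution of concave sequences is concave:
# second differences are <= 0 (up to float error).
second_diff = u[2:] - 2*u[1:-1] + u[:-2]
print(bool(np.all(second_diff <= 1e-9)))
```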
(3) Let $F(x,y) = \sum_n f_n(x) y^n$. If the singularities of $F$ are not too complicated, then there are good tools to extract the asymptotic behavior of the $f_{k,n}$, and those methods will "often" give convex curves of the sort you describe. I'll be more precise about what I mean by often.
$\def\CC{\mathbb{C}}$Let $\bar{U}$ be the set of $(x,y) \in \CC^2$ where $\sum f_{k,n} x^k y^n$ converges absolutely. Assume that $\bar{U}$ contains a neighborhood of $(0,0)$, and let $U$ be the interior of $\bar{U}$. Whether or not a point $(x,y)$ is in $U$ depends only on $(|x|, |y|)$. Let $D$ be the image of $U$ under $(x,y) \mapsto (\log |x|, \log |y|)$. Then $D$ is a convex set with the property that $(u,v) \in D$, $u' \leq u$, and $v' \leq v$ imply $(u', v') \in D$. See section 2 of these notes for much more.
For a positive number $a$, let $s(a) = - \sup_{(u,v) \in D} (au+v)$. Then it is often true that $\log f_{k,n} \approx n s(k/n)$. The function $\sup_{(u,v) \in D} (au+v)$ is (up to sign conventions) called the Legendre transform of the boundary of $D$.
What do I mean by often?
If we have a sequence $k_n$ with $k_n/n \to a$, it is always true that $\limsup_{n \to \infty} \frac{1}{n} \log |f_{k_n,n}| \leq s(a)$. Proof: Choose $r>s(a)$. Then there is a point $(u,v) \in D$ with $au+v = -r$. Then
$$f_{k_n,n} = \frac{1}{(2\pi i)^2} \int_{|x| = e^u,\ |y|=e^v} F(x,y)\, x^{-k_n} y^{-n}\, \frac{dx\, dy}{x y}$$
An easy bound gives $|f_{k_n, n}| \leq C \exp(- k_n u - n v) = C \exp(- n((k_n/n) u + v))$ for some constant $C$; since $(k_n/n)u + v \to au+v = -r$, this gives $\limsup \frac{1}{n} \log |f_{k_n,n}| \leq r$.
Conversely, suppose the following conditions hold: There is a single point $(u,v) \in \partial D$ where $au+v$ achieves its maximum. There is an open set $\Omega$ in $\CC^2$ containing the closed polydisc $\{ |x| \leq e^u,\ |y| \leq e^v \}$ such that $F$ extends to a meromorphic function on $\Omega$, with a single simple pole along a divisor $\Delta \subset \Omega$. There is a single point $(x,y)$ in $\Delta$ with $(\log |x|, \log |y|) = (u,v)$, and this point is a smooth point of $\Delta$. With all these hypotheses (and perhaps some I have forgotten), $(1/n) \log |f_{k_n,n}| \to s(a)$. See Pemantle and Wilson, part 1. See also this paper where Pemantle and Wilson provide 20 applications of their method, including many of the examples you give.
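As a sanity check on this recipe, here is a small numeric experiment (my own illustrative example, not from the text): take $F(x,y) = 1/(1-y(1+x))$, so that $f_{k,n} = \binom{n}{k}$. The domain of convergence gives $D = \{(u,v) : v < -\log(1+e^u)\}$, and the Legendre transform works out to the binary entropy function, which indeed matches $\frac{1}{n}\log\binom{n}{k}$.

```python
import math

# F(x, y) = 1/(1 - y(1 + x)) = sum_n (1 + x)^n y^n, so f_{k,n} = C(n, k).
# Here D = {(u, v) : v < -log(1 + e^u)}, and
# s(a) = -sup_{(u,v) in D} (a u + v) = inf_u (log(1 + e^u) - a u);
# the infimum is attained at e^u = a/(1 - a), giving the entropy H(a).
def s(a):
    return -a*math.log(a) - (1 - a)*math.log(1 - a)

a = 0.3
for n in (100, 1000, 10000):
    k = round(a*n)
    gap = abs(math.log(math.comb(n, k))/n - s(k/n))
    print(n, gap)   # gap shrinks like (log n)/n
```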
If $F$ extends to a meromorphic function on some $\Omega$, but the pole set is more complicated, you need to read Pemantle and Wilson's later papers. See especially 2, 3.
The key is to write $U_n(x)$ in terms of $x$ explicitly. Let $x = \cos t$, then $\sin t = \sqrt{1-x^2}$, where we fixed a branch of square root so that $\sqrt{-1} = i$; the value of $U_n$ is independent of the choice. Then $e^{it} = x + i \sqrt{1 - x^2}$. For $|x| \ge 1$, we can thus write $e^{it} = x - \sqrt{x^2 - 1}$.
For $|x| \ge 1$, let $y = x - \sqrt{x^2 -1}$, $A = 1-\sqrt{x^2 -1}$ and $B = (1-x)y + 1 $. Observe that $0 < y \le 1$. By analytic continuation, we can write
$$(1-x)U_n(x) + U_{n-1}(x) = \frac{ A y^{-(n+1)} -B y^n }{2 \sqrt{x^2 -1}}.$$
For $x > \sqrt{2}$, $A < 0$ and $B > 0$. So it's pretty clear the function has no zero there.
A similar argument works for $x \le -1$.
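The closed form and the sign claims can be checked numerically; here is a small sketch (my own, computing $U_n$ by the standard three-term recurrence):

```python
import math

def U(n, x):
    # Chebyshev U_n via the recurrence U_{k+1} = 2x U_k - U_{k-1}.
    u_prev, u = 1.0, 2.0*x
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u = u, 2.0*x*u - u_prev
    return u

n, x = 7, 1.7                 # sample point with x > sqrt(2)
r = math.sqrt(x*x - 1.0)
y = x - r                     # 0 < y <= 1 for x >= 1
A = 1.0 - r
B = (1.0 - x)*y + 1.0

lhs = (1.0 - x)*U(n, x) + U(n - 1, x)
rhs = (A*y**(-(n + 1)) - B*y**n)/(2.0*r)
print(lhs, rhs, A < 0, B > 0)
```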
Best Answer
I'll plunge in here.
Edit: I've now added a proof, using Brownian motion. (Earlier this was only justified heuristically) Further revised Jan 5, 2011, to correct typos/formatting and to promote material from comments.
The $n$th Chebyshev $T$-polynomial $T_n(x) = \cos(n \arccos(x))$ is the unique degree $n$ polynomial that folds the interval $[-1,1]$ exactly over itself $n$ times, taking $1$ to $1$.
When $n$ is even, $T_n(-1) = 1$ as well, so by an affine transformation you can make it satisfy the inequalities of the question: $$f_n(x) = 1 - T_n(2 x - 1). $$
Here for example is the plot of $f_{10}$:
![alt text](http://dl.dropbox.com/u/5390048/Chebyshev10.jpg)
This has a maximum at $c = (1+\cos(\pi/10))/2 = .975528...$, pretty close to 1. (If a strict maximum is desired, modify the polynomial by adding $\epsilon x$, and change by a linear transformation of the domain so that $f(1)=0$.)
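A quick numeric check (an illustrative sketch, not part of the original answer): build $f_{10}$ and locate the interior maximum nearest to $1$, which should sit at $(1+\cos(\pi/10))/2$.

```python
import numpy as np

n = 10
# f_n(x) = 1 - T_n(2x - 1): folds [0, 1] over itself n times, f_n(0) = f_n(1) = 0.
f = lambda x: 1.0 - np.cos(n*np.arccos(2.0*x - 1.0))

# All n/2 interior maxima have value 2; the rightmost one is at
# x = (1 + cos(pi/n))/2, so restrict the search to a window near 1
# containing only that fold.
x = np.linspace(0.8, 1.0, 1_000_001)
c_numeric = x[np.argmax(f(x))]
print(c_numeric, (1.0 + np.cos(np.pi/n))/2.0)
```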
For odd degrees, you can restrict the Chebyshev polynomials to an interval from the first local maximum to 1, and renormalize in a similar way, to get the unique degree $n$ polynomial $V_n(x)$ that folds the unit interval exactly $n-1$ times over itself and has a double root at 0.
One way to know the existence of such a polynomial is this: extend the map of the interval to itself to all of $\mathbb C$ in a way that it is an $n+1$-fold branched cover of $\mathbb C$ (not yet analytic) whose branching pattern corresponds to the desired critical points and critical values. Pull back the complex structure in the range via this map. Using the Riemann mapping theorem, it can be seen that the resulting complex 1-manifold is $\mathbb C$, so you get a polynomial map. By symmetry, the critical points all lie on a straight line, so it can be renormalized as a map from the interval to itself. (This is a special case of more general theories, including Shabat polynomials and dessins d'enfants, as well as the theory of postcritically finite rational maps.)
For odd degree $n$, I think $V_n$ probably gives the maximal $c$.
Here's a heuristic argument: one can ask where the critical points in $\mathbb C$ are for a polynomial that satisfies the given constraints and maximizes $c$. Given the $n-1$ critical points $\{c_i\}$, the derivative of $f$ is, up to a constant, $\prod (x-c_i)$. To make $c$ large, we need the mean value of the integrand on $[0,c]$ to be small compared to the mean value of minus the integrand on $[c,1]$: since the two integrals add to $0$, this ratio of mean values equals the ratio of the lengths of $[c,1]$ and $[0,c]$. This seems to say that the $c_i$'s want to be close to --- actually, inside --- the interval $[0,c]$. The best way to squeeze them in seems to be to make the interval fold as described.
It's easy to relax from the extreme case. For example, in even degrees, just make an affine change to the larger interval $f_n^{-1}([1-t, \infty))$, $t \ge 0$; as $t \rightarrow \infty$, $c \rightarrow .5$. For the odd degree examples, add a linear function and renormalize in a similar way.
This is reminiscent of the phenomenon of monotonicity in the theory of iterated real polynomials, but simpler to establish.
Added: a proof, using Brownian Motion
There's a way to formulate the problem as a probability question. If you start a Brownian path from a point near infinity in the plane, it almost surely eventually hits the line segment $J = [0,1]$. The position of $c$ in this line segment is determined by the ratio of the probability that the path first hits $[0,c]$ vs $[c,1]$. (To get the exact function, you can map the complement of the line segment conformally to the complement of a circle; on a circle, hitting measure is proportional to arc length.)
Now suppose we have a degree $n$ polynomial $g$, as in the question, scaled so that $g([0,1]) = [0,1]$ and $g(0)=g(1) = 0$. As a complex polynomial, it defines a branched cover of $\mathbb C$ over $\mathbb C$. In 2 dimensions, conformal maps preserve the trajectories of Brownian motion: only the time parameter changes. Therefore, Brownian motion on the branched cover of the plane looks exactly like Brownian motion on the plane, but with the extra information of which of the $n$ sheets the trajectory is on at any given time. As the trajectory goes around the various critical values, the sheets are permuted. At any given time, if we just know the position of a Brownian path, the distribution on the sheets is uniform.
Let's denote by $J$ the unit interval $[0,1]$ in the domain (upstairs in the branched cover) and $K$ the same interval $[0,1]$ in the range (downstairs). Thus, $J$ is a union of line segments on sheets above $K$, and furthermore, the two subintervals of $J$, $[0,c]$ and $[c,1]$, both map surjectively to $[0,1]$.
Therefore, the first time the Brownian path downstairs crosses $[0,1]$, it has at least a $1/n$ probability of crossing the $[c,1]$ segment in the branched cover. For the even degree Chebyshev polynomial, as soon as it hits the segment $K$ downstairs it also hits $J$ upstairs, so the probability is exactly $1/n$: therefore, this is optimal. It is the unique optimal example for even degree, since if $g^{-1}(K) \ne J$, there would be a nonzero second chance for paths that hit $g^{-1}(K) \setminus J$ to continue on and still hit $[c,1]$.
The figures below illustrate this. The top figure shows the 6th Chebyshev polynomial, renormalized as above to take $[0,1] \rightarrow [0,1]$. The red interval is $J$, the caterpillar's skin is the inverse image of the circumscribed circle about $K$; also depicted is the inverse image of $\mathbb R$. Brownian motion starting at infinity has an equal probability of arriving in any of the 12 sectors (each of which opens out under the map to a half-plane), so the probability of arriving in the leftmost segment is exactly $1/6$; that segment has length $(1-\cos(\pi/6))/2 = .0669873\dots$.
![alt text](http://dl.dropbox.com/u/5390048/Chebyshev6.jpg)
The next figure (below) shows a comparison polynomial (also graphed, further down) with an order 5 zero at 0 and one other critical point at $c$, mapping with critical value $1$. The same data is shown. When a Brownian path starting from infinity first hits $K$ (downstairs), it has equal probability of hitting any of the 6 segments inside the various curves: one of the two red segments (in $J$), or the vein in one of the four leaves. In the 4/6 probability event that it does not hit $J$, when the Brownian path continues on it has some chance of hitting the top interval, so this probability is strictly greater than $1/6$.
![alt text](http://dl.dropbox.com/u/5390048/MonobumpFlower.jpg)
For the odd degree case, a little more is needed. Since the requirement of the question is that $g(0)=g(1)=0$, there are an even number of sheets above any point of $K$, so at least one sheet is absent in $J \setminus g^{-1}(K)$. Let's suppose first that $g^{-1}(K)$ is connected. In that case, we can use the Riemann mapping theorem to map its complement conformally to the exterior of a unit disk; there is a set of measure at least $2\pi/n$ that does not map to $J$. We can follow Brownian motion by letting it "reflect" whenever it hits this portion of the boundary of the unit disk, and continue on until it hits a part that corresponds to $J$. With this formulation, it's obvious that to minimize the probability that the continuing trajectory hits the sensitive area corresponding to $[c,1]$, we need to minimize its length and put it as far away from the sensitive area as possible. That's exactly what happens for $V_n$: on the circle the extra sheet is antipodal to the sensitive sheet.
A similar argument applies to the disconnected case, although without quite as simple a visual representation. It's easy to establish that any optimal polynomial must have the maximal number of sheets above each point in $K$. The hitting probability for the sensitive area for random walks starting at points $z$ is a harmonic function on $\mathbb C \setminus J$, with limit 1 along $[c,1]$ and 0 along $[0,c]$. This harmonic function has no critical points, so if there is a component of $g^{-1}(K)$ not attached to $J$, its mean on this component can be reduced by moving it toward 0, by moving the critical values not on $K$ toward $0$. Below is a picture of the optimal solution for degree 5. There's a short tail to the caterpillar where the Brownian path gets a second chance, but it's far away from the sensitive portion, so it has only a small chance of next hitting there rather than in $[0,c]$. The interval $[c,1]$ is comparatively short because it is exposed out at the end of the interval, but it is not as exposed, so not as short, as in the Chebyshev case.
![alt text](http://dl.dropbox.com/u/5390048/Chebyshev5.jpg)
When $n$ is even, there is a solution for $c$ between $(1-\cos(\pi/n))/2$ and $(1+\cos(\pi/n))/2$. If $n$ is odd, there is a solution for $c$ between $(1-\cos(\pi/n))/(1+\cos(\pi/n))$ and $2 \cos(\pi/n)/(1+\cos(\pi/n))$. Numerically, the low values for $c$ are $\{2, 0.5\}$, $\{3, 0.333333\}$, $\{4, 0.146447\}$, $\{5, 0.105573\}$, $\{6, 0.0669873\}$, $\{7, 0.0520951\}$, $\{8, 0.0380602\}$, $\{9, 0.0310912\}$, $\{10, 0.0244717\}$.
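The numerical low values can be reproduced directly from these formulas; a small sketch:

```python
import math

def c_low(n):
    # Low end of the range of achievable c, per the formulas above:
    # even n: (1 - cos(pi/n))/2; odd n: (1 - cos(pi/n))/(1 + cos(pi/n)).
    if n % 2 == 0:
        return (1 - math.cos(math.pi/n))/2
    return (1 - math.cos(math.pi/n))/(1 + math.cos(math.pi/n))

for n in range(2, 11):
    print(n, round(c_low(n), 7))
```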
**End of added proof**
For comparison, here are plots of the degree $n$ polynomials (unique up to a constant) that have an $(n-1)$-fold root at 0, a critical point at $c$, and take the value $0$ at 1. At first I guessed that these might give the optimal $c$, but for them $c = 1-1/n$, much smaller than for the Chebyshev polynomials. The plots are for $n = 2, 3, \dots, 10$.
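A quick check of the claim $c = 1-1/n$ for these comparison polynomials (an illustrative sketch; I take $g(x) = x^{n-1}(1-x)$ as the normalization):

```python
import numpy as np

n = 6
# Up to scaling, the degree-n polynomial with an (n-1)-fold root at 0
# and a root at 1 is g(x) = x^(n-1) (1 - x); its interior critical
# point is at c = 1 - 1/n.
g = lambda x: x**(n - 1)*(1.0 - x)

x = np.linspace(0.0, 1.0, 1_000_001)
c_numeric = x[np.argmax(g(x))]
print(c_numeric, 1.0 - 1.0/n)
```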
![alt text](http://dl.dropbox.com/u/5390048/InterpolatingFunctions.jpg)
However, these polynomials answer a different question: given $c$, what is the minimum degree of a polynomial that is $0$ at $0$ and $1$ and has a unique local maximum at $c$? These polynomials, also discussed by Wadim Zudilin, have that property. For such polynomials, the same technique as above can be used: for any candidate polynomial $f$, a Brownian path starting at infinity has probability $1/n$ of first hitting the interval $[0,c]$, probability $1/n$ of first hitting $[c,1]$, and probability $(n-2)/n$ of first hitting elsewhere on $f^{-1}([0,1])$. The same proof shows these examples are optimal.