Actually, I happened to write a short section on Wikipedia about the classic bounds here. To locate the zeros of a polynomial you can also use Rouché's theorem. You may also find interesting material, e.g., in the book by Pólya and Szegő.
rmk 1: note that the above bounds of course hold for complex roots as well.
rmk 2: the spirit of the square root formula given in the link is: assume you have a real monic polynomial $p$ of degree $n$, and you just know $a_{n-1}$ and $a_{n-2}$. These numbers put a constraint on the size of the roots. Remember that $a_{n-1}=-\sum_k z_k$ and $a_{n-2}=\sum_{j<k} z_j z_k$ are, up to sign, elementary symmetric functions of the roots $z_k:=x_k+iy_k$. In particular, for the real parts of the roots we have
$$\sum_{k=1}^n x_k =-a_{n-1}$$
and
$$\sum_{k=1}^n x_k^2 - \sum_{k=1}^n y_k^2 = a_{n-1}^2-2a_{n-2},$$
so that when all the roots are real, $\sum_k x_k^2 = a_{n-1}^2-2a_{n-2}$.
Within these constraints, the maximum, resp. the minimum, of a real root is found by maximizing, resp. minimizing, $x_1$; the method of Lagrange multipliers should easily lead (I didn't check) to (a generalization of) the given result.
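These symmetric-function identities are easy to check numerically. The following sketch (my own, not part of the remark) verifies them with numpy for a polynomial built from chosen real roots:

```python
import numpy as np

# Check the identities behind the bound: for monic
# p(x) = x^n + a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + ...,
# the sum of the roots is -a_{n-1}, and the sum of the squared
# roots is a_{n-1}^2 - 2 a_{n-2} (Newton's identity for p_2).
roots = np.array([3.0, -1.0, -2.5, 0.5])   # chosen real roots
coeffs = np.poly(roots)                    # monic coefficients, highest power first
a_n1, a_n2 = coeffs[1], coeffs[2]          # a_{n-1} and a_{n-2}

assert np.isclose(roots.sum(), -a_n1)
assert np.isclose((roots**2).sum(), a_n1**2 - 2*a_n2)
print("symmetric-function identities verified")
```

Since the roots here are real, the second assertion is exactly the constraint on $\sum_k x_k^2$ used in the maximization.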
I can explain a lot of what you are seeing.
(1) If all the roots of $f(x)$ are negative reals, then the coefficients of $f$ are log concave. Proof: Let $f(x) = \prod (x+\lambda_i)$ with $\lambda_i > 0$. Then the coefficients of $f$ are the elementary symmetric polynomials $e_k(\lambda_1, \lambda_2, \ldots, \lambda_n)$, which obey Newton's inequalities.
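A quick numeric illustration of (1) (my own, with arbitrarily chosen $\lambda_i > 0$): the coefficients $e_k$ are log concave, and the normalized values $p_k = e_k / \binom{n}{k}$ satisfy Newton's inequalities $p_k^2 \geq p_{k-1} p_{k+1}$.

```python
import numpy as np
from math import comb

# f(x) = prod (x + lambda_i) with lambda_i > 0: all roots negative reals.
lam = np.array([0.3, 1.0, 2.0, 5.0, 7.5])
n = len(lam)
c = np.poly(-lam)    # c[k] = e_k(lambda), since np.poly is monic, highest power first

# log concavity of the coefficients: e_k^2 >= e_{k-1} e_{k+1}
assert all(c[k]**2 >= c[k-1]*c[k+1] for k in range(1, n))

# the stronger Newton inequalities for p_k = e_k / C(n,k)
p = [c[k] / comb(n, k) for k in range(n + 1)]
assert all(p[k]**2 >= p[k-1]*p[k+1] for k in range(1, n))
print("log concavity and Newton's inequalities hold")
```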
(2) Let $s(a)$ and $t(a)$ be concave functions. I don't know how to make the bounds in what I'm about to say rigorous, so I'll just say everything in the approximate sense. Suppose that $f_n(x) = \sum_{k=0}^n f_{k,n} x^k$ is a family of polynomials with $f_{k,n} \approx e^{n \cdot s(k/n)}$ and that $g_n(x)$ is a similar family of polynomials with $g_{k,n} \approx e^{n \cdot t(k/n)}$. Set $h_n(x) = f_n(x) g_n(x)$. Then the coefficient of $x^{2k}$ in $h$ is
$$\sum_{\ell=0}^{2k} f_{\ell,n} g_{2k-\ell,n} \approx \sum_{\ell=0}^{2k} e^{n \cdot ( s(\ell/n) + t((2k-\ell)/n) )}$$
$$ \approx \exp\Big( n \cdot \max_{\ell} \big( s(\ell/n) + t((2k-\ell)/n) \big) \Big) \approx \exp\Big( n \max_{0 \leq a \leq 2k/n} \big( s(a) + t(2k/n - a) \big) \Big) $$
If $s$ and $t$ are concave, then the function
$$u(b) = \max_{0 \leq a \leq 2b} \big( s(a) + t(2b-a) \big)$$
is also concave. I think that's what you're seeing when you take "fixed proportions of zeros randomly chosen in certain fixed intervals": There is some limit curve for zeroes chosen uniformly in one interval, and choosing multiple intervals combines them by the above rule.
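Here is a rough numeric sketch of this max-plus picture (my own, with illustrative concave profiles $s$ and $t$ that I chose): multiplying the two polynomials and comparing log coefficients of the product against the max-plus convolution.

```python
import numpy as np

# Two polynomials whose log-coefficients follow concave profiles s and t.
n = 200
s = lambda a: -(a - 0.3)**2        # illustrative concave profile for f
t = lambda a: -2*(a - 0.7)**2      # illustrative concave profile for g
f = np.exp(n * s(np.arange(n + 1) / n))
g = np.exp(n * t(np.arange(n + 1) / n))
h = np.convolve(f, g)              # coefficients of the product f_n(x) g_n(x)

# (1/n) log h_m should be close to max_l ( s(l/n) + t((m-l)/n) ).
for m in (40, 100, 250, 350):
    lo, hi = max(0, m - n), min(n, m)
    trop = max(s(l / n) + t((m - l) / n) for l in range(lo, hi + 1))
    assert abs(np.log(h[m]) / n - trop) < 0.05
print("max-plus (tropical) convolution approximates the product")
```

The error term is $O(\log n / n)$, coming from replacing a sum of at most $n+1$ positive terms by its largest term.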
(3) Let $F(x,y) = \sum_n f_n(x) y^n$. If the singularities of $F$ are not too complicated, then there are good tools to extract the asymptotic behavior of the $f_{k,n}$, and those methods will "often" give concave curves of the sort you describe. I'll be more precise about what I mean by often.
$\def\CC{\mathbb{C}}$Let $\bar{U}$ be the set of $(x,y) \in \CC^2$ where $\sum f_{k,n} x^k y^n$ converges absolutely. Assume that $\bar{U}$ contains a neighborhood of $(0,0)$, and let $U$ be the interior of $\bar{U}$. Whether or not a point $(x,y)$ is in $U$ depends only on $(|x|, |y|)$. Let $D$ be the image of $U$ under $(x,y) \mapsto (\log |x|, \log |y|)$. Then $D$ is a convex set which obeys the property that $(u,v) \in D$, with $u' \leq u$ and $v' \leq v$ imply $(u', v') \in D$. See section 2 of these notes for much more.
For a positive number $a$, let $s(a) = - \sup_{(u,v) \in D} (au+v)$. Then it is often true that $\log f_{k,n} \approx n s(k/n)$. The function $\sup_{(u,v) \in D} (au+v)$ is (up to sign conventions) called the Legendre transform of the boundary of $D$.
What do I mean by often?
If we have a sequence $k_n$ with $k_n/n \to a$, it is always true that $\limsup_{n \to \infty} \frac{1}{n} \log |f_{k_n,n}| \leq s(a)$. Proof: Choose $r>s(a)$. Then there is a point $(u,v) \in D$ with $au+v = -r$. Then
$$f_{k_n,n} = \frac{1}{(2\pi i)^2} \int_{|x| = e^u,\ |y|=e^v} F(x,y)\, x^{-k_n} y^{-n}\, \frac{dx\, dy}{x y}$$
An easy bound gives $|f_{k_n, n}| \leq C \exp (- k_n u - n v) = C \exp(- n((k_n/n) u +v) )$ for some constant $C$.
Conversely, suppose the following conditions hold: There is a single point $(u,v) \in \partial D$ where $au+v$ achieves its maximum. There is an open set $\Omega$ in $\CC^2$ containing the closed polydisc $\{ |x| \leq e^u,\ |y| \leq e^v \}$ such that $F$ extends to a meromorphic function on $\Omega$, with a single simple pole along a divisor $\Delta \subset \Omega$. There is a single point $(x,y)$ in $\Delta$ with $(\log |x|, \log |y|) = (u,v)$, and this point is a smooth point of $\Delta$. With all these hypotheses (and perhaps some I have forgotten), $(1/n) \log |f_{k_n,n}| \to s(a)$. See Pemantle and Wilson, part 1. See also this paper where Pemantle and Wilson provide 20 applications of their method, including many of the examples you give.
If $F$ extends to a meromorphic function on some $\Omega$, but the pole set is more complicated, you need to read Pemantle and Wilson's later papers. See especially 2, 3.
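As a concrete sanity check (my own example, not from Pemantle and Wilson): take $F(x,y) = 1/(1 - y(1+x))$, so $f_{k,n} = \binom{n}{k}$. Here $D = \{(u,v) : e^v(1+e^u) < 1\}$, and computing the Legendre transform of its boundary gives $s(a) = -a\log a - (1-a)\log(1-a)$, the binary entropy, which matches the classical asymptotics of binomial coefficients.

```python
from math import comb, log

# s(a) = -sup_{(u,v) in D} (a u + v) for D = { e^v (1 + e^u) < 1 }
# works out to the binary entropy function.
def s(a):
    return -a*log(a) - (1 - a)*log(1 - a)

n, a = 2000, 0.3
k = int(a * n)
# (1/n) log C(n, an) -> s(a); the correction is O(log n / n)
assert abs(log(comb(n, k)) / n - s(a)) < 0.01
print("(1/n) log C(n, an) matches the Legendre transform s(a)")
```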
Best Answer
I don't have a complete proof yet, but I have a plausible conjecture. Let $\mu$ be a probability measure in the plane, define the potential $$u(z)=\int\log|1-z/t|d\mu(t).$$ Then I conjecture that $\mu\in M$ iff $u$ satisfies $u(z)\leq u(|z|)$ for all $z$.
It is evident that this condition is necessary. It seems that it is strictly stronger than the Obrechkoff condition. I don't think that this condition can be restated as a simple property of $\mu$ itself.
To prove the sufficiency, I am first going to restrict to a dense subclass of $\mu$ with convenient properties (it is clear that it is enough to prove sufficiency for a dense subclass). The convenient properties I have in mind are that $\mu$ does not charge some small angular sector $|\arg z|<\epsilon$, and that it behaves nicely near $0$ and $\infty$: say, it has some small atom at $-\epsilon$ and nothing else in the disc $|z|<100\epsilon$, and similarly at infinity. In addition, I want to require that $u(|z|)>u(z)$ for all $z$ except on the positive ray.
Then I am going to discretize the measure to obtain a polynomial, whose $(1/n)\log|P_n|$ approximates $u$ nicely near the positive ray, and apply the saddle point method to the integral $$\int_{|z|=r}\frac{P_n(z)}{z^k}\frac{dz}{z},$$ with $n\to\infty$, using the nice behavior near the positive ray, and obtain an asymptotic for the coefficients which will show that they are positive.
The difficulty is that the asymptotics must be uniform in $k$, but I hope to achieve this by the arrangement near $0$ and $\infty$ described above.
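The coefficient-extraction step itself is easy to sanity-check numerically. This sketch (mine, not part of the argument) evaluates the displayed contour integral by the trapezoid rule, which is exact on the circle for trigonometric polynomials, and recovers the coefficients of an arbitrary polynomial:

```python
import numpy as np

# The k-th coefficient of P_n equals (1/2 pi i) * integral over |z|=r of
# P_n(z) z^{-k} dz/z; parametrizing z = r e^{i theta} turns this into the
# average of P_n(z) z^{-k} over the circle.
coeffs = np.array([2.0, -3.0, 0.5, 7.0, 1.0])          # arbitrary P_n, ascending powers
Pn = np.polynomial.Polynomial(coeffs)

r, M = 1.5, 512                                        # radius and number of sample points
theta = 2 * np.pi * np.arange(M) / M
z = r * np.exp(1j * theta)

for k in range(len(coeffs)):
    integral = np.mean(Pn(z) / z**k)                   # trapezoid rule on the circle
    assert abs(integral - coeffs[k]) < 1e-9
print("contour integral recovers the coefficients")
```

With $M$ larger than the degree, the equally spaced average is exact up to floating-point error, since off-diagonal powers of $z$ average to zero on the circle.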
In fact, there is an (unpublished and unproved) conjecture of Alan Sokal that if a polynomial satisfies $|P(z)|<P(|z|)$ then some sufficiently high power has positive coefficients. This of course would imply sufficiency of my condition.
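Here is a small numeric illustration of the statement (my own example, not Sokal's or de Angelis's): the polynomial $P$ below has one negative coefficient, a grid check suggests $|P(z)|<P(|z|)$ off the positive ray, and already $P^2$ has positive coefficients.

```python
import numpy as np

# P(x) = 1 + x + x^2 - 0.1 x^3 + x^4 + x^5: one negative coefficient.
p = np.array([1.0, 1.0, 1.0, -0.1, 1.0, 1.0])          # ascending powers

def P(z):
    return sum(c * z**k for k, c in enumerate(p))

# Sample |P(z)| < P(|z|) away from the positive real axis (a numeric check
# on a grid, not a proof that the hypothesis holds).
for r in np.linspace(0.1, 10.0, 40):
    for theta in np.linspace(0.05, 2*np.pi - 0.05, 200):
        assert abs(P(r * np.exp(1j*theta))) < P(r)

p2 = np.convolve(p, p)                                 # coefficients of P^2
assert (p < 0).any() and (p2 > 0).all()
print("P has a negative coefficient, but P^2 has positive coefficients")
```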
ADDED on July 19. The above outline is correct; we are writing a proof which will soon be posted on arxiv.
ADDED on August 23. Here is the precise statement. A probability measure $\mu$ is a limit measure if and only if it is symmetric with respect to complex conjugation, and $u(z)\leq u(|z|)$ where $$u(z)=\int_{|\zeta|\leq 1}\log|z-\zeta|d\mu(\zeta)+\int_{|\zeta|>1}\log|1-z/\zeta|d\mu(\zeta).$$ (The potential I wrote earlier may be divergent for some probability measures, so it has to be modified a little bit). A proof of this is now available: https://arxiv.org/abs/1409.4640.
UPDATE on September 10, 2014. What I called "Sokal's Conjecture" above (Theorem 1 in the preprint cited above) turned out to be known before. It was proved by V. de Angelis, MR1976089. This was found as a result of David Handelman's answer to another MO question: Stability of real polynomials with positive coefficients.