I can explain a lot of what you are seeing.
(1) If all the roots of $f(x)$ are negative real numbers, then the coefficients of $f$ are log concave. Proof: Write $f(x) = \prod_i (x+\lambda_i)$ with $\lambda_i > 0$. Then the coefficients of $f$ are the elementary symmetric polynomials $e_k(\lambda_1, \lambda_2, \ldots, \lambda_n)$, and the elementary symmetric polynomials obey Newton's inequalities, which imply log concavity.
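As a quick numerical illustration of (1) (my own sanity check, not part of the argument), one can build a polynomial from arbitrary positive $\lambda_i$ and verify $c_k^2 \geq c_{k-1} c_{k+1}$:

```python
import numpy as np

# Build f(x) = prod_i (x + lambda_i) with lambda_i > 0, so every root is a
# negative real, and check log-concavity of the coefficient sequence.
rng = np.random.default_rng(0)
lambdas = rng.uniform(0.1, 5.0, size=12)   # arbitrary positive numbers

coeffs = np.array([1.0])
for lam in lambdas:
    # multiply the current polynomial by (x + lam)
    coeffs = np.convolve(coeffs, np.array([1.0, lam]))

# coeffs lists the e_k's (up to index order, which does not affect
# log-concavity); check c_k^2 >= c_{k-1} c_{k+1} for all interior k.
log_concave = all(coeffs[k] ** 2 >= coeffs[k - 1] * coeffs[k + 1]
                  for k in range(1, len(coeffs) - 1))
print(log_concave)
```

Newton's inequalities are strictly stronger (they carry binomial weights), so this check always passes for such $f$.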
(2) Let $s(a)$ and $t(a)$ be concave functions. I don't know how to make the bounds in what I'm about to say rigorous, so I'll state everything in an approximate sense. Suppose that $f_n(x) = \sum_{k=0}^n f_{k,n} x^k$ is a family of polynomials with $f_{k,n} \approx e^{n \cdot s(k/n)}$ and that $g_n(x)$ is a similar family of polynomials with $g_{k,n} \approx e^{n \cdot t(k/n)}$. Set $h_n(x) = f_n(x) g_n(x)$. Then the coefficient of $x^{2k}$ in $h_n$ is
$$\sum_{\ell=0}^{2k} f_{\ell,n} g_{2k-\ell,n} \approx \sum_{\ell=0}^{2k} e^{n \cdot ( s(\ell/n) + t((2k-\ell)/n) )}$$
$$ \approx \exp\Big( n \cdot \max_{\ell} \big( s(\ell/n) + t((2k-\ell)/n) \big) \Big) \approx \exp\Big(n \max_{0 \leq a \leq 2k/n} \big( s(a) + t(2k/n -a) \big) \Big) $$
If $s$ and $t$ are concave, then the function
$$u(b) = \max_{0 \leq a \leq 2b} \big( s(a) + t(2b-a) \big)$$
is also concave. I think that's what you're seeing when you take "fixed proportions of zeros randomly chosen in certain fixed intervals": There is some limit curve for zeroes chosen uniformly in one interval, and choosing multiple intervals combines them by the above rule.
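A toy illustration of this heuristic (my example, not from the answer above): multiply two polynomials whose log-coefficients follow concave profiles, and compare the log-coefficients of the product with the max-plus ("tropical") convolution of the two profiles.

```python
import numpy as np

# Two concave log-coefficient profiles (arbitrary choices for the demo).
n = 200
k = np.arange(n + 1)
s = -((k / n) - 0.3) ** 2          # concave profile for f
t = -2 * ((k / n) - 0.7) ** 2      # concave profile for g

f = np.exp(n * s)
g = np.exp(n * t)
h = np.convolve(f, g)              # coefficients of the product f(x) g(x)

# u[m] = max_l (n s_l + n t_{m-l}).  Since h_m is a sum of at most n+1
# positive terms, each at most exp(u_m), log h_m matches u_m up to log(n+1).
u = np.array([max(n * s[l] + n * t[m - l]
                  for l in range(max(0, m - n), min(m, n) + 1))
              for m in range(2 * n + 1)])
ok = bool(np.max(np.abs(np.log(h) - u)) <= np.log(n + 1))
print(ok)
```

The error between $\log h_m$ and the max-plus convolution is $O(\log n)$, which disappears after dividing by $n$, consistent with the approximate statement above.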
(3) Let $F(x,y) = \sum_n f_n(x) y^n$. If the singularities of $F$ are not too complicated, then there are good tools to extract the asymptotic behavior of the $f_{k,n}$, and those methods will "often" give concave curves of the sort you describe. I'll be more precise below about what I mean by "often."
$\def\CC{\mathbb{C}}$Let $\bar{U}$ be the set of $(x,y) \in \CC^2$ where $\sum f_{k,n} x^k y^n$ converges absolutely. Assume that $\bar{U}$ contains a neighborhood of $(0,0)$, and let $U$ be the interior of $\bar{U}$. Whether or not a point $(x,y)$ is in $U$ depends only on $(|x|, |y|)$. Let $D$ be the image of $U$ under $(x,y) \mapsto (\log |x|, \log |y|)$. Then $D$ is a convex set with the property that $(u,v) \in D$, $u' \leq u$, and $v' \leq v$ imply $(u', v') \in D$. See section 2 of these notes for much more.
For a positive number $a$, let $s(a) = - \sup_{(u,v) \in D} (au+v)$. Then it is often true that $\log f_{k,n} \approx n s(k/n)$. The function $\sup_{(u,v) \in D} (au+v)$ is (up to sign conventions) called the Legendre transform of the boundary of $D$.
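Here is a concrete worked example of this recipe (my example, not from the answer): take $F(x,y) = 1/(1 - y(1+x))$, so $f_n(x) = (1+x)^n$ and $f_{k,n} = \binom{n}{k}$. Then $D = \{(u,v) : e^v(1+e^u) < 1\}$, and the Legendre transform gives $s(a) = -\sup_u \big(au - \log(1+e^u)\big) = -a\log a - (1-a)\log(1-a)$, the binary entropy, which indeed matches $\frac1n \log \binom{n}{k}$.

```python
import math

# Compare (1/n) log C(n, k) with s(k/n) = -a log a - (1-a) log(1-a)
# for the family f_{k,n} = C(n,k) coming from F(x,y) = 1/(1 - y(1+x)).
n = 4000
a = 0.3
k = int(a * n)

# log C(n, k) via log-gamma to avoid overflow
log_binom = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
s_a = -a * math.log(a) - (1 - a) * math.log(1 - a)

ok = abs(log_binom / n - s_a) < 0.01
print(ok)
```

The discrepancy is the Stirling correction $\tfrac{1}{2n}\log\big(2\pi n a(1-a)\big)$, which vanishes as $n \to \infty$.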
What do I mean by often?
If we have a sequence $k_n$ with $k_n/n \to a$, it is always true that $\limsup_{n \to \infty} \frac{1}{n} \log |f_{k_n,n}| \leq s(a)$. Proof: Choose $r>s(a)$. Then there is a point $(u,v) \in D$ with $au+v = -r$. Then
$$f_{k_n,n} = \frac{1}{(2\pi i)^2}\int_{|x| = e^u,\ |y|=e^v} F(x,y)\, x^{-k_n} y^{-n} \frac{dx \, dy}{x y}$$
An easy bound gives $|f_{k_n, n}| \leq C \exp (- k_n u - n v) = C \exp(- n( (k_n/n) u +v) )$ for some constant $C$; since $(k_n/n) u + v \to au + v = -r$, this gives $\limsup_{n \to \infty} \frac{1}{n} \log |f_{k_n,n}| \leq r$.
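The Cauchy bound can be watched in action on the binomial example $F(x,y) = 1/(1-y(1+x))$, $f_{k,n} = \binom{n}{k}$ (again my illustration, with an arbitrary choice of interior point): on the torus $|x|=e^u$, $|y|=e^v$ with $(u,v)\in D$, the constant $C$ can be taken to be $\max |F| = 1/(1 - e^v(1+e^u))$.

```python
import math

# For a = 0.3, the optimizing u is log(a/(1-a)); pick v strictly inside
# D = {(u,v) : e^v (1 + e^u) < 1} and check C(n,k) <= C exp(-k u - n v).
a = 0.3
u = math.log(a / (1 - a))
v = -math.log(1 + math.exp(u)) - 0.01       # strictly inside D
Cmax = 1 / (1 - math.exp(v) * (1 + math.exp(u)))

ok = True
for n in range(10, 2000, 97):
    k = round(a * n)
    log_f = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    ok = ok and (log_f <= math.log(Cmax) - k * u - n * v)
print(ok)
```

Pushing $v$ toward the boundary value $-\log(1+e^u)$ tightens the exponential rate toward $s(a)$, at the cost of a blowing-up constant $C$, exactly as in the proof.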
Conversely, suppose the following conditions hold: there is a single point $(u,v) \in \partial D$ where $au+v$ achieves its maximum; $F$ extends to a meromorphic function on an open set $\Omega \subseteq \CC^2$ containing the closed polydisc $\{ |x| \leq e^u,\ |y| \leq e^v \}$, with a single simple pole along a divisor $\Delta \subset \Omega$; and there is a single point $(x,y) \in \Delta$ with $(\log |x|, \log |y|) = (u,v)$, and this point is a smooth point of $\Delta$. With all these hypotheses (and perhaps some I have forgotten), $(1/n) \log |f_{k_n,n}| \to s(a)$. See Pemantle and Wilson, part 1. See also this paper where Pemantle and Wilson provide 20 applications of their method, including many of the examples you give.
If $F$ extends to a meromorphic function on some $\Omega$, but the pole set is more complicated, you need to read Pemantle and Wilson's later papers. See especially 2, 3.
Best Answer
If I read your formulas correctly, you define $\tilde T_{2l}(x)=T_{2l}(x/2)-C_l$ and $\tilde T_{2l+1}(x)=T_{2l+1}(x/2)$, where $$C_l=l\sum_{j=0}^l (-1)^{l-j}\frac{(l+j-1)!}{(l-j)!(2j)!}\frac 1{j+1}\binom{2j}j.$$ This can be rewritten as $$C_l=(-1)^l l!\sum_{j=0}^l\frac{(-l)_j(l)_j}{j!(2)_j},$$ where $(a)_j=a(a+1)\dotsm(a+j-1)$. By the Chu-Vandermonde summation formula, $$C_l=(-1)^l l!\,\frac{(2-l)_l}{(2)_l}=\begin{cases}1, & l=0,\\ -1/2, & l=1,\\ 0, & l\geq 2.\end{cases}$$ So your polynomials are given by $$\tilde T_k(x)=T_k(x/2)-\delta_{0,k}-\frac 12\delta_{2,k}.$$ They satisfy the recursion $$\tilde T_{k+1}(x)=x\tilde T_k(x)-\tilde T_{k-1}(x),\qquad k\geq 4,$$ which follows from the standard Chebyshev recursion $T_{k+1}(y)=2yT_k(y)-T_{k-1}(y)$ with $y=x/2$, since the correction terms $\delta_{0,k}$ and $\delta_{2,k}$ vanish once $k-1\geq 3$.
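Both identities are easy to check by machine (my verification sketch, not part of the answer): exact rational arithmetic for the $C_l$ values, and NumPy's Chebyshev utilities for the recursion.

```python
import math
import numpy as np
from fractions import Fraction
from numpy.polynomial import chebyshev as ch
from numpy.polynomial import polynomial as P

def poch(a, j):
    """Pochhammer symbol (a)_j = a (a+1) ... (a+j-1), computed exactly."""
    r = Fraction(1)
    for i in range(j):
        r *= a + i
    return r

def C_l(l):
    # C_l = (-1)^l l! sum_j (-l)_j (l)_j / (j! (2)_j)
    s = sum(poch(-l, j) * poch(l, j) / (poch(1, j) * poch(2, j))
            for j in range(l + 1))
    return (-1) ** l * math.factorial(l) * s

cl_ok = (C_l(0) == 1 and C_l(1) == Fraction(-1, 2)
         and all(C_l(l) == 0 for l in range(2, 10)))

def T_tilde(k):
    # T~_k(x) = T_k(x/2) - delta_{0,k} - (1/2) delta_{2,k}, in power basis
    p = ch.cheb2poly([0] * k + [1])                        # T_k(y)
    p = np.array([c / 2.0 ** i for i, c in enumerate(p)])  # substitute y = x/2
    if k == 0:
        p[0] -= 1.0
    if k == 2:
        p[0] -= 0.5
    return p

# check T~_{k+1}(x) = x T~_k(x) - T~_{k-1}(x) for k >= 4
rec_ok = all(
    np.allclose(
        P.polysub(T_tilde(k + 1),
                  P.polysub(P.polymul([0.0, 1.0], T_tilde(k)),
                            T_tilde(k - 1))),
        0.0)
    for k in range(4, 12))

print(cl_ok and rec_ok)
```

The same check shows the recursion fails at $k=1$ and $k=3$, where the $\delta_{0,k}$ and $\delta_{2,k}$ corrections still interfere.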