What a great question! The term "log-log convexity" is indeed used to refer to the property you have described. It pops up in literature on polynomial optimization, and also in a bit of a disguised manner in geometric programming.
Let's look at the latter case, because to be honest that's what I'm personally familiar with. Consider an expression of the form
$$\sum_{k=1}^K c_k x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}$$
The variables here are $x_i> 0$, $i=1,2,\dots, n$, and the fixed parameters are the coefficients $c_k>0$, $k=1,2,\dots,K$, and the exponents $a_{ik}$, $1\leq i\leq n$, $1\leq k\leq K$. This expression is what is known as a posynomial. In general, it is neither convex nor log-convex.
(Side note: if all of the exponents are nonnegative, and $\sum_i a_{ik}\leq 1$ for each $k=1,2,\dots,K$, then this expression is concave. But we want to consider the case where the exponents are not constrained in this way.)
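To make the definition concrete, here is a minimal sketch of a posynomial evaluator; the helper name `posynomial` and the particular coefficients and exponents are made up for illustration.

```python
import numpy as np

def posynomial(x, c, A):
    """Evaluate sum_k c_k * prod_i x_i**A[i, k] for x > 0.

    x : (n,) positive variables
    c : (K,) positive coefficients
    A : (n, K) arbitrary real exponents
    """
    x = np.asarray(x, dtype=float)
    # prod_i x_i**a_ik for each term k
    terms = np.prod(x[:, None] ** A, axis=0)
    return float(c @ terms)

# Hypothetical example: p(x1, x2) = 2*x1*x2 + x1**-0.5 * x2**3
c = np.array([2.0, 1.0])
A = np.array([[1.0, -0.5],
              [1.0,  3.0]])
print(posynomial([1.0, 1.0], c, A))  # 2*1*1 + 1*1 = 3.0
```

Note that negative exponents like $-0.5$ are perfectly legal here, which is exactly why the variables must be strictly positive.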
Now suppose I define $x_j = e^{y_j}$, and substitute. This converts the expression to
$$\sum_{k=1}^K c_k e^{a_{1k}y_1} e^{a_{2k}y_2} \cdots e^{a_{nk}y_n}
= \sum_{k=1}^K c_k e^{\sum_{i=1}^n a_{ik}y_i} =
\sum_{k=1}^K \mathop{\textrm{exp}}\left(\textstyle\sum_{i=1}^n a_{ik}y_i+\log c_k\right)$$
Notice how we have depended upon the positivity of the coefficients $c_k$ so we could move them into the exponents; and notice how the exponents are affine functions of $y$.
Now, if I take the logarithm, I get
$$\log \sum_{k=1}^K \mathop{\textrm{exp}}\left(\textstyle\sum_{i=1}^n a_{ik}y_i+\log c_k\right) = f(Ay+\bar{c})$$
where $A\in\mathbb{R}^{K\times n}$ collects the $a_{ik}$ values (so that the $k$th entry of $Ay$ is $\sum_i a_{ik}y_i$), $\bar{c}\in\mathbb{R}^K$ collects the $\log c_k$ values, and $f$ is the convex log-sum-exp function
$$f(z) \triangleq \log \sum_{k=1}^K \mathop{\textrm{exp}}(z_k).$$
So in other words, by doing a logarithmic transformation of the variables, and taking the logarithm of the expression, we have arrived at a convex result. What this means is: a posynomial function is a log-log-convex function of its inputs.
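A quick numerical sanity check of this claim, using a made-up one-line posynomial $p(x_1,x_2)=x_1x_2+1/x_1$: the midpoint-convexity test fails for $p$ itself but passes everywhere for its log-log transform $g(y)=\log p(e^y)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def p(x):
    # Hypothetical posynomial p(x1, x2) = x1*x2 + 1/x1
    return x[0] * x[1] + 1.0 / x[0]

def g(y):
    # Log-log transform: g(y) = log p(exp(y))
    return np.log(p(np.exp(y)))

# p itself is NOT convex: the midpoint test fails for these points.
a, b = np.array([1.0, 4.0]), np.array([4.0, 1.0])
mid = 0.5 * (a + b)
print(p(mid) > 0.5 * (p(a) + p(b)))  # True -> convexity violated

# g passes the midpoint convexity test at many random point pairs.
ok = all(
    g(0.5 * (u + v)) <= 0.5 * (g(u) + g(v)) + 1e-12
    for u, v in ((rng.normal(size=2), rng.normal(size=2)) for _ in range(1000))
)
print(ok)  # True
```

A random spot-check is not a proof, of course, but it illustrates why modeling tools accept posynomials only after this change of variables.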
(An astute reader will see that the expression was already convex before we took that final logarithm. But the log-sum-exp function tends to have better numerical properties than the sum-exp function. Also, when $K=1$, as often occurs in practice, the resulting function is affine, and it's useful to exploit that, of course!)
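The "better numerical properties" point can be demonstrated with the standard max-shift trick for log-sum-exp (a minimal sketch; libraries such as SciPy ship an equivalent routine):

```python
import numpy as np

def logsumexp(z):
    """Numerically stable log(sum(exp(z))) via the max-shift trick:
    log sum exp(z_k) = m + log sum exp(z_k - m), with m = max(z)."""
    m = np.max(z)
    return m + np.log(np.sum(np.exp(z - m)))

z = np.array([1000.0, 1000.0])

# Naive sum-exp overflows: exp(1000) is inf in double precision.
with np.errstate(over="ignore"):
    print(np.log(np.sum(np.exp(z))))  # inf

# The shifted version is exact: log(2 * e^1000) = 1000 + log(2)
print(logsumexp(z))  # 1000.6931...
```

The shifted arguments `z - m` are all nonpositive, so `exp` can never overflow, and the result only loses precision in extreme cancellation cases.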
In true geometric programming, posynomials are the only log-log-convex functions considered. Even in generalized geometric programming, the models are converted to pure geometric form, so again, posynomials remain the basic nonlinear construct. But it would be entirely possible to work with a broader family of log-log-convex functions here.
One thing I will point out is that standard convexity and log-log-convexity don't mix very well. That is to say, you usually can't mix convex expressions of $x$ and log-convex expressions of $y=\log x$ in the same model, if you wish to preserve the theoretical and practical benefits of convex optimization. (I leave the cases where you can do so to the reader; hint: consider $y\leq \log x$.) That said, you can mix convex and log-convex functions of $y$ in the same model (e.g., sum-exp and log-sum-exp).
(Log-)convex functions need not be twice differentiable, so any proof through $\frac{d^2}{dx^2}$ lacks generality. On the other hand, $e^x$ is both convex and log-convex, and we may ask when the composition of two convex functions is convex. Assume that $f(x)$ is convex. Then $g(x)=e^{f(x)}$ is convex iff
$$ g(\lambda x+(1-\lambda) y) \leq \lambda g(x) + (1-\lambda) g(y)\qquad \forall \lambda\in[0,1] $$ which is equivalent to
$$ \exp\left[f(\lambda x+(1-\lambda) y)\right]\leq \lambda e^{f(x)}+(1-\lambda)e^{f(y)}\qquad \forall\lambda\in[0,1]. $$ By the convexity of $f$ we know that $f(\lambda x+(1-\lambda) y)\leq \lambda f(x)+(1-\lambda)f(y)$.
Since $\exp$ is increasing we get $$ \exp\left[f(\lambda x+(1-\lambda) y)\right]\leq \exp\left[\lambda f(x)+(1-\lambda)f(y)\right] $$ unconditionally, and since $\exp$ is convex we get $$\exp\left[\lambda f(x)+(1-\lambda)f(y)\right]\leq \lambda e^{f(x)}+(1-\lambda)e^{f(y)}$$ as wanted. In other terms, if $a(x)$ and $b(x)$ are convex functions and $a(x)$ is (weakly) increasing, then $(a\circ b)(x)$ is a convex function. It follows that any log-convex function is also convex, being the exponential of a convex function.
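Since the argument above never uses differentiability, it applies to kinked functions too. A quick numerical illustration with $f(x)=|x|$, which is convex but not differentiable at $0$: its exponential $e^{|x|}$ still passes a midpoint-convexity spot-check, including near the kink.

```python
import numpy as np

rng = np.random.default_rng(1)

f = abs                      # convex, but not differentiable at 0
g = lambda x: np.exp(f(x))   # convex by the composition rule above

# Midpoint convexity check at random point pairs straddling the kink.
ok = all(
    g(0.5 * (x + y)) <= 0.5 * (g(x) + g(y)) + 1e-12
    for x, y in rng.normal(size=(1000, 2))
)
print(ok)  # True
```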
In general, nothing can be said about the composition of a concave function (like $\log$) with a convex function. If we take $f_1(x)=x^2$, $f_2(x)=e^{x^2}$, $f_3(x)=x e^{x\sqrt{x}}$, then over $\mathbb{R}^+$ all of $f_1,f_2,f_3$ are convex, but $\log f_1$ is concave, $\log f_2$ is convex, and $\log f_3$ is neither convex nor concave.
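The last claim is easy to verify numerically: $\log f_3(x) = \log x + x^{3/2}$, whose second derivative $-1/x^2 + \tfrac{3}{4}x^{-1/2}$ changes sign on $\mathbb{R}^+$. A small sketch using a central second difference (helper names are my own):

```python
import math

def log_f3(x):
    # log f3(x) = log(x * exp(x*sqrt(x))) = log(x) + x**1.5
    return math.log(x) + x ** 1.5

def second_diff(fun, x, h=1e-4):
    # Central second difference approximating f''(x).
    return (fun(x + h) - 2 * fun(x) + fun(x - h)) / h ** 2

# (log f3)'' = -1/x**2 + 0.75/sqrt(x): negative near 0, positive for
# large x, so log f3 is neither convex nor concave on R+.
print(second_diff(log_f3, 0.5) < 0)  # True  (exact value is about -2.94)
print(second_diff(log_f3, 4.0) > 0)  # True  (exact value is about +0.31)
```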