What a great question! The term "log-log convexity" is indeed used to refer to the property you have described. It pops up in literature on polynomial optimization, and also in a bit of a disguised manner in geometric programming.
Let's look at the latter case, because to be honest that's what I'm personally familiar with. Consider an expression of the form
$$\sum_{k=1}^K c_k x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}$$
The variables here are $x_i\geq 0$, $i=1,2,\dots, n$; the fixed parameters are positive coefficients $c_k>0$, $k=1,2,\dots, K$, and exponents $a_{ik}\in\mathbb{R}$, $1\leq i\leq n$, $1\leq k\leq K$. This expression is what is known as a posynomial. In general, it is neither convex nor log-convex.
(Side note: if all of the exponents are nonnegative, and $\sum_i a_{ik}\leq 1$ for each $k=1,2,\dots,K$, then this expression is concave. But we want to consider the case where the exponents are not constrained in this way.)
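To see the non-convexity concretely, here is a quick sketch. The posynomial $f(x_1,x_2)=x_1 x_2$ (that is, $K=1$, $c_1=1$, $a_{11}=a_{21}=1$) is a hypothetical example of my own choosing; its Hessian is indefinite, and the midpoint-convexity inequality visibly fails:

```python
# Hypothetical one-term posynomial f(x1, x2) = x1 * x2.
# Its Hessian is [[0, 1], [1, 0]], which is indefinite, so f is not
# convex. We can also see this from the midpoint inequality: convexity
# requires f((p + q) / 2) <= (f(p) + f(q)) / 2 for all p, q.
def f(x1, x2):
    return x1 * x2

p, q = (1.0, 4.0), (4.0, 1.0)
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

lhs = f(*mid)              # f(2.5, 2.5) = 6.25
rhs = (f(*p) + f(*q)) / 2  # (4 + 4) / 2 = 4
print(lhs > rhs)           # True: the midpoint inequality fails
```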
Now suppose I define $x_j = e^{y_j}$, and substitute. This converts the expression to
$$\sum_{k=1}^K c_k e^{a_{1k}y_1} e^{a_{2k}y_2} \cdots e^{a_{nk}y_n}
= \sum_{k=1}^K c_k e^{\sum_{i=1}^n a_{ik}y_i} =
\sum_{k=1}^K \mathop{\textrm{exp}}\left(\textstyle\sum_{i=1}^n a_{ik}y_i+\log c_k\right)$$
Notice how we have depended upon the positivity of the coefficients $c_k$ so we could move them into the exponents; and notice how the exponents are affine functions of $y$.
Now, if I take the logarithm, I get
$$\log \sum_{k=1}^K \mathop{\textrm{exp}}\left(\textstyle\sum_{i=1}^n a_{ik}y_i+\log c_k\right) = f(Ay+\bar{c})$$
where $A\in\mathbb{R}^{K\times n}$ collects the $a_{ik}$ values (row $k$ holds the exponents of the $k$th term), $\bar{c}\in\mathbb{R}^K$ collects the $\log c_k$ values, and $f$ is the convex log-sum-exp function
$$f(z) \triangleq \log \sum_{k=1}^K \mathop{\textrm{exp}}(z_k).$$
So in other words, by doing a logarithmic transformation of the variables, and taking the logarithm of the expression, we have arrived at a convex result. What this means is: a posynomial function is a log-log-convex function of its inputs.
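Here's a small numerical sketch of that claim, using a hypothetical posynomial of my own, $f(x_1,x_2)=x_1x_2 + x_1^{-1}$: after the substitution $x_i=e^{y_i}$ and the outer logarithm, we get $F(y)=\log(e^{y_1+y_2}+e^{-y_1})$, and midpoint convexity holds at every sampled pair of points.

```python
import math
import random

# F(y) = log f(exp(y1), exp(y2)) for the hypothetical posynomial
# f(x1, x2) = x1 * x2 + 1 / x1, i.e. F(y) = log(exp(y1+y2) + exp(-y1)).
def F(y1, y2):
    return math.log(math.exp(y1 + y2) + math.exp(-y1))

# Spot-check midpoint convexity: F(mid) <= (F(p) + F(q)) / 2.
random.seed(0)
ok = True
for _ in range(1000):
    p = (random.uniform(-2, 2), random.uniform(-2, 2))
    q = (random.uniform(-2, 2), random.uniform(-2, 2))
    mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    ok = ok and F(*mid) <= (F(*p) + F(*q)) / 2 + 1e-12  # tolerance for rounding
print(ok)  # True
```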
(An astute reader will see that the expression was convex even before we took that final logarithm. But the log-sum-exp function tends to have better numerical properties than the sum-exp function. Also, when $K=1$, as often occurs in practice, the resulting function is affine, and it's useful to exploit that, of course!)
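The usual way those better numerical properties are realized in practice is the max-shift trick: since $\log\sum_k e^{z_k} = m + \log\sum_k e^{z_k-m}$ for any $m$, taking $m=\max_k z_k$ keeps every `exp` call bounded. A minimal sketch:

```python
import math

# Numerically stable log-sum-exp via the standard max-shift identity:
#   log sum_k exp(z_k) = m + log sum_k exp(z_k - m),  m = max(z).
def logsumexp(z):
    m = max(z)
    return m + math.log(sum(math.exp(zk - m) for zk in z))

# The naive sum-exp overflows for arguments like 1000; the shifted
# form evaluates without trouble.
print(logsumexp([1000.0, 1000.0]))  # 1000 + log 2 = 1000.6931...
```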
In true geometric programming, posynomials are the only log-log-convex functions considered. Even in generalized geometric programming, the models considered are converted to pure geometric form, so again, posynomials remain the basic nonlinear construct. But it would be entirely possible to consider an entire family of log-log-convex functions here.
One thing I will point out is that standard convexity and log-log-convexity don't mix very well. That is to say, you usually can't mix convex expressions of $x$ and log-convex expressions of $y=\log x$ in the same model, if you wish to preserve the theoretical and practical benefits of convex optimization. (I leave the cases where you can do so to the reader; hint: consider $y\leq \log x$.) That said, you can mix convex and log-convex functions of $y$ in the same model (e.g., sum-exp and log-sum-exp).
No, it is not. Consider the case $x_1 =\dots=x_n=y$: then $f(x)=y+\log n$, which is affine in $y$, so $f$ is not strictly convex.
In fact, it is affine along any line of the form $x = y \vec{1} + b$, where $b$ is constant: $f(x) = y + \log\sum_i e^{b_i}$.
EDIT: Richkent wants to know if there are any other lines along which $f$ is affine. The answer is no. To see why, let's look at the Hessian of $f(x)$, which has a special structure:
$$\nabla f(x) = g, \quad \nabla^2 f(x) = \mathop{\textrm{diag}}(g) - gg^T, \quad
g \triangleq \frac{1}{\sum_i e^{x_i}} \begin{bmatrix} e^{x_1} \\ \vdots \\ e^{x_n} \end{bmatrix}$$
Notice that $g\succ 0$ and $\vec{1}^Tg = 1$, so $\vec{1}^T \nabla^2f(x) \vec{1} = \vec{1}^Tg-(\vec{1}^Tg)^2 = 0$. This confirms that $f$ is affine along the directions $v=\alpha\vec{1}$.
But note also that $\nabla^2 f$ is a rank-one modification of the positive definite matrix $\mathop{\textrm{diag}}(g)$. This is important: a rank-one modification can reduce the rank by at most one, so $\mathop{\textrm{rank}}(\nabla^2 f(x))\geq n-1$; and since we have just exhibited the null direction $\vec{1}$, in fact $\mathop{\textrm{rank}}(\nabla^2 f(x))=n-1$. Therefore $v=\alpha\vec{1}$ are the only vectors for which $v^T\nabla^2 f(x) v=0$, and $f$ is strictly convex along every other direction.
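A quick numeric sanity check of this Hessian structure, at a base point $x$ I picked arbitrarily:

```python
import math

# Build H = diag(g) - g g^T, with g the normalized exponentials of x
# (a hypothetical base point), and test the quadratic form v^T H v.
x = [0.3, -1.0, 2.0]
n = len(x)
s = sum(math.exp(xi) for xi in x)
g = [math.exp(xi) / s for xi in x]
H = [[(g[i] if i == j else 0.0) - g[i] * g[j] for j in range(n)]
     for i in range(n)]

def quad(v):
    # v^T H v
    return sum(v[i] * H[i][j] * v[j] for i in range(n) for j in range(n))

print(abs(quad([1.0, 1.0, 1.0])) < 1e-12)  # True: all-ones direction is flat
print(quad([1.0, -1.0, 0.0]) > 0)          # True: other directions curve up
```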
To see this a bit better, fix $(x,v)$ and consider the slice $h(t)=f(x+tv)$ (I use $h$ here to avoid a clash with the gradient vector $g$ above). This is just $f$ restricted to the line $x+tv$, so of course it is convex. Its second derivative is
$$h''(t) = v^T \nabla^2 f(x+tv)\, v.$$
If $v=\alpha\vec{1}$, then
$$h''(t) = v^T\nabla^2 f(x+tv)\, v=\alpha^2\vec{1}^T\nabla^2 f(x+tv)\,\vec{1}=0,$$
confirming that $h$ is affine when $v=\alpha\vec{1}$. But if $v\neq\alpha\vec{1}$, then $$h''(t) = v^T\nabla^2 f(x+tv)\, v>0,$$
which means that $h$ is strictly convex.
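If you'd like to see the slice behavior numerically, here is a rough finite-difference sketch; the base point and the two test directions are arbitrary choices of mine:

```python
import math

def lse(x):
    # Stable log-sum-exp via the max-shift identity.
    m = max(x)
    return m + math.log(sum(math.exp(xi - m) for xi in x))

# Central-difference estimate of the slice's second derivative at t = 0,
# for a hypothetical base point x.
x = [0.5, -0.2, 1.3]
h = 1e-4

def second_deriv(v):
    def slice_f(t):
        return lse([xi + t * vi for xi, vi in zip(x, v)])
    return (slice_f(h) - 2 * slice_f(0.0) + slice_f(-h)) / h**2

print(abs(second_deriv([1.0, 1.0, 1.0])) < 1e-6)  # True: affine direction
print(second_deriv([1.0, 0.0, -1.0]) > 0.0)       # True: strictly convex slice
```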
The answer in general is no. The composition rule preserves convexity when the outer function (in your case $g(a,b) = a-b$) is convex and non-decreasing in each argument, and the inner functions are convex. Here $g(a,b) = a-b$ is affine, but it is decreasing in $b$, so the rule does not apply. That only means you can't use the composition argument to prove convexity; the function might still be convex.
Since this is a relatively simple 2-D function, you can plot it to "see" whether it is convex. If it looks convex, try proving convexity with the Hessian test, which here involves a $2\times 2$ matrix. The same test can also show non-convexity, by exhibiting a point where the Hessian is not positive semidefinite.
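Since your function isn't spelled out here, a generic sketch of the $2\times 2$ Hessian test: a symmetric $2\times 2$ matrix is PSD iff both diagonal entries and the determinant are nonnegative. Below I estimate the Hessian by central finite differences and apply that test to two hypothetical functions of my own (one convex, one not):

```python
import math

def hessian(f, x, y, h=1e-3):
    # Central finite-difference estimate of the 2x2 Hessian of f at (x, y).
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return [[fxx, fxy], [fxy, fyy]]

def is_psd_2x2(H, tol=1e-6):
    # A symmetric 2x2 matrix is PSD iff both diagonal entries and the
    # determinant are nonnegative (tolerance absorbs discretization error).
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return H[0][0] >= -tol and H[1][1] >= -tol and det >= -tol

# Hypothetical examples: exp(x) + exp(y) is convex; x * y is not.
H1 = hessian(lambda x, y: math.exp(x) + math.exp(y), 0.3, -0.5)
H2 = hessian(lambda x, y: x * y, 0.3, -0.5)
print(is_psd_2x2(H1))  # True
print(is_psd_2x2(H2))  # False: det is about -1, so x * y is not convex
```

Remember that passing the test at one point is not enough: for convexity the Hessian must be PSD everywhere on the (convex) domain, while a single failing point suffices to show non-convexity.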