In analysis, individual inequalities or estimates are usually not so useful per se (though there are some notable exceptions, such as the Sobolev embedding inequality, or the Cauchy-Schwarz inequality), but are instead representative examples of a larger useful class of estimates. (cf. Gowers' "Two cultures of mathematics".)
In this particular case, the classical Hardy inequality exemplifies two useful principles; firstly, that an inverse power weight such as $1/|x|^\alpha$ is "dominated" in some $L^p$ sense by the corresponding derivative $|\nabla|^\alpha$ (or, to put it somewhat facetiously, $\frac{1}{x} = O(\frac{d}{dx} )$; compare with the uncertainty principle $dx \cdot d\xi \gtrsim 1$); and secondly, that a maximal average of a function is often dominated in an $L^p$ sense by the function itself. The first principle is captured by a number of higher-dimensional generalisations of Hardy's inequality (which typically take a shape such as
$$\left\| \frac{f}{|x|^\alpha} \right\|_{L^p({\bf R}^n)} \leq C_{p,\alpha,n} \| |\nabla|^\alpha f \|_{L^p({\bf R}^n)}$$
under suitable assumptions on $p,n,\alpha,f$) which are fundamental to the analysis of any PDE that involves singular potentials or weights such as $\frac{1}{|x|^\alpha}$. The second principle is captured by a different family of generalisations of Hardy's inequality, namely the maximal inequalities, for which the Hardy-Littlewood maximal inequality is the model example. This inequality is the foundation of a large part of real-variable harmonic analysis, and in particular underpins the analysis of singular integral operators such as the Hilbert transform, as well as of pseudo-differential operators.
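To illustrate the first principle in its simplest one-dimensional incarnation (a standard computation, included here for concreteness): if $f(0) = 0$, then the fundamental theorem of calculus gives
$$\frac{|f(x)|}{x} \leq \frac{1}{x} \int_0^x |f'(t)|\, dt,$$
so $f/x$ is pointwise dominated by a maximal average of $f'$, and applying the classical Hardy inequality to $|f'|$ yields the weighted bound $\| f/x \|_{L^p(0,\infty)} \leq \frac{p}{p-1} \| f' \|_{L^p(0,\infty)}$ for $1 < p < \infty$; this is the one-dimensional $\alpha = 1$ case of the display above.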
There are two nice features of Hardy's original inequality that are also worth pointing out. The first is that it is an $L^p$ inequality with an explicit optimal constant, which is something of a rarity in analysis (there are maybe only a dozen or so other such sharp inequalities known for the fundamental operators in analysis). The second is that the inequality is never actually attained with equality (except in the trivial case when the function vanishes): one can construct sequences of near-extremisers that get arbitrarily close to equality, but they do not converge to a limit that actually attains it. (The function $f = x^{-1/p}$ formally attains equality in Theorem 2, but there is a logarithmic divergence on both sides.) This is perhaps one of the simplest examples of such a situation, and one well worth studying if one is interested in using variational methods to find optimal constants for other inequalities, as one needs a good intuition for when optimisers can be expected to exist.
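To make the non-attainment concrete, here is a minimal numerical sketch (my addition; it assumes the integral form of Hardy's inequality
$$\int_0^\infty \Big( \frac{1}{x} \int_0^x f(t)\,dt \Big)^p\, dx \leq \Big(\frac{p}{p-1}\Big)^p \int_0^\infty f(x)^p\, dx,$$
which may differ in normalisation from Theorem 2). The truncations $f_N = x^{-1/p} 1_{[1,N]}$ drive the ratio of the two sides towards $1$ as $N \to \infty$, but have no $L^p$ limit:

```python
# Near-extremisers for Hardy's inequality: f_N = x^{-1/p} on [1, N].
# The ratio LHS / ((p/(p-1))^p RHS) creeps towards 1 but never reaches it.
import numpy as np
from scipy.integrate import quad

p = 2.0
c = p / (p - 1)                  # so that \int_1^x t^{-1/p} dt = c (x^{1-1/p} - 1)
sharp = c ** p                   # the sharp constant (p/(p-1))^p

def ratio(N):
    """LHS / (sharp * RHS) for f = x^{-1/p} on [1, N], zero elsewhere."""
    F = lambda x: c * (min(x, N) ** (1 - 1 / p) - 1.0)    # \int_0^x f for x >= 1
    # LHS over [1, N], integrated in log coordinates x = e^u for stability:
    bulk = quad(lambda u: (F(np.exp(u)) / np.exp(u)) ** p * np.exp(u),
                0.0, np.log(N), limit=200)[0]
    tail = F(N) ** p * N ** (1 - p) / (p - 1)             # \int_N^\infty, exact
    rhs = np.log(N)                                       # \int_1^N x^{-1} dx
    return (bulk + tail) / (sharp * rhs)

for N in (1e2, 1e4, 1e8):
    print(f"N = {N:.0e}: ratio = {ratio(N):.4f}")
```

For $p = 2$ the ratio behaves roughly like $1 - 2/\log N$, so the convergence to the sharp constant is logarithmically slow, matching the logarithmic divergence noted above.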
Sorry to answer my own question, but asking this in public seems to have spurred me into thought.
As auniket suspected, the answer is "yes" in the strongest sense I'd hoped: properties 1-3 do characterize mixed volume. In fact, something slightly stronger is true: $V$ is the unique function $(\mathscr{K}_n)^n \to \mathbb{R}$ satisfying
1. $V(A, \ldots, A) = Vol(A)$;
2. $V$ is symmetric;
3. $V(A_1 + A'_1, A_2, \ldots, A_n) = V(A_1, A_2, \ldots, A_n) + V(A'_1, A_2, \ldots, A_n)$.
In other words, we don't need multilinearity, just multiadditivity.
The proof is along the lines suggested by auniket.
Fix $n$ and $A_1, \ldots, A_n \in \mathscr{K}_n$. Write $\mathbf{n} = \{1, \ldots, n\}$, and for sets $R$ and $S$, write $\mathrm{Surj}(R, S)$ for the set of surjections $R \to S$.
I claim that for all subsets $S$ of $\mathbf{n}$,
$$
\sum_{f \in \mathrm{Surj}(\mathbf{n}, S)} V(A_{f(1)}, \ldots, A_{f(n)})
$$
is uniquely determined by the properties above. The proof will be by induction on the cardinality of $S$. When $S = \mathbf{n}$, the surjections $\mathbf{n} \to \mathbf{n}$ are exactly the permutations, so by symmetry of $V$ this sum is
$$
n! V(A_1, \ldots, A_n),
$$
so this claim will imply the characterization theorem.
To prove the claim, take $S \subseteq \mathbf{n}$. Then
$$
Vol(\sum_{i \in S} A_i) = \sum_{f: \mathbf{n} \to S} V(A_{f(1)}, \ldots, A_{f(n)})
$$
by property 1 and additivity of $V$ in each argument (which follows from properties 2 and 3). Grouping the functions $f\colon \mathbf{n} \to S$ according to their image $R$, this is in turn equal to
$$
\sum_{R \subseteq S} \sum_{f \in \mathrm{Surj}(\mathbf{n}, R)} V(A_{f(1)}, \ldots, A_{f(n)}).
$$
By the inductive assumption, every summand in the outer sum with $R \subsetneq S$ is uniquely determined. Since the left-hand side $Vol(\sum_{i \in S} A_i)$ is determined as well, the remaining summand, the one with $R = S$, is uniquely determined too. This completes the induction, and so completes the proof.
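For instance (my unwinding, for illustration), when $n = 2$ and $S = \mathbf{2}$ the three properties give
$$Vol(A_1 + A_2) = V(A_1, A_1) + 2 V(A_1, A_2) + V(A_2, A_2) = Vol(A_1) + 2 V(A_1, A_2) + Vol(A_2),$$
so that $V(A_1, A_2) = \frac{1}{2} \big( Vol(A_1 + A_2) - Vol(A_1) - Vol(A_2) \big)$: the familiar polarization formula.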
The proof makes it clear that $V(A_1, \ldots, A_n)$ is some rational linear combination of ordinary volumes of Minkowski sums of some of the $A_i$s. It must be possible to unwind this proof and get an explicit expression; and that expression must be the one auniket gave (which also appears in Lemma 5.1.3 of Schneider's book Convex Bodies: The Brunn-Minkowski Theory).
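For a quick sanity check (not needed for the proof), one can test that expression, which I take to be the inclusion-exclusion formula
$$V(A_1, \ldots, A_n) = \frac{1}{n!} \sum_{\emptyset \neq S \subseteq \mathbf{n}} (-1)^{n - |S|}\, Vol\Big( \sum_{i \in S} A_i \Big),$$
on axis-aligned boxes, for which Minkowski addition just adds side lengths coordinatewise. A minimal Python sketch:

```python
# Inclusion-exclusion mixed volume, checked against properties 1-3 on
# axis-aligned boxes (a box is represented as a tuple of side lengths).
from itertools import combinations, permutations
from math import factorial, prod

def vol(box):
    return prod(box)

def mink(*boxes):                        # Minkowski sum of axis-aligned boxes
    return tuple(map(sum, zip(*boxes)))

def V(*boxes):                           # the inclusion-exclusion formula above
    n = len(boxes)
    return sum((-1) ** (n - k) * vol(mink(*(boxes[i] for i in S)))
               for k in range(1, n + 1)
               for S in combinations(range(n), k)) / factorial(n)

A, B, C = (1.0, 2.0, 3.0), (4.0, 0.5, 1.0), (2.0, 2.0, 5.0)
trip = (A, B, C)

# property 1: V(A,...,A) = Vol(A)
print(V(A, A, A), "=", vol(A))
# property 2 (symmetry): all orderings give the same value
print({round(V(*(trip[i] for i in p)), 9) for p in permutations(range(3))})
# property 3 (additivity in the first slot)
print(V(mink(A, B), C, C), "=", V(A, C, C) + V(B, C, C))
```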
This all seems rather easy, and must be well-known, though I'm a bit surprised that this characterization isn't mentioned in some of the things I've read. Incidentally, I now understand why it doesn't appear in the paper of Milman and Schneider mentioned in my question: they explicitly state that they want to avoid assuming property 1.
Best Answer
Denote $s = b/n$. We want a pointwise bound
$$f(x)\leqslant f(s)+(x-s)f'(s);\label{1}\tag{$\heartsuit$}$$
summing \eqref{1} over $x=a_1,\ldots,a_n$ then gives the desired inequality.

Note that if $s<a$ (which holds for $n>b/a$), we get \eqref{1} on $[0,a]$ by concavity. To prove \eqref{1} on $[a,b]$, by convexity it suffices to verify it at the endpoints $x=a$ and $x=b$. For $x=a$ this is already done; for $x=b$ it reads as
$$f(b)\leqslant f(s)+(b-s)f'(s).$$
When $n$ is large, the right-hand side converges to $f(0)+bf'(0)$, so it suffices to check that
$$f(b)<f(0)+bf'(0).$$

Assume the contrary:
$$f(b)\geqslant f(0)+bf'(0).$$
We have $f(x)\leqslant f(0)+f'(0)x$ for all $x\in [0,a]$ by concavity. Denote by $c$ the endpoint of the maximal segment $[0,c]$ on which $f(x)\leqslant f(0)+f'(0)x$; then $c\in [a,b]$ and $f(c)=f(0)+f'(0)c$ (otherwise $c$ would not be maximal). Since $f(x)\leqslant f(0)+f'(0)x$ while $f(c)=f(0)+f'(0)c$, each difference quotient $\frac{f(c)-f(x)}{c-x}$ with $x<c$ is at least $f'(0)$, and hence
$$f'(c)=\lim_{x\to c^-}\frac{f(c)-f(x)}{c-x}\geqslant f'(0).$$
Since $f'$ increases on $[a,b]$ by convexity, we get $f'(b)\geqslant f'(c)\geqslant f'(0)$, a contradiction.
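A numerical sanity check of the argument (my addition; the original question is not quoted here, so the hypotheses, namely that $f$ is concave on $[0,a]$, convex on $[a,b]$, and $f'(b) < f'(0)$, are inferred from the proof). Summing \eqref{1} over $x = a_1, \ldots, a_n \geqslant 0$ with $a_1 + \cdots + a_n = b$ gives $\sum f(a_i) \leqslant n f(b/n)$:

```python
# Check the tangent-line bound (♥) at s = b/n and the summed inequality for a
# sample f: concave on [0,a], convex on [a,b], glued C^1, with f'(b) < f'(0).
import numpy as np

a, b, n = 1.0, 4.0, 5          # n > b/a, so s = b/n < a
s = b / n

def f(x):                      # 2x - x^2 on [0,1], 1 + (x-1)^2/10 on [1,4];
    x = np.asarray(x)          # f'(b) = 0.6 < 2 = f'(0)
    return np.where(x <= a, 2 * x - x**2, 1 + (x - a) ** 2 / 10)

def fp(x):                     # derivative of f
    x = np.asarray(x)
    return np.where(x <= a, 2 - 2 * x, (x - a) / 5)

xs = np.linspace(0.0, b, 10001)
tangent = f(s) + (xs - s) * fp(s)
print("(♥) holds on [0, b]:", bool(np.all(f(xs) <= tangent + 1e-12)))

rng = np.random.default_rng(0)
for _ in range(3):             # random a_i >= 0 with sum a_i = b
    ai = rng.dirichlet(np.ones(n)) * b
    print(f"sum f(a_i) = {f(ai).sum():.4f} <= n f(b/n) = {n * float(f(s)):.4f}")
```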