This is related to the so-called hyperbolic polynomials, studied by L. Gårding in the fifties. More generally, let $\lambda(\xi)$ be the least eigenvalue of $A(\xi)=\sum_\alpha\xi_\alpha A^\alpha$, where the $A^\alpha$ are Hermitian matrices and $\xi$ is a real vector. Then $\lambda$ is a concave function. It is generically strictly concave, except of course in the radial directions, because of the homogeneity $\lambda(s\xi)=s\lambda(\xi)$ for $s>0$. Strict concavity is related to the lack of commutativity of the pairs $(A^\alpha,A^\beta)$. For instance, the Pauli matrices yield $\lambda(\xi)=-|\xi|$, which is strictly concave away from the rays ${\mathbb R}^+\xi$. At the opposite extreme, if $[A^\alpha,A^\beta]=0$ for every pair, then $\lambda$ is piecewise linear.
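A quick NumPy sanity check of the Pauli example (a sketch; the sample points are arbitrary, not from the text): $A(\xi)=\xi_1\sigma_1+\xi_2\sigma_2+\xi_3\sigma_3$ has eigenvalues $\pm|\xi|$, so the least eigenvalue is $-|\xi|$ and satisfies midpoint concavity.

```python
import numpy as np

# Pauli matrices (Hermitian)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lam_min(xi):
    """Least eigenvalue of A(xi) = xi1*sx + xi2*sy + xi3*sz."""
    A = xi[0] * sx + xi[1] * sy + xi[2] * sz
    return np.linalg.eigvalsh(A)[0]   # eigenvalues in ascending order

rng = np.random.default_rng(0)
for _ in range(100):
    xi, eta = rng.standard_normal(3), rng.standard_normal(3)
    # for the Pauli family, lam_min(xi) = -|xi|
    assert abs(lam_min(xi) + np.linalg.norm(xi)) < 1e-12
    # midpoint concavity: lam((xi+eta)/2) >= (lam(xi) + lam(eta))/2
    assert lam_min((xi + eta) / 2) >= (lam_min(xi) + lam_min(eta)) / 2 - 1e-12
```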
Strict concavity occurs, for instance, when the least eigenvalue is simple for every $\xi\ne0$, or more generally when it has constant multiplicity. Then $\lambda$ is analytic away from the origin, with a Hessian of rank $n-1$. It turns out that this property implies a so-called Strichartz inequality for the solutions of the system of partial differential equations
$$\frac{\partial u}{\partial t}+\sum_\alpha A^\alpha\frac{\partial u}{\partial x_\alpha}=0.$$
Obviously, the symbol of the system is $\det(\tau I_n+A(\xi))$. Thus the characteristic manifold is determined by the eigenvalues of $A(\xi)$, in particular by $\lambda(\xi)$.
The other eigenvalues satisfy more involved inequalities, such as Weyl-, Lidskii-, and Ky Fan-type inequalities. For instance, the sum of the $k$ least eigenvalues is concave too. This is part of Alfred Horn's conjecture, now a theorem thanks to the work of many people, including Knutson and Tao.
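The concavity of the sum of the $k$ least eigenvalues is easy to test numerically (a sketch; the dimension and random matrices are illustrative):

```python
import numpy as np

def sum_k_least(B, k):
    """Sum of the k least eigenvalues of a symmetric matrix B."""
    return np.linalg.eigvalsh(B)[:k].sum()

rng = np.random.default_rng(1)
n, k = 5, 2
for _ in range(100):
    X = rng.standard_normal((n, n)); B1 = (X + X.T) / 2
    Y = rng.standard_normal((n, n)); B2 = (Y + Y.T) / 2
    # midpoint concavity of B -> (sum of k least eigenvalues of B)
    mid = sum_k_least((B1 + B2) / 2, k)
    assert mid >= (sum_k_least(B1, k) + sum_k_least(B2, k)) / 2 - 1e-10
```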
The last part of the question, about the Toeplitz–Hausdorff theorem, is unclear. ${\mathbb S}$ is not a singleton, so what is the meaning of
\begin{align}
\lambda(t)=(1-t)x_1+tx_2, ~~[x_1,x_2]\in \mathbb{S} \qquad ?
\end{align}
The equality certainly holds true for the particular point $(x_1,x_2)$ obtained by taking $x$ a unit eigenvector associated with $\lambda(t)$, but what else? To see some deep relations between the Toeplitz–Hausdorff theorem and hyperbolic polynomials, have a look at our paper with Th. Gallay, Numerical measure of a complex matrix, in
Comm. Pure Appl. Math. 65 (2012), no. 3, 287–336.
Best Answer
Note that $f(t)=g(A(t))$, where $A(t_1,t_2)=A_0+t_1A_1+t_2A_2$ is affine and $g(B)=\lambda_{min}(B)$ is concave in $B$. Now, to obtain the supergradient of this function you just need to use the chain rule of convex analysis.
First, since $A$ is affine, its derivative is the constant linear map $$ D A(t) = [A_1;A_2], \qquad h\mapsto h_1A_1+h_2A_2, $$ whose adjoint sends $B$ to $(\langle A_1,B\rangle,\langle A_2,B\rangle)$.
The minimum eigenvalue $g$ can be described by $$ \lambda_{min}(B) = \min\{u^TBu:\,\, \|u\|_2=1 \}. $$ So its superdifferential is given by the closed convex hull of the rank-one matrices built from eigenvectors associated with the minimum eigenvalue, i.e. $$ \partial \lambda_{min}(B) = \bar{co}\{ uu^T:\,\,u^Tu=1,\,\,Bu=\lambda_{min}(B) u \}.$$
Finally, by the chain rule, $$ \partial f(t) = \langle[A_1^T;A_2^T],\partial \lambda_{min}(A(t))\rangle := \{ (\langle A_1^T,X\rangle,\langle A_2^T,X\rangle):\,\, X\in \bar{co}\{ uu^T:\,\,u^Tu=1,\,\,A(t)u=\lambda_{min}(A(t)) u \}\},$$ where the inner product is the trace inner product. This is a set of linear maps from $\mathbb{R}^2$ to $\mathbb{R}$, as it should be.
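When the minimum eigenvalue of $A(t)$ is simple, the superdifferential above reduces to the single point $\big(u^TA_1u,\;u^TA_2u\big)$, which is then the gradient of $f$. A sketch of a numerical check (the symmetric matrices and the point $t$ are arbitrary choices, not from the text), comparing this formula with central finite differences:

```python
import numpy as np

rng = np.random.default_rng(2)

def sym(n):
    """Random symmetric matrix (illustrative data)."""
    X = rng.standard_normal((n, n))
    return (X + X.T) / 2

A0, A1, A2 = sym(4), sym(4), sym(4)

def f(t):
    """f(t) = least eigenvalue of A(t) = A0 + t1*A1 + t2*A2."""
    return np.linalg.eigvalsh(A0 + t[0] * A1 + t[1] * A2)[0]

t = np.array([0.3, -0.7])
w, V = np.linalg.eigh(A0 + t[0] * A1 + t[1] * A2)
u = V[:, 0]                                  # unit eigenvector of the least eigenvalue
grad = np.array([u @ A1 @ u, u @ A2 @ u])    # <A_i, u u^T> in the trace inner product

# compare with central finite differences (valid while lambda_min stays simple)
h = 1e-6
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
fd = np.array([(f(t + h * e) - f(t - h * e)) / (2 * h) for e in (e1, e2)])
assert np.allclose(grad, fd, atol=1e-4)
```

If the minimum eigenvalue is multiple at $t$, the set $\partial f(t)$ is genuinely larger and $f$ is typically not differentiable there; the formula with the convex hull is then needed.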
You can derive chain rules for general affine maps: see Hiriart-Urruty & Lemaréchal, Fundamentals of Convex Analysis, Theorems D.4.2.1 and D.5.1 (for the subdifferential of the maximum eigenvalue).