This is related to the so-called hyperbolic polynomials studied by L. Gårding in the fifties. More generally, let $\lambda(\xi)$ be the least eigenvalue of $A(\xi)=\sum_\alpha\xi_\alpha A^\alpha$, where the $A^\alpha$ are Hermitian matrices and $\xi$ is a real vector. Then $\lambda$ is a concave function. It is generically strictly concave, except of course in the radial directions, because of the homogeneity $\lambda(s\xi)=s\lambda(\xi)$ for $s>0$. The strict concavity is related to the lack of commutativity of the pairs $(A^\alpha,A^\beta)$. For instance, the Pauli matrices yield $\lambda(\xi)=-|\xi|$, which is strictly concave away from the rays ${\mathbb R}^+\xi$. By contrast, if $[A^\alpha,A^\beta]=0$ for every pair, then $\lambda$ is piecewise linear.
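Here is a quick numerical sketch of the Pauli example (my own illustration; the function name `least_eig` is mine): the least eigenvalue of $A(\xi)=\xi_1\sigma_1+\xi_2\sigma_2+\xi_3\sigma_3$ equals $-|\xi|$, and a midpoint check is consistent with concavity.

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def least_eig(xi):
    """Least eigenvalue of A(xi) = xi_1 s1 + xi_2 s2 + xi_3 s3."""
    A = xi[0] * s1 + xi[1] * s2 + xi[2] * s3
    return np.linalg.eigvalsh(A)[0]   # eigvalsh returns eigenvalues in ascending order

rng = np.random.default_rng(0)
xi = rng.standard_normal(3)
assert np.isclose(least_eig(xi), -np.linalg.norm(xi))   # lambda(xi) = -|xi|

# midpoint concavity: lambda((x+y)/2) >= (lambda(x) + lambda(y)) / 2
x, y = rng.standard_normal(3), rng.standard_normal(3)
assert least_eig((x + y) / 2) >= (least_eig(x) + least_eig(y)) / 2 - 1e-12
```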
Strict concavity occurs, for instance, when the least eigenvalue is simple for every $\xi\ne0$, or more generally when it has constant multiplicity. Then $\lambda$ is analytic away from the origin, with a Hessian matrix of rank $n-1$. It turns out that this property implies a so-called Strichartz inequality for the solutions of the system of partial differential equations
$$\frac{\partial u}{\partial t}+\sum_\alpha A^\alpha\frac{\partial u}{\partial x_\alpha}=0.$$
Obviously, the symbol of the system is $\det(\tau I_n+A(\xi))$. Thus the characteristic manifold is related to the eigenvalues of $A(\xi)$, in particular to $\lambda(\xi)$.
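One can check numerically that the roots $\tau$ of the symbol $\det(\tau I_n+A(\xi))$ are exactly the negatives of the eigenvalues of $A(\xi)$ (a sketch of mine, with an arbitrary symmetric matrix standing in for $A(\xi)$):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2                      # an arbitrary real symmetric stand-in for A(xi)

coeffs = np.poly(-A)                   # coefficients of det(tau I - (-A)) = det(tau I + A)
roots = np.sort(np.roots(coeffs).real) # roots tau of the symbol
eigs = np.sort(np.linalg.eigvalsh(A))  # eigenvalues of A(xi)
assert np.allclose(roots, np.sort(-eigs))   # tau = -lambda_i(xi)
```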
The other eigenvalues satisfy more involved inequalities of Weyl, Lidskii, and Ky Fan type. For instance, the sum of the $k$ least eigenvalues is concave too. This is part of Alfred Horn's conjecture, now a theorem thanks to the work of many people, including Knutson & Tao.
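A numerical spot-check of the Ky Fan-type fact just quoted — that $\xi\mapsto$ (sum of the $k$ least eigenvalues of $A(\xi)$) is concave — with random Hermitian coefficient matrices of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 4, 2

def rand_herm():
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.conj().T) / 2

A1, A2 = rand_herm(), rand_herm()

def s_k(xi):
    """Sum of the k least eigenvalues of A(xi) = xi_1 A1 + xi_2 A2."""
    return np.linalg.eigvalsh(xi[0] * A1 + xi[1] * A2)[:k].sum()

for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    t = rng.uniform()
    # concavity: s_k(t x + (1-t) y) >= t s_k(x) + (1-t) s_k(y)
    assert s_k(t * x + (1 - t) * y) >= t * s_k(x) + (1 - t) * s_k(y) - 1e-10
```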
The last part of the question, the one about the Toeplitz-Hausdorff theorem, is unclear. ${\mathbb S}$ is not a singleton, so what is the meaning of
\begin{align}
\lambda(t)=(1-t)x_1+tx_2, ~~[x_1,x_2]\in \mathbb{S} \qquad ?
\end{align}
The equality certainly holds true for the particular point $(x_1,x_2)$ obtained by taking $x$ a unit eigenvector associated with $\lambda(t)$, but what else? To see some deep relations between the Toeplitz-Hausdorff theorem and hyperbolic polynomials, have a look at our paper with Th. Gallay, Numerical measure of a complex matrix, in
Comm. Pure Appl. Math. 65 (2012), no. 3, 287–336.
Two iterative algorithms that solve LMI problems are the ellipsoid algorithm and interior-point methods.
Both are described in sections 2.3 and 2.4 of Stephen Boyd's book "Linear Matrix Inequalities in System and Control Theory" [1] and in the references therein.
See also [2] for already implemented solvers. In particular, if you use MATLAB I recommend the SeDuMi solver with the YALMIP parser [3], since it allows one to input LMI programs to the solver in a more intuitive way.
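To make the ellipsoid algorithm concrete, here is a minimal toy sketch in Python/NumPy (my own illustration, not the book's implementation; the matrices `F0`, `F1`, `F2` are hypothetical) for the LMI feasibility problem: find $x$ with $F(x)=F_0+x_1F_1+x_2F_2\succ0$. It minimizes the convex function $f(x)=-\lambda_{\min}(F(x))$ by subgradient cuts; any $x$ with $f(x)<0$ is feasible.

```python
import numpy as np

F0 = np.diag([1.0, -1.0])
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
F2 = np.array([[1.0, 0.0], [0.0, 2.0]])
Fs = [F1, F2]

def f_and_subgrad(x):
    """Return f(x) = -lambda_min(F(x)) and a subgradient g_i = -v^T F_i v."""
    F = F0 + x[0] * F1 + x[1] * F2
    w, V = np.linalg.eigh(F)
    v = V[:, 0]                                   # unit eigenvector for lambda_min
    g = np.array([-v @ Fi @ v for Fi in Fs])
    return -w[0], g

n = 2
c = np.zeros(n)        # ellipsoid center
P = 100.0 * np.eye(n)  # ellipsoid shape: {x : (x-c)^T P^{-1} (x-c) <= 1}
for _ in range(200):
    val, g = f_and_subgrad(c)
    if val < 0:
        break                                     # c is strictly feasible: F(c) > 0
    gt = g / np.sqrt(g @ P @ g)                   # normalized cut direction
    c = c - (P @ gt) / (n + 1)                    # standard ellipsoid update
    P = (n**2 / (n**2 - 1.0)) * (P - 2.0 / (n + 1) * np.outer(P @ gt, gt @ P))

assert np.all(np.linalg.eigvalsh(F0 + c[0] * F1 + c[1] * F2) > 0)
```

For real problems one would of course use a dedicated SDP solver as recommended above; the point of the sketch is only the cut-and-shrink mechanism of the ellipsoid method.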
Consider the characteristic polynomial $$ \lambda^N -a_1(t)\lambda^{N-1} + \dots +(-1)^{N} a_N(t) = 0 $$ of your Hermitian matrix $tA_1 +(1-t)A_2$, which has all roots real and whose coefficients are real analytic as functions of $t$. By a theorem of Rellich, the roots can be arranged as real analytic functions of $t$ as well. Moreover, the eigenvectors of $tA_1 + (1-t)A_2$ can also be arranged to be real analytic in $t$. See [Dmitri Alekseevsky, Andreas Kriegl, Mark Losik, Peter W. Michor: Choosing roots of polynomials smoothly, Israel J. Math. 105 (1998)]. (2.5 is wrong in this paper.)
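A toy illustration of Rellich's phenomenon (matrices chosen by me): the *sorted* eigenvalues of $tA_1+(1-t)A_2$ can have a kink where two branches cross, while the analytically arranged branches pass through the crossing smoothly.

```python
import numpy as np

A1 = np.diag([1.0, -1.0])
A2 = np.diag([-1.0, 1.0])
# t*A1 + (1-t)*A2 = diag(2t-1, 1-2t): the analytic branches 2t-1 and 1-2t
# cross at t = 1/2, where the sorted eigenvalues |2t-1| and -|2t-1| kink.
ts = np.linspace(0.0, 1.0, 5)
sorted_eigs = np.array([np.linalg.eigvalsh(t * A1 + (1 - t) * A2) for t in ts])
analytic = np.array([[2 * t - 1, 1 - 2 * t] for t in ts])

# Same multiset of eigenvalues at each t, but only the analytic arrangement
# is smooth across the crossing:
assert np.allclose(np.sort(analytic, axis=1), sorted_eigs)
```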
By your assumptions, at both $t=0$ and $t=1$ there is at least one positive root, one negative root, and one zero root.
From this alone nothing follows, and it is easy to come up with simple examples realizing any outcome.
But if you can ascertain that an eigenvector for a positive eigenvalue of $A_1$ turns into an eigenvector for a negative eigenvalue of $A_2$, then the corresponding eigenvalue branch must pass through $0$ in between.
Maybe this can help you in playing with your assumptions.