I think the following simple combinatorial argument constructs the most dominant semistandard $\lambda$-tableau of content $\mu$ whenever $\lambda\trianglerighteq\mu$. (N.B. I haven't followed the reference given in Richard Stanley's comment, so I don't know whether I'm duplicating what's done there.)
In a nutshell, the idea is "put the largest numbers as low as possible". So let $l$ be the length of $\mu$, and start the tableau by putting the $\mu_l$ $l$s in the bottom boxes of columns as far to the left as possible subject to the condition that if any box has an $l$ and there is a box directly to the right, then that box must also have an $l$ in it. (Another way of saying this is: put $l$s at the bottom of the first $\mu_l$ columns, and then slide these $l$s to the ends of their rows.) If we let $j$ be maximal such that $\lambda_j\geq\mu_l$, this means that we are putting $\lambda_x-\lambda_{x+1}$ $l$s at the end of row $x$ for each $x>j$, and $\mu_l-\lambda_{j+1}$ $l$s at the end of row $j$.
To fill in the rest of the tableau, we work recursively. Let $\hat\lambda$ denote the partition whose Young diagram comprises the boxes that are still empty, and let $\hat\mu$ be the partition $(\mu_1,\dots,\mu_{l-1})$. Then as long as $\hat\lambda\trianglerighteq\hat\mu$, we can fill in the rest of the tableau with a semistandard $\hat\lambda$-tableau of content $\hat\mu$, which gives us what we need.
So we need to show that $\hat\lambda_1+\dots+\hat\lambda_x\geq\hat\mu_1+\dots+\hat\mu_x$ for every $x$. For $x<j$ or $x\geq l$ this is immediate from the fact that $\lambda\trianglerighteq\mu$, so take $j\leq x<l$. Observe that $$\lambda_1+\dots+\lambda_x=n-(\lambda_{x+1}+\dots+\lambda_l)\geq n-(l-x)\lambda_{x+1}$$while $$\mu_1+\dots+\mu_x=n-(\mu_{x+1}+\dots+\mu_l)\leq n-(l-x)\mu_l.$$ So $$(\hat\lambda_1+\dots+\hat\lambda_x)-(\hat\mu_1+\dots+\hat\mu_x)=(\lambda_1+\dots+\lambda_{x+1}-\mu_l)-(\mu_1+\dots+\mu_x)\geq(l-x-1)(\mu_l-\lambda_{x+1})\geq0,$$ where the final inequality holds because $x<l$ gives $l-x-1\geq0$, while $x\geq j$ and the maximality of $j$ give $\lambda_{x+1}\leq\lambda_{j+1}<\mu_l$. This is what we needed.
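To make the construction concrete, here is a short Python sketch of the greedy procedure described above (the function name and the representation of a tableau as a list of rows are my own choices, not from the answer):

```python
def dominant_tableau(lam, mu):
    """Greedy construction of the most dominant semistandard tableau of
    shape lam and content mu (assuming lam dominates mu): for each entry
    k from len(mu) down to 1, put the mu[k-1] copies of k at the bottoms
    of the first mu[k-1] columns of the not-yet-filled shape, i.e. slid
    to the ends of their rows."""
    shape = list(lam)            # row lengths of the still-empty shape
    rows = [[] for _ in shape]   # rows are built from right to left
    for k in range(len(mu), 0, -1):
        m = mu[k - 1]            # how many k's to place
        for x in range(len(shape)):
            below = shape[x + 1] if x + 1 < len(shape) else 0
            # columns c with below < c <= min(shape[x], m) have their
            # bottom box in row x, so row x receives that many k's
            cnt = max(0, min(shape[x], m) - below)
            rows[x] = [k] * cnt + rows[x]
            shape[x] -= cnt
    return rows
```

For example, with $\lambda=(4,3,1)$ and $\mu=(3,3,2)$ this produces the tableau with rows $(1,1,1,2)$, $(2,2,3)$, $(3)$.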
I don't know a fully general result, but your pattern for partitions $\lambda$
of length $\leq n$ with $n$-th entry $\lambda_{n}\geq n-1$ and with $n$
indeterminates persists:
Theorem 1. Let $n$ be a positive integer. Let $\lambda=\left( \lambda
_{1},\lambda_{2},\ldots,\lambda_{n}\right) $ be an integer partition with at
most $n$ parts. Assume that $\lambda_{n}\geq n-1$. Consider polynomials in $n$
indeterminates $x_{1},x_{2},\ldots,x_{n}$. For each nonnegative integer $k$,
we set
\begin{align*}
p_{k}:=x_{1}^{k}+x_{2}^{k}+\cdots+x_{n}^{k}.
\end{align*}
(This is the $k$-th power-sum symmetric polynomial in $x_{1},x_{2}
,\ldots,x_{n}$ when $k>0$. We have $p_{0}=n$.) Define the $n\times n$-matrix
\begin{align*}
P:=\left( p_{\lambda_{i}-i+j}\right) _{1\leq i\leq n,\ 1\leq j\leq
n}=\left(
\begin{array}
[c]{cccc}
p_{\lambda_{1}} & p_{\lambda_{1}+1} & \cdots & p_{\lambda_{1}+n-1}\\
p_{\lambda_{2}-1} & p_{\lambda_{2}} & \cdots & p_{\lambda_{2}+n-2}\\
\vdots & \vdots & \ddots & \vdots\\
p_{\lambda_{n}-n+1} & p_{\lambda_{n}-n+2} & \cdots & p_{\lambda_{n}}
\end{array}
\right) .
\end{align*}
Let $\mu=\left( \mu_{1},\mu_{2},\ldots,\mu_{n}\right) $ be the partition
defined by
\begin{align*}
\mu_{i}=\lambda_{i}-\left( n-1\right) \ \ \ \ \ \ \ \ \ \ \text{for each
}i\in\left\{ 1,2,\ldots,n\right\} .
\end{align*}
(This is indeed a partition, since $\mu_{n}=\underbrace{\lambda_{n}}_{\geq
n-1}-\left( n-1\right) \geq0$.) Let $s_{\mu}$ be the corresponding Schur
polynomial in the $n$ indeterminates $x_{1},x_{2},\ldots,x_{n}$. Furthermore,
let
\begin{align*}
V_{n}:=\prod_{1\leq i<j\leq n}\left( x_{i}-x_{j}\right)
\end{align*}
be the Vandermonde determinant. Then,
\begin{align*}
\det P=\left( -1\right) ^{n\left( n-1\right) /2}s_{\mu}\cdot V_{n}^{2}.
\end{align*}
Proof. Let $A_{\mu}$ be the $n\times n$-matrix
\begin{align*}
\left( x_{j}^{\mu_{i}+n-i}\right) _{1\leq i\leq n,\ 1\leq j\leq n}=\left(
\begin{array}
[c]{cccc}
x_{1}^{\mu_{1}+n-1} & x_{2}^{\mu_{1}+n-1} & \cdots & x_{n}^{\mu_{1}+n-1}\\
x_{1}^{\mu_{2}+n-2} & x_{2}^{\mu_{2}+n-2} & \cdots & x_{n}^{\mu_{2}+n-2}\\
\vdots & \vdots & \ddots & \vdots\\
x_{1}^{\mu_{n}+n-n} & x_{2}^{\mu_{n}+n-n} & \cdots & x_{n}^{\mu_{n}+n-n}
\end{array}
\right) .
\end{align*}
It is then well-known that
\begin{equation}
s_{\mu}=\dfrac{\det\left( A_{\mu}\right) }{V_{n}}
.
\label{darij1.eq.slam=frac}
\tag{1}
\end{equation}
Indeed, this is the alternant formula for Schur polynomials. For a proof, see,
e.g., Corollary 2.6.7 in the lecture notes Darij Grinberg and Victor Reiner,
Hopf Algebras in Combinatorics,
arXiv:1409.8356v7. (The notations in those
notes are not quite ours. Namely, our matrix $A_{\mu}$ is the transpose of the
matrix whose determinant is $a_{\mu+\rho}$ in the notes, whereas our $V_{n}$
is $a_{\rho}$ in these notes. Corollary 2.6.7 has to be applied to $\mu$
instead of $\lambda$.)
Let $B$ be the $n\times n$-matrix
\begin{align*}
\left( x_{i}^{j-1}\right) _{1\leq i\leq n,\ 1\leq j\leq n}=\left(
\begin{array}
[c]{cccc}
1 & x_{1} & \cdots & x_{1}^{n-1}\\
1 & x_{2} & \cdots & x_{2}^{n-1}\\
\vdots & \vdots & \ddots & \vdots\\
1 & x_{n} & \cdots & x_{n}^{n-1}
\end{array}
\right) .
\end{align*}
The Vandermonde determinant formula yields
\begin{align*}
\det B & =\prod_{1\leq i<j\leq n}\underbrace{\left( x_{j}-x_{i}\right)
}_{=-\left( x_{i}-x_{j}\right) }=\prod_{1\leq i<j\leq n}\left( -\left(
x_{i}-x_{j}\right) \right) \\
& =\left( -1\right) ^{n\left( n-1\right) /2}\underbrace{\prod_{1\leq
i<j\leq n}\left( x_{i}-x_{j}\right) }_{=V_{n}}=\left( -1\right) ^{n\left(
n-1\right) /2}V_{n}.
\end{align*}
However, we have
\begin{equation}
A_{\mu}B=P.
\label{darij1.eq.AB=P}
\tag{2}
\end{equation}
(Indeed, for any $i,j\in\left\{ 1,2,\ldots,n\right\} $, the $\left(
i,j\right) $-th entry of the matrix $A_{\mu}B$ is
\begin{align*}
\sum_{k=1}^{n}\underbrace{x_{k}^{\mu_{i}+n-i}x_{k}^{j-1}}_{\substack{=x_{k}
^{\mu_{i}+n-i+j-1}=x_{k}^{\lambda_{i}-i+j}\\\text{(since }\mu_{i}=\lambda
_{i}-\left( n-1\right) \text{ and}\\\text{thus }\mu_{i}+n-i+j-1=\lambda
_{i}-\left( n-1\right) +n-i+j-1=\lambda_{i}-i+j\text{)}}} & =\sum_{k=1}
^{n}x_{k}^{\lambda_{i}-i+j}\\
& =x_{1}^{\lambda_{i}-i+j}+x_{2}^{\lambda_{i}-i+j}+\cdots+x_{n}^{\lambda
_{i}-i+j}=p_{\lambda_{i}-i+j},
\end{align*}
which happens to be precisely the $\left( i,j\right) $-th entry of the
matrix $P$. Thus, \eqref{darij1.eq.AB=P} follows.)
Now, the two matrices $A_{\mu}$ and $B$ are square matrices. Hence,
\begin{align*}
\det\left( A_{\mu}B\right) & =\underbrace{\det\left( A_{\mu}\right)
}_{\substack{=s_{\mu}V_{n}\\\text{(by \eqref{darij1.eq.slam=frac})}}
}\cdot\underbrace{\det B}_{=\left( -1\right) ^{n\left( n-1\right) /2}
V_{n}}\\
& =s_{\mu}V_{n}\cdot\left( -1\right) ^{n\left( n-1\right) /2}V_{n}=\left(
-1\right) ^{n\left( n-1\right) /2}s_{\mu}\cdot V_{n}^{2}.
\end{align*}
In view of \eqref{darij1.eq.AB=P}, we can rewrite this as
\begin{align*}
\det P=\left( -1\right) ^{n\left( n-1\right) /2}s_{\mu}\cdot V_{n}^{2}.
\end{align*}
This proves Theorem 1. $\blacksquare$
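For readers who want to spot-check Theorem 1, here is a small SymPy sketch (the helper name is mine) that builds $P$, computes $s_{\mu}$ via the alternant formula (1), and compares both sides for a given $\lambda$:

```python
import sympy as sp

def check_theorem1(lam):
    """Verify det P == (-1)^(n(n-1)/2) * s_mu * V_n^2 for a partition lam
    of length n with lam[n-1] >= n-1, computing s_mu from the alternant."""
    n = len(lam)
    xs = sp.symbols(f"x1:{n + 1}")
    def p(k):                          # power sum p_k, with p_0 = n
        return sum(x ** k for x in xs)
    # P = (p_{lam_i - i + j}) with 1-based i, j
    P = sp.Matrix(n, n, lambda i, j: p(lam[i] + j - i))
    mu = [part - (n - 1) for part in lam]
    # alternant matrix A_mu = (x_j^{mu_i + n - i})
    A = sp.Matrix(n, n, lambda i, j: xs[j] ** (mu[i] + n - 1 - i))
    V = sp.Mul(*[xs[i] - xs[j] for i in range(n) for j in range(i + 1, n)])
    s_mu = sp.cancel(A.det() / V)      # Schur polynomial via (1)
    rhs = (-1) ** (n * (n - 1) // 2) * s_mu * V ** 2
    return sp.expand(P.det() - rhs) == 0
```

For instance, $\lambda=(3,2)$ gives $\det P=-xy(x-y)^{2}(x+y)$ and $s_{(2,1)}=xy(x+y)$, matching the claimed identity.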
The claim of Theorem 1 can further be rewritten by observing that (in $n$
indeterminates $x_{1},x_{2},\ldots,x_{n}$) we have
\begin{align*}
s_{\lambda}=s_{\mu}\cdot\left( x_{1}x_{2}\cdots x_{n}\right) ^{n-1}
\end{align*}
(because the entries of $\lambda$ are the respective entries of $\mu$ plus
$n-1$). The product $x_{1}x_{2}\cdots x_{n}$ can also be rewritten as
$s_{\left( 1^{n}\right) }$, where $\left( 1^{n}\right) $ is the partition
$\left( 1,1,\ldots,1\right) $ with $n$ entries.
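This factorization is easy to confirm with the same alternant machinery; here is a small SymPy sketch (the helper name is mine):

```python
import sympy as sp

def check_factor(lam):
    """Check s_lam == s_mu * (x1 * x2 * ... * xn)^(n-1), where
    mu_i = lam_i - (n-1), assuming lam has length n and lam[n-1] >= n-1."""
    n = len(lam)
    xs = sp.symbols(f"x1:{n + 1}")
    V = sp.Mul(*[xs[i] - xs[j] for i in range(n) for j in range(i + 1, n)])
    def schur(nu):                     # Schur polynomial via the alternant
        A = sp.Matrix(n, n, lambda i, j: xs[j] ** (nu[i] + n - 1 - i))
        return sp.cancel(A.det() / V)
    mu = [part - (n - 1) for part in lam]
    return sp.expand(schur(lam) - schur(mu) * sp.Mul(*xs) ** (n - 1)) == 0
```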
Best Answer
This question is listed as a conjecture (Conjecture 7.4 in the section "Open questions") in a recent paper of Cuttler, Greene and Skandera.