Ok, I have a functional generalization of your Product-Sum conjecture using a very simple method.
Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be any function with a nonnegative $\binom{n}{2}$th derivative. I claim that we have the following functional inequality:
$\sum_{\pi\in S_n} (-1)^{\sigma(\pi)}f(\sum_i a_ib_{\pi(i)}) \ge 0$.
Plugging in $f(x) = -(-1)^{\binom{n}{2}}\log(x)$, we see that your inequality holds as long as $\sum_i a_ib_{n+1-i} \ge 0$.
To prove the functional inequality, it suffices (by continuity, after rescaling the $a_i$ and translating the argument of $f$) to prove it in the case where the $b_i$ are all distinct positive integers, so assume from now on that this is the case. Let $x_i = e^{a_i}$. Note first that in the special case in which $f(x) = e^x$, we get that
$\sum_{\pi\in S_n} (-1)^{\sigma(\pi)}\prod_ix_i^{b_{\pi(i)}} = \det((x_i^{b_j})_{i,j})$
which is $\prod_{i\lt j}(x_i-x_j)$ times the Schur polynomial $s_{(b_1-(n-1),\,b_2-(n-2),\,\ldots,\,b_n)}(x_1,\ldots,x_n)$ (with the $b_i$ listed in decreasing order, so that the index is a partition), and, as is well known, Schur polynomials have all of their coefficients nonnegative. For every monomial $x^m = \prod_i x_i^{m_i}$, let $c_m$ be its coefficient in this Schur polynomial.
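This determinant identity is easy to sanity-check numerically. The sketch below is pure Python; the values $n=3$, $b=(4,2,1)$, and the evaluation point $(2,3,5)$ are illustrative choices, not from the original, and the Schur polynomial is computed independently by enumerating semistandard tableaux:

```python
from itertools import product as cartesian
from math import prod

def det(M):
    # cofactor expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def schur(mu, xs):
    # s_mu(x_1..x_n) as a sum over semistandard Young tableaux:
    # rows weakly increase left to right, columns strictly increase downward
    cells = [(r, c) for r, row_len in enumerate(mu) for c in range(row_len)]
    total = 0
    for fill in cartesian(range(1, len(xs) + 1), repeat=len(cells)):
        T = dict(zip(cells, fill))
        if all((c == 0 or T[(r, c - 1)] <= v)
               and (r == 0 or T[(r - 1, c)] < v)
               for (r, c), v in T.items()):
            total += prod(xs[v - 1] for v in fill)
    return total

n, b, xs = 3, (4, 2, 1), (2, 3, 5)
lam = tuple(b[i] - (n - 1 - i) for i in range(n))    # (2, 1, 1)
lhs = det([[x ** e for e in b] for x in xs])         # det(x_i^{b_j})
vandermonde = prod(xs[i] - xs[j] for i in range(n) for j in range(i + 1, n))
print(lhs, vandermonde * schur(lam, xs))             # equal
```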
Next, let $S_a$ be the shift operator; that is, $S_a(f)(x) = f(x+a)$. Then it is easy to check that we have
$\sum_{\pi\in S_n} (-1)^{\sigma(\pi)}f(\sum_i a_ib_{\pi(i)}) =\sum_mc_m(\prod_{i\lt j}(S_{a_i}-S_{a_j}))(f)(\sum_ia_im_i)$,
which is nonnegative since we have $(\prod_{i\lt j}(S_{a_i}-S_{a_j}))(f)(x) \ge 0$ for any $x$ and any function $f$ with nonnegative $\binom{n}{2}$th derivative.
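This last nonnegativity fact can be sanity-checked for $f=\exp$, all of whose derivatives are positive. The sketch below is pure Python with illustrative values of the $a_i$ and $x$; note that it takes the $a_i$ in decreasing order, which the sign of the operator product implicitly requires (swapping two $a_i$ flips its sign):

```python
from itertools import combinations
from math import exp, prod

def shift_product(a):
    # expand prod_{i<j} (S_{a_i} - S_{a_j}) into a signed sum of shifts,
    # using S_u S_v = S_{u+v}; returns {total shift: coefficient}
    op = {0.0: 1.0}
    for i, j in combinations(range(len(a)), 2):
        new = {}
        for s, c in op.items():
            for step, sign in ((a[i], 1.0), (a[j], -1.0)):
                new[s + step] = new.get(s + step, 0.0) + sign * c
        op = new
    return op

a, x = [1.5, 0.7, -0.3], 0.2          # a_1 > a_2 > a_3
applied = sum(c * exp(x + s) for s, c in shift_product(a).items())

# for f = exp, each factor (S_{a_i} - S_{a_j}) acts as the scalar e^{a_i} - e^{a_j}
expected = exp(x) * prod(exp(a[i]) - exp(a[j])
                         for i, j in combinations(range(3), 2))
print(applied, expected)   # both positive and (numerically) equal
```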
I don't know a fully general result, but your pattern persists for partitions $\lambda$ of length $\leq n$ with $n$-th entry $\lambda_{n}\geq n-1$, in $n$ indeterminates:
Theorem 1. Let $n$ be a positive integer. Let $\lambda=\left( \lambda
_{1},\lambda_{2},\ldots,\lambda_{n}\right) $ be an integer partition with at
most $n$ parts. Assume that $\lambda_{n}\geq n-1$. Consider polynomials in $n$
indeterminates $x_{1},x_{2},\ldots,x_{n}$. For each nonnegative integer $k$,
we set
\begin{align*}
p_{k}:=x_{1}^{k}+x_{2}^{k}+\cdots+x_{n}^{k}.
\end{align*}
(This is the $k$-th power-sum symmetric polynomial in $x_{1},x_{2}
,\ldots,x_{n}$ when $k>0$. We have $p_{0}=n$.) Define the $n\times n$-matrix
\begin{align*}
P:=\left( p_{\lambda_{i}-i+j}\right) _{1\leq i\leq n,\ 1\leq j\leq
n}=\left(
\begin{array}
[c]{cccc}
p_{\lambda_{1}} & p_{\lambda_{1}+1} & \cdots & p_{\lambda_{1}+n-1}\\
p_{\lambda_{2}-1} & p_{\lambda_{2}} & \cdots & p_{\lambda_{2}+n-2}\\
\vdots & \vdots & \ddots & \vdots\\
p_{\lambda_{n}-n+1} & p_{\lambda_{n}-n+2} & \cdots & p_{\lambda_{n}}
\end{array}
\right) .
\end{align*}
Let $\mu=\left( \mu_{1},\mu_{2},\ldots,\mu_{n}\right) $ be the partition
defined by
\begin{align*}
\mu_{i}=\lambda_{i}-\left( n-1\right) \ \ \ \ \ \ \ \ \ \ \text{for each
}i\in\left\{ 1,2,\ldots,n\right\} .
\end{align*}
(This is indeed a partition, since $\mu_{n}=\underbrace{\lambda_{n}}_{\geq
n-1}-\left( n-1\right) \geq0$.) Let $s_{\mu}$ be the corresponding Schur
polynomial in the $n$ indeterminates $x_{1},x_{2},\ldots,x_{n}$. Furthermore,
let
\begin{align*}
V_{n}:=\prod_{1\leq i<j\leq n}\left( x_{i}-x_{j}\right)
\end{align*}
be the Vandermonde determinant. Then,
\begin{align*}
\det P=\left( -1\right) ^{n\left( n-1\right) /2}s_{\mu}\cdot V_{n}^{2}.
\end{align*}
Proof. Let $A_{\mu}$ be the $n\times n$-matrix
\begin{align*}
\left( x_{j}^{\mu_{i}+n-i}\right) _{1\leq i\leq n,\ 1\leq j\leq n}=\left(
\begin{array}
[c]{cccc}
x_{1}^{\mu_{1}+n-1} & x_{2}^{\mu_{1}+n-1} & \cdots & x_{n}^{\mu_{1}+n-1}\\
x_{1}^{\mu_{2}+n-2} & x_{2}^{\mu_{2}+n-2} & \cdots & x_{n}^{\mu_{2}+n-2}\\
\vdots & \vdots & \ddots & \vdots\\
x_{1}^{\mu_{n}+n-n} & x_{2}^{\mu_{n}+n-n} & \cdots & x_{n}^{\mu_{n}+n-n}
\end{array}
\right) .
\end{align*}
It is well known that
\begin{equation}
s_{\mu}=\dfrac{\det\left( A_{\mu}\right) }{V_{n}}
.
\label{darij1.eq.slam=frac}
\tag{1}
\end{equation}
Indeed, this is the alternant formula for Schur polynomials. For a proof, see,
e.g., Corollary 2.6.7 in the lecture notes Darij Grinberg and Victor Reiner,
Hopf Algebras in Combinatorics,
arXiv:1409.8356v7. (The notations in those
notes are not quite ours. Namely, our matrix $A_{\mu}$ is the transpose of the
matrix whose determinant is $a_{\mu+\rho}$ in the notes, whereas our $V_{n}$
is $a_{\rho}$ in these notes. Corollary 2.6.7 has to be applied to $\mu$
instead of $\lambda$.)
Let $B$ be the $n\times n$-matrix
\begin{align*}
\left( x_{i}^{j-1}\right) _{1\leq i\leq n,\ 1\leq j\leq n}=\left(
\begin{array}
[c]{cccc}
1 & x_{1} & \cdots & x_{1}^{n-1}\\
1 & x_{2} & \cdots & x_{2}^{n-1}\\
\vdots & \vdots & \ddots & \vdots\\
1 & x_{n} & \cdots & x_{n}^{n-1}
\end{array}
\right) .
\end{align*}
The Vandermonde determinant formula yields
\begin{align*}
\det B & =\prod_{1\leq i<j\leq n}\underbrace{\left( x_{j}-x_{i}\right)
}_{=-\left( x_{i}-x_{j}\right) }=\prod_{1\leq i<j\leq n}\left( -\left(
x_{i}-x_{j}\right) \right) \\
& =\left( -1\right) ^{n\left( n-1\right) /2}\underbrace{\prod_{1\leq
i<j\leq n}\left( x_{i}-x_{j}\right) }_{=V_{n}}=\left( -1\right) ^{n\left(
n-1\right) /2}V_{n}.
\end{align*}
Furthermore, we have
\begin{equation}
A_{\mu}B=P.
\label{darij1.eq.AB=P}
\tag{2}
\end{equation}
(Indeed, for any $i,j\in\left\{ 1,2,\ldots,n\right\} $, the $\left(
i,j\right) $-th entry of the matrix $A_{\mu}B$ is
\begin{align*}
\sum_{k=1}^{n}\underbrace{x_{k}^{\mu_{i}+n-i}x_{k}^{j-1}}_{\substack{=x_{k}
^{\mu_{i}+n-i+j-1}=x_{k}^{\lambda_{i}-i+j}\\\text{(since }\mu_{i}=\lambda
_{i}-\left( n-1\right) \text{ and}\\\text{thus }\mu_{i}+n-i+j-1=\lambda
_{i}-\left( n-1\right) +n-i+j-1=\lambda_{i}-i+j\text{)}}} & =\sum_{k=1}
^{n}x_{k}^{\lambda_{i}-i+j}\\
& =x_{1}^{\lambda_{i}-i+j}+x_{2}^{\lambda_{i}-i+j}+\cdots+x_{n}^{\lambda
_{i}-i+j}=p_{\lambda_{i}-i+j},
\end{align*}
which happens to be precisely the $\left( i,j\right) $-th entry of the
matrix $P$. Thus, \eqref{darij1.eq.AB=P} follows.)
Now, the two matrices $A_{\mu}$ and $B$ are square matrices. Hence,
\begin{align*}
\det\left( A_{\mu}B\right) & =\underbrace{\det\left( A_{\mu}\right)
}_{\substack{=s_{\mu}V_{n}\\\text{(by \eqref{darij1.eq.slam=frac})}}
}\cdot\underbrace{\det B}_{=\left( -1\right) ^{n\left( n-1\right) /2}
V_{n}}\\
& =s_{\mu}V_{n}\cdot\left( -1\right) ^{n\left( n-1\right) /2}V_{n}=\left(
-1\right) ^{n\left( n-1\right) /2}s_{\mu}\cdot V_{n}^{2}.
\end{align*}
In view of \eqref{darij1.eq.AB=P}, we can rewrite this as
\begin{align*}
\det P=\left( -1\right) ^{n\left( n-1\right) /2}s_{\mu}\cdot V_{n}^{2}.
\end{align*}
This proves Theorem 1. $\blacksquare$
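Theorem 1 is also easy to check numerically. The pure-Python sketch below uses the illustrative values $n=3$, $\lambda=(4,3,2)$ (so $\lambda_n = n-1$) and the evaluation point $(x_1,x_2,x_3)=(2,3,5)$, computing $s_{\mu}$ independently by enumerating semistandard tableaux:

```python
from itertools import product as cartesian
from math import prod

def det(M):
    # cofactor expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def schur(mu, xs):
    # s_mu by summing over semistandard Young tableaux
    cells = [(r, c) for r, row_len in enumerate(mu) for c in range(row_len)]
    total = 0
    for fill in cartesian(range(1, len(xs) + 1), repeat=len(cells)):
        T = dict(zip(cells, fill))
        if all((c == 0 or T[(r, c - 1)] <= v)
               and (r == 0 or T[(r - 1, c)] < v)
               for (r, c), v in T.items()):
            total += prod(xs[v - 1] for v in fill)
    return total

n, lam, xs = 3, (4, 3, 2), (2, 3, 5)
mu = tuple(l - (n - 1) for l in lam)                  # (2, 1, 0)
p = lambda k: sum(x ** k for x in xs)                 # power sums
P = [[p(lam[i] - i + j) for j in range(n)] for i in range(n)]
V = prod(xs[i] - xs[j] for i in range(n) for j in range(i + 1, n))
sign = (-1) ** (n * (n - 1) // 2)
print(det(P), sign * schur(mu, xs) * V ** 2)          # equal
```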
The claim of Theorem 1 can further be rewritten by observing that (in $n$
indeterminates $x_{1},x_{2},\ldots,x_{n}$) we have
\begin{align*}
s_{\lambda}=s_{\mu}\cdot\left( x_{1}x_{2}\cdots x_{n}\right) ^{n-1}
\end{align*}
(because the entries of $\lambda$ are the respective entries of $\mu$ plus
$n-1$). The product $x_{1}x_{2}\cdots x_{n}$ can also be rewritten as
$s_{\left( 1^{n}\right) }$, where $\left( 1^{n}\right) $ is the partition
$\left( 1,1,\ldots,1\right) $ with $n$ entries.
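This factorization can be checked numerically as well; the following pure-Python sketch uses the illustrative values $n=3$, $\lambda=(4,3,2)$, $\mu=(2,1,0)$, and the evaluation point $(2,3,5)$:

```python
from itertools import product as cartesian
from math import prod

def schur(mu, xs):
    # s_mu by summing over semistandard Young tableaux
    cells = [(r, c) for r, row_len in enumerate(mu) for c in range(row_len)]
    total = 0
    for fill in cartesian(range(1, len(xs) + 1), repeat=len(cells)):
        T = dict(zip(cells, fill))
        if all((c == 0 or T[(r, c - 1)] <= v)
               and (r == 0 or T[(r - 1, c)] < v)
               for (r, c), v in T.items()):
            total += prod(xs[v - 1] for v in fill)
    return total

n, xs = 3, (2, 3, 5)
lam = (4, 3, 2)
mu = tuple(l - (n - 1) for l in lam)          # (2, 1, 0)
print(schur(lam, xs),
      schur(mu, xs) * prod(xs) ** (n - 1))    # equal
```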
Best Answer
Here is how I think it should go. First,
$\mathrm{Sym}\left(V\oplus \wedge^2 V\right)\cong \mathrm{Sym}\left(V\right)\otimes \mathrm{Sym}\left(\wedge^2 V\right)$
Then we have
$\mathrm{Sym}\left(\wedge^2 V\right)\cong \sum\limits_{\lambda} \mathrm{Schur}_{\lambda}\left(V\right)$
where the sum is over partitions $\lambda$ such that all parts of the conjugate partition are even. This came up in the question “Symmetric tensor products of irreducible representations”.
The tensor product $\mathrm{Sym}\left(V\right)\otimes \mathrm{Schur}_{\lambda}\left(V\right)$ is known by Pieri's rule.
Now, given a partition, we take the maximal subdiagram in which every column has an even number of boxes. The complement is a skew shape with at most one box in each column.
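A small pure-Python illustration of this decomposition (the partition $(3,2,2,1)$ is an illustrative choice): rounding each column height down to an even number gives the maximal even-column subdiagram, and the discarded boxes, at most one per column, form the complementary skew shape.

```python
def conjugate(lam):
    # column heights of the Young diagram of lam
    return [sum(1 for part in lam if part > c) for c in range(lam[0])]

lam = (3, 2, 2, 1)
cols = conjugate(lam)                  # [4, 3, 1]
core_cols = [h - h % 2 for h in cols]  # [4, 2, 0]: maximal even-column subdiagram
leftover = [h % 2 for h in cols]       # [0, 1, 1]: at most one box per column
print(cols, core_cols, leftover)
```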
Further comment: In response to the request for representation-theoretic proofs of the results used, see
MR1606831 (99b:20073) Goodman, Roe ; Wallach, Nolan R. Representations and invariants of the classical groups. Encyclopedia of Mathematics and its Applications, 68. Cambridge University Press, Cambridge, 1998. xvi+685 pp. ISBN: 0-521-58273-3; 0-521-66348-2
In particular, see 9.2.2 (reciprocity rules) for Pieri's rule, and 5.2.6 for the decomposition of $\mathrm{Sym}\left(\wedge^2 V\right)$. The highest weight vectors are constructed using Pfaffians.