A simple general formula for your matrix $C_n=(C_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$, where $\mathcal P_n$ denotes the set of partitions of $n$ (ordered in decreasing lexicographic order), probably does not exist. However, a number of things can be said; notably, your guesses above can be confirmed. One thing your examples suggest, but which is false, is that the matrix is "upper-left triangular"; this fails from $n=6$ on. The reason is that the true triangularity is $C_{\lambda,\mu}\neq0\implies \lambda^{tr}\leq\mu$ in the dominance order, where $\lambda^{tr}$ is the transpose (conjugate) partition of $\lambda$, and for $n\geq6$ lexicographic order does not assign complementary positions to $\lambda$ and $\lambda^{tr}$ (nor does any ordering for $n$ sufficiently large).
As you guessed, it is easier to study the inverse matrix $C_n^{-1}=(M_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$, whose entry $M_{\lambda,\mu}$ gives the coefficient of $m_\mu$ in $e_\lambda$. This nonnegative number equals the number of $0$-$1$ matrices with row sums $\lambda_1,\lambda_2,\ldots$ and column sums $\mu_1,\mu_2,\ldots$, and in particular $M$ is a symmetric matrix (Stanley EC2, Proposition 7.4.1, Corollary 7.4.2). The argument for this combinatorial interpretation is essentially the balls-into-boxes one you suggested: each term in $e_\lambda$ comes from choosing a monomial in each factor $e_{\lambda_i}$, and this choice can be represented by putting in row $i$ a $1$ (ball) in the selected columns and zeros elsewhere; the product monomial is found by summing up the columns (and by symmetry only those monomials with weakly decreasing exponents (column sums) need to be considered). Since $M_n$ is symmetric and $C_n$ is its inverse, an easy argument shows that $C_n$ is also symmetric.
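This combinatorial interpretation is easy to test for tiny $n$ by brute force. The following sketch (plain Python; `count_01_matrices` and the ordering of `parts3` are my own choices) counts $0$-$1$ matrices with prescribed row and column sums and assembles $M_3$:

```python
from itertools import combinations, product

def count_01_matrices(lam, mu):
    """Count 0-1 matrices with row sums lam and column sums mu."""
    ncols = len(mu)
    def rows_with_sum(s):
        # every 0-1 row of length ncols having exactly s ones
        for ones in combinations(range(ncols), s):
            yield tuple(1 if c in ones else 0 for c in range(ncols))
    count = 0
    for rows in product(*(list(rows_with_sum(s)) for s in lam)):
        if all(sum(col) == m for col, m in zip(zip(*rows), mu)):
            count += 1
    return count

parts3 = [(3,), (2, 1), (1, 1, 1)]  # partitions of 3, decreasing lex order
M3 = [[count_01_matrices(l, m) for m in parts3] for l in parts3]
# M3 == [[0, 0, 1], [0, 1, 3], [1, 3, 6]], visibly symmetric
```

For instance the bottom-right entry $6=3!$ counts the permutation matrices of size $3$, matching the remark below that these counts grow up to $n!$.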
You suggested that the sum of all entries of $C_n$ is $0$. This fails for
$n=0,1$, but is true for all larger values, which can be seen as follows. By
the definition of $C_n$, if one takes its column sums (equivalently, if one
left-multiplies by $(1~1\cdots1)$), then one gets the
coefficients of the $e_\mu$ in $h_n=\sum_{\lambda\in\mathcal P_n}m_\lambda$, the
$n$-th complete homogeneous symmetric function (sum of all distinct monomials
of degree $n$). One would like to know the sum of those coefficients. Now the
relation between elementary and complete homogeneous symmetric functions is
given by the generating series identity
$$
(1-e_1X+e_2X^2-e_3X^3+\cdots)(1+h_1X+h_2X^2+\cdots)=1
$$
or equivalently $\sum_{i=0}^n(-1)^ie_ih_{n-i}=0^n$. You can prove that the
mentioned sum is $0$ for $n\geq2$ by induction from the latter equation. A more
conceptual way is to use the generating series identity and the algebraic
independence of the $e_i$ for $i\geq1$, which means there is a ring morphism
sending all of them to $1$, and the obvious fact that
$$
(1-X+X^2-X^3+\cdots)^{-1}=1+X
$$
This shows that the mentioned morphism sends $h_0$ and $h_1$ to $1$ and all
other $h_i$ to $0$; this value is precisely the sum of coefficients of all
$e_\lambda$ we were after.
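One can check the vanishing sum numerically for small $n$; here is a SymPy sketch for $n=3$, where the matrix $M_3$ (coefficient of $m_\mu$ in $e_\lambda$, obtained by counting $0$-$1$ matrices as described above) is taken as given:

```python
from sympy import Matrix

# M_3, partitions ordered (3), (2,1), (1,1,1); entries are the
# counts of 0-1 matrices with prescribed row and column sums
M3 = Matrix([[0, 0, 1],
             [0, 1, 3],
             [1, 3, 6]])

C3 = M3.inv()    # C_3: expands the m_lambda in the e_mu, exactly over QQ
total = sum(C3)  # sum of all entries of C_3; should be 0 since n >= 2
```

Inverting over the rationals with SymPy keeps the entries exact; the resulting $C_3$ is again symmetric, as claimed.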
Finally for the computation of the matrix $C_n$, it also seems best to view it
as the inverse of the matrix $(M_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$,
which becomes unitriangular after permuting its columns according to the
transposition of partitions. However $M_{\lambda,\mu}$ is best computed not by
counting $0$-$1$ matrices (their numbers grow up to $n!$ in the bottom-right
corner), but rather via
$$
M_{\lambda,\mu}=
\sum_{\lambda\leq\nu\leq\mu^{tr}}K_{\nu,\lambda}K_{\nu^{tr},\mu}
$$
where $K_{\nu,\lambda}$ designates a Kostka number, the number of
semi-standard Young tableaux of shape $\nu$ and weight (content) $\lambda$.
This identity is proved bijectively by the asymmetric RSK-correspondence,
a bijective correspondence between $0$-$1$ matrices and pairs of
semi-standard tableaux of transpose shapes, their weights being given
respectively by the column sums and the row sums of the matrix. The Kostka
numbers involved are generally quite a bit smaller than $M_{\lambda,\mu}$, and
moreover there are ways to compute them without counting; one method is to
interpret them as weight multiplicities for $GL_n$ and use a formula like
Freudenthal's recurrence for them. (The LiE program which I maintain does so,
and will do this computation easily; one can get results online up to $n=9$ from
this site, if one takes into account the fact that a partition is represented by the
sequence of differences of successive parts: $[4,2,2,1,0,0,0,0,0]$ is
represented as $[2,0,1,1,0,0,0,0]$.)
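For small shapes the identity can be verified directly by counting SSYT by brute force. A throwaway Python sketch (`kostka` and `transpose` are my own helper names; the sum below runs over all of $\mathcal P_3$, the terms outside the stated bounds vanishing anyway):

```python
from itertools import permutations

def kostka(shape, weight):
    """Kostka number: count semistandard Young tableaux of the given
    shape and weight, by brute force (fine for tiny partitions)."""
    entries = [i + 1 for i, w in enumerate(weight) for _ in range(w)]
    cells = [(r, c) for r, rl in enumerate(shape) for c in range(rl)]
    count = 0
    for filling in set(permutations(entries)):
        t = dict(zip(cells, filling))
        rows_ok = all(t[r, c] <= t[r, c + 1]
                      for (r, c) in cells if (r, c + 1) in t)
        cols_ok = all(t[r, c] < t[r + 1, c]
                      for (r, c) in cells if (r + 1, c) in t)
        if rows_ok and cols_ok:
            count += 1
    return count

def transpose(shape):
    """Conjugate (transpose) partition."""
    return tuple(sum(1 for p in shape if p > j) for j in range(shape[0]))

parts3 = [(3,), (2, 1), (1, 1, 1)]
# M_{lam,mu} = sum over nu of K_{nu,lam} * K_{nu^tr,mu}
M3 = [[sum(kostka(nu, lam) * kostka(transpose(nu), mu) for nu in parts3)
       for mu in parts3] for lam in parts3]
```

This reproduces the same matrix $M_3$ as counting $0$-$1$ matrices directly, while only ever enumerating tableaux.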
One could either compute $M_{\lambda,\mu}$ this way and invert the result, or
invert the matrix of Kostka numbers and deduce from the above formula, which
can be interpreted as a matrix product, the formula
$$
C_{\lambda,\mu}=
\sum_{\lambda^{tr}\leq\nu\leq\mu}K'_{\lambda,\nu^{tr}}K'_{\mu,\nu}
$$
where $(K'_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$ is the inverse of the
Kostka matrix $(K_{\lambda,\mu})_{\lambda,\mu\in\mathcal P_n}$. I don't know
very much about these "inverse Kostka numbers", but you will find some
information about them in the answers to this MO question; I'm not sure this allows them to be computed more efficiently than by inverting
the Kostka matrix.
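For $n=3$ this route is small enough to check by machine. A SymPy sketch (partitions ordered $(3),(2,1),(1,1,1)$; the sum runs over all $\nu\in\mathcal P_3$, the out-of-range terms vanishing):

```python
from sympy import Matrix

parts3 = [(3,), (2, 1), (1, 1, 1)]                        # partitions of 3
tr = {(3,): (1, 1, 1), (2, 1): (2, 1), (1, 1, 1): (3,)}   # conjugation
idx = {p: i for i, p in enumerate(parts3)}

# Kostka matrix for n = 3 (K_{lam,mu}: SSYT of shape lam and weight mu)
K = Matrix([[1, 1, 1],
            [0, 1, 2],
            [0, 0, 1]])
Kp = K.inv()  # the "inverse Kostka numbers" K'

# C_{lam,mu} = sum over nu of K'_{lam,nu^tr} * K'_{mu,nu}
C = Matrix(3, 3, lambda i, j: sum(
    Kp[i, idx[tr[nu]]] * Kp[j, idx[nu]] for nu in parts3))
```

Since $K$ is unitriangular, its inverse is integral, and the product formula returns a symmetric integer matrix, as it must.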
What I observed is that the actual difficulty lies in the definition of the discriminant of a monic polynomial. Below is a way to prove the desired proposition by first defining the discriminant of a monic polynomial properly, in the following way:
Let us first state the following theorem, due to Jacobi, without proof (the proof is quite simple though!).
Theorem: Let $V = V(X_1,X_2, \cdots , X_n) = \prod\limits_{1 \leq i < j \leq n} (X_j - X_i) \in K[X_1,X_2, \cdots , X_n]$ be the Vandermonde determinant in the $n$ unknowns $X_1,X_2, \cdots , X_n.$ Then for any $\sigma \in S_n$ $$\sigma (V) = \text{sgn} (\sigma)\ V$$ where $\text {sgn} (\sigma)$ is defined as follows:
$$
\text {sgn} (\sigma) = \left\{
\begin{array}{ll}
1 & \quad \text {if}\ \sigma\ \text {is even} \\
-1 & \quad \text {if}\ \sigma\ \text{is odd}
\end{array}
\right.
$$
With the help of the above theorem it is easy to see that $D(f_n),$ the discriminant of the general monic polynomial of degree $n,$ is fixed by every permutation $\sigma \in S_n.$ Indeed, $D(f_n) = V^2 = \prod\limits_{1 \leq i < j \leq n} (X_j - X_i)^2 \in K[X_1,X_2, \cdots , X_n],$ and any $\sigma \in S_n$ extends to an automorphism of $K(X_1,X_2, \cdots ,X_n)$ defined by $X_i \mapsto X_{\sigma(i)}$ for all $i=1,2,\cdots , n$ and fixing every element of $K$; then $\sigma (D(f_n)) = \sigma (V^2) = {\sigma (V)}^2 = V^2,$ because ${\text {sgn}(\sigma)}^2 = 1$ for any permutation $\sigma \in S_n.$ This shows that $D(f_n)$ is a symmetric polynomial in $X_1,X_2, \cdots , X_n.$ So by the Fundamental Theorem of Symmetric Polynomials (also known as Newton's theorem) there exists $D \in K[Y_1,Y_2, \cdots , Y_n]$ such that $D(f_n) = D(S_1,S_2, \cdots , S_n),$ where $S_i$ is the $i$-th elementary symmetric polynomial in $X_1,X_2, \cdots , X_n.$
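For instance, for $n=2$ the polynomial $D$ can be written down explicitly:

```latex
% n = 2: express D(f_2) through S_1 = X_1 + X_2 and S_2 = X_1 X_2
D(f_2) = (X_2 - X_1)^2 = (X_1 + X_2)^2 - 4 X_1 X_2 = S_1^2 - 4 S_2,
\qquad\text{so}\quad D(Y_1, Y_2) = Y_1^2 - 4 Y_2 .
```

With $a_1 = -S_1$ and $a_2 = S_2$ this recovers the familiar $b^2-4c$ for $X^2+bX+c$.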
Now let $f = X^n + a_1 X^{n-1} + \cdots + a_n \in K[X]$ be a monic polynomial. Let us denote the discriminant of $f$ by $\text {Disc} (f)$ (to avoid confusion with the $D$ already defined). Then $\text {Disc} (f)$ is defined as follows: $$\text {Disc} (f) : = D(-a_1, \cdots , (-1)^i a_i, \cdots , (-1)^na_n).$$
With this revised definition of the discriminant of a monic polynomial it is now very easy to prove the desired proposition.
Let $x_1,x_2, \cdots , x_n$ be the zeros of $f$ lying in some finite field extension $L|K.$ Then we first note that $$S_r (x_1,x_2, \cdots , x_n) = (-1)^r a_r$$ for $r=1,2, \cdots , n.$ Then we have $$\begin{align*} \prod\limits_{1 \leq i < j \leq n} (x_j - x_i)^2 & = D(f_n) (x_1,x_2, \cdots , x_n)\\ & = D(S_1(x_1,x_2, \cdots , x_n), S_2(x_1,x_2, \cdots , x_n), \cdots , S_n(x_1,x_2, \cdots , x_n))\\ & = D(-a_1, \cdots , (-1)^i a_i , \cdots , (-1)^na_n)\\ & = \text {Disc} (f). \end{align*}$$
So we have $\text {Disc} (f) = \prod\limits_{1 \leq i < j \leq n} (x_j - x_i)^2 = {V(x_1,x_2, \cdots , x_n)}^2,$ as required.
This completes the proof of the proposition. $\blacksquare$
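As a machine check of this definition (not part of the proof): for a monic polynomial, SymPy's `discriminant` agrees with the squared product of root differences. A sketch for the general monic cubic:

```python
from math import prod
from sympy import symbols, expand, discriminant, simplify

X, x1, x2, x3 = symbols('X x1 x2 x3')
roots = [x1, x2, x3]

# general monic cubic with roots x1, x2, x3
f = expand(prod(X - r for r in roots))

# V(x1, x2, x3)^2: the product of squared root differences
V2 = expand(prod((roots[j] - roots[i])**2
                 for i in range(3) for j in range(i + 1, 3)))

# Disc(f) equals V^2, as in the proposition
assert simplify(discriminant(f, X) - V2) == 0
```

The same check works for any fixed degree, though the symbolic expressions grow quickly.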
The implementation in Sage is for symmetric functions, rather than symmetric polynomials. A symmetric function is a formal power series in a countable number of variables.
Symmetric functions are indexed by partitions, but the lengths of these partitions are not bounded; consequently, trailing zeros are ignored. For instance, $$s_{(1,0,0,...)} = s_{(1)} = x_1 + x_2 + x_3 + \cdots.$$ In Sage, this is
s[1]
If you want to work exclusively with polynomials in, say, $n$ variables, you could work with symmetric functions and ignore any $s_\lambda$ indexed by a partition of length greater than $n$ (the length being the number of nonzero entries of the partition). Alternatively, one can see the explicit polynomial with the command
f.expand(n)
For example, `s[1].expand(3)` prints `x0 + x1 + x2`.