Note that $\Delta_n(x_1^m,\ldots,x_n^m)=\det(x_i^{k_j})$, where $k_j:=(j-1)m$. Two key observations are the following (both work for arbitrary non-negative integers $k_1<k_2<\ldots<k_n$, if $\sum c_j=\sum k_j+K$):
- (algebraic) $$\left[x_1^{c_1}x_2^{c_2}\ldots x_n^{c_n}\right](x_1+\ldots+x_n)^K\det(x_i^{k_j})=\frac{K!}{\prod c_j!}\det(c_i^{\underline{k_j}}),\quad\quad\quad(\heartsuit)$$
where we use Knuth's notation $x^{\underline{n}}=x(x-1)\ldots(x-n+1)$. To prove $(\heartsuit)$, replace the polynomial $F:=(x_1+\ldots+x_n)^K\det(x_i^{k_j})$ with $$\tilde{F}=
\left(x_1+\ldots+x_n-\sum k_j\right)^{\underline{K}}\det(x_i^{\underline{k_j}})$$
(which of course has the same top-degree coefficients as $F$)
and apply the coefficient formula cited above with the sets $A_i=\{0,1,\ldots,c_i\}$. Note that if $a_i\in A_i$ and $\tilde{F}(a_1,\ldots,a_n)\ne 0$, then, expanding the determinant, we see that there must exist a permutation $\pi$ for which $a_i(a_i-1)\ldots (a_i-k_{\pi_i}+1)\ne 0$, i.e., $a_i\geqslant k_{\pi_i}$; in particular $\sum a_j\geqslant \sum k_j$. Further, $\sum a_j-\sum k_j$ is a non-negative integer which, since $\left(\sum a_j-\sum k_j\right)^{\underline{K}}\ne 0$, cannot equal any of $0,1,\ldots,K-1$; thus $\sum a_j\geqslant \sum k_j+K=\sum c_j$. But $a_j\leqslant c_j$ for all $j$, so all this is possible only if $a_j=c_j$ for all $j$. Therefore the RHS of the cited formula contains at most one non-zero term, and this immediately gives $(\heartsuit)$.
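The identity $(\heartsuit)$ is easy to sanity-check by computer for small parameters. Here is a quick sketch using sympy; the parameter values $n=3$, $k=(0,2,3)$, $c=(1,4,5)$ are an arbitrary choice of mine, not from the original argument:

```python
# Sketch: numerically verify (♥) for one small choice of parameters.
from math import factorial, prod

import sympy

n = 3
k = [0, 2, 3]            # k_1 < k_2 < k_3
c = [1, 4, 5]            # target exponents c_1, c_2, c_3
K = sum(c) - sum(k)      # the condition sum(c_j) = sum(k_j) + K

x = sympy.symbols(f'x1:{n + 1}')

# LHS: the coefficient of x1^c1 * x2^c2 * x3^c3 in (x1+...+xn)^K * det(x_i^{k_j}).
F = sympy.expand(sum(x) ** K * sympy.Matrix(n, n, lambda i, j: x[i] ** k[j]).det())
mono = x[0] ** c[0] * x[1] ** c[1] * x[2] ** c[2]
lhs = sympy.Poly(F, *x).coeff_monomial(mono)

# RHS: K!/(c_1!...c_n!) * det(c_i^{falling k_j}), with sympy.ff the falling factorial.
det_ff = sympy.Matrix(n, n, lambda i, j: sympy.ff(c[i], k[j])).det()
rhs = sympy.Rational(factorial(K), prod(factorial(ci) for ci in c)) * det_ff

print(lhs, rhs)   # the two sides agree
```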
- (combinatorial) If $c_1<c_2<\ldots<c_n$ are non-negative integers and $\sum c_j=\sum k_j+K$, then both parts of $(\heartsuit)$, which we denote by $\Theta(c_1,\ldots,c_n)$, are equal to the number of paths from the point $(k_1,\ldots,k_n)$ to the point $(c_1,\ldots,c_n)$, in which each of the $K$ steps consists of increasing exactly one coordinate by 1, and all intermediate points belong to the domain $\{(t_1,\ldots,t_n):t_1<t_2<\ldots<t_n\}$. This may be proved by induction on $K$ for fixed $k_1,\ldots,k_n$. The base case $K=0$ is clear. For the induction step, note that $\Theta$ satisfies the recurrence
$$
\Theta(c_1,\ldots,c_n)=\Theta(c_1-1,c_2,\ldots,c_n)+
\Theta(c_1,c_2-1,\ldots,c_n)+\ldots+\Theta(c_1,c_2,\ldots,c_n-1),
$$
which corresponds to choosing the summand taken from the last factor $x_1+\ldots+x_n$. In the RHS we may remove all summands in which $\Theta$ has two equal arguments (say, if $c_5-1=c_4$, remove the fifth summand): they are equal to 0 anyway, as is seen from the RHS of $(\heartsuit)$ (or from general properties of antisymmetric polynomials, if you prefer). Now exactly the same recurrence is satisfied by the number of paths (consider the last step). This proves the induction step.
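The recurrence gives an immediate, if inefficient, way to compute $\Theta$ by memoized recursion. The sketch below, with small parameters chosen arbitrarily by me, checks the path count against the determinant side in the form $K!\det(c_i^{\underline{k_j}})/\prod c_j!$:

```python
# Sketch: count the paths Θ(c_1,...,c_n) via the recurrence and compare with
# the falling-factorial determinant. Helper names are ad hoc.
from functools import lru_cache
from math import factorial, prod

def falling(x, m):
    """Falling factorial x^{underline m} = x(x-1)...(x-m+1)."""
    out = 1
    for t in range(m):
        out *= x - t
    return out

def det(M):
    """Determinant by expansion along the first row (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def n_paths(k, c):
    """Unit-step paths from k to c whose points satisfy t_1 < t_2 < ... < t_n."""
    @lru_cache(maxsize=None)
    def rec(pt):
        if pt == k:
            return 1
        # prune points outside the domain, or unreachable from k
        if any(a >= b for a, b in zip(pt, pt[1:])) or any(
                p < q for p, q in zip(pt, k)):
            return 0
        # peel off the last step: some coordinate was the last one increased
        return sum(rec(pt[:i] + (pt[i] - 1,) + pt[i + 1:]) for i in range(len(pt)))
    return rec(tuple(c))

k, c = (0, 2, 3), (1, 4, 5)
K = sum(c) - sum(k)
paths = n_paths(k, c)
determinant_side = factorial(K) * det([[falling(ci, kj) for kj in k] for ci in c]) \
    // prod(factorial(ci) for ci in c)
print(paths, determinant_side)   # both counts coincide
```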
So, your question reduces either to computing the determinants or to computing the number of such paths. In combinatorics, such paths are traditionally viewed as skew standard Young tableaux: consider the Young diagram $\lambda$ with row lengths $c_1\leqslant c_2-1\leqslant c_3-2\leqslant \ldots \leqslant c_n-(n-1)$ and the diagram $\mu$ with row lengths $k_1\leqslant k_2-1\leqslant k_3-2\leqslant \ldots \leqslant k_n-(n-1)$. Then the path corresponds to getting $\lambda$ from $\mu$ by consecutively adding boxes, with the condition that at each step we again get a Young diagram. If you write the numbers $1,2,\ldots,K$ successively as the boxes are added, you get what is called a skew standard Young tableau of shape $\lambda/\mu$.
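To illustrate the correspondence, the following sketch converts a pair $(k_j)$, $(c_j)$ into the diagrams $\mu$ and $\lambda$ and counts the skew standard Young tableaux of shape $\lambda/\mu$ by brute-force enumeration of fillings; the sample data are my own choice:

```python
# Sketch: from (k_1,...,k_n) and (c_1,...,c_n) to the skew shape lambda/mu,
# then count standard fillings directly. Ad hoc names and sample data.
from itertools import permutations

k = (0, 2, 3)
c = (1, 4, 5)
n = len(k)

# Row lengths c_j-(j-1) and k_j-(j-1), listed shortest row first as in the
# text, then reversed into the usual weakly decreasing (English) order.
lam = [c[j] - j for j in range(n)][::-1]
mu = [k[j] - j for j in range(n)][::-1]

cells = [(i, j) for i in range(n) for j in range(mu[i], lam[i])]
K = len(cells)

def is_standard(filling):
    """Entries must increase along rows and down columns of the skew shape."""
    for (i, j), v in filling.items():
        if (i, j + 1) in filling and filling[(i, j + 1)] < v:
            return False
        if (i + 1, j) in filling and filling[(i + 1, j)] < v:
            return False
    return True

count = sum(1 for perm in permutations(range(1, K + 1))
            if is_standard(dict(zip(cells, perm))))
print(count)   # equals the number of paths from k to c
```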
Now $(\heartsuit)$ reads as Feit's determinantal formula for the number of skew standard Young tableaux; the case $m=1$, that is, $\mu=\emptyset$, corresponds to a variant of the hook length formula, and the case $c_j=\ell+j$ (up to permutation it is your second case) corresponds to counting skew standard Young tableaux of a shape which is a reversed Young diagram ($\lambda$ is a rectangle), that is, again to the hook length formula.
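For the case $m=1$ (so $k_j=j-1$ and $\mu=\emptyset$), the agreement with the hook length formula can be checked numerically; here is a sketch for one arbitrarily chosen small shape:

```python
# Sketch: hook length formula vs. the falling-factorial determinant for a
# straight shape (mu empty, k_j = j-1). Ad hoc helper names.
from math import factorial, prod

def falling(x, m):
    out = 1
    for t in range(m):
        out *= x - t
    return out

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def syt_hook(shape):
    """Hook length formula for a straight shape (weakly decreasing rows)."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = [shape[i] - j + conj[j] - i - 1
             for i in range(len(shape)) for j in range(shape[i])]
    return factorial(sum(shape)) // prod(hooks)

shape = [3, 3, 1]                   # lambda; mu is empty (the case m = 1)
n = len(shape)
k = list(range(n))                  # k_j = j - 1
c = [row + j for j, row in enumerate(reversed(shape))]   # rows listed ascending
K = sum(c) - sum(k)
via_det = factorial(K) * det([[falling(ci, kj) for kj in k] for ci in c]) \
    // prod(factorial(ci) for ci in c)
print(syt_hook(shape), via_det)   # the two counts coincide
```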
An interesting and rather enigmatic story is that there are other shapes for which the number of skew standard Young tableaux may be computed as a product formula. Some such cases were found in a series of papers by Igor Pak, Greta Panova and Alejandro Morales which you can find on Igor's homepage.
Best Answer
In the formula above (Proposition 1), $\binom{\cdot}{\cdot}$ stands for the generalized binomial coefficient.
Now, to tackle the problem as stated, you need to apply Proposition 1 and invoke the inclusion-exclusion principle, in the following sense:
For $i=1,\ldots,k$, let $q_i$ denote the property that a solution of $x_1+x_2+\ldots+x_k=N$ (with $x_i\geqslant n_i$ for all $i$) violates the $i$-th upper bound, i.e., that $x_i\geqslant m_i+1$. If we denote by $N(q_{i_1}q_{i_2}\ldots q_{i_s})$ the number of such solutions possessing all of the properties $q_{i_1},q_{i_2},\ldots,q_{i_s}$,
then we get, applying Prop. 1, that: $$ {\small N(q_1)=\binom{N+(k-1)-1-m_1-n_2-\ldots-n_k}{k-1} \ \text{ or }\ N(q_1)=0 } $$
$$ {\small N(q_2 q_3)=\binom{N+(k-2)-1-n_1-m_2-m_3-n_4-\ldots-n_k}{k-1} \ \text{ or }\ N(q_2 q_3)=0} $$ and generally: $$ {\small N(q_{i_1}q_{i_2}\ldots q_{i_s})=\binom{N+(k-s)-1-\sum_{i\notin I} n_i-\sum_{i\in I} m_i}{k-1} \ \text{ or }\ N(q_{i_1}q_{i_2}\ldots q_{i_s})=0 } $$ where $s$ is the number of imposed properties, $I=\{i_1, i_2, \ldots, i_s\}\subseteq \{1,2,\ldots,k\}$, and the count $N(\cdot)$ is zero whenever the upper index becomes negative.
Now, to obtain a compact formula for the number of solutions satisfying your constraints, all you need to do is apply the inclusion-exclusion principle to determine the number of solutions produced by Proposition 1 which have none of the properties $q_i$, $i=1,2,\ldots,k$.
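A brute-force sanity check of this inclusion-exclusion computation is straightforward; the bounds $n_i$, $m_i$ and the value of $N$ below are hypothetical sample data of mine:

```python
# Sketch: inclusion-exclusion over subsets I of imposed properties, checked
# against direct enumeration for small sample data. Ad hoc names.
from itertools import combinations, product
from math import comb

N = 10
n_lo = [1, 0, 2]     # lower bounds n_i (sample data)
m_hi = [4, 3, 6]     # upper bounds m_i (sample data)
k = len(n_lo)

def count_with(I):
    """N(q_I): solutions of x_1+...+x_k = N with x_i >= m_i+1 for i in I
    and x_i >= n_i otherwise (upper bounds not yet imposed)."""
    top = N + (k - len(I)) - 1 - sum(n_lo[i] for i in range(k) if i not in I) \
        - sum(m_hi[i] for i in I)
    return comb(top, k - 1) if top >= 0 else 0

incl_excl = sum((-1) ** len(I) * count_with(I)
                for s in range(k + 1) for I in combinations(range(k), s))
brute = sum(1 for xs in product(*(range(lo, hi + 1) for lo, hi in zip(n_lo, m_hi)))
            if sum(xs) == N)
print(incl_excl, brute)   # the alternating sum matches the direct count
```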
This is given by the inclusion-exclusion formula $$ {\small N(\overline{q_1}\,\overline{q_2}\ldots\overline{q_k})=N(\,)-\sum_{i=1}^{k}N(q_i)+\sum_{k\,\geq\, j>i\,\geq\,1}N(q_iq_j)-\ldots+(-1)^k N(q_1q_2\ldots q_k), } $$ where $N(\,)$ is the total number of solutions given by Proposition 1 (the case $s=0$ above), $\overline{q_i}$ denotes the failure of property $q_i$, $s$ is the number of properties, and $$ \sum_{k\ \geq j > i \geq 1}=\sum_{i=1}^{k-1}\sum_{j=i+1}^{k} $$ ... etc.
Example: As an example of application of the previous method, consider the following special case of the OP: count the solutions of $x_1+x_2+\ldots+x_k=N$ in positive integers (i.e., $n_i=1$) subject to $x_i\leqslant\alpha$ (i.e., $m_i=\alpha$) for all $i$.
The method described above gives: $$ {\small \binom{N-1}{k-1}-\binom{k}{1}\binom{N-\alpha-1}{k-1}+\binom{k}{2}\binom{N-2\alpha-1}{k-1}-\binom{k}{3}\binom{N-3\alpha-1}{k-1}+\binom{k}{4}\binom{N-4\alpha-1}{k-1}-\cdots } $$ where $\binom{..}{..}$ stands for the generalized binomial coefficients and the summation halts when zero terms appear.
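This alternating sum is again easy to verify by brute force for small parameters; the values of $N$, $k$, $\alpha$ below are an arbitrary choice of mine:

```python
# Sketch: the alternating sum for positive solutions of x_1+...+x_k = N with
# each x_i <= alpha, checked against direct enumeration. Sample data only.
from itertools import product
from math import comb

N, k, alpha = 12, 3, 5

brute = sum(1 for xs in product(range(1, alpha + 1), repeat=k) if sum(xs) == N)
# Generalized binomials: terms with a negative upper index are simply omitted.
series = sum((-1) ** s * comb(k, s) * comb(N - s * alpha - 1, k - 1)
             for s in range(k + 1) if N - s * alpha - 1 >= 0)
print(brute, series)   # the truncated alternating sum matches the direct count
```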