As I see your problem, you have $K$ individuals completing $N$ trials, each resulting in a binary outcome (success or failure), so you are dealing with $N\times K$ random variables $X_{ij}$. You are interested in estimating the probability of success $p_i$ for each trial.
The first thing to notice is that you assume here that participants are exchangeable, i.e. that no participant is more or less skilled than any other - is this assumption correct for your data? What comes to mind is that for data like yours, Item Response Theory models would be better suited. Using such models you could assume that the tasks vary in their difficulty and that the participants vary in their skills (e.g. using the simple Rasch model).
But let's stick to what you said and assume that participants are exchangeable and that you are interested only in per-trial probabilities. As others have already noticed, you are not dealing here with the Poisson binomial distribution, since that distribution describes the sum of successes from $N$ independent Bernoulli trials with different probabilities $p_1,\dots,p_N$. To use it you would have to introduce a new random variable $Y_j = \sum_{i=1}^N X_{ij}$, i.e. the total number of successes per participant. As noted by Xi'an, the parameters $p_i$ are not identifiable from such sums, so if you have data on the result of each trial by each participant, it is better to think of the results as Bernoulli variables parametrized by $p_i$.
From what you are saying, you would like to test whether the Poisson binomial distribution fits the data better than the ordinary binomial. As I read it, you want to test whether the trials differ in their probabilities of success, versus whether the probability of success is the same for each trial (since $p_1 = p_2 = \dots = p_N$ gives an ordinary binomial distribution). Put another way, your null hypothesis is that not only the participants but also the trials are exchangeable, so identifying a particular trial tells us nothing about the data, since all trials have the same probability of success. A null hypothesis stated like this leads directly to a permutation test, where you randomly "shuffle" your $N\times K$ matrix and compare a statistic computed on the permuted data to the same statistic computed on the unshuffled data. For the statistic I would use the combined variance
$$ \sum_{i=1}^N \hat p_i(1-\hat p_i) $$
where $\hat p_i$ is the probability of success for the $i$-th trial estimated from the data (the row-wise means of the $N\times K$ matrix). With equal $\hat p_i$'s this would reduce to $N \hat p(1-\hat p)$.
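The shuffle-and-compare procedure above can be sketched as follows. This is a minimal illustration, not the original simulation code (which isn't shown); `numpy` and the function names are my own choices. Note that heterogeneous $p_i$ pull the combined variance *below* its value under equality, so the comparison is one-sided.

```python
import numpy as np

def combined_variance(x):
    """Sum of per-trial Bernoulli variances, sum_i p_hat_i (1 - p_hat_i).

    x is an N x K matrix of 0/1 outcomes (N trials as rows,
    K participants as columns); p_hat are the row-wise means.
    """
    p_hat = x.mean(axis=1)
    return np.sum(p_hat * (1 - p_hat))

def permutation_test(x, n_perm=2000, rng=None):
    """P-value for H0: every trial has the same success probability.

    Under H0 all cells of the matrix are exchangeable, so we shuffle
    the whole matrix and recompute the statistic.  Unequal p_i make
    the observed statistic smaller than under H0, hence the
    lower-tail comparison.
    """
    rng = np.random.default_rng(rng)
    observed = combined_variance(x)
    flat = x.ravel()
    null = np.empty(n_perm)
    for b in range(n_perm):
        null[b] = combined_variance(rng.permutation(flat).reshape(x.shape))
    # add-one correction so the p-value is never exactly zero
    return (1 + np.sum(null <= observed)) / (1 + n_perm)
```

For example, data generated with $p_i$ drawn from a uniform distribution (scenario (c) below) should yield a very small p-value, while data with a single common $p$ should not.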
To illustrate this, I ran a simulation with three different scenarios: (a) all $p_i = 0.5$; (b) the $p_i$ drawn from a Beta(1000, 1000) distribution; (c) the $p_i$ drawn from a uniform distribution. In the first case the $p_i$'s are all equal; in the second they are "random" but clustered around a common mean; and in the third they are totally "random". The plot below shows the distribution of the test statistic under the null hypothesis (i.e. computed on shuffled data); the red lines mark the variance computed on the unshuffled data. As you can see, the combined variance of the unshuffled data falls within the null distribution in the first case (the test is not significant) and only slightly approaches it in the second case (a significant difference from the null). In the third case the red line is not even visible on the plot, since it lies that far from the null distribution (a significant difference).
![Results of simulation](https://i.stack.imgur.com/He7BJ.png)
So the test correctly identified the "all the same $p_i$'s" scenario (a), but did not find that the "similar but not the same" scenario (b) fulfills the criterion of equality. The question is whether you want to be that strict about it. Nonetheless, this is a direct implementation of a test for your hypothesis: it compares the basic quantity that would enable you to distinguish an ordinary binomial from a Poisson binomial (their variances).
There are of course lots of other possibilities, more or less appropriate depending on your problem, e.g. comparing individual confidence intervals, pairwise $z$-tests, ANOVA, some kind of logistic regression model, etc. However, as I said before, this sounds rather like a problem for Item Response Theory models, and assuming equal skills across participants sounds risky.
Best Answer
Yes, certainly. By this I mean that a sum of dependent but not identically distributed Bernoulli variables can have a Binomial distribution.
Let $\mathbf X$ refer to a random vector of zeros and ones of length $n,$ so that there are $2^n$ possible values of $\mathbf X.$ For each such possible vector $\mathbf x$ let $p_{\mathbf x}$ be the probability that $\mathbf X = \mathbf x.$
Observe
The components of $\mathbf X$ (written $X_i,$ $i=1,2,\ldots, n$) are Bernoulli variables with probabilities $p_i.$ These probabilities are the sums of the $p_{\mathbf x}$ over all $\mathbf x$ for which $x_i=1.$
If there is a number $q$ for which the sum of $\mathbf X$ has a Binomial distribution, then (by the definition) the sum of all $p_{\mathbf x}$ where $\mathbf x$ has exactly $k$ components equal to $1$ must equal the Binomial probability: $$\sum_{\mathbf x:\, |\mathbf x|=k}p_{\mathbf x} = \binom{n}{k}q^k(1-q)^{n-k}.$$
The latter constitutes $n+1$ linear constraints on the $2^n$ values of the $p_{\mathbf x}$ and therefore determines a solution set of dimension at least $2^n - (n+1).$ The former (that not all of the $X_i$ have the same distribution) merely excludes the finitely many hyperplanes where $p_i = p_j$ for some $i\ne j.$ Finally, all of the $p_{\mathbf x}$ must lie within the interval $[0,1].$
Generically, then, given $q$ there is a space of solutions of dimension $2^n-(n+1)$ if there is any solution at all.
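The dimension count can be checked numerically. The sketch below (my own, with arbitrary choices $n=3$, $q=0.4$) builds the $n+1$ constraint rows, one per value $k$ of the sum, and confirms they are independent, leaving a solution space of dimension $2^n-(n+1)$:

```python
import numpy as np
from itertools import product
from math import comb

n, q = 3, 0.4
vectors = list(product([0, 1], repeat=n))  # the 2^n possible outcomes x

# One linear constraint per possible value k of the sum:
# sum of p_x over {x : |x| = k} must equal C(n,k) q^k (1-q)^(n-k).
A = np.array([[1.0 if sum(x) == k else 0.0 for x in vectors]
              for k in range(n + 1)])
b = np.array([comb(n, k) * q**k * (1 - q)**(n - k)
              for k in range(n + 1)])

# The rows have disjoint supports, so the rank is exactly n + 1,
# and the solution space has dimension 2^n - (n + 1).
rank = np.linalg.matrix_rank(A)
print(2**n - rank)  # → 4
```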
Rather than analyze this in more detail, let's move on to the simplest possible example, where $n=2.$ ($n=1$ won't work because $2^1-(1+1)=0$ is not flexible enough.) Here is a tabulation of the possibilities.
$$\begin{array}{cccc|c} x_1 & x_2 & p_{\mathbf x} & x_1+x_2 & \text{Binomial probability}\\ \hline 0 & 0 & p_{00} & 0 & (1-q)^2\\ 0 & 1 & p_{01} & 1 & (1-q)q\\ 1 & 0 & p_{10} & 1 & q(1-q) \\ 1 & 1 & p_{11} & 2 & q^2 \end{array}$$
Collecting lines according to the value of $x_1+x_2$ and equating the probabilities $p$ with the Binomial probability gives three equations,
$$\begin{aligned} p_{00} &= (1-q)^2\\ p_{01}+p_{10} &= 2q(1-q)\\ p_{11} &= q^2. \end{aligned}$$
Evidently $q$ determines $p_{00}$ (top line) and $p_{11}$ (bottom line) directly, leaving one more equation
$$p_{01} + p_{10} = 2q(1-q).$$
We seek solutions for which the $p_{*}$ lie in $[0,1].$ They can be parameterized by a number $t$ between $0$ and $2q(1-q).$ The full set of solutions therefore is
$$p = (p_{00}, p_{01}, p_{10}, p_{11}) = ((1-q)^2,\ t,\ 2q(1-q)-t,\ q^2).$$
Moreover, for any $q$ in the interval $(0,1)$ there are infinitely many solutions. This is our space of $2^2 - (2+1) = 1$ dimensions, parameterized by $t$ and contingent on the choice of $q,$ thereby giving a two-parameter family of possibilities.
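The construction is easy to verify by direct computation. A sketch with hypothetical values $q=1/3$, $t=1/10$ (any $q\in(0,1)$ and $t\in[0,2q(1-q)]$ would do), using exact rational arithmetic:

```python
from fractions import Fraction as F

# Arbitrary illustrative choices of q and t.
q, t = F(1, 3), F(1, 10)

# The joint distribution from the parameterization in the text.
joint = {
    (0, 0): (1 - q) ** 2,
    (0, 1): t,
    (1, 0): 2 * q * (1 - q) - t,
    (1, 1): q ** 2,
}
assert sum(joint.values()) == 1

# Marginals: Bernoulli, but with *different* success probabilities.
p1 = joint[(1, 0)] + joint[(1, 1)]
p2 = joint[(0, 1)] + joint[(1, 1)]
print(p1, p2, p1 != p2)  # → 41/90 19/90 True

# The components are dependent ...
assert joint[(1, 1)] != p1 * p2
# ... yet the sum is exactly Binomial(2, q).
sum_dist = {k: sum(pr for (x1, x2), pr in joint.items() if x1 + x2 == k)
            for k in range(3)}
binom = {0: (1 - q) ** 2, 1: 2 * q * (1 - q), 2: q ** 2}
assert sum_dist == binom
```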
The general case (for arbitrarily large $n$) works the same way but has many more solutions.
I will close by remarking that when the $X_i$ are independent, a quick analysis of their characteristic functions shows that their sum can be Binomial$(n,q)$ for some $q$ only when the $X_i$ are identically distributed. Thus, dependence is essential in the preceding construction.
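For $n=2$ this closing remark can be seen without characteristic functions: if independent $X_1, X_2$ with $p_1\ne p_2$ summed to a Binomial$(2,q),$ then matching $P(S=2)$ would force $q=\sqrt{p_1p_2}$ while matching $P(S=0)$ would force $1-q=\sqrt{(1-p_1)(1-p_2)},$ which by AM-GM cannot both hold. A quick numerical check with assumed values $p_1=0.3,$ $p_2=0.7$:

```python
import math

# Independent, non-identical Bernoulli pair (illustrative values).
p1, p2 = 0.3, 0.7

# Distribution of the sum under independence.
s0 = (1 - p1) * (1 - p2)       # P(S = 0)
s2 = p1 * p2                   # P(S = 2)

# If S were Binomial(2, q), q would have to satisfy both of these:
q_from_s2 = math.sqrt(s2)      # q = sqrt(p1 * p2)
q_from_s0 = 1 - math.sqrt(s0)  # 1 - q = sqrt((1-p1)(1-p2))

# By AM-GM, sqrt(p1*p2) + sqrt((1-p1)(1-p2)) < 1 when p1 != p2,
# so the two requirements are incompatible.
print(q_from_s2, q_from_s0)
assert not math.isclose(q_from_s2, q_from_s0)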