You can perfectly well use the mean $p$-value.
Fisher’s method sets a threshold $s_\alpha$ on $-2 \sum_{i=1}^n \log p_i$, such that if the null hypothesis $H_0$ (all $p$-values are i.i.d. $\sim U(0,1)$) holds, then $-2 \sum_i \log p_i$ exceeds $s_\alpha$ with probability $\alpha$. $H_0$ is rejected when this happens.
Usually one takes $\alpha = 0.05$, and $s_\alpha$ is given by a quantile of $\chi^2(2n)$. Equivalently, one can work with the product $\prod_i p_i$, which is smaller than $e^{-s_\alpha/2}$ with probability $\alpha$.
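As a small check in R (the numbers below assume $n = 2$ and $\alpha = 0.05$):

```r
# Fisher's threshold: the upper alpha-quantile of chi^2 with 2n degrees of freedom
n <- 2
alpha <- 0.05
s_alpha <- qchisq(alpha, df = 2 * n, lower.tail = FALSE)
s_alpha            # about 9.49
exp(-s_alpha / 2)  # corresponding threshold on the product of the p-values
```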
Here is, for $n=2$, a graph showing the rejection zone (in red); here we use $s_\alpha = 9.49$. The rejection zone has area 0.05.
Now you can choose to work with ${1\over n} \sum_{i=1}^n p_i$ instead, or equivalently with $\sum_i p_i$. You just need to find a threshold $t_\alpha$ such that $\sum_i p_i$ falls below $t_\alpha$ with probability $\alpha$. Exact computation of $t_\alpha$ is tedious – for $n$ large enough you can rely on the central limit theorem; for $n = 2$, $t_\alpha = (2\alpha)^{1\over 2}$. The following graph shows the rejection zone (area = 0.05 again).
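A quick sanity check of this threshold for $n = 2$, simulating under $H_0$ (a sketch in R):

```r
# Under H0 the p-values are i.i.d. U(0,1); for n = 2,
# P(p1 + p2 < t) = t^2 / 2 when t <= 1, hence t_alpha = sqrt(2 * alpha)
alpha <- 0.05
t_alpha <- sqrt(2 * alpha)
set.seed(1)
u1 <- runif(1e5)
u2 <- runif(1e5)
mean(u1 + u2 < t_alpha)  # close to 0.05
```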
As you can imagine, many other shapes for the rejection zone are possible, and have been proposed. It is not a priori clear which is better – i.e. which has greater power.
Let's assume that $p_1$ and $p_2$ come from a two-sided $z$-test with non-centrality parameter 1:
> p1 <- pchisq( rnorm(1e4, 1, 1)**2, df=1, lower.tail=FALSE )
> p2 <- pchisq( rnorm(1e4, 1, 1)**2, df=1, lower.tail=FALSE )
Let's have a look at the scatterplot, with the points for which the null hypothesis is rejected shown in red.
The power of Fisher’s product method is approximately
> sum(p1*p2<exp(-9.49/2))/1e4
[1] 0.2245
The power of the method based on the sum of $p$-values is approximately
> sum(p1+p2<sqrt(0.1))/1e4
[1] 0.1963
So Fisher’s method wins – at least in this case.
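For reproducibility, the whole comparison can be run as a single script (with a fixed seed; the exact figures will differ slightly from those above):

```r
set.seed(42)
n_sim <- 1e5
# p-values from two two-sided z-tests with non-centrality parameter 1
p1 <- pchisq(rnorm(n_sim, 1, 1)^2, df = 1, lower.tail = FALSE)
p2 <- pchisq(rnorm(n_sim, 1, 1)^2, df = 1, lower.tail = FALSE)
s_alpha <- qchisq(0.05, df = 4, lower.tail = FALSE)
power_fisher <- mean(p1 * p2 < exp(-s_alpha / 2))  # Fisher's product method
power_sum    <- mean(p1 + p2 < sqrt(0.1))          # sum-of-p-values method
c(power_fisher, power_sum)  # Fisher's method comes out ahead
```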
There is a whole field of statistics, called meta-analysis, that deals with this topic: how to combine the information from different studies. I would not take the mean or the median of the $p$-values, but there are principled ways to combine them. Be aware of publication bias, though: it could be that more studies were done than you know about, but only the significant ones were published and therefore seen by you. If you ignore the unpublished studies, your results will be biased.
If the null hypothesis is that there is no effect in any of the studies (the alternative then being that there is a difference that can be seen by at least one study), then here are a couple of approaches (but you really should read up on the official literature):
If the null is true then all the $p$-values come from a uniform distribution, and the probability of each being significant is 0.05 (or whatever alpha level you use). You can treat this as a binomial test, with the null being $p=0.05$ and the alternative being $p > 0.05$, and see whether you have more significant $p$-values than can be explained by chance: 5 or 6 significant $p$-values out of 100 studies can be explained by chance, but 20 significant studies out of 100 would be unlikely by chance and would indicate that something is going on. If you have all the $p$-values, you can also compare them to a uniform distribution (KS test or similar).
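As a sketch in R (the 20-out-of-100 count is the hypothetical example from above, and the simulated `pvals` just illustrate the uniformity check):

```r
# Binomial test: are 20 significant results out of 100 studies more than
# chance would give at alpha = 0.05?
binom.test(20, 100, p = 0.05, alternative = "greater")$p.value  # very small

# Comparing a full set of p-values to U(0,1) with a KS test:
set.seed(1)
pvals <- runif(100)              # hypothetical p-values (here truly uniform)
ks.test(pvals, "punif")$p.value  # should usually be large for uniform p-values
```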
If you take the negative log of each $p$-value, sum those values, and multiply by 2, then under the null hypothesis the result follows a chi-squared distribution with 2 times (number of $p$-values) degrees of freedom (this is Fisher's method again). Compare this value to the appropriate chi-squared quantile to see whether it is significant. This can be nice for combining several $p$-values from underpowered studies that are not significant, but are nearly so.
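A minimal sketch in R (the four $p$-values are hypothetical):

```r
# Fisher's combination: -2 * sum(log(p)) ~ chi^2 with 2k df under H0
p <- c(0.07, 0.06, 0.09, 0.08)  # hypothetical nearly-significant p-values
stat <- -2 * sum(log(p))
combined <- pchisq(stat, df = 2 * length(p), lower.tail = FALSE)
combined  # well below 0.05, although no single study was significant
```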
There are other options depending on what information you have available from each study; search the literature and learn more. The classic text on the topic is Statistical Methods for Meta-Analysis.
Best Answer
Irrespective of the discussion in the comments about how these $p$-values of $0$ arose, there are methods for combining $p$-values which can be calculated even if $p=0$.
As the OP indicated neither Fisher's method nor Stouffer's works.
The method of Edgington based on the sum of $p$, the closely related mean-$p$ method, the method using the logit of $p$, Tippett's method based on the minimum $p$, and variants of Wilkinson's method (of which Tippett's is a special case) can all be calculated. Whether that is a sensible thing to do depends on the scientific question, of course.
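For example, Tippett's method only needs the minimum $p$-value, so a $p$ of exactly $0$ causes no numerical trouble (a sketch in base R with hypothetical values):

```r
# Tippett: reject H0 at level alpha if min(p) < 1 - (1 - alpha)^(1/n)
p <- c(0, 0.3, 0.8)  # hypothetical p-values, one of them exactly 0
alpha <- 0.05
crit <- 1 - (1 - alpha)^(1 / length(p))  # about 0.017 for n = 3
min(p) < crit  # TRUE: reject
```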
All the methods mentioned are available in the R package metap which, disclaimer, I wrote and maintain.