The McNemar-Bowker test of symmetry of a k × k contingency table is inherently two-sided: the alternative hypothesis is undirected. So, in the general case, it cannot be used to test a one-sided alternative that subdiagonal frequencies are larger/smaller than superdiagonal frequencies. But since in your case the differences are consistently in favour of the subdiagonal frequencies, you can use the test for directional inference.
The Bowker test is based on an asymptotic chi-square distribution and hence is a "large sample" test. I've read somewhere (sorry, I don't remember where, so I'm not quite sure) that the sum in any pair of symmetric cells, if it is not 0 (the test ignores 0-0 cell pairs altogether), should be at least 10. Clearly, this isn't your case - you have only one pair of symmetric cells with a large sum. An exact version of the test exists, but not in SPSS. You can bypass the problem, however, if you merge the "Once", "Twice", and "Three+" categories. Then you'll have the dichotomous case, for which the Bowker test becomes the McNemar test, whose exact p-value is easily computed (SPSS does it).
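The collapsing step can be sketched in R; the 4 × 4 table below is hypothetical (your actual counts would go in its place), and the exact McNemar test is carried out as a binomial test on the two off-diagonal counts of the collapsed table:

```r
# Hypothetical 4x4 table (rows = occasion 1, columns = occasion 2)
tab <- matrix(c(20, 2, 1, 0,
                 5, 3, 1, 0,
                 4, 2, 1, 0,
                 3, 1, 1, 0),
              nrow = 4, byrow = TRUE,
              dimnames = list(c("Never", "Once", "Twice", "Three+"),
                              c("Never", "Once", "Twice", "Three+")))

# Merge "Once", "Twice", "Three+" into a single "Ever" category
ever <- 2:4
tab2 <- matrix(c(tab[1, 1],          sum(tab[1, ever]),
                 sum(tab[ever, 1]),  sum(tab[ever, ever])),
               nrow = 2, byrow = TRUE,
               dimnames = list(c("Never", "Ever"), c("Never", "Ever")))

# Exact McNemar test = exact binomial test on the off-diagonal counts
b  <- tab2[1, 2]
c_ <- tab2[2, 1]
binom.test(b, b + c_, p = 0.5)
```

The diagonal cells of the collapsed table play no role in the test; only the two discordant counts matter.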
You might also want to consider some alternative tests of symmetry of a contingency table, because it is questionable whether your inquiry is isomorphic to what McNemar-Bowker tests: it tests whether every off-diagonal cell is equal (in the population) to the cell symmetric to it. Might it be that comparing the subdiagonal and superdiagonal sums is more apt here?
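One simple version of that sum comparison can be sketched in R (the table below is hypothetical): under symmetry, each off-diagonal observation falls below or above the diagonal with probability 1/2, so a binomial test on the two totals applies.

```r
# Hypothetical k x k table; compare the total count below the diagonal
# with the total above it, rather than testing cell-by-cell symmetry
tab <- matrix(c(20, 2, 1, 0,
                 5, 3, 1, 0,
                 4, 2, 1, 0,
                 3, 1, 1, 0), nrow = 4, byrow = TRUE)

below <- sum(tab[lower.tri(tab)])  # subdiagonal total
above <- sum(tab[upper.tri(tab)])  # superdiagonal total

# Under the symmetry null, below ~ Binomial(below + above, 1/2)
binom.test(below, below + above, p = 0.5)
```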
I think you're looking at this the wrong way. You're trying to compare the proportions of insects left after applying the insecticide. The 'before' counts aren't a random sample; they're part of the experimental setup. That is:
\begin{array}{l|c|c|c}
& &\text{count left }&\\
&n \text{ exposed} &\text{after insecticide}&\text{proportion left}\\ \hline
\text{Species A}&30&12&12/30\\ \hline
\text{Species B}&30&7&7/30\\ \hline
\text{Species C}&30&6&6/30\\ \hline
\text{Species D}&30&2&2/30\\ \hline
\text{Species E}&30&4&4/30\\ \hline
\end{array}
This is in effect a straight chi-square test, or you could use a binomial GLM.
To present it as a chi-squared test, you'd write two columns, the number remaining and the number dead (or missing, or gone, or whatever it was that happened), for each species, and do a test of independence on the two-way table, which serves as a test of equality of proportions.
Edit - Like so:
\begin{array}{l|r|r|r}
&\text{Survived}&\text{Died}&n \text{ exposed}\\ \hline
\text{Species A}&12&18&30\\ \hline
\text{Species B}&7&23&30\\ \hline
\text{Species C}&6&24&30\\ \hline
\text{Species D}&2&28&30\\ \hline
\text{Species E}&4&26&30\\ \hline
\text{Total}&31&119&150
\end{array}
Edit2: Here's a chi-squared test done in R; as you see it agrees with the values in Nick Cox's comment.
alive=c(12,7,6,2,4)
dead=30-alive
chisq.test(cbind(alive,dead))
Pearson's Chi-squared test
data: cbind(alive, dead)
X-squared = 11.5478, df = 4, p-value = 0.02105
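The binomial GLM mentioned above can be fitted along these lines (a sketch; the species labels A-E are just placeholders). The likelihood-ratio test of the species effect answers the same question as the chi-squared test:

```r
alive   <- c(12, 7, 6, 2, 4)
dead    <- 30 - alive
species <- factor(LETTERS[1:5])

# Binomial GLM: does survival proportion differ by species?
fit  <- glm(cbind(alive, dead) ~ species, family = binomial)
null <- glm(cbind(alive, dead) ~ 1,       family = binomial)

# Likelihood-ratio test of equal proportions across species
anova(null, fit, test = "LRT")
```

With one row per species, the species model is saturated, so the likelihood-ratio statistic is simply the null model's residual deviance.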
Edit 3: answering followup questions from comments:
I would like to know if there is a test which allows me to make post-hoc comparisons between the species
The issues are much the same as they are with ANOVA:
(i) If you have orthogonal contrasts: you can partition the chi-square into the orthogonal contrasts to test those. These contrasts are usually obvious a priori and specified in advance.
(ii) If you want all pairwise comparisons (I assume you meant this option): you can do a series of 2-species comparisons with, if you wish, the typical sorts of adjustments for multiple testing (Bonferroni is trivial to do, for example, but conservative; you might use Keppel's modification of Bonferroni or a number of other options). You could alternatively look at multiple comparisons via simultaneous confidence intervals (Agresti et al., 2008, "Simultaneous confidence intervals for comparing binomial parameters", Biometrics, 64, 1270-1275).
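As a sketch of the pairwise route with a Bonferroni adjustment (using the counts from the table above; `suppressWarnings` silences the small-expected-count warnings discussed next):

```r
alive   <- c(12, 7, 6, 2, 4)
dead    <- 30 - alive
species <- LETTERS[1:5]

# All 10 pairwise 2x2 chi-squared tests
pairs <- combn(5, 2)
pvals <- apply(pairs, 2, function(idx)
  suppressWarnings(chisq.test(cbind(alive[idx], dead[idx]))$p.value))
names(pvals) <- apply(pairs, 2, function(idx)
  paste(species[idx], collapse = " vs "))

# Bonferroni-adjusted p-values
round(p.adjust(pvals, method = "bonferroni"), 3)
```

Note that `chisq.test` applies the Yates continuity correction to 2×2 tables by default; pass `correct = FALSE` if you want the uncorrected statistic.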
Note that for some 2x2 comparisons the expected counts are low; e.g. for D vs E, the expected counts in two cells are only 3. This is not as big a problem as it's often made out to be (a variety of less conservative rules from the last four or five decades would say it's fine), but you can always either simulate the discrete distribution of the test statistic or do an exact calculation of the p-value by complete enumeration of the tail. Personally, for those expected counts I wouldn't bother; they're absolutely fine.
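Both escape routes are a one-liner in R for the D vs E comparison; `simulate.p.value` draws tables under the null, while `fisher.test` does the exact enumeration:

```r
# D vs E: the 2x2 comparison with small expected counts
tabDE <- rbind(D = c(2, 28), E = c(4, 26))

# Simulated null distribution of the chi-square statistic
chisq.test(tabDE, simulate.p.value = TRUE, B = 10000)

# Exact test by complete enumeration (Fisher's exact test)
fisher.test(tabDE)
```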
(iii) If you're more interested in "which groups stand out" ('what made this significant?'): the usual approach would be to look at some form of standardized residual (such as a Pearson residual) or at each cell's contribution to chi-square. An alternative would be to collapse the table into 2x2 comparisons of each species against all the others combined.
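The standardized residuals are already computed by `chisq.test`; for the table above, Species A stands out:

```r
alive <- c(12, 7, 6, 2, 4)
dead  <- 30 - alive

res <- chisq.test(cbind(alive, dead))
res$stdres   # standardized (adjusted) Pearson residuals per cell
```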
It may help you to read my answers to What is the difference between McNemar's test and the chi-squared test, and how do you know when to use each? here, and here. The short version is that McNemar's test is actually a binomial test of whether the two off-diagonal cell counts (often denoted cells b and c) diverge from an expected null ratio of $1$ to $1$. However, it is also possible to test them via $(b-c)^2/(b+c)$; that quotient is asymptotically distributed as a chi-squared random variable with $1$ degree of freedom. In other words, what SPSS seems to be reporting in your "nonparametric" output is the test statistic for McNemar's test, not a chi-squared test of independence.
As @ttnphns notes in the comments, the two possible ways of conducting McNemar's test are asymptotically equivalent. When your sample is small, though, the binomial version will be more accurate.
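The two forms can be compared directly in R; the counts b = 9 and c = 3 below are hypothetical, chosen just to illustrate:

```r
# Hypothetical off-diagonal cell counts
b  <- 9
c_ <- 3

# (1) Exact binomial test: is b/(b+c) consistent with 1/2?
binom.test(b, b + c_, p = 0.5)

# (2) Chi-square form: (b - c)^2 / (b + c), referred to chi-square on 1 df
stat <- (b - c_)^2 / (b + c_)
pchisq(stat, df = 1, lower.tail = FALSE)
```

With counts this small the two p-values differ noticeably; as b + c grows, they converge.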
With regard to what you should report, that depends on which method you used. I would report the full $2\times 2$ table, or at least the b and c cell counts, and then either the observed proportion $b/(b+c)$ with the binomial $p$-value, or the chi-squared test statistic with its corresponding $p$-value.