One solution would be to use a bootstrap test as an approximation to a permutation test. Permutation tests are exact and, in many settings, most powerful; in this case there are too many permutations to enumerate them all, so you'd approximate the test by resampling.
Basically, you:
1) Calculate your test statistic on the actual data and label it $T_0$; for illustrative purposes, say it's the same chi-square statistic you've already calculated.
2) Construct 1,000 or 10,000 or so ("many") random contingency tables under the assumption the null hypothesis is true, and for each one calculate the chi-square statistic, label them $T_1 \dots T_B$.
3) Compare your test statistic's value $T_0$ with the test statistic values $T_1 \dots T_B$ from the randomly generated contingency tables, and see what fraction are more extreme than $T_0$; this fraction is your bootstrap p-value.
We are approximating the distribution of the test statistic under the null hypothesis by randomly generating a lot of values for the test statistic under the null hypothesis; this lets us estimate the p-value associated with the value of the statistic we actually observed.
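The three steps above can be sketched in code. Here's a minimal Python illustration (Python rather than SPSS, just to make the logic concrete): the null tables are generated by shuffling one variable's labels, which breaks any association while preserving the margins — a Monte Carlo permutation scheme, which is what the bootstrap approximation amounts to here. The function names are mine, purely for illustration:

```python
import random

def chi_square(table):
    """Pearson chi-square statistic for a 2D contingency table."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, r in enumerate(row_tot):
        for j, c in enumerate(col_tot):
            expected = r * c / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

def crosstab(x, y):
    """2x2 contingency table from two 0/1 vectors."""
    table = [[0, 0], [0, 0]]
    for xi, yi in zip(x, y):
        table[xi][yi] += 1
    return table

def resample_p_value(x, y, B=2000, seed=1):
    """Approximate permutation p-value: T_0 on the real data, then
    shuffle y B times to simulate the null (no association) and count
    how often the resampled statistic is at least as extreme as T_0."""
    rng = random.Random(seed)
    t0 = chi_square(crosstab(x, y))          # step 1: T_0
    y = list(y)
    more_extreme = 0
    for _ in range(B):                       # step 2: T_1 ... T_B
        rng.shuffle(y)
        if chi_square(crosstab(x, y)) >= t0:
            more_extreme += 1
    return more_extreme / B                  # step 3: the p-value
```

(In R, `chisq.test(..., simulate.p.value=TRUE)` does essentially this for you.)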
I can't help you with the SPSS part of this, unfortunately.
Here's a reference which I've found helpful in the past: Permutation, Parametric, and Bootstrap Tests of Hypotheses (Good).
The two tests (logistic regression and chi-square) are equivalent and a power analysis should give the same answer.
You are assuming that a value of 0.15 for $f^2$ and $w$ represents the same effect size; it doesn't. A small value of $w$ is 0.1, while a small value of $f^2$ is 0.02.
cohen.ES(test=c("chisq"), size=c("small"))
cohen.ES(test=c("f2"), size=c("small"))
Edit: Elaborated on the similarity of the two approaches.
If you give the same data to logistic regression and a chi-square test (strictly: without Yates' correction), you get the same result. Here's an example:
> set.seed(1234)
> x <- rbinom(100, 1, 0.2)
> y <- rbinom(100, 1, 0.2)
> chisq.test(table(x, y), correct=FALSE)
Pearson's Chi-squared test
data: table(x, y)
X-squared = 0.155, df = 1, p-value = **0.694**
Warning message:
In chisq.test(table(x, y), correct = FALSE) :
Chi-squared approximation may be incorrect
> summary(glm(y ~ x, family="binomial"))
Call:
glm(formula = y ~ x, family = "binomial")
Deviance Residuals:
Min 1Q Median 3Q Max
-0.753 -0.753 -0.753 -0.668 1.794
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.114 0.251 -4.43 9.4e-06 ***
x -0.272 0.693 -0.39 **0.69**
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 110.22 on 99 degrees of freedom
Residual deviance: 110.06 on 98 degrees of freedom
AIC: 114.1
Number of Fisher Scoring iterations: 4
The p-values are the same (note that the squared Wald $z$ from the regression, $(-0.39)^2 \approx 0.15$, agrees with the chi-square statistic), so the power should be the same. I can't remember the formulas for the two different versions of the effect size. Effect size measures are a little weird because in the old days you wanted to minimize the number of tables that you put into books; so we have, for example, $f^2$ instead of $R^2$, even though there's a direct relationship between them and $R^2$ is what everyone understands.
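The direct relationship mentioned above is $f^2 = R^2/(1 - R^2)$ (Cohen's definition), which is easy to invert; a quick sketch:

```python
def f2_from_r2(r2):
    """Cohen's f^2 from R^2: f^2 = R^2 / (1 - R^2)."""
    return r2 / (1.0 - r2)

def r2_from_f2(f2):
    """Inverse: R^2 = f^2 / (1 + f^2)."""
    return f2 / (1.0 + f2)
```

So a "small" $f^2$ of 0.02 corresponds to $R^2 = 0.02/1.02 \approx 0.0196$, i.e. about 2% of variance explained.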
@whuber is right that only the expected counts matter. Reading some of the threads returned by his search may help you; I discuss the issue here: For chi-square on any 2 by X contingency table, should no more than 20% of the cells be less than 5? You can also see the point made under "Expected cell count" on the Wikipedia page.
Note that the expected cell count is the cell probability times $N$. Your effects are so small that a huge $N$ will be required. Here are screenshots of G*Power using your ratio of n's, or equal n's:
Your lowest probability is $3.3\%$, and your lowest count is $6187$, meaning that the expected count would be $204\gg 5$. Thus, @rvl is right: you won't have to worry about that assumption. (You will have to worry about needing so much data to have reasonable power, I suspect.)