(Note: by $n$ I usually mean the total sample size, so I interpret your last sentence as 'where $0.5n$ equals the size of the smaller sample'.)
No, not quite. Consider this simulation (conducted with R):
set.seed(9)
power1010 = vector(length=10000)  # p-values when n1 = n2 = 10
power9010 = vector(length=10000)  # p-values when n1 = 90, n2 = 10
for(i in 1:10000){
  # balanced design: two groups of 10, true mean difference .5
  n1a = rnorm(10, mean=0,  sd=1)
  n2a = rnorm(10, mean=.5, sd=1)
  # unbalanced design: larger group grows to 90, smaller group stays at 10
  n1c = rnorm(90, mean=0,  sd=1)
  n2c = rnorm(10, mean=.5, sd=1)
  power1010[i] = t.test(n1a, n2a, var.equal=T)$p.value
  power9010[i] = t.test(n1c, n2c, var.equal=T)$p.value
}
mean(power1010<.05)
[1] 0.184
mean(power9010<.05)
[1] 0.323
What we see here is that when the total sample size is $20$ with equal group sizes ($n_1=n_2=10$), power is about $18\%$; but when the total sample size is $100$ and the smaller group still has only $n_2=10$, power rises to about $32\%$. Thus power can increase when the size of the larger group goes up even though the smaller sample size stays the same.
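The simulated powers can also be checked analytically using the noncentral t distribution (a sketch, with the same group sizes and effect size as the simulation above):

```r
# analytic power of the two-sided equal-variance t-test at alpha = .05
d  <- 0.5          # true mean difference (in sd units, since sd = 1)
n1 <- c(10, 90)    # larger-group sizes for the two designs
n2 <- 10           # smaller group stays at 10

ncp  <- d / sqrt(1/n1 + 1/n2)   # noncentrality parameter
df   <- n1 + n2 - 2
crit <- qt(.975, df)            # two-sided 5% critical value

power <- pt(-crit, df, ncp) + 1 - pt(crit, df, ncp)
round(power, 3)   # close to the simulated 18% and 32%
```

The noncentrality parameter $d/\sqrt{1/n_1 + 1/n_2}$ keeps growing as $n_1$ increases even with $n_2$ fixed, which is exactly why power keeps rising (though with diminishing returns, since the $1/n_2$ term eventually dominates).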
This answer is adapted from my answer here: How should one interpret the comparison of means from different sample sizes?, which you will probably want to read for more on this topic.
While you can compute the z-statistic, an ordinary Welch t-test will actually do just fine - in R that's t.test with all its default options.
The form of the test statistic is the same in both cases. The only difference is which table is used, and if the size of the smaller group is large enough, the two tests will give almost identical p-values.
The Welch test will handle very large sample sizes.
e.g. in R:
> x=rnorm(1e7,1.00001,1)
> y=rnorm(1e7,1.00002,2)
> t.test(x,y)
Welch Two Sample t-test
data: x and y
t = 0.9052, df = 14708415, p-value = 0.3654
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.0007458214 0.0020259201
sample estimates:
mean of x mean of y
0.999757 0.999117
I don't see a problem here.
> # compare:
> 2*pnorm((-abs(mean(y)-mean(x))/sqrt(var(y)/length(y)+var(x)/length(x))))
[1] 0.3653657
The p-values turn out to be the same to all the decimal places shown in the t.test output.
If that's not what you want, you need to more carefully explain what you do want.
Example with very different $n$:
> x=rnorm(1e7,1.00001,1)
> y=rnorm(1e2,1.002,2)
> t.test(x,y)
Welch Two Sample t-test
data: x and y
t = 0.7382, df = 99.001, p-value = 0.4622
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.2409398 0.5264124
sample estimates:
mean of x mean of y
0.9998066 0.8570703
> 2*pnorm((-abs(mean(y)-mean(x))/sqrt(var(y)/length(y)+var(x)/length(x))))
[1] 0.4604087
Once we're down to 99 d.f. for the Welch test, we start to notice a small difference in p-value from the asymptotic result; at only 99 d.f. we're not really in the 'consider it as converged to normal' region.
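You can see the size of that gap directly by comparing the t and normal tail areas at the statistic reported above (t = 0.7382, taken from the Welch output):

```r
tstat <- 0.7382              # Welch statistic from the output above
df    <- 99                  # Welch degrees of freedom (rounded)
2 * pt(-abs(tstat), df)      # t-based p-value, ~0.462
2 * pnorm(-abs(tstat))       # normal-approximation p-value, ~0.460
```

The t distribution's heavier tails at 99 d.f. give the slightly larger p-value; at the ~14.7 million d.f. of the first example the two are indistinguishable to the places shown.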
Best Answer
Theoretically, if the assumption of equal variances is satisfied and the dependent variable is normally distributed, you could run a t-test despite the unequal sample sizes of the two comparison groups. I suggest you read these two posts (1 & 2), which deal with similar issues. If the normality assumption is not satisfied, then a Mann-Whitney test is appropriate, as you have already suggested. You haven't provided enough information about your research questions and data for anyone to recommend a better statistical test.
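For completeness, the Mann-Whitney test in R is wilcox.test, and it accepts unequal group sizes with no special handling; a minimal sketch on made-up data (the samples here are purely illustrative):

```r
set.seed(1)
g1 <- rnorm(30)        # larger group
g2 <- rexp(12) - 1     # smaller, skewed (non-normal) group
wilcox.test(g1, g2)    # Wilcoxon rank-sum / Mann-Whitney test
```

As with t.test, the two samples are simply passed as separate vectors, so nothing changes in the call when the group sizes differ.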