The solution is a simple Google search away: http://en.wikipedia.org/wiki/Statistical_hypothesis_testing
So you would like to test the following null hypothesis against the given alternative:
$H_0:p_1=p_2$ versus $H_A:p_1\neq p_2$
So you just need to calculate the test statistic, which is
$$z=\frac{\hat p_1-\hat p_2}{\sqrt{\hat p(1-\hat p)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}$$
where $\hat p=\frac{n_1\hat p_1+n_2\hat p_2}{n_1+n_2}$.
So now, in your problem, $\hat p_1=.634$, $\hat p_2=.612$, $n_1=2455$ and $n_2=2730.$
Once you have calculated the test statistic, you just need the critical value to compare it to. For example, if you are testing this hypothesis at the 5% significance level, then you compare the absolute value of your test statistic against the critical value $z_{\alpha/2}=1.96$ (for this two-tailed test).
Now, if $|z|>z_{\alpha/2}$ then you may reject the null hypothesis; otherwise, you fail to reject the null hypothesis.
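Plugging in the numbers above, the computation might look like this (a minimal sketch using only the standard library; the values are the ones given in the answer):

```python
from math import sqrt

# Observed proportions and sample sizes from the problem above
p1_hat, n1 = 0.634, 2455
p2_hat, n2 = 0.612, 2730

# Pooled proportion under the null hypothesis p1 = p2
p_hat = (n1 * p1_hat + n2 * p2_hat) / (n1 + n2)

# Two-proportion z statistic
z = (p1_hat - p2_hat) / sqrt(p_hat * (1 - p_hat) * (1 / n1 + 1 / n2))

print(round(z, 2))     # about 1.63
print(abs(z) > 1.96)   # False: fail to reject H0 at the 5% level
```

For these particular data, $|z|\approx 1.63 < 1.96$, so the observed difference is not significant at the 5% level.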
Well, this solution works when you are comparing two groups, but it does not generalize to the case where you want to compare three groups.
You could, however, use a chi-squared test to check whether all three groups have equal proportions, as @Eric suggested in his comment: "Does this question help? stats.stackexchange.com/questions/25299/ …"
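A sketch of that chi-squared test of equal proportions across three groups, implemented by hand with the standard library (the success/total counts below are illustrative assumptions, not from the post; 5.991 is the $\chi^2$ critical value for $\alpha=0.05$ with $df=2$):

```python
# Chi-squared test of homogeneity for three groups' success proportions.
# Counts are made up for illustration.
successes = [150, 165, 140]
totals    = [500, 520, 480]

failures = [n - s for s, n in zip(successes, totals)]
p_pooled = sum(successes) / sum(totals)

chi2 = 0.0
for s, f, n in zip(successes, failures, totals):
    exp_s = n * p_pooled          # expected successes under H0: equal proportions
    exp_f = n * (1 - p_pooled)    # expected failures under H0
    chi2 += (s - exp_s) ** 2 / exp_s + (f - exp_f) ** 2 / exp_f

# Critical value for alpha = 0.05 with df = (3 groups - 1) = 2
print(chi2 > 5.991)   # False for these counts: no evidence of unequal proportions
```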
> If in the control and treatment groups the proportions of success are both 0.003, then what is the minimal sample size for statistical testing of the two equal proportions?
When you are doing hypothesis testing, a true null hypothesis will be rejected at a rate equal to the significance level $\alpha$ that you choose; a false null hypothesis will ideally be rejected at a rate much higher than the significance level.
What is important is not only the case where "the proportions of success are both 0.003", but also the cases where those proportions differ. The more different the proportions are, the more probable it becomes that you will observe a significant difference and reject the null hypothesis.
In order to determine the necessary sample size, you can express the probability of observing a significant difference, given a true difference of some specific effect size, as a function of the sample sizes. To compute the sample size you therefore need 1) an idea of a relevant minimal difference/effect and 2) a desired power/probability of detecting it.
It is important to specify this minimal difference, since in practice the null hypothesis is almost never exactly true. One way or another the different treatment might have some minuscule effect (not of the size that was theoretically expected), and given a large enough sample you could show that the two groups differ by that minuscule amount.
When doing hypothesis testing, we often challenge the null hypothesis (there is no effect) in order to show whether there is an effect or not. But what researchers might actually be interested in is to challenge the alternative hypothesis (there is an effect) in order to show whether the hypothesized effect is true or not.
Note: There is a difference between 'not rejecting the null hypothesis' and 'rejecting the alternative hypothesis'.
Two ways to deal with this type of problem are two one-sided t-tests (TOST) and the likelihood-ratio test. In both cases you explicitly specify both hypotheses (null and alternative).
To the point: to do the sample-size computations, you can approximate the variables as normally distributed. In the simple case you use 0.003 as an initial value from which to compute the variance. A more difficult case is when the proportions turn out to be smaller than initially expected, which reduces the number of successes; you may then actually want a certain number of successes rather than a certain total sample size.
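A sketch of that normal-approximation sample-size formula for two proportions. Here $p_1=0.003$ comes from the question, but the alternative proportion $p_2$ (the smallest difference worth detecting) and the 80% power target are assumptions you must choose yourself:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided two-proportion z-test
    (standard normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2                       # average proportion under H1
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a doubling from 0.003 to 0.006 at alpha = 0.05 with 80% power
print(n_per_group(0.003, 0.006))   # roughly 7800 per group
```

Note how quickly the required sample grows as the detectable difference shrinks: rare events with small effect sizes need very large samples.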
Best Answer
If I'm understanding your question right (and a couple of simple assumptions are met), the second link you give has what you're looking for. A simple probability fact is that if $X$ and $Y$ are independent, $X\sim\mathrm{Bin}(n,p)$ and $Y\sim\mathrm{Bin}(m,p)$, then $X+Y\sim\mathrm{Bin}(n+m,p)$. That means that if the trials in each of your groups are independent, then
$$\sum_{i=1}^{|G_{1}|}X_{i} \sim \mathrm{Bin}\!\left(\sum_{i=1}^{|G_{1}|}N_{i}^{1},\, p\right) \quad\text{and}\quad \sum_{i=1}^{|G_{2}|}Y_{i} \sim \mathrm{Bin}\!\left(\sum_{i=1}^{|G_{2}|}N_{i}^{2},\, p\right)$$
where the $G$'s are the groups, $N_{i}$ is the number of trials in the $i$-th binomial experiment (the superscript indicates the group), and $|G|$ is the size of $G$. From there, you're back at the problem of estimating proportions and confidence intervals in binomial families. The big things to make sure of are that the trials within groups are independent and that you have no reason to believe the probabilities of success differ between trials within a group.
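Concretely, pooling a group's trials and then computing a confidence interval for the shared $p$ might look like this (the counts are illustrative assumptions; a Wald interval is used here for simplicity, though a Wilson interval is generally preferable for small counts):

```python
from math import sqrt
from statistics import NormalDist

# One group's experiments as (successes, trials) pairs, all assumed to share
# the same success probability p. Counts are made up for illustration.
group1 = [(12, 100), (30, 250), (9, 80)]

x = sum(s for s, _ in group1)    # pooled successes: X ~ Bin(sum of n_i, p)
n = sum(t for _, t in group1)    # pooled number of trials
p_hat = x / n

z = NormalDist().inv_cdf(0.975)  # 95% two-sided normal quantile
half = z * sqrt(p_hat * (1 - p_hat) / n)  # Wald half-width
print(f"p_hat = {p_hat:.3f}, 95% CI = ({p_hat - half:.3f}, {p_hat + half:.3f})")
```

The pooling step is exactly the binomial-sum fact above: since every trial shares the same $p$, the group collapses to one large binomial sample, and the usual single-proportion machinery applies.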