Your assertion that the correct value of the quantile should be 1.96 (if we assume the normal approximation is accurate) is completely correct.
However, the suggested value of 2 is a common approximation; it is only about 2% larger, and the small extra margin from rounding up may be a good idea, since several approximations are already involved in the calculation.
That is, even though you know how to compute 1.96 correctly, it is reasonable to just use 2 as the question suggests.
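If you want to check the 1.96 figure itself, Python's standard library can do it; a quick sketch (the quantile for a two-sided 95% interval):

```python
from statistics import NormalDist

# Two-sided 95% confidence: quantile at 1 - 0.05/2 = 0.975 of the standard normal
z = NormalDist().inv_cdf(0.975)
print(round(z, 2))  # -> 1.96
```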
Secondly, if I told you the population proportion, could you compute the standard deviation?
Edit after chat discussion with OP:
As you figured out, the standard error $\sqrt{p(1-p)/n}$ is maximized when $p=0.5$. Changing the scale on the y-axis (a simple monotonic transformation) perhaps makes this easier to see.
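A quick numerical check of that maximum; a sketch with an arbitrary choice of $n$ (only the location of the maximum matters, and it does not depend on $n$):

```python
import math

n = 100  # arbitrary sample size; the location of the maximum does not depend on n

def se(p):
    # Standard error of a sample proportion: sqrt(p * (1 - p) / n)
    return math.sqrt(p * (1 - p) / n)

# Scan a grid of proportions; the maximum sits at p = 0.5
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=se)
print(best)  # -> 0.5
```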
What remains is to figure out the margin of error in terms of the standard error.
Your question says to take it to be twice the standard error, so all you have left to do is find the smallest value of $n$ that has $2\sqrt{0.25/n}\leq 0.01$. Even if you're not able to do the algebraic manipulation to solve for $n$, you can find this by trial and error.
[n=1 will be too wide. What happens at n=10? 100? 1000? etc. Once you go past the target, try the middle of the interval until you hit it exactly, or until you get two consecutive values of $n$ whose margins of error lie on either side of the target.]
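The trial-and-error search above is easy to automate. A sketch (note that $2\sqrt{0.25/n}$ simplifies to $1/\sqrt{n}$, which keeps the boundary comparison clean):

```python
import math

def margin(n):
    # Worst-case margin of error 2 * sqrt(0.25 / n), which simplifies to 1 / sqrt(n)
    return 1 / math.sqrt(n)

# Coarse search: grow n by factors of 10 until the margin is small enough ...
n = 1
while margin(n) > 0.01:
    n *= 10

# ... then bisect between n/10 and n for the smallest n with margin <= 0.01
lo, hi = n // 10, n
while lo + 1 < hi:
    mid = (lo + hi) // 2
    if margin(mid) <= 0.01:
        hi = mid
    else:
        lo = mid
print(hi)  # -> 10000
```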
Sometimes the outliers end up being of more interest than the rest of the data. The discovery of penicillin came from studying an outlier.
Can you verify that the outliers are due to technical problems? If you can show that they are impossible values then you have justification for not including them, or you may find something even more interesting when trying to figure out the unusual values.
The general recommendation these days is to not discard outliers without good, external reasons. If you can throw out any values that you do not like, then you can make the remaining data say anything that you want, which is not good science.
If you still do not like the outliers but cannot show the errors that caused them, then you could analyze the data both with and without the outliers and show how similar or different the results are. There are also "robust" methods that are less affected by outliers which you could consider using (though you may need to consult with a statistician for those).
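To illustrate the last point with entirely made-up numbers: a robust summary such as the median moves far less than the mean when a single outlier is present.

```python
# Made-up data: one wild value among otherwise similar observations
clean = [9.8, 10.1, 10.0, 9.9, 10.2]
with_outlier = clean + [45.0]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2

# The mean shifts substantially; the median barely moves
print(round(mean(clean), 2), round(mean(with_outlier), 2))      # -> 10.0 15.83
print(round(median(clean), 2), round(median(with_outlier), 2))  # -> 10.0 10.05
```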
Best Answer
With before-after data, I presume this is a paired design, and that consequently the test actually being performed is a two-tailed paired t-test. You should clarify to be sure.
If you really only have the means to 3 figures, then 11.8 could represent anything between 11.75+ and 11.85-, while 11.9 could represent anything between 11.85+ and 11.95-.
As such, the true difference in means could be anything between about 0 and 0.2, though it is more likely to be near 0.1 than near those end-values.
Let's take the actual difference in sample means to be $d$.
Then a one-sample t-test statistic would be $\frac{d}{s_d/\sqrt{n}}$, and I presume you're after the standard deviation of the differences, $s_d$.
With 26 d.f. (i.e. $n=27$ pairs), the absolute value of a t-statistic that gives a two-tailed p-value of 0.540 is 0.621. Rearranging the test statistic, we have:
$s_d = d\sqrt{27}/0.621 \approx 8.37\,d$
Now if $d$ were actually 0.1, that would imply $s_d \approx 0.837$, but with only the information given in the question it might be anything between 0 and about 1.674.
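These numbers can be checked without statistical tables. A minimal sketch using only the standard library: the Student-t density integrated by Simpson's rule to recover the two-tailed p-value, then the back-calculation of $s_d$ (taking $d = 0.1$ as the hypothetical value discussed above):

```python
import math

def t_pdf(t, df):
    # Student-t density with df degrees of freedom
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + t * t / df) ** (-(df + 1) / 2)

def two_tailed_p(t, df, steps=10000):
    # Two-tailed p-value: 1 - 2 * integral of the density over (0, |t|),
    # via Simpson's rule (steps must be even)
    h = abs(t) / steps
    s = t_pdf(0.0, df) + t_pdf(abs(t), df)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 1 - 2 * (s * h / 3)

# |t| = 0.621 with 26 d.f. should reproduce a two-tailed p-value close to 0.540
p = two_tailed_p(0.621, 26)
print(round(p, 3))

# Back out s_d assuming d = 0.1 (a hypothetical value, as discussed above)
d = 0.1
s_d = d * math.sqrt(27) / 0.621
print(round(s_d, 3))  # -> 0.837
```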
If you're able to get $d$ more accurately than this, you can get $s_d$ to similar percentage accuracy.
Even a tiny bit of information - for example, knowing that the original observations must be integers - could help narrow it down: in that case each sample mean must be a multiple of $1/27$, which restricts the difference in means to lie between $1/27 \approx 0.037$ and $4/27 \approx 0.148$.
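That restriction is easy to verify by enumeration, assuming $n = 27$ (implied by the 26 d.f.) and the rounding convention used above (a reported 11.8 means the true mean lies in $[11.75, 11.85)$):

```python
n = 27

# Sums of 27 integer observations whose mean is reported as 11.8, i.e. lies in [11.75, 11.85)
before = [s for s in range(int(11.75 * n), int(11.85 * n) + 1) if 11.75 <= s / n < 11.85]
# ... and sums whose mean is reported as 11.9, i.e. lies in [11.85, 11.95)
after = [s for s in range(int(11.85 * n), int(11.95 * n) + 1) if 11.85 <= s / n < 11.95]

# All achievable differences in means, each a multiple of 1/27
diffs = sorted({(a - b) / n for a in after for b in before})
print(round(min(diffs), 3), round(max(diffs), 3))  # -> 0.037 0.148
```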