The first sentence of the current 2015 editorial to which the OP links reads:
The Basic and Applied Social Psychology (BASP) 2014 Editorial
*emphasized* that the null hypothesis significance testing procedure
(NHSTP) is invalid...
(my emphasis)
In other words, for the editors it is an already proven scientific fact that "null hypothesis significance testing" is invalid; the 2014 editorial merely emphasized this, and the current 2015 editorial simply implements it.
The misuse (sometimes even malicious misuse) of NHSTP is indeed well discussed and documented. And it is not unheard of in human history for things to get banned because, when all is said and done, they were found to be misused more often than put to good use (but shouldn't we statistically test that?). A ban can be a "second-best" solution: cut out what on average (inferential statistics) has produced losses rather than gains, and so predict (inferential statistics) that it will be detrimental in the future as well.
But the zeal revealed in the wording of the first sentence above makes this look exactly like a zealot's approach rather than a cool-headed decision to cut off the hand that tends to steal rather than give. If one reads the one-year-older editorial mentioned in the above quote (DOI:10.1080/01973533.2014.865505), one sees that this is only part of an overhaul of the journal's policies by a new Editor.
Scrolling down the editorial, the editors write:
...On the contrary, we believe that the p<.05 bar is too easy to pass and
sometimes serves as an excuse for lower quality research.
So it appears that their conclusion, as regards their discipline, is that null hypotheses are rejected "too often", and so alleged findings may acquire spurious statistical significance. This is not the same argument as the "invalid" dictum of the first sentence.
So, to answer the question: it is obvious that for the editors of the journal, their decision is not only wise but already overdue in its implementation. They appear to think that they have cut out the part of statistics that has become harmful while keeping the beneficial parts; they do not seem to believe that anything here needs replacing with something "equivalent".
Epistemologically, this is an instance where scholars of a social science partially retreat from an attempt to make their discipline more objective in its methods and results through quantitative methods, because they have arrived at the conclusion (how?) that, in the end, the attempt did "more harm than good". I would say that this is a very important matter, in principle possible to have happened, and one that would require years of work to demonstrate "beyond reasonable doubt" and really help the discipline. But just one or two editorials and published papers will most probably (inferential statistics) just ignite a civil war.
The final sentence of the 2015 editorial reads:
We hope and anticipate that banning the NHSTP will have the effect of
increasing the quality of submitted manuscripts by liberating authors
from the stultified structure of NHSTP thinking thereby eliminating an
important obstacle to creative thinking. The NHSTP has dominated
psychology for decades; we hope that by instituting the first NHSTP
ban, we demonstrate that psychology does not need the crutch of the
NHSTP, and that other journals follow suit.
There are different methods for calculating confidence intervals for proportions without using bootstrapping.
For a multinomial proportion, you might try the methods in the DescTools package.
### Adapted from http://rcompanion.org/handbook/H_02.html

if(!require(DescTools)){install.packages("DescTools")}
library(DescTools)

### Counts per response category (reading the names as a 5-point
### Likert item: strongly agree ... strongly disagree)
SA = 10
A  = 9
N  = 20
D  = 5
SD = 1

observed = c(SA, A, N, D, SD)

MultinomCI(observed,
           conf.level = 0.95,
           method = "sisonglaz")
### Methods: "sisonglaz", "cplus1", "goodman"
### est lwr.ci upr.ci
### [1,] 0.22222222 0.08888889 0.3807871
### [2,] 0.20000000 0.06666667 0.3585648
### [3,] 0.44444444 0.31111111 0.6030093
### [4,] 0.11111111 0.00000000 0.2696759
### [5,] 0.02222222 0.00000000 0.1807871
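If you have a single binomial proportion instead, DescTools also provides BinomCI(), which implements several closed-form interval methods (the counts below are made up purely for illustration):

### Made-up example: 37 successes out of 50 trials
BinomCI(37, 50,
        conf.level = 0.95,
        method = "wilson")
### Other methods include "clopper-pearson", "agresti-coull", "jeffreys"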
Best Answer
If you wish to control the familywise type I error rate, then you need to adjust for multiplicity. In particular, if you wish to emphasize p-values, statistical significance, or whether CIs exclude some value in order to claim that some of the multiple comparisons you are looking at are "statistically significant", then that is typically a situation where such an adjustment is warranted.
The Bonferroni correction is one of the simplest (and most conservative) adjustments for multiplicity. You can easily adjust your CIs to match the adjustment (e.g. with 2 hypotheses you calculate 97.5% confidence intervals instead of 95% CIs); a sketch follows below. Other adjustments are uniformly more powerful (e.g. Bonferroni-Holm), but make it harder to find matching CIs.
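A minimal sketch of the Bonferroni-matched CIs in R (the groups, sample sizes, and effect sizes are made up purely for illustration):

### Two made-up comparisons against a control group
alpha = 0.05
m     = 2
conf.adj = 1 - alpha/m   ### Bonferroni-adjusted level: 0.975

set.seed(42)
control = rnorm(30, mean = 0.0)
treatA  = rnorm(30, mean = 0.6)
treatB  = rnorm(30, mean = 0.2)

### 97.5% CIs that match the Bonferroni-adjusted tests
t.test(treatA, control, conf.level = conf.adj)$conf.int
t.test(treatB, control, conf.level = conf.adj)$conf.int

### Adjusted p-values; Holm is uniformly more powerful than Bonferroni
p = c(t.test(treatA, control)$p.value,
      t.test(treatB, control)$p.value)
p.adjust(p, method = "bonferroni")
p.adjust(p, method = "holm")

Note that Holm yields adjusted p-values but no natural matching interval, which is the trade-off mentioned above.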
There are of course approaches for dealing with multiplicity other than controlling the familywise type I error rate, e.g. using shrinkage in Bayesian hierarchical models instead; a rough sketch of that idea follows.
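This sketch assumes the rstanarm package, and the data and model are made up for illustration; the point is that partial pooling pulls the group estimates toward the grand mean instead of testing each comparison separately:

if(!require(rstanarm)){install.packages("rstanarm")}
library(rstanarm)

set.seed(42)
d = data.frame(y     = rnorm(60, mean = rep(c(-0.3, 0, 0.4), each = 20)),
               group = rep(c("A", "B", "C"), each = 20))

### Hierarchical model with a random intercept per group
fit = stan_lmer(y ~ 1 + (1 | group), data = d, refresh = 0)

### Posterior intervals for the group-level deviations; these are
### shrunk toward zero relative to the raw group means
posterior_interval(fit, prob = 0.95, regex_pars = "^b\\[")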