The first sentence of the current 2015 editorial to which the OP links reads:
The Basic and Applied Social Psychology (BASP) 2014 Editorial
*emphasized* that the null hypothesis significance testing procedure
(NHSTP) is invalid...
(my emphasis)
In other words, for the editors it is an already proven scientific fact that "null hypothesis significance testing" is invalid; the 2014 editorial merely emphasized this, and the current 2015 editorial simply implements it.
The misuse (sometimes even malicious) of the NHSTP is indeed well discussed and documented. And it is not unheard of in human history that things get banned because, when all is said and done, they were found to be misused more than put to good use (but shouldn't we test that statistically?). A ban can be a "second-best" solution: cut off what on average (inferential statistics) has produced losses rather than gains, and so predict (inferential statistics) that it will be detrimental in the future as well.
But the zeal revealed by the wording of the first sentence quoted above makes this look like exactly that: a zealot's approach, rather than a cool-headed decision to cut off the hand that tends to steal rather than to offer. If one reads the one-year-older editorial mentioned in the above quote (DOI:10.1080/01973533.2014.865505), one will see that this is only part of an overhaul of the journal's policies by a new Editor.
Scrolling down the editorial, they write:
...On the contrary, we believe that the p<.05 bar is too easy to pass and
sometimes serves as an excuse for lower quality research.
So it appears that their conclusion, as regards their discipline, is that null hypotheses are rejected "too often", and so alleged findings may acquire spurious statistical significance. This is not the same argument as the "invalid" dictum of the first sentence.
So, to answer the question: for the editors of the journal, their decision is evidently not only wise but overdue in its implementation. They appear to believe that they have cut out the part of statistics that has become harmful while keeping the beneficial parts; they do not seem to believe that anything here needs replacing with something "equivalent".
Epistemologically, this is an instance where scholars of a social science partially retreat from an attempt to make their discipline more objective in its methods and results through quantitative methods, because they have arrived at the conclusion (how?) that, in the end, the attempt did "more harm than good". I would say that this is a very important matter, in principle possible to have happened, and one that would require years of work to demonstrate "beyond reasonable doubt" and really help the discipline. But just one or two editorials and published papers will most probably (inferential statistics) just ignite a civil war.
The final sentence of the 2015 editorial reads:
We hope and anticipate that banning the NHSTP will have the effect of
increasing the quality of submitted manuscripts by liberating authors
from the stultified structure of NHSTP thinking thereby eliminating an
important obstacle to creative thinking. The NHSTP has dominated
psychology for decades; we hope that by instituting the first NHSTP
ban, we demonstrate that psychology does not need the crutch of the
NHSTP, and that other journals follow suit.
I will focus this answer on the specific question of what are the alternatives to $p$-values.
There are 21 discussion papers published along with the ASA statement (as Supplemental Materials): by Naomi Altman, Douglas Altman,
Daniel J. Benjamin, Yoav Benjamini, Jim Berger, Don Berry, John Carlin, George Cobb, Andrew Gelman, Steve Goodman, Sander Greenland, John Ioannidis, Joseph Horowitz, Valen
Johnson, Michael Lavine, Michael Lew, Rod Little, Deborah Mayo, Michele Millar, Charles
Poole, Ken Rothman, Stephen Senn, Dalene Stangl, Philip Stark and Steve Ziliak (some of them wrote together; I list all for future searches). These people probably cover all existing opinions about $p$-values and statistical inference.
I have looked through all 21 papers.
Unfortunately, most of them do not discuss any real alternatives, even though the majority are about the limitations, misunderstandings, and various other problems with $p$-values (for a defense of $p$-values, see Benjamini, Mayo, and Senn). This already suggests that alternatives, if any, are not easy to find and/or to defend.
So let us look at the list of "other approaches" given in the ASA statement itself (as quoted in your question):
[Other approaches] include methods that
emphasize estimation over testing, such as confidence, credibility, or prediction intervals;
Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors;
and other approaches such as decision-theoretic modeling and false discovery rates.
- Confidence intervals
Confidence intervals are a frequentist tool that goes hand-in-hand with $p$-values; reporting a confidence interval (or some equivalent, e.g., mean $\pm$ standard error of the mean) together with the $p$-value is almost always a good idea.
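As a minimal sketch of reporting both (hypothetical simulated data; a large-sample normal approximation rather than the t-based procedure a statistics package would use):

```python
import math
import random

random.seed(0)
a = [random.gauss(0.0, 1.0) for _ in range(200)]  # hypothetical control group
b = [random.gauss(0.3, 1.0) for _ in range(200)]  # hypothetical treatment group

def mean(x):
    return sum(x) / len(x)

def var(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

diff = mean(b) - mean(a)
se = math.sqrt(var(a) / len(a) + var(b) / len(b))  # SE of the difference
z = diff / se
p = math.erfc(abs(z) / math.sqrt(2))               # two-sided normal p-value
ci = (diff - 1.96 * se, diff + 1.96 * se)          # 95% confidence interval

print(f"diff = {diff:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), p = {p:.4f}")
```

Reporting all three quantities costs nothing extra: the interval conveys the estimate and its precision, while the $p$-value summarizes the test.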
Some people (not among the ASA disputants) suggest that confidence intervals should replace $p$-values. One of the most outspoken proponents of this approach is Geoff Cumming, who calls it new statistics (a name that I find appalling). See e.g. the blog post by Ulrich Schimmack, A Critical Review of Cumming's (2014) New Statistics: Reselling Old Statistics as New Statistics, for a detailed critique. See also the blog post We cannot afford to study effect size in the lab by Uri Simonsohn for a related point.
See also this thread (and my answer therein) about a similar suggestion by Norm Matloff, where I argue that when reporting CIs one would still like to have the $p$-values reported as well: What is a good, convincing example in which p-values are useful?
Some other people (not among the ASA disputants either), however, argue that confidence intervals, being a frequentist tool, are as misguided as $p$-values and should also be disposed of. See, e.g., Morey et al. 2015, The Fallacy of Placing Confidence in Confidence Intervals linked by @Tim here in the comments. This is a very old debate.
- Bayesian methods
(I don't like how the ASA statement formulates the list. Credible intervals and Bayes factors are listed separately from "Bayesian methods", but they are obviously Bayesian tools. So I count them together here.)
There is a huge and very opinionated literature on the Bayesian vs. frequentist debate. See, e.g., this recent thread for some thoughts: When (if ever) is a frequentist approach substantively better than a Bayesian? Bayesian analysis makes total sense if one has good informative priors, and everybody would be only happy to compute and report $p(\theta|\text{data})$ or $p(H_0:\theta=0|\text{data})$ instead of $p(\text{data at least as extreme}|H_0)$—but alas, people usually do not have good priors. An experimenter records 20 rats doing something in one condition and 20 rats doing the same thing in another condition; the prediction is that the performance of the former rats will exceed the performance of the latter rats, but nobody would be willing or indeed able to state a clear prior over the performance differences. (But see @FrankHarrell's answer where he advocates using "skeptical priors".)
Die-hard Bayesians suggest using Bayesian methods even if one does not have any informative priors. One recent example is Kruschke, 2012, Bayesian estimation supersedes the $t$-test, humbly abbreviated as BEST. The idea is to use a Bayesian model with weak uninformative priors to compute the posterior for the effect of interest (such as, e.g., a group difference). The practical difference from frequentist reasoning usually seems to be minor, and as far as I can see this approach remains unpopular. See What is an "uninformative prior"? Can we ever have one with truly no information? for a discussion of what "uninformative" means (answer: there is no such thing, hence the controversy).
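A toy conjugate-normal calculation (hypothetical summary numbers; a normal prior and likelihood chosen for tractability, not the actual BEST model) illustrates why the practical difference is usually minor: with a vague prior, the posterior interval essentially coincides with the frequentist confidence interval.

```python
import math

def posterior(diff, se, tau):
    """Posterior for a true difference under a N(0, tau^2) prior and a
    Gaussian likelihood centred at the observed `diff` with standard error `se`."""
    post_var = 1.0 / (1.0 / tau ** 2 + 1.0 / se ** 2)
    post_mean = post_var * diff / se ** 2  # prior mean is 0
    return post_mean, math.sqrt(post_var)

diff, se = 0.35, 0.15  # hypothetical observed group difference and its SE

for tau in (1000.0, 10.0, 1.0):
    m, s = posterior(diff, se, tau)
    print(f"tau = {tau:>6}: 95% credible interval ({m - 1.96 * s:.3f}, {m + 1.96 * s:.3f})")

print(f"frequentist 95% CI:   ({diff - 1.96 * se:.3f}, {diff + 1.96 * se:.3f})")
```

Only a genuinely informative prior (small `tau`) pulls the interval away from the frequentist one, which is precisely the "no good priors" predicament described above.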
An alternative approach, going back to Harold Jeffreys, is based on Bayesian testing (as opposed to Bayesian estimation) and uses Bayes factors. One of the more eloquent and prolific proponents is Eric-Jan Wagenmakers, who has published a lot on this topic in recent years. Two features of this approach are worth emphasizing here. First, see Wetzels et al., 2012, A Default Bayesian Hypothesis Test for ANOVA Designs, for an illustration of just how strongly the outcome of such a Bayesian test can depend on the specific choice of the alternative hypothesis $H_1$ and the parameter distribution ("prior") it posits. Second, once a "reasonable" prior is chosen (Wagenmakers advertises Jeffreys' so-called "default" priors), the resulting Bayes factors often turn out to be quite consistent with standard $p$-values, see e.g. this figure from this preprint by Marsman & Wagenmakers:
So while Wagenmakers et al. keep insisting that $p$-values are deeply flawed and Bayes factors are the way to go, one cannot but wonder... (To be fair, the point of Wetzels et al. 2011 is that for $p$-values close to $0.05$ Bayes factors only indicate very weak evidence against the null; but note that this can be easily dealt with in a frequentist paradigm simply by using a more stringent $\alpha$, something that a lot of people are advocating anyway.)
One of the more popular papers by Wagenmakers et al. in defense of Bayes factors is the 2011 Why psychologists must change the way they analyze their data: The case of psi, where they argue that Bem's infamous paper on predicting the future would not have reached its faulty conclusions if only it had used Bayes factors instead of $p$-values. See this thoughtful blog post by Ulrich Schimmack for a detailed (and IMHO convincing) counter-argument: Why Psychologists Should Not Change The Way They Analyze Their Data: The Devil is in the Default Prior.
See also the blog post The Default Bayesian Test is Prejudiced Against Small Effects by Uri Simonsohn.
For completeness, I mention that Wagenmakers 2007, A practical solution to the pervasive problems of $p$-values, suggested using BIC as an approximation to the Bayes factor to replace $p$-values. BIC does not depend on the prior and hence, despite its name, is not really Bayesian; I am not sure what to think about this proposal. It seems that more recently Wagenmakers has been more in favour of Bayesian tests with uninformative Jeffreys' priors, see above.
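For concreteness, here is a rough sketch of the BIC-to-Bayes-factor approximation, $\mathrm{BF}_{01} \approx \exp((\mathrm{BIC}_1 - \mathrm{BIC}_0)/2)$, for a one-sample Gaussian comparison (the data and setup are mine, for illustration only):

```python
import math
import random

random.seed(1)
n = 50
x = [random.gauss(0.3, 1.0) for _ in range(n)]  # simulated data with true mean 0.3
m = sum(x) / n

sse0 = sum(v ** 2 for v in x)        # null model: mean fixed at 0
sse1 = sum((v - m) ** 2 for v in x)  # alternative: mean estimated (1 extra parameter)

# Gaussian-model BIC with the error variance profiled out
bic0 = n * math.log(sse0 / n)                # no extra free parameters
bic1 = n * math.log(sse1 / n) + math.log(n)  # one extra free parameter
bf01 = math.exp((bic1 - bic0) / 2)           # approximate Bayes factor for H0

print(f"BIC0 = {bic0:.2f}, BIC1 = {bic1:.2f}, BF01 ~ {bf01:.3f}")
```

Values of $\mathrm{BF}_{01}$ well below $1$ indicate evidence against the null; note that no prior enters anywhere, which is exactly why one may doubt that this is "really Bayesian".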
For further discussion of Bayes estimation vs. Bayesian testing, see Bayesian parameter estimation or Bayesian hypothesis testing? and links therein.
- Minimum Bayes factors
Among the ASA disputants, this is explicitly suggested by Benjamin & Berger and by Valen Johnson (the only two papers that are all about suggesting a concrete alternative). Their specific suggestions are a bit different but they are similar in spirit.
The ideas of Berger go back to Berger & Sellke 1987, and there are a number of papers by Berger, Sellke, and collaborators up until last year elaborating on this work. The idea is that under a spike-and-slab prior, where the point null hypothesis $\mu=0$ gets probability $0.5$ and all other values of $\mu$ get probability $0.5$ spread symmetrically around $0$ ("local alternatives"), the minimal posterior probability of $H_0$ over all local alternatives, i.e. the minimal Bayes factor, is much higher than the $p$-value. This is the basis of the (much contested) claim that $p$-values "overstate the evidence" against the null. The suggestion is to use this lower bound on the Bayes factor in favour of the null instead of the $p$-value; under some broad assumptions the lower bound turns out to be given by $-ep\log(p)$, i.e., the $p$-value is effectively multiplied by $-e\log(p)$, which is a factor of around $10$ to $20$ for the common range of $p$-values. This approach has been endorsed by Steven Goodman too.
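The bound itself is trivial to compute; a minimal sketch (the formula holds for $p < 1/e$):

```python
import math

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger lower bound on the Bayes factor in favour
    of the point null: BF >= -e * p * ln(p), valid for 0 < p < 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("bound only holds for 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.001):
    bf = min_bayes_factor(p)
    print(f"p = {p:<6} minimum BF = {bf:.4f}  (p multiplied by {bf / p:.1f})")
```

At $p=0.05$ the minimum Bayes factor is about $0.41$, i.e. at best rather modest evidence against the null.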
Later update: See a nice cartoon explaining these ideas in a simple way.
Even later update: See Held & Ott, 2018, On $p$-Values and Bayes Factors for a comprehensive review and further analysis of converting $p$-values to minimum Bayes factors. Here is one table from there:
Valen Johnson suggested something similar in his 2013 PNAS paper; his suggestion approximately boils down to multiplying $p$-values by $\sqrt{-4\pi\log(p)}$, which is a factor of around $5$ to $10$.
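A quick check of the multiplier quoted above (a sketch using only the approximate form of Johnson's proposal, not his full UMPBT derivation) confirms the factor lands in the $5$ to $10$ range for conventional $p$-values:

```python
import math

def johnson_factor(p):
    # Approximate inflation factor from Johnson (PNAS 2013):
    # the p-value is multiplied by sqrt(-4 * pi * ln(p))
    return math.sqrt(-4.0 * math.pi * math.log(p))

for p in (0.05, 0.01, 0.005):
    print(f"p = {p:<6} factor = {johnson_factor(p):.2f}")
```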
For a brief critique of Johnson's paper, see Andrew Gelman's and @Xi'an's reply in PNAS. For the counter-argument to Berger & Sellke 1987, see Casella & Berger 1987 (a different Berger!). Among the ASA discussion papers, Stephen Senn argues explicitly against any of these approaches:
Error probabilities are not posterior probabilities. Certainly, there is much more to statistical analysis than $P$-values but they should be left alone rather than being deformed in some way to become second class Bayesian posterior probabilities.
See also references in Senn's paper, including the ones to Mayo's blog.
- Decision-theoretic modeling and false discovery rates

The ASA statement lists "decision-theoretic modeling and false discovery rates" as another alternative. I have no idea what they are talking about, and I was happy to see this stated in the discussion paper by Stark:
The "other approaches" section ignores the fact that the assumptions of
some of those methods are identical to those of $p$-values. Indeed, some of
the methods use $p$-values as input (e.g., the False Discovery Rate).
I am highly skeptical that there is anything that can replace $p$-values in actual scientific practice such that the problems often associated with $p$-values (replication crisis, $p$-hacking, etc.) would go away. Any fixed decision procedure, e.g. a Bayesian one, can probably be "hacked" in the same way as $p$-values can be $p$-hacked (for some discussion and demonstration of this, see this 2014 blog post by Uri Simonsohn).
To quote from Andrew Gelman's discussion paper:
In summary, I agree with most of the ASA’s statement on $p$-values but I feel that the problems
are deeper, and that the solution is not to reform $p$-values or to replace them with some other
statistical summary or threshold, but rather to move toward a greater acceptance of uncertainty
and embracing of variation.
And from Stephen Senn:
In short, the problem is less with $P$-values per se but with making an idol of them. Substituting another false god will not help.
And here is how Cohen put it in his well-known and highly cited (3.5k citations) 1994 paper The Earth is round ($p<0.05$), where he argued very strongly against $p$-values:
[...] don't look for a magic alternative to NHST, some other objective mechanical ritual to replace it. It doesn't exist.