I think it's important to clearly separate the hypothesis and its corresponding test. For the following, I assume a balanced, between-subjects CRF-$pq$ design (equal cell sizes, Kirk's notation: Completely Randomized Factorial design).
$Y_{ijk}$ is observation $i$ in treatment $j$ of factor $A$ and treatment $k$ of factor $B$ with $1 \leq i \leq n$, $1 \leq j \leq p$ and $1 \leq k \leq q$. The model is $Y_{ijk} = \mu_{jk} + \epsilon_{i(jk)}, \quad \epsilon_{i(jk)} \sim N(0, \sigma_{\epsilon}^2)$
Design:
$\begin{array}{r|ccccc|l}
~ & B 1 & \ldots & B k & \ldots & B q & ~\\\hline
A 1 & \mu_{11} & \ldots & \mu_{1k} & \ldots & \mu_{1q} & \mu_{1.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A j & \mu_{j1} & \ldots & \mu_{jk} & \ldots & \mu_{jq} & \mu_{j.}\\
\ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\
A p & \mu_{p1} & \ldots & \mu_{pk} & \ldots & \mu_{pq} & \mu_{p.}\\\hline
~ & \mu_{.1} & \ldots & \mu_{.k} & \ldots & \mu_{.q} & \mu
\end{array}$
$\mu_{jk}$ is the expected value in cell $jk$, $\epsilon_{i(jk)}$ is the error associated with the measurement of person $i$ in that cell. The $()$ notation indicates that the indices $jk$ are fixed for any given person $i$ because that person is observed in only one condition. A few definitions for the effects:
$\mu_{j.} = \frac{1}{q} \sum_{k=1}^{q} \mu_{jk}$ (average expected value for treatment $j$ of factor $A$)
$\mu_{.k} = \frac{1}{p} \sum_{j=1}^{p} \mu_{jk}$ (average expected value for treatment $k$ of factor $B$)
$\alpha_{j} = \mu_{j.} - \mu$ (effect of treatment $j$ of factor $A$, $\sum_{j=1}^{p} \alpha_{j} = 0$)
$\beta_{k} = \mu_{.k} - \mu$ (effect of treatment $k$ of factor $B$, $\sum_{k=1}^{q} \beta_{k} = 0$)
$(\alpha \beta)_{jk} = \mu_{jk} - (\mu + \alpha_{j} + \beta_{k}) = \mu_{jk} - \mu_{j.} - \mu_{.k} + \mu$
(interaction effect for the combination of treatment $j$ of factor $A$ with treatment $k$ of factor $B$, $\sum_{j=1}^{p} (\alpha \beta)_{jk} = 0 \, \wedge \, \sum_{k=1}^{q} (\alpha \beta)_{jk} = 0)$
$\alpha_{j}^{(k)} = \mu_{jk} - \mu_{.k}$
(conditional main effect for treatment $j$ of factor $A$ within fixed treatment $k$ of factor $B$, with $\sum_{j=1}^{p} \alpha_{j}^{(k)} = 0 \;\forall\, k$ and $\frac{1}{q} \sum_{k=1}^{q} \alpha_{j}^{(k)} = \alpha_{j} \;\forall\, j$)
$\beta_{k}^{(j)} = \mu_{jk} - \mu_{j.}$
(conditional main effect for treatment $k$ of factor $B$ within fixed treatment $j$ of factor $A$, with $\sum_{k=1}^{q} \beta_{k}^{(j)} = 0 \;\forall\, j$ and $\frac{1}{p} \sum_{j=1}^{p} \beta_{k}^{(j)} = \beta_{k} \;\forall\, k$)
With these definitions, the model can also be written as:
$Y_{ijk} = \mu + \alpha_{j} + \beta_{k} + (\alpha \beta)_{jk} + \epsilon_{i(jk)}$
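As a concrete numeric illustration of these definitions (the $2 \times 3$ table of cell means is made up purely for illustration), the sketch below computes $\alpha_{j}$, $\beta_{k}$, and $(\alpha\beta)_{jk}$, checks the zero-sum constraints, and verifies that the decomposition reconstructs every $\mu_{jk}$:

```python
import numpy as np

# Hypothetical 2x3 table of cell means mu_jk (rows: factor A, cols: factor B)
mu = np.array([[10., 12., 14.],
               [11., 15., 19.]])

grand = mu.mean()          # mu (grand mean)
mu_j  = mu.mean(axis=1)    # row means mu_{j.}
mu_k  = mu.mean(axis=0)    # column means mu_{.k}

alpha = mu_j - grand       # alpha_j
beta  = mu_k - grand       # beta_k
ab    = mu - mu_j[:, None] - mu_k[None, :] + grand   # (alpha beta)_jk

# Zero-sum constraints from the definitions above
assert np.isclose(alpha.sum(), 0) and np.isclose(beta.sum(), 0)
assert np.allclose(ab.sum(axis=0), 0) and np.allclose(ab.sum(axis=1), 0)

# The decomposition reconstructs every cell mean exactly
recon = grand + alpha[:, None] + beta[None, :] + ab
assert np.allclose(recon, mu)
```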
This allows us to express the null hypothesis of no interaction in several equivalent ways:
$H_{0_{I}}: \sum_{j}\sum_{k} (\alpha \beta)^{2}_{jk} = 0$
(all individual interaction terms are $0$, such that $\mu_{jk} = \mu + \alpha_{j} + \beta_{k} \, \forall j, k$. This means that treatment effects of both factors - as defined above - are additive everywhere.)
$H_{0_{I}}: \alpha_{j}^{(k)} - \alpha_{j}^{(k')} = 0 \quad \forall \, j \, \wedge \, \forall \, k, k' \quad (k \neq k')$
(all conditional main effects for any treatment $j$ of factor $A$ are the same, and therefore equal $\alpha_{j}$. This is essentially Dason's answer.)
$H_{0_{I}}: \beta_{k}^{(j)} - \beta_{k}^{(j')} = 0 \quad \forall \, j, j' \, \wedge \, \forall \, k \quad (j \neq j')$
(all conditional main effects for any treatment $k$ of factor $B$ are the same, and therefore equal $\beta_{k}$.)
$H_{0_{I}}$: In a diagram showing the expected values $\mu_{jk}$ with the levels of factor $A$ on the $x$-axis and the levels of factor $B$ drawn as separate lines, the $q$ lines are parallel.
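The equivalence between "all $(\alpha\beta)_{jk} = 0$" and "parallel profiles" is easy to check numerically; the two tables of cell means below are invented examples, one additive and one with crossing profiles:

```python
import numpy as np

def interaction_terms(mu):
    """(alpha beta)_jk = mu_jk - mu_j. - mu_.k + mu for a p x q table of cell means."""
    return (mu - mu.mean(axis=1, keepdims=True)
               - mu.mean(axis=0, keepdims=True) + mu.mean())

def lines_parallel(mu, tol=1e-12):
    """Profiles (rows of mu plotted over k) are parallel iff every row
    differs from the first row by a constant."""
    diffs = mu - mu[0]
    return bool(np.all(np.abs(diffs - diffs[:, :1]) < tol))

additive = np.array([[10., 12., 14.],
                     [13., 15., 17.]])   # second row = first row + 3: parallel
crossed  = np.array([[10., 12., 14.],
                     [14., 12., 10.]])   # profiles cross: interaction present

assert lines_parallel(additive) and np.allclose(interaction_terms(additive), 0)
assert not lines_parallel(crossed) and not np.allclose(interaction_terms(crossed), 0)
```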
The connection between the research hypothesis and the choice of null and alternative hypotheses is not set in stone. I can't see any particular reason why one could not say (just casting your phrase in plain English, because that way I won't get tangled up):
"We think the treatment should reduce reaction time" ...
... but then formulate a two-sided alternative, if that was appropriate. I don't think any great song and dance is required to use a two-tailed test if you're clear that you want your hypothesis test to have power in both tails.
That is, I see no problem with discussing the properties of the hypothesis test as if the alternative were not the same thing as your research hypothesis, and then simply interpreting the results of the test back in terms of the research hypothesis.
Of course, I don't control how pointlessly dogmatic any particular journal, editor or referee may be. [Indeed, in my experience, my thoughts seem rarely to influence people whose minds are set on something being the case.]
The same attitude carries through to ANOVA; ANOVA isn't "saving" you from this issue, since a multigroup test can likewise be made "directional" (in an ANOVA-like situation, whether or not you still call it ANOVA) --
With one-factor comparisons ($k$ groups), you have $k!$ possible orderings of the means. If you are interested in some particular ordered alternative, you can specify it clearly up front and simply use a test sensitive to that alternative (you could specify a contrast, for example, though there are other approaches to ordered alternatives).
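As a sketch of the contrast approach (the data, group sizes, and linear-trend weights below are all hypothetical), a one-sided contrast test against an increasing ordering of three group means might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical data: 3 groups whose means we suspect increase monotonically
groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (0.0, 0.4, 0.8)]

means = np.array([g.mean() for g in groups])
ns    = np.array([len(g) for g in groups])
df    = int(ns.sum()) - len(groups)
mse   = sum(((g - g.mean()) ** 2).sum() for g in groups) / df  # pooled error variance

c  = np.array([-1., 0., 1.])          # linear trend contrast; weights sum to 0
L  = c @ means                        # estimated contrast value
se = np.sqrt(mse * (c ** 2 / ns).sum())
t  = L / se
p_one_sided = stats.t.sf(t, df)       # power concentrated in the increasing direction
```

The choice of weights encodes the ordered alternative up front; a different hypothesized ordering would simply use different weights.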
So if a research hypothesis was "forcing" you to do one tailed, it would, I think, equally "force" you to do some equivalent with more groups, since that's possible.
Best Answer
It's correct and reasonable, but ANOVA looks at the square of the effect, so to test a directional hypothesis you have to go back to the one-sided t-test. With two treatments, though, the t-test and the ANOVA are the same test; the ANOVA $F$ statistic is just $t^2$.
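A quick numeric check of the $F = t^2$ identity for two groups, using simulated data (the group parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=15)
b = rng.normal(0.5, 1.0, size=15)

t_stat, _ = stats.ttest_ind(a, b)    # pooled-variance two-sample t-test
f_stat, _ = stats.f_oneway(a, b)     # one-way ANOVA on the same two groups

assert np.isclose(t_stat ** 2, f_stat)   # F is exactly t^2 with two groups
```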