The p-value of your significance test can be interpreted as the probability of observing a value of the relevant statistic at least as extreme as the value you actually observed, given that the null hypothesis is true. (Note that the p-value makes no reference to which values of the statistic are likely under the alternative hypothesis.)
EDIT: in mathematical terminology, this can be written as:
$$p\text{-value} = \Pr(T \geq T_{obs} \mid H_{0})$$
where $T$ is some function of the data (the "statistic") and $T_{obs}$ is the actual value of $T$ observed; $H_{0}$ denotes the conditions the null hypothesis imposes on the sampling distribution of $T$.
You can never be sure that your assumptions hold, only whether or not the data you observed are consistent with them. A p-value gives a rough measure of this consistency.
A p-value does not give the probability that the same data will be observed, only the probability that the statistic takes a value at least as extreme as the one observed, given the null hypothesis.
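The definition above can be illustrated by Monte Carlo simulation. This is a hedged sketch with made-up numbers: the statistic $T$ is taken to be the sample mean of $n = 20$ observations, $H_0$ says each observation is standard normal, and $T_{obs} = 0.55$ is chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T is the mean of n = 20 values; under H0 each
# observation is standard normal; t_obs is a made-up observed value.
n, n_sims = 20, 100_000
t_obs = 0.55

# Simulate the sampling distribution of T under H0 ...
t_null = rng.standard_normal((n_sims, n)).mean(axis=1)

# ... and estimate Pr(T >= T_obs | H0) as the fraction at least as extreme.
p_value = np.mean(t_null >= t_obs)
print(p_value)
```

For this setup $T \sim N(0, 1/20)$ under $H_0$, so the estimate should land near the exact upper-tail probability of about $0.007$; note that the alternative hypothesis never enters the calculation.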
Brown-Forsythe simply performs an ANOVA on $z_{ij}=|y_{ij} - \tilde{y}_j|$, where $y_{ij}$ is the $i$th observation in group $j$ and $\tilde{y}_j$ is the median of group $j$. Groups with larger spread in $y$ will have larger mean $z$. (Levene's test is similar, but the $z$'s are defined as deviations from the group mean instead of the group median.)
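Here's a minimal sketch of that construction, assuming scipy is available; the group sizes and spreads are made up for illustration. A one-way ANOVA on the $z$'s should reproduce the statistic that `scipy.stats.levene` reports with `center='median'` (which is the Brown-Forsythe version):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Three hypothetical groups; the third has larger spread.
groups = [rng.normal(0, s, size=30) for s in (1.0, 1.0, 2.0)]

# Brown-Forsythe by hand: ANOVA on absolute deviations from each group's median.
z = [np.abs(y - np.median(y)) for y in groups]
f_stat, p_anova = stats.f_oneway(*z)

# scipy's levene with center='median' is the Brown-Forsythe test,
# so it should give the same statistic and p-value.
w_stat, p_levene = stats.levene(*groups, center='median')
```

The two statistics agree because the Brown-Forsythe $W$ is, by definition, the one-way ANOVA $F$ computed on the $z$'s.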
If you only have two groups (where a one-tailed test has meaning), you'd simply replace the ANOVA in either of those tests with a plain two-sample t-test on the $z$s ... except one-tailed.
Of course, you'd have to specify the direction a priori (before seeing the data).
If you already accept that the Brown-Forsythe or Levene $F$ is appropriate in the two-group, two-tailed case, then the corresponding $t$-test is necessarily appropriate there as well (worked two-tailed, it rejects in exactly the same cases as the $F$); consequently the only remaining question is whether it works as well one-tailed as it does two-tailed. Simple symmetry considerations should suffice for that.
So if you think Brown-Forsythe or Levene are okay, then just do a $t$-test. Nothing to it.
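A sketch of the two-group version, assuming a reasonably recent scipy (the `alternative` argument to `ttest_ind` requires scipy >= 1.6) and made-up data where group `b` was predicted, a priori, to have the larger spread. It also checks the equivalence claimed above: two-tailed, the squared $t$ equals the two-group Brown-Forsythe $F$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two hypothetical groups; suppose we predicted in advance that b is more spread out.
a = rng.normal(0, 1.0, size=40)
b = rng.normal(0, 2.0, size=40)

# Brown-Forsythe z's: absolute deviations from each group's median.
za = np.abs(a - np.median(a))
zb = np.abs(b - np.median(b))

# One-tailed two-sample t-test on the z's (H1: spread of b exceeds spread of a).
t_stat, p_one = stats.ttest_ind(zb, za, alternative='greater')

# With two groups and a pooled-variance t, t^2 equals the ANOVA F on the z's.
f_stat, _ = stats.f_oneway(za, zb)
```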
[Caveat emptor: using such a test prior to an ANOVA to decide whether or not to apply some adjustment for heteroskedasticity, or a more robust procedure, is not advisable. Better to assume the variances are unequal at the outset.]
NIST & Wikipedia both cite Brown & Forsythe's 1974 paper in saying that the version of Levene's test using the median performs better for skewed distributions.
You can't infer that the test performed well or badly from the p-value you get unless you know whether the samples did in fact come from populations with unequal variances, and even then you'd have to repeat the experiment many times to find the distribution of the p-value. That is just what Brown & Forsythe did to justify their claim.