What makes a test statistic "extreme" depends on your alternative, which imposes an ordering (or at least a partial order) on the sample space - you seek to reject those cases most consistent (in the sense being measured by a test statistic) with the alternative.
When you don't really have an alternative to give you something to be most consistent with, you're essentially left with the likelihood under the null to supply the ordering, as is most often seen in Fisher's exact test. There, the probability of the outcomes (the 2x2 tables) under the null orders the test statistic (so that 'extreme' means 'low probability').
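That likelihood ordering can be sketched numerically. The following is a minimal illustration (assuming scipy is available; the 2x2 table counts are made up for the example): with all margins fixed, the null distribution of one cell is hypergeometric, and the two-sided p-value sums the probabilities of every table no more likely than the one observed.

```python
from scipy.stats import hypergeom, fisher_exact

# A hypothetical 2x2 table (made-up counts, for illustration only):
#           col1  col2
#   row1      3     7
#   row2      5     2
a_obs = 3
row1, row2 = 10, 7
col1 = 8
total = row1 + row2

# Under the null, with all margins fixed, the top-left cell count
# follows a hypergeometric distribution.
def table_prob(a):
    return hypergeom.pmf(a, total, row1, col1)

# 'At least as extreme' = every table whose null probability is no
# larger than the observed table's (i.e. ordering by null likelihood).
p_obs = table_prob(a_obs)
a_min = max(0, col1 - row2)
a_max = min(row1, col1)
p_value = sum(table_prob(a) for a in range(a_min, a_max + 1)
              if table_prob(a) <= p_obs * (1 + 1e-9))

# scipy's two-sided Fisher exact test uses the same ordering:
_, p_scipy = fisher_exact([[3, 7], [5, 2]])
print(p_value, p_scipy)
```

Note that the 'tails' here are tails in probability, not in the value of the cell count - which is exactly the point about the ordering coming from the likelihood.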
If you were in a situation where the far left (or far right, or both) of your bimodal null distribution was associated with the kind of alternative you were interested in, you wouldn't seek to reject a test statistic of 60. But if you're in a situation where you don't have an alternative like that, then 60 is unusual - it has low likelihood; a value of 60 is inconsistent with your model and would lead you to reject.
[This would be seen by some as one central difference between Fisherian and Neyman-Pearson hypothesis testing. By introducing an explicit alternative, and a ratio of likelihoods, a low likelihood under the null won't necessarily cause you to reject in a Neyman-Pearson framework (as long as it performs relatively well compared to the alternative), while for Fisher, you don't really have an alternative, and the likelihood under the null is the thing you're interested in.]
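A toy numerical illustration of that bracketed contrast (the distributions here are invented for the example, and scipy is assumed): a point can have very low likelihood under a bimodal null - so a Fisherian test would flag it - while its likelihood ratio against a particular alternative is unremarkable, so a Neyman-Pearson test sees no evidence for that alternative.

```python
from scipy.stats import norm

# Hypothetical setup (made up for illustration): the null is an equal
# mixture of N(-5, 1) and N(5, 1), so its density is bimodal in the
# tails; the alternative is N(-5, 1).
def f0(x):
    return 0.5 * norm.pdf(x, -5, 1) + 0.5 * norm.pdf(x, 5, 1)

def f1(x):
    return norm.pdf(x, -5, 1)

x = 0.0
null_density = f0(x)       # tiny: x = 0 is very unlikely under the null
lik_ratio = f1(x) / f0(x)  # exactly 1 here: no pull toward this alternative
print(null_density, lik_ratio)
```

So whether x = 0 counts as 'extreme' depends entirely on whether you order outcomes by null likelihood alone or by the likelihood ratio.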
I'm not suggesting either approach is right or wrong here - you go ahead and work out for yourself what kind of alternatives you seek power against, whether it's a specific one, or just anything that's unlikely enough under the null. Once you know what you want, the rest (including what 'at least as extreme' means) pretty much follows from that.
Suppose $\boldsymbol X = (X_1, X_2, \ldots, X_n)$ is a sample drawn from a normal distribution with unknown mean $\mu$ and known variance $\sigma^2$. The sample mean $\bar X$ is therefore normal with mean $\mu$ and variance $\sigma^2/n$. On this much, I think there can be no possibility of disagreement.
Now, you propose that our test statistic is $$Z = \frac{\bar X - \mu}{\sigma/\sqrt{n}} \sim \operatorname{Normal}(0,1).$$ Right? BUT THIS IS NOT A STATISTIC. Why? Because $\mu$ is an unknown parameter. A statistic is a function of the sample that does not depend on any unknown parameters. Therefore, an assumption must be made about $\mu$ in order for $Z$ to be a statistic. One such assumption is to write $$H_0 : \mu = \mu_0, \quad \text{vs.} \quad H_1 : \mu \ne \mu_0,$$ under which $$Z \mid H_0 = \frac{\bar X - \mu_0}{\sigma/\sqrt{n}} \sim \operatorname{Normal}(0,1),$$ which is a statistic.
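As a concrete sketch of the point (using only the standard library, with made-up sample values): once $\mu_0$ is fixed by $H_0$, everything in $Z$ is computable from the data, and the two-sided p-value follows from the standard normal distribution.

```python
from math import erf, sqrt

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Hypothetical data (values made up for illustration); sigma is known.
sample = [5.1, 4.8, 5.4, 5.0, 5.3, 4.9]
n = len(sample)
sigma = 0.5
mu0 = 5.0  # fixed by H0 - this is what makes Z a statistic

xbar = sum(sample) / n
z = (xbar - mu0) / (sigma / sqrt(n))

# Two-sided p-value: probability, under H0, of a Z at least as
# extreme (in absolute value) as the one observed.
p = 2.0 * (1.0 - phi(abs(z)))
print(z, p)
```

Had we plugged in $\mu = \bar X$ instead, the numerator would be identically zero and there would be nothing left to compute.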
By contrast, you propose to use $\mu = \bar X$ itself. In that case, $Z = 0$ identically, and it is not even a random variable, let alone normally distributed. There is nothing to test.
The absolute value is taken merely to give a concise way of defining 'extreme' in both directions. So $|T| \ge |t_0|$ simply means $T \ge |t_0|$ or $T \le -|t_0|$.
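Numerically, the decomposition is easy to check (a small standard-library sketch, taking $T$ standard normal under the null and a made-up observed value $t_0$):

```python
from math import erf, sqrt

def phi(x):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

t0 = -1.7  # hypothetical observed statistic (its sign doesn't matter below)

# P(|T| >= |t0|) computed as the two one-sided pieces...
upper = 1.0 - phi(abs(t0))   # P(T >= |t0|)
lower = phi(-abs(t0))        # P(T <= -|t0|)
p_two_sided = upper + lower

# ...which, for a null distribution symmetric about 0, is just twice
# one tail.
print(p_two_sided, 2.0 * upper)
```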