The most obvious experimental signature of tachyons would be motion at speeds greater than $c$. Negative results were reported by Murthy and later in 1988 by Clay, who studied showers of particles created in the earth's atmosphere by cosmic rays, looking for precursor particles that arrived before the first gamma rays. One could also look for particles with spacelike energy-momentum vectors. Alvager and Erman, in a 1965 experiment, studied the radioactive decay of thulium-170, and found that no such particles were emitted at the level of 1 per 10,000 decays.
Some subatomic particles, such as dark matter and neutrinos, don't interact strongly with matter, and are therefore difficult to detect directly. It's possible that tachyons exist but don't interact strongly with matter, in which case they would not have been detectable in the experiments described above. In this scenario, it might still be possible to infer their existence indirectly through missing energy-momentum in nuclear reactions. This is how the existence of the neutrino was originally inferred. An accelerator experiment by Baltay in 1970 searched for reactions in which the missing energy-momentum was spacelike, and found no such events. They put an upper limit of 1 in 1,000 on the probability of such reactions under their experimental conditions.
For a long time after the discovery of the neutrino, very little was known about its mass, so it was consistent with the experimental evidence to imagine that one or more species of neutrinos were tachyons, and Chodos et al. made such speculations in 1985. In a 2011 experiment at CERN, neutrinos were believed to have been seen moving at a speed slightly greater than $c$. The experiment turned out to be a mistake, but if it had been correct, then it would have proved that neutrinos were tachyons. An experiment called KATRIN, currently nearing the start of operation at Karlsruhe, will provide the first direct measurement of the mass of the neutrino, by measuring very precisely the missing energy-momentum in the decay of hydrogen-3.
References
Alvager and Kreisler, "Quest for Faster-Than-Light Particles," Phys. Rev. 171 (1968) 1357, doi:10.1103/PhysRev.171.1357, https://sci-hub.tw/10.1103/PhysRev.171.1357
Baltay, Feinberg, Yeh, and Linsker, "Search for Uncharged Faster-than-Light Particles," Phys. Rev. D 1 (1970) 759, doi:10.1103/PhysRevD.1.759, https://sci-hub.tw/10.1103/PhysRevD.1.759
Chodos and Kostelecky, "Nuclear Null Tests for Spacelike Neutrinos," arXiv:hep-ph/9409404, https://arxiv.org/abs/hep-ph/9409404
Clay, "A Search for Tachyons in Cosmic Ray Showers," Australian Journal of Physics 41 (1988) 93, http://adsabs.harvard.edu/full/1988AuJPh..41...93C
There are many competing limits on the maximum energy an accelerator like the LHC (i.e. a synchrotron, a type of circular accelerator) can reach. The main two are energy loss due to bremsstrahlung (also called synchrotron radiation in this context, but that's a much less fun name to say) and the bending power of the magnets.
The bending power of the magnets isn't that interesting. There's a maximum magnetic field that we can achieve with current technology, and its strength fundamentally limits how small the circle can be. Larger magnetic fields mean the particles curve more, letting you build a collider at higher energy with the same size. Unfortunately, superconducting magnets are limited in field: a given material has a maximum achievable field strength. You can't just make a larger magnet to get a larger field - you need to develop a whole new material to make them from.
Bremsstrahlung
Bremsstrahlung is German for "braking radiation." Whenever a charged particle is accelerated, it emits some radiation. For acceleration perpendicular to the path (for instance, if it's traveling in a circle), the power loss is given by:
$$P=\frac{q^2 a^2\gamma^4}{6\pi\epsilon_0c^3}$$
$q$ is the charge, $a$ is the acceleration, $\gamma$ is the Lorentz factor, $\epsilon_0$ is the permittivity of free space, and $c$ is the speed of light.
In high-energy physics, we usually simplify things by setting various constants equal to one (natural units). In those units, this is
$$ P=\frac{2\alpha a^2\gamma^4}{3}$$
This is the instantaneous power loss. We're usually more interested in the energy lost over a whole loop around the ring. The particles are moving essentially at the speed of light, so the time to go around once is just $\frac{2\pi r}{c}$. We can simplify some more: $\gamma=\frac{E}{m}$, and $a=\frac{v^2}{r}$. All together, this gives:
$$ E_{\rm loop} = \frac{4\pi\alpha E^4}{3m^4r}$$
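Spelling out that substitution (with $v\approx c=1$, so $a=1/r$ and one loop takes $T=2\pi r$):

$$ E_{\rm loop} = P\,T = \frac{2\alpha\gamma^4}{3r^2}\cdot 2\pi r = \frac{4\pi\alpha}{3}\,\frac{\gamma^4}{r} = \frac{4\pi\alpha E^4}{3m^4 r}$$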
The main things to note from this are:
- As we increase the energy, the power loss increases very quickly
- Increasing the mass of the particles is very effective at decreasing the power loss
- Increasing the radius of the accelerator helps, but not as much as increasing the energy hurts.
To put these numbers in perspective, if the LHC were running with electrons and positrons instead of protons, at the same energy and everything, each $6.5~\rm TeV$ electron would need to have $37\,000~\rm TeV$ of energy added per loop. All told, assuming perfect efficiency in the accelerating sections, the LHC would consume about $20~\rm PW$, or about 1000 times the world's energy usage, just to keep the particles in a circle (this doesn't even include actually accelerating them). Needless to say, this is not practical. (And of course, even if we had the energy, we don't have the technology.)
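You can check the per-loop number yourself with the natural-units formula above. This is just a sanity-check sketch; the LHC circumference (26.7 km) is the published figure, and $\hbar c$ is used to convert the radius into natural units:

```python
import math

alpha = 1 / 137.036        # fine-structure constant
hbar_c = 1.973e-19         # TeV * m, converts the radius to natural units (1/TeV)

E = 6.5                    # beam energy, TeV
m_e = 0.511e-6             # electron mass, TeV
r = 26659 / (2 * math.pi)  # effective LHC radius in m (26.7 km circumference)

# E_loop = 4*pi*alpha*E^4 / (3*m^4*r), with r in natural units
r_nat = r / hbar_c
E_loop = 4 * math.pi * alpha * E**4 / (3 * m_e**4 * r_nat)

# Convert to a power per particle: one loop at v ~ c takes ~89 microseconds
loop_time = 26659 / 3.0e8          # seconds
joules_per_TeV = 1.602e-7
power_per_electron = E_loop * joules_per_TeV / loop_time

print(f"Energy lost per loop: {E_loop:.3g} TeV")           # ~3.7e4 TeV
print(f"Power per electron:  {power_per_electron:.3g} W")
```

Multiplying that per-electron power by the roughly $10^{14}$ particles the LHC stores per beam gets you into the tens-of-petawatts range quoted above.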
Anyway, this is the main reason particle colliders need to be large: the smaller we make them, the more energy they burn just to stay on. Naturally, the cost of a collider goes up with size. So this becomes a relatively simple optimization problem: larger means higher up-front costs but lower operating costs. For any target energy, there is an optimal size that costs the least over the long run.
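As a toy illustration of that trade-off (every coefficient here is invented, not real collider economics): construction cost grows with circumference, operating cost scales like the synchrotron loss $\propto E^4/r$, and the total has a minimum at some intermediate radius:

```python
import math

# Toy cost model: the coefficients and functional forms below are
# made up for illustration, not real collider economics.
def total_cost(r, E, years=20, build_per_km=1.0, ops_coeff=0.1):
    """Total cost (arbitrary units) of a ring of radius r (km) at beam energy E (TeV)."""
    build = build_per_km * 2 * math.pi * r   # tunnel + magnets scale with circumference
    ops = ops_coeff * E**4 / r * years       # synchrotron loss scales as E^4 / r
    return build + ops

# Scan radii for a fixed target energy: bigger rings cost more to build
# but less to run, so the total cost has a minimum in between.
E = 7.0
radii = [r / 10 for r in range(5, 2000)]     # 0.5 km to 200 km
best_r = min(radii, key=lambda r: total_cost(r, E))
print(f"cheapest radius for E = {E} TeV: {best_r:.1f} km")
```

With this model the optimum sits where build and operating costs are comparable; raising the target energy pushes the optimal radius up quickly, which is the qualitative point.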
This is also why the LHC is a hadron collider. Protons are much heavier than electrons, and so the loss is much less. Electrons are so light that circular colliders are out of the question entirely on the energy frontier. If the next collider were to be another synchrotron, it would probably either collide protons or possibly muons.
The problem with using protons is that they're composite particles, which makes the collisions much messier than using a lepton collider. It also makes the effective energy available less than it would be for an equivalent lepton collider.
The next collider
There are several different proposals for future colliders floating around in the high-energy physics community. A sample of them follows.
One is a linear electron-positron collider. This would allow us to make very high-precision measurements of Higgs physics, like previous experiments did for electroweak physics, and open up other precision physics as well. This collider would need to be a linear accelerator for the reasons described above. A linear accelerator has some significant downsides: in particular, you only get one chance to accelerate the particles, since they don't come around again. So linear accelerators tend to need to be pretty long. And once you accelerate the particles, most of them miss each other and are lost - you don't get many chances to collide them like you do at the LHC.
Another proposal is basically "the LHC, but bigger": a proton synchrotron at around $100~\rm TeV$.
One very interesting proposal is a muon collider. Muons have the advantage of being leptons, so they have clean collisions, but they are much heavier than electrons, so you can reasonably put them in a synchrotron. As an added bonus, muon collisions have a much higher chance of producing Higgs bosons than electrons do. The main difficulty here is that muons are fairly short-lived (around $2.2~\rm\mu s$), so they would need to be accelerated very quickly before they decay. But very cool, if it can be done!
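To get a feel for those timescales (the beam energy and ring size here are just illustrative picks, not a real design), time dilation stretches the muon's lab-frame lifetime by $\gamma = E/m$:

```python
# Rough muon-collider timescales; the 1 TeV beam energy and
# LHC-sized ring are arbitrary examples for illustration.
m_mu = 0.1057        # muon mass, GeV
tau = 2.2e-6         # rest-frame lifetime, s
c = 3.0e8            # m/s

E = 1000.0                  # example beam energy: 1 TeV, in GeV
gamma = E / m_mu            # Lorentz factor
lab_lifetime = gamma * tau  # dilated lifetime in the lab frame

circumference = 27e3        # m, LHC-sized ring for comparison
loops = lab_lifetime * c / circumference
print(f"gamma = {gamma:.0f}, lab lifetime = {lab_lifetime*1e3:.1f} ms")
print(f"~{loops:.0f} loops around an LHC-sized ring before decaying")
```

Even at $1~\rm TeV$, a muon survives only a couple hundred turns, which is why the acceleration has to happen so fast.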
The Future
If we want to explore the highest energies, there's really no way around bigger colliders:
- For a fixed "strongest magnet," synchrotrons fundamentally need to be bigger to get to higher energy. And even assuming we could get magnets of unlimited strength, as we increase the energy there's a point where it's cheaper to just scrap the whole thing and build a bigger one.
- Linear accelerators are limited in the energy they can reach by their size and available accelerator technology. There is research into better acceleration techniques (such as plasma wakefield accelerators), but getting them much better will require a fundamental change in the technology.
There is interesting research that can be done into precision measurements of particle physics at low energy, but for discovering new particles higher energy accelerators will probably always be desirable.
Why go underground
The main reason for going underground is that the earth above provides some radiation shielding. An accelerator where everything is working properly is (outside the beam pipe) a relatively low-radiation environment. However if you have a steering or focusing magnet malfunction, so that the beam spills out of the pipe, you can briefly generate lots of prompt radiation.
The amount of shielding that you need depends on the energy of the accelerator. For example:
- The 12 GeV electron accelerator at JLab is seven or eight meters underground --- just a couple of flights of stairs.
- The 1 GeV proton machine at the Spallation Neutron Source is actually at ground level, but there's an earthen berm above it.
- The (shuttered) 25 MV tandem accelerator at ORNL actually did most of its acceleration in a tower aboveground, and the various beam pathways are in a single above-ground building.
The lower the energy of your accelerator, the less earthen shielding you need for safety.
Another answer points out that background-limited experiments go underground to reduce cosmic ray backgrounds. This is a reason to put your detectors underground, but not necessarily a reason to put your accelerator underground.