This is the proverbial sixty-four thousand dollar question for fundamental physics. It may be helpful to break it down into steps.
- What are the possible consistent theories of quantum gravity?
- Which of these can (or must) be extended to include matter and gauge fields?
- Which of these can be made to include the standard model as a low energy limit?
Once we have answered these questions, the theoretical program to understand the foundations of physics is essentially complete and the rest is stamp collecting and experiment. That is not going to happen today, but let's see where we are.
String Theory works very well as a perturbative theory of gravitons that appears to be finite at all orders, but there is no full proof that it is a complete theory of quantum gravity. It requires matter and gauge fields with supersymmetry to avoid anomalies. The size of gauge groups suggests that it could potentially include the standard model. It is too strong a claim to say that it does incorporate the standard model. A popular view is that it has a vast landscape of solutions which is sufficiently diverse to suggest that the standard model is covered, but crucial elements such as supersymmetry breaking and the cosmological constant problem are not yet resolved.
Supergravity theories are potentially alternative non-string theories that could provide a perturbative theory of quantum gravity. Indications are that they are finite up to about seven loops due to a hidden E7 symmetry, but they are likely to have problems at higher loops unless there are further hidden symmetries. These theories have multiplets of gauge groups and matter. The 4D theories do not have sufficiently large gauge groups for the standard model, but compactified higher-dimensional supergravity does. A more subtle problem is to include the right chiral structure, and this may be possible only with the methods of M-theory.
It has long been the conventional wisdom that supergravity theories can only be made complete by adding strings. Recent work using twistor methods on 4D supergravity seems to support this idea (e.g. the work of Skinner).
Loop Quantum Gravity is an attempt to quantise gravity using the canonical formalism, and it leads to a description of quantum gravity in terms of loops and spin network states which evolve in time. Although this is regarded as an alternative to string theory and supergravity, it does not provide a perturbative limit which would make it possible to compare with these approaches. It is possible that ST/SUGRA and LQG are looking at similar things from a different angle. In fact the recent progress on N=8 supergravity as a twistor string theory has some features that are similar to LQG. Both involve 2D worldsheet objects and network-like objects.
The main distinctions are that LQG does not have supersymmetry and N=8 SUGRA does not use knots. Even so, there has been some progress on a supersymmetric version of LQG, and the Yangian symmetries used in N=8 SUGRA should be amenable to a q-deformation that brings in knots. It remains to be seen if these theories can be unified.
It is worth saying that all these approaches involve trying to quantise gravity in different ways. Although quantisation is not a completely unique procedure, it is normal to expect that different ways of quantising the same thing should lead to related results. If something like supersymmetry, strings or knots is needed to get consistency in one approach, the chances are that it will be needed in another.
I have not mentioned other approaches to quantum gravity such as spin foams, group field theory, random graphs, causal sets, shape dynamics, non-commutative geometry, ultra-violet fixed points, etc. Some of these are related to the other main approaches but are less well developed. It should also be mentioned that there are always attempts to unify gravity and the standard model classically, e.g. Garrett Lisi's E8 TOE, Weinstein's Geometric Unity, etc. These may or may not tell us something interesting, but it is only when you try to quantise gravity that strong constraints apply, so there is no reason to think they should be related to the attempts to quantise gravity.
So in conclusion, all approaches that have made any kind of real progress with quantising gravity look like they may be related. Much more has been revealed so far from this need to quantise consistently than from directly trying to unify gravity with the standard model. This may not be so surprising when you consider the enormous difference in energy scales between the two.
When Newton's mechanics was new, people expected a theory of the solar system to produce better descriptions for the stuff left unexplained by Ptolemy: the equant distances, the main-cycle periods, and epicycle locations. Newton's theory didn't do much there--- it just traded in the Ptolemaic parameters for the orbital parameters of the planets. But the result predicted the distances to the planets (in terms of the astronomical unit), and to the sun, and these distances could be determined by triangulation. Further, the theory explained the much later observation of stellar aberration, and gave a value for the speed of light. If your idea of what a theory should predict was blinkered by having been brought up in a Ptolemaic universe, you might have considered the Kepler/Newton theory observationally inconsequential, since it did not modify the Ptolemaic values for the observed locations of the planets in any deep way.
The points you bring up in string theory are similar. String theory tells you that you must relate the standard model to a microscopic configuration of extra dimensions and geometry, including perhaps some matter branes and almost certainly some orbifolds. These predictions are predicated on knowing the structure of the standard model, much like the Newtonian model is predicated on the structure of the Ptolemaic one. But the result is that you get a complete self-consistent gravitational model with nothing left unknown, so the number of predictions is vastly greater than the number of inputs, even in the worst case scenario you can imagine.
The idea that we will have $10^{40}$ standard-model-like vacua with different values of the electron mass, muon mass, and so on, is extremely pessimistic. I was taking this position to show that even in this worst of all possible worlds, string theory is predictive. It is difficult to see how you could make so many standard-model-like vacua, when we have such a hard time constructing just one.
There are some predictions that we can make from string theory while knowing hardly anything at all about the vacuum, just from the observation of a high Planck scale and the general principles of degree-of-freedom counting. These predictions are generally weak. For example, if we find that the Higgs is in a technicolor sector, with a high Planck scale, and the technicolor gauge group is SO(1000) or Sp(10000), it is flat out impossible to account for this using string theory. The size of the gauge group in a heterotic compactification is two E8s, and no bigger. In order to violate this bound on the number of gauge generators, you need a lot of branes in some kind of type II theory, and then you won't be able to stabilize the compactification at a small enough scale, because all the gauge flux from the branes will push the volume to be too big, so that the GUT scale will fall too low, and the problems of large extra dimensions will reappear.
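A back-of-envelope count of generators shows why such huge gauge groups are out of reach. The Lie-algebra dimension formulas below are standard facts (SO(n) has n(n-1)/2 generators, the compact symplectic group Sp(2n) has n(2n+1), E8 has 248); taking "Sp(10000)" to mean the rank-5000 symplectic group is my reading of the text. A minimal sketch:

```python
# Compare generator counts (Lie-algebra dimensions) against the
# heterotic E8 x E8 bound mentioned in the paragraph above.

def dim_so(n):
    """Dimension of SO(n): number of independent antisymmetric n x n matrices."""
    return n * (n - 1) // 2

def dim_sp(two_n):
    """Dimension of the compact symplectic group Sp(2n) is n(2n + 1)."""
    n = two_n // 2
    return n * (2 * n + 1)

DIM_E8 = 248                   # dimension of the exceptional group E8
heterotic_bound = 2 * DIM_E8   # E8 x E8 gives 496 gauge generators

print("heterotic E8 x E8:", heterotic_bound)    # 496
print("SO(1000)         :", dim_so(1000))       # 499500
print("Sp(10000)        :", dim_sp(10000))      # 50005000
```

Both candidate technicolor groups exceed the 496-generator heterotic bound by several orders of magnitude, which is the point of the argument.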
In a similar vein, if you discover a new ultra-weak gauge field in nature, like a new charge that protons carry that electrons don't, you will falsify string theory. You can't make small charges without small masses, and this constraint is the most stringent of several model constraints on low-energy strings in the swampland program.
These types of things are too general--- they rule out stuff that nobody seriously proposes (although at least one group in the 90s did propose ultra-weak gauge charges as a way to stabilize the proton). But it is notable that what we observe at low energies are relatively small gauge groups compared to what is theoretically possible, in generational echoes of a relatively limited number of representations, all of the most basic sort. This is the kind of stuff that naturally appears in compactified string theory, without any fine adjustments.
Still, in order to make real predictions, you need to know the details of the standard model vacuum we live in, the shape of the compactification and all the crud in it. The reason we don't know yet is mostly because we are limited in our ability to explore non-supersymmetric compactifications.
When the low-energy theory is supersymmetric, it often has parameters, moduli, which you can vary while keeping the theory supersymmetric. These are usually geometrical things; the most famous example is the size of the compactification circles in a toroidal compactification of type II strings. In such a vacuum, there would be parameters that we would need to determine experimentally. But these universes are generally empty. If you put nonsupersymmetric random stuff into a toroidal compactification, it is no longer stable. Our universe has many dimensions already collapsed, and only a few dimensions growing quickly.
The cosmological constant in our universe is an important clue, because it is, as far as we can see, just a miraculous cancellation between QCD pion-condensate energy, QCD gluon-condensate energy, Higgs condensate energy, zero-point energy density in the low-energy fields, dark-matter field zero-point energy (and any dark-matter condensates), and some energy which comes from the Planckian configuration of the extra dimensions. If the low cosmological constant is accidental to 110 decimal places, which I find extremely unlikely, the observation of its close-to-zero value gives 110 decimal places of model-restricting data. It is much more likely that the cosmological constant is cancelled out at the Higgs scale, or perhaps it is an SO(16)xSO(16) type thing where the cosmological constant is so close to zero because the theory is an orbifold-like projection of a close-to-SUSY theory. In this case, you might have only 20 decimal places of data from the cosmological constant, or even fewer.
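The information-content claim above can be made concrete: each decimal digit of an unexplained cancellation carries log2(10) ≈ 3.32 bits of constraint on the underlying model. A minimal sketch, using the decimal-place counts from the paragraph (110 and 20) as inputs:

```python
import math

def bits_of_constraint(decimal_places):
    """Convert decimal places of an observed cancellation into bits of
    model-restricting information: one decimal digit ~ log2(10) bits."""
    return decimal_places * math.log2(10)

for dp in (110, 20):
    print(f"{dp} decimal places ~ {bits_of_constraint(dp):.0f} bits of data")
```

So an accidental cancellation to 110 places would encode roughly 365 bits of data about the vacuum, versus roughly 66 bits for a 20-place cancellation.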
To answer your specific concerns:
for i: we can make string compactifications that contain the MSSM emerging from an SO(10) or SU(5) GUT at low energies and no exotic matter. Although these types of compactifications are generally still too supersymmetric, they have the right number of generations, and the right gauge groups and particle content. These ideas are found here: http://arxiv.org/abs/hep-th/0512177 , and the general scheme is based on work of Candelas, Horowitz, Strominger, Witten from 1985, in supersymmetric geometric compactifications of the heterotic string theories of Gross, Harvey, Martinec, Rohm.
Kachru and collaborators in the last decade explained how you can break SUSY at low energies using only gauge fluxes in the extra dimensions. Orbifold-type projections can even break SUSY at high energy, leaving only non-SUSY stuff, and the classic example is the SO(16)xSO(16) strings of Alvarez-Gaume, Ginsparg, Moore, Vafa (see here: http://arxiv.org/abs/hep-th/9707160 ). This suggests very strongly that we can find the standard model in a natural compactification, and more so, that we can find several different embeddings.
for ii--- the answer is not so different from other theories. The low energy "laws" are not immutable laws; they can be modified by knocking the extra-dimensional stuff around. There are no immutable field theory laws in string theory--- the field theory is an effective field theory describing the fluctuations of a moderately complex system, about as complicated as a typical non-biological high-Tc superconducting ceramic. So the laws are random only inasmuch as the crud at high energy can be rearranged consistently, which is not that much.
for iii--- you must remember that I am talking about the worst case scenario. We have a lot of clues in the standard model, like the small electron mass, and the SUSY (or lack thereof) that we will (or will not) find at the LHC. These clues are qualitative things that cut down the number of possibilities drastically. It seems very unlikely to me that we will have to do a computer-aided search through $10^{40}$ vacua, but if push comes to shove, we should be able to do that too.
Historical note about Ptolemy, Aristarchus, Archimedes and Apollonius
To be strictly honest about the history I incidentally mentioned, I should say that I believe the preponderance of the evidence suggests that Aristarchus, Archimedes, and Apollonius developed a heliocentric model with elliptical orbits, or perhaps only off-center circular orbits with nonuniform motion, already in the 3rd century BC, but they couldn't convince anybody else, precisely because the theory couldn't really make new predictions with measurements that were available at the time, and it made counterintuitive and, to some denominations, heretical predictions that the Earth was moving frictionlessly through a Democritus-style void. The reason one should believe this about those ancient folks is that we know for sure, from the Sand Reckoner, that Archimedes was a fan of Aristarchus's heliocentric model, and that Apollonius and Archimedes felt a strong motivation to make a detailed study of conic sections--- they knew what we would today call the defining algebraic equations of the parabola, the ellipse, and the hyperbola. It was Apollonius who introduced the idea of epicycle and deferent, I believe as an Earth-centered approximation to a nonuniform conic orbit in a heliocentric model. It is certain to my mind that Apollonius, a contemporary of Archimedes, was a heliocentrist.
Further, Ptolemy's deferent/epicycle/equant system is explained in the Almagest with an introduction which is replete with an anachronistic Heisenberg-like positivism: Ptolemy notes that his system is observationally equivalent to some other models, which is an obvious reference to the heliocentric model, but we can't determine distances to the planets, so we will never know which model is right in an abstract sense, so we might as well use the more convenient model which treats the Earth as stationary, and makes orbits relative to the Earth. The Ptolemaic deferent/equant/epicycle model is elegant. If you only heard about it by hearsay, you would never know this--- there were no "epicycles upon epicycles" in Ptolemy; only Copernicus did stuff like that, and then only so as to match Ptolemy in accuracy, using only uniform circular orbits centered on the sun, not off-center circles with equants, or area-law ellipses. Ptolemy's system can be obtained by taking a heliocentric model with off-center circular orbits and an equal-areas law, putting your finger on the Earth, and demanding that everything revolve around the Earth. This back-derivation from a preexisting heliocentric off-center circle model is more plausible to me than saying that Ptolemy came up with it from scratch, especially since Apollonius is responsible for the basic idea.
Barring an unlikely discovery of new surviving works by Archimedes, Apollonius, or Aristarchus, one way to check if this idea is true is to look for clues in ancient texts, to see if there was a mention of off-center circles in the heliocentric model, or of nonuniform circular motion. Aristotle is too early, since he dies before Aristarchus is active, but his school continues, and might have opposed the scientific ideas floating around in later texts.
String theory includes every self-consistent conceivable quantum gravity situation, including the 11-dimensional M-theory vacuum, and various compactifications with SUSY (and zero cosmological constant), and so on. It can't pick out the standard model uniquely, or uniquely predict the parameters of the standard model, any more than Newtonian mechanics can predict the ratio of the orbit of Jupiter to that of Saturn. This doesn't make string theory a bad theory. Newtonian mechanics is still incredibly predictive for the solar system.
String theory is maximally predictive: it predicts as much as can be predicted, and no more. This should be enough to make severe testable predictions, even for experiments strictly at low energies--- because the theory has no adjustable parameters. Unless we are extremely unfortunate, and a bazillion standard model vacua exist with the right dark matter and cosmological constant, we should be able to discriminate between all the possibilities by just going through them conceptually until we find the right one, or rule them all out.
What "no adjustable parameters" means is that if you want to get the standard model out, you need to make a consistent geometrical or string-geometrical ansatz for how the universe looks at small distances, and then you get the standard model for certain geometries. If we could do extremely high energy experiments, like make Planckian black holes, we could explore this geometry directly, and then string theory would predict relations between the geometry and low-energy particle physics.
We can't explore the geometry directly, but we are lucky in that these geometries at short distances are not infinitely rich. They are tightly constrained, so you don't have infinite freedom. You can't stuff in too much structure without making the size of the small dimensions wrong; you can't put in arbitrary stuff; you are limited by the constraint that the low-energy stuff must be connected to the high-energy stuff.
Most phenomenological string work since the 1990s does not take any of these constraints into account, because they aren't present if you go to large extra dimensions.
You don't have infinitely many different vacua which are qualitatively like our universe; you only have a finite (very large) number, on the order of the number of sentences that fit on a napkin.
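The napkin comparison can be made quantitative under toy assumptions: with a 27-symbol alphabet (26 letters plus a space) and room for about 300 characters, there are 27^300, roughly 10^429, possible texts. The alphabet size and napkin capacity are illustrative assumptions of mine; the point is just that the count is finite but astronomically large, the same character as landscape counts like 10^40. A sketch:

```python
import math

ALPHABET = 27        # 26 letters plus a space -- illustrative assumption
NAPKIN_CHARS = 300   # characters that fit on a napkin -- illustrative assumption

# Python integers are arbitrary precision, so this count is exact.
napkin_texts = ALPHABET ** NAPKIN_CHARS

# Number of decimal digits, via a float logarithm of the exact integer.
digits = math.floor(math.log10(napkin_texts)) + 1
print(f"possible napkin texts ~ 10^{digits - 1}")  # ~ 10^429
```

Finite numbers of this size are unsearchable by brute force, but as the text argues, qualitative clues can cut them down drastically.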
You can go through all the vacua, and find the one that fits our universe, or fail to find it. The vacua which are like our universe are not supersymmetric, and will not have any continuously adjustable parameters. You might say "it is hopeless to search through these possibilities", but consider that the number of possible solar systems is greater, and we only have data that is available from Earth.
There is no more way of predicting which compactification will come out of the big bang than of predicting how a plate will smash (although you can possibly make statistics). But there are some constraints on how a plate smashes--- you can't get more pieces than the plate had material for originally: if you have a big piece, you have to have fewer small pieces elsewhere. This procedure is most tightly constrained by the assumption of low-energy supersymmetry, which requires analytic manifolds of a type studied by mathematicians, the Calabi-Yaus, and so observation of low-energy SUSY would be a tremendous clue for the geometry.
Of course, the real world might not be supersymmetric until the quantum gravity scale; it might have a SUSY breaking which makes a non-SUSY low-energy spectrum. We know such vacua exist, but they generally have a big cosmological constant. But the example of the SO(16)xSO(16) heterotic strings shows that there are simple examples where you get a non-SUSY low-energy vacuum without work.
If your intuition is from field theory, you think that you can just make up whatever you want. This is just not so in string theory. You can't make up anything without geometry, and you only have so much geometry to go around. From the qualitative structure of the standard model, plus the SUSY, plus say 2-decimal-place data on 20 parameters (that's enough to discriminate between 10^40 possibilities which are qualitatively identical to the SM), the theory should predict the rest of the decimal places with absolutely no adjustable anything. Further, finding the right vacuum will predict as much as can be predicted about every experiment you can perform.
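The counting behind "2-decimal-place data on 20 parameters discriminates 10^40 possibilities" is just multiplication of independent measurements: each parameter measured to 2 decimal places distinguishes about 10^2 values, so 20 of them distinguish (10^2)^20 = 10^40. A minimal check of the arithmetic:

```python
params = 20
decimal_places = 2

values_per_param = 10 ** decimal_places        # ~100 distinguishable values each
distinguishable_vacua = values_per_param ** params

# (10^2)^20 = 10^40, matching the number of qualitatively identical vacua
# assumed in the worst-case scenario above.
assert distinguishable_vacua == 10 ** 40
print(f"distinguishable vacua: 10^{params * decimal_places}")
```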
This is the best we can do. The idea that we can predict the standard model uniquely was only suggested in string propaganda from the 1980s, which claimed that the string vacuum would be unique and identical to ours; nobody in the field really took this seriously. This was the 1980s fib that string theorists pushed, because they could tell people "We will predict the SM parameters". This is mostly true, but not by predicting them from scratch, rather from the clues they give us to the microscopic geometry (which is certainly enough when the extra dimensions are small).