How about the study of minimal surfaces (with physical applications to soap films, etc.)? In fact, one might argue that the Lagrangian formulation of minimal surfaces (the problem of Plateau) is one of the oldest "classical field theory" problems, and that it led to the revival of the calculus of variations in the early twentieth century (see especially the works of Morrey).
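For concreteness, in the simplest (non-parametric) setting one minimizes the area of a graph $u\colon\Omega\subset\mathbb{R}^2\to\mathbb{R}$:

$$A[u]=\int_\Omega \sqrt{1+|\nabla u|^2}\,dx,\qquad \operatorname{div}\!\left(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\right)=0,$$

the second expression being the Euler-Lagrange (minimal surface) equation of the first.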
Slightly related is the general study of continuum mechanics and (non-linear) elasticity, which is rather like fluid mechanics except that it concerns deformations of solids.
Another well-known application of the general framework is the study of harmonic maps and wave maps (also known as the non-linear sigma model in physics). The study of such systems led to the development of the techniques of compensated compactness and multilinear product estimates in partial differential equations (see, e.g., the works of Hélein, Klainerman, Tao, Krieger, and many others). The regularity properties of harmonic maps are still under active study (Li and Tian, Nguyen, Weinstein, and others). And in physics, sigma models find applications from particle physics (as a model for the equivariant Yang-Mills equation) to general relativity (stationary solutions in Einstein-vacuum or Einstein-Maxwell theories).
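For maps $\phi\colon M\to N\hookrightarrow\mathbb{R}^k$ into an isometrically embedded target, the Lagrangian is the Dirichlet energy; with the usual sign conventions, for the sphere target $N=S^{k-1}$ the harmonic map equation and its hyperbolic analogue (the wave map equation) read

$$E[\phi] = \frac{1}{2}\int_M |d\phi|^2, \qquad \Delta\phi + |d\phi|^2\,\phi = 0, \qquad \Box\phi + \big(\partial^\mu\phi\cdot\partial_\mu\phi\big)\,\phi = 0.$$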
Sigma models were also generalized by Tony Skyrme in his namesake quasilinear model (in both hyperbolic and elliptic versions), which is not yet well understood. This model has found applications from nucleon physics to condensed matter, and now to topological materials science. The study of the stationary problem (and of its generalization, the Faddeev-Skyrme model) has led to interesting developments in topology and geometry (since the model admits topological solitons); see for example the work of Kapitanski.
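Schematically (and up to normalization conventions, which vary), the Skyrme Lagrangian for a map $U\colon \mathbb{R}^{3+1}\to SU(2)$ is

$$\mathcal{L} = \frac{f_\pi^2}{16}\operatorname{Tr}\big(\partial_\mu U\,\partial^\mu U^\dagger\big) + \frac{1}{32e^2}\operatorname{Tr}\big([L_\mu, L_\nu][L^\mu, L^\nu]\big), \qquad L_\mu := U^\dagger\partial_\mu U;$$

the quartic term is what makes the model quasilinear, and the topological solitons mentioned above are classified by the degree in $\pi_3(SU(2))\cong\mathbb{Z}$ (the baryon number).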
I don't know anything about the space of all distributions dual to smooth test functions, but do know a fair bit about computable measure theory (from a certain perspective).
First, you mention that you have a computable algorithm which generates a probability distribution. I believe you are saying that you have a computable map from $[0,1]$ (or, technically, from the space of infinite binary sequences) to some set $U$, where $U$ is a space of distributions of some type.
Say your map is $f$. How are you describing the element $f(x) \in U$? In computable analysis, there is a standard way to talk about these things. We describe each element of $U$ with an infinite code (although each element may have more than one code). Then $f$ works as follows: it reads the bits of $x$, and from those bits it starts to write out the code for $f(x)$. The more bits of $x$ it knows, the more bits of the code for $f(x)$ it can output.
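To make the "read bits, write code bits" picture concrete, here is a minimal sketch in Python (the names and the example map $x\mapsto x/2$ are mine, purely for illustration; your $f$ would of course be something else):

```python
from itertools import islice

# A minimal sketch of a "computable map" in the sense above: it reads
# bits of x and, after seeing only finitely many of them, commits to
# more and more bits of a code for f(x).  Here f(x) = x/2 on binary
# expansions, chosen only because it is easy to see what happens.

def f(input_bits):
    """input_bits: iterator over the binary digits of x in [0,1).
    Yields the binary digits of x/2 (a right shift of the expansion)."""
    yield 0                  # the first output bit needs no input at all
    for b in input_bits:
        yield b              # each later output bit depends only on the
                             # finitely many input bits read so far

# Usage: the more bits of x we supply, the more bits of f(x) we learn.
x = iter([1, 0, 1, 1])               # x = 0.1011... in binary
print(list(islice(f(x), 5)))         # -> [0, 1, 0, 1, 1], i.e. 0.01011...
```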
(Note that not every space has such a nice encoding. If the space isn't separable, there isn't a good way to describe each object while still preserving the important properties, namely the topology. So, in your example above, the space of distributions dual to smooth test functions: is it a separable space, perhaps in a weak topology? Does the encoding you use for elements of $U$ generate the same topology?)
The important property of such a computable map is that it must be continuous (in the topology generated by the encoding, which usually coincides with the topology of the space). Since $f$ is continuous, we can push the measure on $[0,1]$ forward to a Borel measure on $U$ as follows: if $S$ is an open set, then $f^{-1}(S)$ is open and $\mu(f^{-1}(S))$ is defined. The same works for any Borel set, hence you get a Borel measure.
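Explicitly, the induced measure is the pushforward: writing $\mu$ for the measure on the input space (Lebesgue measure on $[0,1]$, or the fair-coin measure on sequences),

$$\nu(S) := \mu\big(f^{-1}(S)\big) \qquad \text{for Borel } S \subseteq U,$$

and continuity of $f$ guarantees that $f^{-1}(S)$ is Borel whenever $S$ is, so $\nu$ is a well-defined Borel (probability) measure on $U$.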
Borel measures are sufficient for most applications I can think of (you can integrate continuous functions and, from them, define and integrate the $L^p$ functions), but once again, I don't know anything about your applications.
Also, if the function $f$ doesn't always converge to a point in $U$, but only does so almost everywhere, then $f$ is not continuous; but it is still fairly nice, and I believe something can be said about the measure, although I need to think about it.
Update: If $f$ converges with probability one, then the set of input points on which $f$ converges is a measure-one $G_{\delta}$ set; in particular, it is Borel. The function remains continuous on that domain (in the subspace topology). Hence there is still an induced Borel measure on the target space. (Take a Borel set and map it back: the preimage is Borel in the restricted domain, and hence Borel in $[0,1]$.)
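One way to see the $G_{\delta}$ claim, under the bit-reading model described above (my assumption about your setup):

$$D = \bigcap_{n\ge 1} D_n, \qquad D_n := \{x : f \text{ emits at least } n \text{ code bits on input } x\};$$

each $D_n$ is open, since emitting $n$ output bits requires only a finite prefix of $x$ (and every extension of that prefix triggers the same output), so $D$ is a countable intersection of open sets, i.e., $G_{\delta}$.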
Update: Also, I am assuming that your algorithm computes the output from the input directly. Let me give an example of what I mean. Say one wants to compute a real number. To compute it directly, I should be able to ask the algorithm for that number to $n$ decimal places, with an error bound of $1/10^n$. An indirect algorithm works as follows: the computer just gives me a sequence of approximations that converges to the number. The computer may say $0,0,0,\ldots$, so I think the sequence converges to $0$, but at some point it starts to output $1,1,1,\ldots$. I can never be sure that my current approximation is close to the final answer. Even if your algorithm is of the indirect type, it doesn't matter for your purposes: it will still define a Borel map (albeit one of higher complexity than continuous), and hence will induce a Borel measure on the target space. (The almost-everywhere concerns are similar; they also go up in complexity, but remain Borel.) Without knowing more about your application, it is difficult for me to say much specific to your case.
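Here is a small Python sketch of the direct/indirect distinction (again a made-up example, not your algorithm): the direct routine comes with a certified error bound at every request, while the indirect one only promises convergence in the limit.

```python
from fractions import Fraction

def direct_sqrt2(n):
    """Direct computation: return a rational within 10**-n of sqrt(2),
    with a *guaranteed* error bound, by interval bisection."""
    lo, hi = Fraction(1), Fraction(2)        # sqrt(2) lies in [1, 2]
    while hi - lo > Fraction(1, 10**n):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo                                # |lo - sqrt(2)| <= 10**-n

def indirect_example():
    """Indirect computation: a sequence that does converge, but at no
    finite stage can an observer bound the distance to the limit."""
    k = 0
    while True:
        yield 0 if k < 10**6 else 1          # looks like 0,0,0,... for a
        k += 1                               # long time, then becomes 1,1,1,...

print(direct_sqrt2(3))                       # 181/128 = 1.4140625, certified
                                             # to be within 10**-3 of sqrt(2)
```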
Am I correct in my understanding of your construction, especially the computable side of it? For example, is this the way you describe the computable map from $[0,1]$ to $U$?
On a more general note, much of measure theory has been developed in a set-theoretic framework, which isn't very helpful for computable concerns. But using various other characterizations of measures, one can once again do measure theory with an eye to what can and cannot be computed.
I hope this helps, and that I didn't just trivialize your question.
Physicist here. The input for a physical theory is always some topological space, together with some structure (such as a metric) that depends on the specific context. The dynamics are required to be invariant under the isometries thereof. For example, the theory of Special Relativity deals with a manifold of the form $\mathbb R^n$, equipped with the (pseudo-)metric $\eta = \operatorname{diag}(-1,+1,+1,\dots,+1)$. The dynamics are invariant under the so-called Poincaré transformations, i.e., the group of isometries of this metric.
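Concretely, the Poincaré transformations are the affine maps preserving $\eta$,

$$x \mapsto \Lambda x + a, \qquad \Lambda^{\mathsf T}\eta\,\Lambda = \eta,$$

i.e., Lorentz transformations combined with translations.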
We typically think of gravity as a manifestation of non-trivial geometry, i.e., a generalization of Special Relativity where the manifold and the metric are no longer necessarily of the form above. There are two levels at which a theory can include gravity:
Gravity as a background field, where the manifold and the metric are fixed, and the dynamics correspond to other degrees of freedom propagating in this manifold, and
Gravity as a dynamical field, where the metric (and possibly the topological space itself) is determined by dynamical equations. The system is solved as a self-consistent set of equations for the metric and the remaining degrees of freedom, each influencing the other.
The former doesn't have a specific name as far as I know; we just call it "dynamics in curved spacetime". The latter is known as a "theory of gravity", the prototypical example being General Relativity and its extensions. Here the metric is determined by a set of PDEs. This system of equations is invariant under diffeomorphisms, as it must be. This can be regarded as a generalization of the statement that the dynamics are invariant under the isometries of the metric, except that we now allow arbitrary maps, not only isometries (because there is no fixed metric to begin with). This is also known as general covariance.
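In the prototypical case of General Relativity, these PDEs are the Einstein field equations (in units with $c = 1$):

$$R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = 8\pi G\,T_{\mu\nu},$$

where the left-hand side is built from the metric and its derivatives, $T_{\mu\nu}$ encodes the remaining degrees of freedom, and both sides transform covariantly under diffeomorphisms.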
The epithet "quantum" refers to the fact that the dynamics are, well, quantum. There is no perfectly convincing definition of what it means to be quantum (cf. this physics.SE post), but the general sentiment is that the state of the system is described by a vector in some Hilbert space (as opposed to a classical system, where the state is described by some point in some fibre bundle over your manifold).
A "quantum theory of gravity" is, thus, a model of a system where we include gravity (non-trivial geometry/topology) in a quantum mechanical way. Whatever the model is, it is to be general covariant. A standard way to construct such a model proceeds as follows:
First construct a quantum mechanical model that depends on a fixed background metric. We know how to do this, at least in a formal way (that is perfectly good for our purposes).
Integrate the previous object with respect to all metrics, whatever that may mean.
The latter step guarantees that the result is generally covariant. Unfortunately, we don't really know how to do this in practice; every attempt so far has failed.
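Schematically, the two-step recipe above amounts to the formal expression

$$Z = \int \mathcal{D}g\; e^{iS_{\mathrm{grav}}[g]}\, Z_{\mathrm{matter}}[g],$$

where $Z_{\mathrm{matter}}[g]$ is the fixed-background theory of the first step, and the "integral over all metrics" $\int\mathcal{D}g$ is precisely the piece that nobody knows how to define.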
Witten (https://projecteuclid.org/euclid.cmp/1104178138) proposed an alternative way to construct a quantum theory of gravity: instead of integrating over all metrics, set up a model that does not depend on a metric at all, from the very beginning. The dynamical variables are typically differential forms, and we admit only operations that do not require a metric (such as exterior differentiation). Metric-independent models are known as topological, because they depend on the manifold only as a topological space (typically with some extra structure, such as a framing or a spin structure).
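A standard example of such a metric-independent action (the textbook illustration, not necessarily the construction in Witten's paper) is Chern-Simons theory on a 3-manifold $M$, built entirely from a connection 1-form $A$, the wedge product, and the exterior derivative:

$$S_{\mathrm{CS}}[A] = \frac{k}{4\pi}\int_M \operatorname{Tr}\!\Big(A\wedge dA + \tfrac{2}{3}\,A\wedge A\wedge A\Big).$$

No metric appears anywhere, so the action is automatically diffeomorphism-invariant; this is also the standard example of a theory that depends on a framing of $M$.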
So, to sum up: a theory of gravity is a theory in which the physical manifold is itself a dynamical variable. One can accomplish this by introducing a metric and allowing it to interact with (and feel the back-reaction from) the other degrees of freedom. Another way is to introduce no metric at all, and use degrees of freedom that can be defined without reference to one, such as differential forms. Making the theory "quantum" is still an open problem, and we don't really know what we want here: what does it even mean to have a quantum theory of gravity? What should we ask of such a model? Integrating over metrics is very problematic, while topological gravity is perfectly well-defined, even if very unrealistic from a physical point of view. Perhaps we should use it as a toy model to explore the properties of quantum theories of geometry/topology, without the noise introduced by more realistic models.