(1) It depends on what you mean by Hamiltonian and Lagrangian mechanics.
If you mean the classical mechanics aspect as in, say, Vladimir Arnold's "Mathematical Methods in ..." book, then the answer is no. Hamiltonian and Lagrangian mechanics in that sense have a lot more to do with ordinary differential equations and symplectic geometry than with functional analysis. In fact, if you consider Lagrangian mechanics in that sense as an "example" of the calculus of variations, I'd tell you that you are missing out on the full power of the variational principle.
Now, if you consider instead classical field theory (as in physics, not as in algebraic number theory) derived from an action principle, otherwise known as Lagrangian field theory, then yes, calculus of variations is what it's all about, and functional analysis is King in the Hamiltonian formulation of Lagrangian field theory.
Now, you may also consider quantum mechanics as "Hamiltonian mechanics", either through first quantization or through considering the evolution as an ordinary differential equation in a Hilbert space. Then through this (somewhat stretched) definition, you can argue that there is a connection between Hamiltonian mechanics and functional analysis, just because to understand ODEs on a Hilbert space it is necessary to understand operators on the space.
(2) Mechanics aside, functional analysis is deeply connected to the calculus of variations. In the past forty years or so, most of the development in this direction (that I know of) has taken place within the nonlinear elasticity community, where the objects of study are the existence and regularity of stationary points of certain "energy functionals". The methods involved have found most of their applications with elliptic-type operators. For evolutionary equations, functional analysis plays less well with the calculus of variations, for two reasons: (i) the action is often not bounded from below, and (ii) reasonable spaces of functions often have poor integrability, so it is rather difficult to define appropriate function spaces to study. (Which is not to say that such studies are not done, just that they are less developed.)
(3) See Eric's answer and my comment about Reed and Simon about connection of functional analysis and quantum mechanics.
I don't know anything about the space of all distributions dual to smooth test functions, but I do know a fair bit about computable measure theory (from a certain perspective).
First, you mention that you have a computable algorithm which generates a probability distribution. I believe you are saying that you have a computable algorithm from $[0,1]$ (or technically the space of infinite binary sequences) to some set $U$ where $U$ is the space of distributions of some type.
Say your map is $f$. How are you describing the element $f(x) \in U$? In computable analysis, there is a standard way to talk about these things. We can describe each element of $U$ with an infinite code (although each element has more than one code). Then $f$ works as follows: it reads the bits of $x$; from those bits, it starts to write out the code for $f(x)$. The more bits of $x$ that are known, the more bits of the code for $f(x)$ become known.
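To make the "more input bits, more output bits" idea concrete, here is a minimal sketch in Python. The map $f(x) = x^2$ on $[0,1]$ is my own hypothetical example (it is not from your construction); the point is only that each finite prefix of the binary code of $x$ determines a finite-precision piece of the code of $f(x)$, here represented as a shrinking rational interval:

```python
from fractions import Fraction

def f(x):
    """The map being computed: hypothetically, x -> x^2 on [0, 1]."""
    return x * x

def approximate(bits):
    """Given a finite prefix of the binary expansion of x, return an
    interval [lo, hi] guaranteed to contain f(x).  Reading more bits of
    the input narrows the output interval: this is the sense in which a
    computable map writes out the code of f(x) from the bits of x."""
    n = len(bits)
    lo_x = Fraction(sum(b << (n - 1 - i) for i, b in enumerate(bits)), 2 ** n)
    hi_x = lo_x + Fraction(1, 2 ** n)   # x lies in [lo_x, hi_x]
    return f(lo_x), f(hi_x)             # x -> x^2 is monotone on [0, 1]

# The binary expansion of x = 3/4 begins 0.110000...
lo, hi = approximate([1, 1, 0, 0, 0, 0])
# f(3/4) = 9/16 is trapped in the interval [lo, hi]
```

Note that a machine reading only finitely many bits can never output $f(x)$ exactly, only ever-better approximations, which is exactly why such maps are forced to be continuous.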
(Note, not every space has such a nice encoding. If the space isn't separable, there isn't a good way to describe each object while still preserving the important properties, namely the topology. In your example above, is the space of distributions dual to smooth test functions separable, perhaps in a weak topology? Does the encoding you use for elements of $U$ generate the same topology?)
The important property of such a computable map is that it must be continuous (in the topology generated by the encoding, which usually coincides with the topology of the space). Since $f$ is continuous, we can induce a Borel measure on $U$ as follows. If $S$ is an open set, then $f^{-1}(S)$ is open, so $\mu(f^{-1}(S))$ is defined. The same works for any Borel set; hence you have a Borel measure on $U$.
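As a sketch of this pushforward construction: the mass the induced measure assigns to a set $S$ is the mass of its preimage. Below, $\mu$ is Lebesgue measure on $[0,1]$ and $f(x) = x^2$ is again a hypothetical stand-in; the grid sum is only a numerical illustration, not a computable-analysis algorithm:

```python
def pushforward_estimate(f, indicator_S, n=100_000):
    """Estimate the induced measure mu(f^{-1}(S)), where mu is Lebesgue
    measure on [0, 1]: count the fraction of (midpoint) grid points x
    whose image f(x) lands in S."""
    hits = sum(indicator_S(f((i + 0.5) / n)) for i in range(n))
    return hits / n

# Hypothetical example: f(x) = x^2 and S = (0, 1/4).
# The preimage f^{-1}(S) = (0, 1/2) has Lebesgue measure 1/2,
# so the induced measure of S should come out near 0.5.
est = pushforward_estimate(lambda x: x * x, lambda y: 0 < y < 0.25)
```

Continuity of $f$ is what guarantees that preimages of open sets are open, so the recipe really does produce a Borel measure.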
Borel measures are sufficient for most applications I can think of (you can integrate continuous functions and, from them, define and integrate the $L^p$ functions), but once again, I don't know anything about your applications.
Also, if the function $f$ doesn't always converge to a point in $U$, but only does so almost everywhere, then $f$ is not continuous, but it is still fairly nice, and I believe something can still be said about the measure, although I need to think about it.
Update: If $f$ converges with probability one, then the set of input points that $f$ converges on is a measure one $G_{\delta}$ set, in particular it is Borel. The function remains continuous on that domain (in the restricted topology). Hence there is still an induced Borel measure on the target space. (Take a Borel set; map it back. It is Borel on the restricted domain, and hence Borel on [0,1]).
Update: Also, I am assuming that your algorithm directly computes the output from the input. I will give an example of what I mean. Say one wants to compute a real number. To compute it directly, I should be able to ask the algorithm for that number to $n$ decimal places, with a guaranteed error bound of $1/10^n$. An indirect algorithm works as follows: the computer just gives me a sequence of approximations that converge to the number. The computer may say $0,0,0,\ldots$ so I think it converges to 0, but at some point it starts to change to $1,1,1,\ldots$. I can never be sure if my current approximation is close to the final answer. Even if your algorithm is of the indirect type, it doesn't matter for your applications. It will still generate a Borel map, albeit a more complex one than a continuous map, and hence it will still generate a Borel measure on the target space. (The almost-everywhere concerns are similar; they also go up in complexity, but are still Borel.) Without knowing more about your application, it is difficult for me to say much specific to your case.
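The direct/indirect distinction can be sketched in a few lines of Python. Both functions below are my own illustrative examples: the direct one computes $\sqrt{2}$ by bisection with exact rationals and can certify its error bound; the indirect one emits a sequence that converges, but no finite prefix tells you to what:

```python
from fractions import Fraction

def direct_sqrt2(n):
    """Direct algorithm: asked for n decimal places, return a rational q
    with a *guaranteed* error bound |q - sqrt(2)| <= 1/10^n, found by
    bisection on [1, 2] using exact rational arithmetic."""
    lo, hi = Fraction(1), Fraction(2)
    eps = Fraction(1, 10 ** n)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo  # within eps of sqrt(2), and we can prove it

def indirect_sequence(k):
    """Indirect algorithm: the k-th term of a converging sequence that
    carries no error bound -- it looks like it converges to 0, then
    switches to 1, so no finite prefix certifies the limit."""
    return 0 if k < 1000 else 1

q = direct_sqrt2(6)
```

A direct algorithm yields a continuous map in the encoding above; an indirect (limit) algorithm only yields a Borel map, which is why the induced measure survives but the analysis gets harder.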
Am I correct in my understanding of your construction, especially the computable side of it? For example, is this the way you describe the computable map from $[0,1]$ to $U$?
On a more general note, much of measure theory has been developed in a set theoretic framework. This isn't very helpful with computable concerns. But using various other definitions of measures, one is able to once again talk about measure theory with an eye to what can and cannot be computed.
I hope this helps, and that I didn't just trivialize your question.
Best Answer
I can't claim to have studied the relevant history in a lot of detail, but count me a skeptic of Landsman's claim. Let's take this little paper and the companion that it cites as a test case, which I hope we can all agree is "real physics". The authors are clearly well versed in the calculus of variations and the representation theory of Lie groups. Both of these subjects are heavily intertwined with functional analysis, and functional analysis is even foundational for the former. Are we to believe that these physicists were entirely ignorant of the subject? Or is the argument that functional analysis only influenced them indirectly, through its contact with those mathematical applications?
I think Landsman's argument makes an error common among pure mathematicians about how mathematics is actually applied to the sciences. We tend to think about theorems, because those are the main objects of study in our work, but for consumers of mathematics it is the definitions that are important. The role of theorems is to validate the correctness and importance of definitions, and sometimes provide tools for manipulating them. The definitions of functional analysis - (un)bounded linear operators, Hilbert spaces, states, and so on - appear all over the place in quantum mechanics. And many of the big open problems in theoretical physics call primarily for definitions rather than theorems: Is there a measure space on which path integrals make sense? What is the correct notion of Dirac operator on the loop space of a manifold? Is there a gauge theory which includes both gravity and the standard model? And so on.