The path integral has many applications:
Mathematical Finance:
In mathematical finance one is faced with the problem of finding the price for an "option."
An option is a contract between a buyer and a seller that gives the buyer the right, but not the obligation, to buy or sell a specified asset (the underlying) on or before a specified future date (the expiration date) at a given price (the strike price). For example, an option may give the buyer the right, but not the obligation, to buy a stock at some future date at a price set when the contract is settled.
One method of finding the price of such an option involves path integrals. The price of the underlying asset varies with time between when the contract is settled and the expiration date. The set of all possible paths of the underlying over this time interval is the space over which the path integral is evaluated. The integral over all such paths determines the average payoff the seller will make to the buyer at the agreed strike price. This average payoff is then discounted, i.e. adjusted for interest, to arrive at the current value of the option.
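As a concrete illustration, here is a minimal Monte Carlo sketch of this procedure, assuming (purely for the example) that the underlying follows geometric Brownian motion: each simulated path is one point in the space over which the path integral is taken, the payoff is averaged over paths, and the average is then discounted. The function and parameter names are illustrative, not taken from any particular library.

```python
import math
import random

def monte_carlo_call_price(s0, strike, rate, vol, maturity,
                           n_paths=100_000, n_steps=50, seed=0):
    """Estimate a European call price by averaging the discounted
    payoff over simulated paths of the underlying (GBM model)."""
    rng = random.Random(seed)
    dt = maturity / n_steps
    drift = (rate - 0.5 * vol * vol) * dt
    diffusion = vol * math.sqrt(dt)
    total_payoff = 0.0
    for _ in range(n_paths):
        s = s0
        # one possible path of the underlying from settlement to expiry
        for _ in range(n_steps):
            s *= math.exp(drift + diffusion * rng.gauss(0.0, 1.0))
        total_payoff += max(s - strike, 0.0)  # payoff at expiry
    # average over all sampled paths, then discount back to today
    return math.exp(-rate * maturity) * total_payoff / n_paths
```

For an at-the-money call with $s_0 = K = 100$, $r = 0.05$, $\sigma = 0.2$ and one year to expiry, the estimate should land near the Black–Scholes value of about 10.45.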
Statistical Mechanics:
In statistical mechanics the path integral is used in more or less the same manner as in quantum field theory; the main difference is a factor of $i$ in the exponent.
One has a given physical system at a given temperature $T$ with an internal energy $U(\phi)$ dependent upon the configuration $\phi$ of the system. The probability that the system is in a given configuration $\phi$ is proportional to
$e^{-U(\phi)/k_B T}$,
where $k_B$ is the Boltzmann constant. The path integral is then used to determine the average value of any quantity $A(\phi)$ of physical interest:
$\left< A \right> := Z^{-1} \int D \phi A(\phi) e^{-U(\phi)/k_B T}$,
where the integral is taken over all configurations and $Z$, the partition function, is used to properly normalize the answer.
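A minimal sketch of how such averages are computed in practice, using Metropolis Monte Carlo for a single degree of freedom. Sampling configurations with probability proportional to the Boltzmann weight replaces the explicit division by $Z$; the function names and the choice $U(\phi)=\phi^2/2$ in the usage note are illustrative.

```python
import math
import random

def metropolis_average(U, A, kT=1.0, n_samples=200_000,
                       step=1.0, burn_in=10_000, seed=0):
    """Estimate <A> = Z^{-1} * integral A(phi) exp(-U(phi)/kT) d phi
    for one degree of freedom by Metropolis sampling; drawing phi
    from the Boltzmann weight makes the normalisation by Z implicit."""
    rng = random.Random(seed)
    phi = 0.0
    total, count = 0.0, 0
    for i in range(burn_in + n_samples):
        # propose a new configuration; accept with the Boltzmann ratio
        proposal = phi + rng.uniform(-step, step)
        if rng.random() < math.exp(min(0.0, -(U(proposal) - U(phi)) / kT)):
            phi = proposal
        if i >= burn_in:
            total += A(phi)
            count += 1
    return total / count
```

For the Gaussian case $U(\phi)=\phi^2/2$ at $k_B T = 1$, the exact value of $\left<\phi^2\right>$ is $1$, which the sampler reproduces to within statistical error.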
Physically Correct Rendering:
Rendering is the process of generating an image from a model by executing a computer program.
The model contains various lights and surfaces. The properties of a given surface are described by a material, which specifies how light interacts with the surface: the surface may be mirrored, matte, diffuse, or any number of other things. To determine the color of a given pixel in the produced image one must trace all possible paths from the lights of the model to the surface point in question. The path integral is used to implement this process through techniques such as path tracing, photon mapping, and Metropolis light transport.
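To make the connection to the path integral concrete, here is a minimal sketch of the innermost step of a path tracer: a Monte Carlo estimate of the reflected radiance at a diffuse surface point, integrating the incoming light over the hemisphere above it. The setup and names are illustrative and not any particular renderer's API.

```python
import math
import random

def shade_pixel(incoming_radiance, albedo=0.8, n_samples=50_000, seed=0):
    """Monte Carlo estimate of the reflected radiance at a diffuse
    surface point: L = integral over the hemisphere of
    (albedo/pi) * L_in(w) * cos(theta) dw, the core step of path tracing.
    Directions are drawn with the cosine-weighted density cos(theta)/pi,
    so each sample contributes albedo * L_in(w)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # cosine-weighted sample of the hemisphere above the surface
        u1, u2 = rng.random(), rng.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        w = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
        total += albedo * incoming_radiance(w)
    return total / n_samples
```

With an incoming radiance $L_{\mathrm{in}}(\omega) = \omega_z$ (a sky that is brightest directly overhead), the integral evaluates exactly to $\tfrac{2}{3}\,\mathrm{albedo}$, which the estimator converges to.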
Topological Quantum Field Theory:
In topological quantum field theory the path integral is used in exactly the same manner as in quantum field theory.
Roughly speaking, anywhere one uses Monte Carlo methods, one is evaluating something like a path integral.
I would think that this presentation by Jaroslav Trnka, given here in Utrecht, goes at least some way towards a mathematical definition of the amplituhedron.
To skip the physical motivation, start at page 13; jargon abbreviations such as NMHV = "next-to-maximally helicity-violating" can be ignored (they relate only to the physical significance of the construction). The construction of the amplituhedron $P_{n,k,m}$ is summarized on page 23. The later pages describe how to associate a form $\Omega_{n,k,m}$ to the space $P_{n,k,m}$ and use it to calculate the required physical quantity (a scattering amplitude).
My attempt to parse a definition of the amplituhedron from Trnka's presentation:
For given integers $k,n,m$ (with $n\geq k+m$) take a $k\times n$ real matrix $C\in G_{+}(k,n)$ and a $(k+m)\times n$ real matrix $Z\in G_{+}(k+m,n)$. Here $G_+(k,n)$ is the positive Grassmannian of $k\times n$ matrices with all $k\times k$ minors $>0$. (Matrices are identified modulo left multiplication by a $k\times k$ matrix, which acts simultaneously on all the column vectors.)
Associated to these two positive Grassmannians is the $k\times (k+m)$ real matrix $Y$ having matrix elements
$$Y_{\alpha}^{\beta}=\sum_{p=1}^{n}C_{p}^{\alpha}Z_{p}^{\beta}.$$
By varying $C\in G_{+}(k,n)$ at fixed $Z\in G_{+}(k+m,n)$, the matrix $Y$ varies over a space $P_{n,k,m}$. This space is called the amplituhedron.
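A small sketch of this construction in pure Python, for the toy case $k=1$, $m=2$, $n=4$. The positivity of the minors is checked by brute force over column subsets, and the particular matrices in the usage note are illustrative choices, not canonical ones.

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion (fine for the tiny matrices here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_positive(mat, k):
    """Check that all k x k minors (over column subsets) are > 0."""
    n = len(mat[0])
    return all(det([[row[c] for c in cols] for row in mat]) > 0
               for cols in combinations(range(n), k))

def amplituhedron_point(C, Z):
    """Y_alpha^beta = sum_p C_p^alpha Z_p^beta, a k x (k+m) matrix.
    C is stored as k x n, Z as (k+m) x n, both indexed [row][column p]."""
    return [[sum(C[a][p] * Z[b][p] for p in range(len(C[0])))
             for b in range(len(Z))] for a in range(len(C))]
```

For example, with $C = (1,2,1,3)$ (all $1\times 1$ minors positive) and the "cyclic" choice of $Z$ with columns $(1, t_p, t_p^2)$ for $t = (0,1,2,3)$ (whose $3\times 3$ minors are Vandermonde determinants, hence positive), the point $Y$ is the row vector $(7, 13, 33)$; sweeping $C$ over all positive vectors sweeps $Y$ over the amplituhedron.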
Let me try to expand a little bit on Ofer's answer, in particular on points 1-3.
These functions (or rather distributions in general) are essentially the multilinear functionals of the driving noise that appear when one looks at the corresponding Picard iteration. For example, if we consider the equation (formally) given by $$\partial_t \Phi = \Delta \Phi - \Phi^3 + \xi,\tag{$*$}$$ write $P$ for the heat kernel, and write $X$ for one of the space-time coordinate functions, then we would try to locally expand the solution as a linear combination of the functions / distributions $1$, $X$, $P \star \xi$, $P\star (P\star \xi)^2$, $P\star (P\star \xi)^3$, $P\star (X\cdot (P\star \xi)^2)$, etc. The squares / cubes appearing here are of course ill-defined as soon as $d \ge 2$, so that one has to give them a suitable meaning.
Each of these distributions naturally comes with a degree according to the rule that $\deg \xi = -{d + 2\over 2}$, $\deg (P\star \tau) = \deg \tau + 2$, and the degree is additive for products. One then remarks that, given any space-time point $z_0$ and any of these distributions, we can subtract a (generically unique) $z_0$-dependent linear combination of distributions of lower degree, so that the resulting distribution behaves near $z_0$ in a way that reflects its degree, just like what we do with the usual Taylor polynomials. To be consistent with existing notation, let's denote by $\Pi_{z_0}$ this recentering procedure, so for example $(\Pi_{z_0} X)(z) = z-z_0$. In our example, $\Pi_{z_0} \tau$ will be self-similar of degree $\deg \tau$ when zooming in around $z_0$.
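The degree bookkeeping described above can be sketched mechanically. The encoding below is illustrative: it takes $\deg X = 1$, as for a spatial coordinate (in the parabolic scaling the time coordinate would count $2$), and represents the basic distributions as nested tuples.

```python
from fractions import Fraction

def degree(expr, d):
    """Degree of one of the basic distributions in the expansion,
    under the rules: deg(xi) = -(d+2)/2, deg(X) = 1, deg(1) = 0,
    convolution with the heat kernel P adds 2, products add degrees.
    Expressions: 'xi', 'X', '1', ('P', e), or ('prod', e1, e2, ...)."""
    if expr == 'xi':
        return Fraction(-(d + 2), 2)
    if expr == 'X':
        return Fraction(1)
    if expr == '1':
        return Fraction(0)
    if expr[0] == 'P':
        return degree(expr[1], d) + 2
    if expr[0] == 'prod':
        return sum(degree(e, d) for e in expr[1:])
    raise ValueError(expr)
```

For $d = 3$ this reproduces the familiar values for equation $(*)$: $\deg(P\star\xi) = -\tfrac12$, $\deg\big((P\star\xi)^2\big) = -1$, and $\deg\big(P\star(P\star\xi)^3\big) = \tfrac12$.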
We can now say that a distribution $\eta$ has "regularity $\gamma$" if, for every point $z_0$, we can find coefficients $\eta_\tau(z_0)$ such that the approximation $$ \eta \approx \sum_{\deg \tau < \gamma} \eta_\tau(z_0)\,\Pi_{z_0}\tau $$ holds "up to order $\gamma$ near $z_0$". The "amazing fact" referred to in the slide is that even in situations where $\xi$ is very irregular, the solution to $(*)$ has arbitrarily high regularity in this sense, so that it can be considered as "smooth". There are now several review articles around detailing this construction, for example https://arxiv.org/pdf/1508.05261v1.pdf.
Regarding the role of the noise, I already alluded to the fact that the squares / cubes / etc appearing in these expressions may be ill-posed, so that if you start with an arbitrary space-time distribution $\xi$ of (parabolic) regularity $-{d+2\over 2}$, there is simply no canonical way to define $(P\star \xi)^2$ as soon as $d \ge 2$. There is a general theorem saying that there is always a consistent way of defining these objects, yielding a solution theory for which all I said above is true, but this is not very satisfactory since it relies on many arbitrary choices. (In the case $d=2$ it relies on the choice of two arbitrary distributions with certain regularity properties, and quite a bit more in dimension $3$.) If however $\xi$ is a stationary generalised random field then, under rather mild assumptions, there is a way of defining these objects which is "almost canonical" in the sense that the freedom in the construction boils down to finitely many constants, as recently shown in https://arxiv.org/abs/1612.08138.