I don't quite understand your question, but if you're asking whether category theorists should worry about set-theoretic problems the answer seems to be "sometimes". I'm not an expert in this area, but it seems that people tend to avoid universal constructions like limits over large diagrams, and in other cases, people assume the existence of strongly inaccessible cardinals. This seems to avoid standard contradictions, but I must confess that I've never checked such arguments.
I don't know many references for this question. Lurie discusses some of these constructions in Section 5.4 of Higher Topos Theory.
The path integral has many applications:
Mathematical Finance:
In mathematical finance one is faced with the problem of finding the price for an "option."
An option is a contract between a buyer and a seller that gives the buyer the right, but not the obligation, to buy or sell a specified asset (the underlying) on or before a specified future date (the option's expiration date) at a given price (the strike price). For example, an option may give the buyer the right, but not the obligation, to buy a stock at some future date at a price set when the contract is settled.
One method of finding the price of such an option involves path integrals. The price of the underlying asset varies with time between when the contract is settled and the expiration date. The set of all possible paths of the underlying in this time interval is the space over which the path integral is evaluated. The integral over all such paths determines the average payoff the seller will make to the buyer at the settled strike price. This average payoff is then discounted, that is, adjusted for interest, to arrive at the current value of the option.
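The averaging-and-discounting recipe above can be sketched in a few lines of Monte Carlo. This is a minimal illustration, assuming the standard textbook setup (a European call option on an underlying following risk-neutral geometric Brownian motion); the function name and parameter values are hypothetical, chosen only for the example.

```python
import math
import random

def european_call_price(s0, strike, rate, vol, maturity, n_paths=100_000, seed=0):
    """Monte Carlo option price: average the payoff over simulated paths of
    the underlying, then discount the average back to the present."""
    rng = random.Random(seed)
    total_payoff = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        # Terminal price of the underlying under geometric Brownian motion.
        s_t = s0 * math.exp((rate - 0.5 * vol**2) * maturity
                            + vol * math.sqrt(maturity) * z)
        total_payoff += max(s_t - strike, 0.0)  # call payoff at expiration
    # Discount the average payoff (adjust for interest) to get today's value.
    return math.exp(-rate * maturity) * total_payoff / n_paths

price = european_call_price(s0=100.0, strike=100.0, rate=0.05, vol=0.2, maturity=1.0)
```

Only the terminal value of each path matters for this payoff, so the "integral over paths" collapses to a one-dimensional integral here; path-dependent options (e.g. Asian options) genuinely require the whole path.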
Statistical Mechanics:
In statistical mechanics the path integral is used in more-or-less the same manner as it is used in quantum field theory, the main difference being a factor of $i$ in the exponent.
One has a given physical system at a given temperature $T$ with an internal energy $U(\phi)$ dependent upon the configuration $\phi$ of the system. The probability that the system is in a given configuration $\phi$ is proportional to
$e^{-U(\phi)/k_B T}$,
where $k_B$ is a constant called the Boltzmann constant. The path integral is then used to determine the average value of any quantity $A(\phi)$ of physical interest
$\left< A \right> := Z^{-1} \int D \phi A(\phi) e^{-U(\phi)/k_B T}$,
where the integral is taken over all configurations and $Z$, the partition function, is used to properly normalize the answer.
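This averaging procedure can be sketched concretely. The example below assumes a single degree of freedom with $U(\phi) = \phi^2$ and $k_B T = 1$, and uses Metropolis sampling to stand in for the integral over all configurations; for this choice the Boltzmann weight is a Gaussian of variance $1/2$, so $\left< \phi^2 \right> = 0.5$ exactly.

```python
import math
import random

def metropolis_average(energy, observable, kT=1.0, n_steps=200_000, step=1.0, seed=0):
    """Estimate <A> = Z^-1 * integral of A(phi) exp(-U(phi)/kT) d(phi)
    by Metropolis sampling of the Boltzmann distribution."""
    rng = random.Random(seed)
    phi = 0.0
    u = energy(phi)
    total = 0.0
    for _ in range(n_steps):
        trial = phi + rng.uniform(-step, step)
        u_trial = energy(trial)
        # Accept the move with probability min(1, exp(-(U' - U)/kT)).
        if u_trial <= u or rng.random() < math.exp(-(u_trial - u) / kT):
            phi, u = trial, u_trial
        total += observable(phi)
    return total / n_steps

# <phi^2> for U(phi) = phi^2 at kT = 1; exact answer is 0.5.
avg = metropolis_average(lambda p: p * p, lambda p: p * p)
```

The partition function $Z$ never has to be computed: the Metropolis acceptance rule depends only on energy differences, which is exactly why such sampling works when $Z$ is intractable.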
Physically Correct Rendering:
Rendering is the process of generating an image from a model by executing a computer program.
The model contains various lights and surfaces. The properties of a given surface are described by a material, which specifies how light interacts with the surface: the surface may be mirrored, matte, diffuse, or any number of other things. To determine the color of a given pixel in the produced image, one must trace all possible paths from the lights of the model to the surface point in question. The path integral is used to implement this process through various techniques such as path tracing, photon mapping, and Metropolis light transport.
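As a toy instance of the integral a path tracer evaluates per pixel, one can estimate the irradiance $E = \int L \cos\theta \, d\omega$ at a surface point under a uniform sky by averaging over random incoming directions. This is a sketch of the single simplest Monte Carlo step, not production rendering code; the uniform-sky assumption is mine, chosen so the exact answer ($\pi L$) is known.

```python
import math
import random

def sky_irradiance(radiance=1.0, n_samples=100_000, seed=0):
    """Monte Carlo estimate of irradiance under a sky of uniform radiance L:
    sample directions uniformly over the hemisphere (pdf = 1 / 2*pi) and
    average (L * cos_theta) / pdf.  Converges to pi * L."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # For directions uniform in solid angle on the hemisphere,
        # cos(theta) is uniformly distributed on [0, 1).
        cos_theta = rng.random()
        total += radiance * cos_theta * 2.0 * math.pi  # divide by pdf
    return total / n_samples

estimate = sky_irradiance()
```

Real path tracers recurse: at each bounce the integrand itself contains another such integral, and the nested integrals over all bounce depths form the path integral over light paths.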
Topological Quantum Field Theory:
In topological quantum field theory the path integral is used in the exact same manner as it is used in quantum field theory.
Basically, anywhere one uses Monte Carlo methods one is using the path integral.
Best Answer
It's not accurate to say that no theory of integration on infinite-dimensional spaces exists. The Euclidean-signature Feynman measure has been constructed -- as a measure on a space of distributions -- in a number of non-trivial cases, mainly by the Constructive QFT school in the 70s.
The mathematical constructions reflect the physical ideas of effective quantum field theory: One obtains the measure on the space of field histories as the limit of a sequence/net of "regularized" integrals, which encode how the effective "long distance" degrees of freedom interact with each other after one averages out the short distance degrees of freedom in various ways. (You can imagine here that long/short distance refers to some wavelet basis, and that we get the sequence of regularized integrals by varying the way we divide the wavelet basis into short distance and long distance components.)
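The simplest case where this limit of regularized integrals can be followed explicitly is the free (Gaussian) one. The sketch below, assuming the Euclidean harmonic oscillator with unit mass and frequency $\omega$ (my choice of toy example, not one from the answer), computes $\langle x^2 \rangle$ exactly for the lattice-regularized measure on a periodic lattice; refining the lattice recovers the continuum value $1/(2\omega)$.

```python
import math

def lattice_x_squared(omega=1.0, total_time=40.0, n_sites=4000):
    """<x^2> in the lattice-regularized Euclidean path integral for the
    harmonic oscillator: the Gaussian lattice action is diagonalized by
    Fourier modes, so <x^2> = (1/N) * sum over modes of 1/eigenvalue.
    As the lattice is refined, this tends to the continuum 1/(2*omega)."""
    a = total_time / n_sites  # lattice spacing: the short-distance cutoff
    total = 0.0
    for k in range(n_sites):
        # Eigenvalue of the precision matrix of the lattice action
        # S = sum_j [ (x_{j+1}-x_j)^2 / (2a) + (a/2) * omega^2 * x_j^2 ].
        lam = (2.0 / a) * (1.0 - math.cos(2.0 * math.pi * k / n_sites)) \
              + a * omega**2
        total += 1.0 / lam
    return total / n_sites

x2 = lattice_x_squared()  # close to 0.5 for omega = 1
```

In this Gaussian case no counterterms are needed; the interacting constructions cited above are hard precisely because the regularized actions must be adjusted (normal ordering, tuned lattice couplings) as the cutoff is removed.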
I don't think the main problem in the subject is that we need some new notion of integration. The Feynman measures we mathematicians can construct exhibit all the richness of the "higher categories" axioms, and moreover, the numerical computations in lattice gauge theory and in statistical physics indicate that the existing framework is at the least a very good approximation.
The problem, rather, is that we need a better way of constructing examples. At the moment, you have to guess which family of regularized integrals you ought to study when you try to construct any particular example. (In Glimm & Jaffe's book, for example, they simply replace the interaction Lagrangian with the corresponding "normally ordered" Lagrangian. In lattice gauge theory, they use short-distance continuum perturbation theory to figure out what the lattice action should be.)
Then -- and this is the really hard and physically interesting part -- you have to have enough analytic control on the family to say which observables (functions on the space of distributions) are integrable with respect to the limiting continuum measure. This is where you earn the million dollars, so to speak.