Theoretical results on neural networks

Tags: big-list, machine-learning, reference-request, soft-question

With this question I'd like to collect rigorous theoretical results on neural networks.

I'd like results that have actually been settled, as opposed to conjectures. As an example, this paper proposes, under certain assumptions, that any two sets of weights $w_1,w_2$ that achieve a global minimum of the loss are connected by a path along which the loss is constant and equal to the global minimum; in other words, the global minima are connected. That is a very interesting result, for which it would be nice to have a proof, but the paper does not provide one.
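For concreteness, here is a rough formalization of that claim (my own paraphrase, not a quote from the paper): if $L : \mathbb{R}^d \to \mathbb{R}$ is the loss as a function of the weights and $L^* = \min_w L(w)$, then for any $w_1, w_2$ with $L(w_1) = L(w_2) = L^*$ there exists a continuous path $\gamma : [0,1] \to \mathbb{R}^d$ with
$$\gamma(0) = w_1, \qquad \gamma(1) = w_2, \qquad L(\gamma(t)) = L^* \quad \text{for all } t \in [0,1].$$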

An interesting result came from this question I found here, about the universal approximation theorem for neural networks on general topological groups rather than just $\mathbb{R}^n$, and the answers do provide proofs. This is the kind of result I am looking for. So, if you know of results like this, please share.
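For reference, the classical statement over $\mathbb{R}^n$ that the linked question generalizes is the following (in the form due to Leshno et al., 1993; Cybenko's 1989 version instead assumes $\sigma$ sigmoidal): if $\sigma : \mathbb{R} \to \mathbb{R}$ is continuous and not a polynomial, then for every compact $K \subset \mathbb{R}^n$, every continuous $f : K \to \mathbb{R}$, and every $\varepsilon > 0$ there exist $N \in \mathbb{N}$ and parameters $a_i, b_i \in \mathbb{R}$, $w_i \in \mathbb{R}^n$ such that
$$\sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} a_i \, \sigma(w_i^\top x + b_i) \right| < \varepsilon.$$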

Edit 1: As some have pointed out, the question is indeed broad. That is kind of the goal of the question, though: I'd like people to share papers/results that I (and others) are not aware of. The only requirement is that it be a well-settled result, not a conjecture.

Best Answer

Your question is a bit too broad, but here is something you may want to read if you are interested in the mathematical analysis of deep learning: "The Modern Mathematics of Deep Learning" by Berner et al., arXiv:2105.04026.
