[Math] Is there research on Machine Learning techniques to discover conjectures (theorems) in a wide range of mathematics beyond mathematical logic

learning-theory, lo.logic, proof-assistants, reference-request

Although there is already an active research area, so-called automated theorem proving, most of that work concerns logic and elementary geometry.

Beyond logic and elementary geometry, are there existing research results that use machine learning techniques (possibly generative models) to discover new mathematical conjectures (even if not yet proved) in wider branches of mathematics, such as differential geometry, harmonic analysis, etc.?

If this kind of intellectual task is too difficult to study for now, then, as a boiled-down version: could a machine learning system process compact, well-formatted notes (to largely sidestep the natural-language-processing part) on, for instance, real analysis or algebraic topology, and then be asked to solve some exercises? Note that the focus and interest here are on "exploring" possible new conjectures via current (or near-future) state-of-the-art machine learning techniques, rather than on proof techniques and logic-based knowledge representation, which have already been studied extensively in classical AI and automated theorem proving.

So, is there any known published research along these lines, particularly using generative models?

If there are no known papers yet, then fruitful and interesting proposals and discussions are also very welcome.

As a (probably imperfect) example, could a machine learning system "re-discover" the Cauchy–Schwarz inequality if we "teach" it some basic operations, axioms, and a certain stock of definitions, lemmas, etc., together with provided or generated examples (numerical or theoretical)? For instance, if artificial neural networks are used as the training tool, what features might be useful so that the final layer eventually outputs the Cauchy–Schwarz inequality?
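To make the Cauchy–Schwarz idea concrete, here is a minimal sketch (not a learning system, just the "conjecture filtering" step such a system would need): enumerate a few hypothetical candidate inequalities relating the inner product and the norms, and keep only those that survive many random numerical tests. The candidate templates and tolerances are my own illustrative choices, not from any published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate "conjectures": each maps a pair of vectors
# to (lhs, rhs); we conjecture lhs <= rhs if no random counterexample
# is found.  Only one of these is actually a theorem.
candidates = {
    "|<x,y>| <= ||x||*||y||":   lambda x, y: (abs(x @ y), np.linalg.norm(x) * np.linalg.norm(y)),
    "|<x,y>| <= ||x|| + ||y||": lambda x, y: (abs(x @ y), np.linalg.norm(x) + np.linalg.norm(y)),
    "||x||*||y|| <= |<x,y>|":   lambda x, y: (np.linalg.norm(x) * np.linalg.norm(y), abs(x @ y)),
}

def survives(f, trials=10_000, dim=5):
    """Return True if lhs <= rhs holds (up to tolerance) on every random test pair."""
    for _ in range(trials):
        # Vary the scale so size-dependent false candidates get falsified.
        x = rng.normal(size=dim) * rng.uniform(0.01, 100)
        y = rng.normal(size=dim) * rng.uniform(0.01, 100)
        lhs, rhs = f(x, y)
        if lhs > rhs + 1e-6 * (1.0 + rhs):  # small tolerance for floating point
            return False
    return True

conjectures = [name for name, f in candidates.items() if survives(f)]
print(conjectures)
```

Only the Cauchy–Schwarz template survives the random tests; the point is that numerical falsification can prune a candidate space cheaply, leaving the hard part (generating good candidates, then proving the survivors) to a learned model or a theorem prover.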

Best Answer

One example that is close to, if not exactly of, this type is Veit Elser's demonstration that machine learning techniques can learn how to do fast matrix multiplication from examples of matrix products. As far as I know, this method has so far only reproduced known results rather than producing new ones. I say the example is not quite of the requested type because the system benefits from the fact that this is a very structured problem in a specific domain; it does not range over a wide swath of mathematics.
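To give a flavor of what "learning matrix multiplication from examples" means (this is a toy sketch in the same spirit, not Elser's actual setup): a matrix-multiplication algorithm corresponds to a low-rank decomposition of the multiplication tensor, and one can search for such a decomposition by plain gradient descent. The rank, learning rate, and initialization below are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# The 2x2 matrix-multiplication tensor T:
# vec(A@B)[k] = sum_{i,j} T[i,j,k] * vec(A)[i] * vec(B)[j]
T = np.zeros((4, 4, 4))
for a in range(2):
    for b in range(2):
        for c in range(2):
            # C[a,c] += A[a,b] * B[b,c]
            T[2 * a + b, 2 * b + c, 2 * a + c] = 1.0

# Search for a rank-R decomposition T ~ sum_r U[:,r] x V[:,r] x W[:,r].
# R=8 is the rank of the naive algorithm (Strassen achieves 7).
R = 8
U = rng.normal(scale=0.3, size=(4, R))
V = rng.normal(scale=0.3, size=(4, R))
W = rng.normal(scale=0.3, size=(4, R))

def residual(U, V, W):
    return np.einsum("ir,jr,kr->ijk", U, V, W) - T

lr = 0.01
for step in range(20_000):
    E = residual(U, V, W)                      # current error tensor
    gU = np.einsum("ijk,jr,kr->ir", E, V, W)   # gradients of 0.5*||E||^2
    gV = np.einsum("ijk,ir,kr->jr", E, U, W)
    gW = np.einsum("ijk,ir,jr->kr", E, U, V)
    U -= lr * gU
    V -= lr * gV
    W -= lr * gW

loss = np.sum(residual(U, V, W) ** 2)
print(f"final squared error: {loss:.2e}")

# Use the learned factors as an algorithm on fresh random matrices:
A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
m = (U.T @ A.reshape(4)) * (V.T @ B.reshape(4))  # R intermediate products
C = (W @ m).reshape(2, 2)
print("matches A @ B:", np.allclose(C, A @ B, atol=1e-2))
```

The structured search space (a fixed tensor, a fixed rank) is exactly the point made above: the learner operates in a narrow, well-specified domain rather than across mathematics at large.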