Automatic vs numerical differentiation of a function known from samples

machine-learning, na.numerical-analysis, st.statistics

Suppose I have $n$ samples $(x_i, f(x_i))_{i=1}^n$ from an unknown function $f$. I need to approximate (estimate) the derivative $f'(x^*)$ at some new test point $x^*$ that is not necessarily one of the $x_i$. I am assuming nothing about the $x_i$: they can be regularly or irregularly sampled, have large gaps, etc.

Naively, numerical differentiation seems like the only option. The "other" option would be automatic differentiation, but from my understanding you have to actually know $f$ in order to use auto-differentiation.
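
To make the naive option concrete, here is a minimal sketch of what I have in mind (Python/NumPy, with $\sin$ as a toy stand-in for the unknown $f$): estimate $f'(x^*)$ from the slope of the secant through the two samples bracketing $x^*$.

```python
import numpy as np

def secant_slope(x, y, x_star):
    """Slope of the secant through the two samples bracketing x_star
    (falls back to the nearest interior pair at the ends)."""
    order = np.argsort(x)                         # allow irregular, unsorted samples
    x, y = x[order], y[order]
    i = int(np.clip(np.searchsorted(x, x_star), 1, len(x) - 1))
    return (y[i] - y[i - 1]) / (x[i] - x[i - 1])

x = np.array([0.0, 0.3, 1.1, 2.0, 3.5])           # irregularly spaced sample points
y = np.sin(x)                                     # toy stand-in for the unknown f
print(secant_slope(x, y, 1.7), np.cos(1.7))       # estimate vs. true f'(x*): large gaps hurt
```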

Surprisingly, in all of the auto-diff tutorials I have come across, this assumption that $f$ must be known is never mentioned.

So now I wonder whether there is some way to apply auto-diff to my problem that I haven't come up with, or some other method altogether besides numerical differentiation. General references on this problem are welcome as well!

Best Answer

Automatic differentiation needs the structure of the function (a computation graph or, preferably, a straight-line program).
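
For illustration only (a minimal sketch under assumed tooling, namely Python with the JAX library): auto-diff is applied to a program that computes $f$ and differentiates that program; there is nothing to hand it when all you have are samples.

```python
# Assumes JAX is installed; f is given as an explicit straight-line program.
import jax
import jax.numpy as jnp

def f(x):
    return x * jnp.sin(x)        # the "structure": a concrete computation graph

df = jax.grad(f)                 # derivative program built from that structure
print(df(1.7))                   # equals sin(1.7) + 1.7*cos(1.7), up to float error
```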

In your case, I am not sure how numerical differentiation would give you a reliable result. If your parameter space is high-dimensional, you are completely screwed. If it is not, you can interpolate the samples by a smooth function (InterpolatingFunction[] in Mathematica) and then differentiate that smooth function to get a number out. Whether or not that number has anything to do with reality is anyone's guess; it depends on how well the samples pin down $f$ near $x^*$. To differentiate the smooth interpolant, you can use automatic differentiation.
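
As a rough sketch of that interpolate-then-differentiate workflow, under assumed choices not specified above (Python with NumPy/JAX, and a degree-$(n-1)$ polynomial interpolant standing in for Mathematica's InterpolatingFunction[]):

```python
import numpy as np
import jax
import jax.numpy as jnp

x = np.array([0.0, 0.3, 1.1, 2.0, 3.5])                 # irregular sample locations
y = np.sin(x)                                           # stand-in for the unknown f(x_i)

coeffs = jnp.asarray(np.polyfit(x, y, deg=len(x) - 1))  # smooth interpolant through all samples

def interpolant(t):
    return jnp.polyval(coeffs, t)                       # an ordinary program, so auto-diff applies

d_interpolant = jax.grad(interpolant)                   # automatic differentiation of the interpolant
x_star = 1.7
print(d_interpolant(x_star), np.cos(x_star))            # estimated vs. true f'(x*) for the toy f
```

Any smooth fit (spline, polynomial, kernel regression) could play the interpolant's role here; the auto-diff step only requires that the fit be expressed as an ordinary program, and whether the resulting number is close to the true $f'(x^*)$ still depends on how well the samples cover the neighborhood of $x^*$.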