Solved – Gradient descent: compute partial derivative of arbitrary cost function by hand or through software

gradient-descent, machine-learning, regression

I'm a software engineer, and I'm working my way through Stanford professor Andrew Ng's online course on machine learning. I'm able to follow most of the math related to gradient descent for linear regression.

One thing I am not clear about is whether there is a typical (best practice) approach to computing the partial derivative of an arbitrary cost function: Are we supposed to compute this derivative by hand, or is there some software that will do it for us? If we use software, would it be like Mathematica, where we enter a symbolic equation, and the software returns a symbolic derivative that we then implement in code?

For concreteness, in gradient descent for linear regression, the update rule for each coefficient involves a partial derivative of the cost function:

$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$$

The cost function for linear regression is the following:

$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2, \qquad h_\theta(x) = \theta_0 + \theta_1 x$$

And so below are the resulting update rules with the partial derivatives expanded out:

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)$$

$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x^{(i)}$$
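
In code, those expanded rules amount to something like the following (a minimal NumPy sketch; the function name and hyperparameter defaults are just illustrative):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, iters=1000):
    """Batch gradient descent for h(x) = theta0 + theta1 * x,
    using the hand-derived partial derivatives above."""
    m = len(y)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        error = theta0 + theta1 * x - y   # h_theta(x^(i)) - y^(i)
        # simultaneous update, as in the expanded rules above
        grad0 = error.sum() / m
        grad1 = (error * x).sum() / m
        theta0 -= alpha * grad0
        theta1 -= alpha * grad1
    return theta0, theta1
```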

Are we always supposed to compute that partial derivative by hand, even for arbitrarily complex cost functions?

Best Answer

There are several options available to you:

  1. Try to compute the derivatives by hand and then implement them in code.

  2. Use a symbolic computation package like Maple, Mathematica, Wolfram Alpha, etc. to find the derivatives. Some of these packages will translate the resulting formulas directly into code.

  3. Use an automatic differentiation tool that takes a program for computing the cost function and (using compiler-like techniques) produces a program that computes the derivatives as well as the cost function.

  4. Use finite difference formulas to approximate the derivatives.

For anything other than the simplest problems (like ordinary least squares), option 1 is a poor choice. Most experts on optimization will tell you that it is very common for users of optimization software to supply incorrect derivative formulas to optimization routines. This typically leads to slow convergence or no convergence at all.
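
One common safeguard if you do derive the gradient by hand is a gradient check: compare the analytic gradient to a finite-difference approximation at a few random points before trusting it inside an optimizer. A minimal sketch using SciPy, with the linear-regression cost from the question (variable names are just illustrative):

```python
import numpy as np
from scipy.optimize import check_grad

def cost(theta, x, y):
    # J(theta) = 1/(2m) * sum((theta0 + theta1*x - y)^2)
    return np.sum((theta[0] + theta[1] * x - y) ** 2) / (2 * len(y))

def analytic_grad(theta, x, y):
    # Hand-derived partial derivatives from the question.
    error = theta[0] + theta[1] * x - y
    return np.array([np.mean(error), np.mean(error * x)])

rng = np.random.default_rng(0)
x, y, theta = rng.normal(size=20), rng.normal(size=20), rng.normal(size=2)

# check_grad returns the norm of (analytic - finite-difference) gradient;
# anything much larger than about 1e-6 suggests a mistake in the derivation or code.
print(check_grad(cost, analytic_grad, theta, x, y))
```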

Option 2 is a good one for most relatively simple cost functions. It doesn't require really exotic tools.
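
For the least-squares cost above, a symbolic package recovers the same derivatives in a couple of lines. A sketch with SymPy, working on a single training example to keep the expressions short (the full gradient is just the average of these terms over the data):

```python
import sympy as sp

theta0, theta1, x, y = sp.symbols('theta0 theta1 x y')

# Squared error for one training example; the full cost averages this over the data.
term = (theta0 + theta1 * x - y) ** 2 / 2

d0 = sp.diff(term, theta0)   # theta0 + theta1*x - y
d1 = sp.diff(term, theta1)   # x*(theta0 + theta1*x - y)

# Some packages can also emit code for the result, e.g. a C expression:
print(sp.ccode(d1))
```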

Option 3 really shines when the cost function is computed by a fairly complicated program for which you have the source code. However, AD tools are specialized, and not many users of optimization software are familiar with them.
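
As a concrete illustration, here is a minimal sketch using JAX (one of several AD tools) to obtain the gradient of the least-squares cost without any hand derivation:

```python
import jax.numpy as jnp
from jax import grad

def cost(theta, x, y):
    # Same least-squares cost, written as ordinary array code.
    residual = theta[0] + theta[1] * x - y
    return jnp.sum(residual ** 2) / (2 * y.shape[0])

# grad() returns a new function computing dJ/dtheta (w.r.t. the first argument).
cost_grad = grad(cost)

x = jnp.array([1.0, 2.0, 3.0])
y = jnp.array([1.5, 2.0, 3.5])
theta = jnp.array([0.0, 0.0])
print(cost_grad(theta, x, y))
```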

Option 4 is sometimes a necessary choice. If you have a "black box" function that you can't get source code for (or that is so badly written that AD tools can't handle it), finite difference approximations can save the day. However, finite difference approximations carry a significant cost in run time and in the accuracy of the derivatives and, ultimately, of the solutions obtained.
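
For example, SciPy's optimizers fall back to finite-difference gradients when you don't pass one in. A sketch with a made-up stand-in for a black-box cost function:

```python
import numpy as np
from scipy.optimize import minimize

def black_box_cost(theta):
    # Stand-in for a cost function whose internals you can't differentiate.
    return float(np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2))

# With no jac= argument, BFGS estimates the gradient by finite differences;
# each estimate costs extra cost-function evaluations, which is where the
# run-time and accuracy penalties come from.
result = minimize(black_box_cost, x0=np.zeros(3), method="BFGS")
print(result.x, result.nfev)
```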

For most machine learning applications, options 1 and 2 are perfectly adequate. The loss functions (least squares, logistic regression, etc.) and penalties (one-norm, two-norm, elastic net, etc.) are simple enough that the derivatives are easy to find.
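
For example, the gradient of the (unregularized) logistic regression cost has the same form as the linear-regression gradient above, just with the sigmoid hypothesis $h_\theta(x) = 1/(1 + e^{-\theta^T x})$:

$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$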

Options 3 and 4 come into play more often in engineering optimization, where the objective functions are more complicated.