Solved – Example of how maximizing and minimizing a function can be equivalent

machine-learning, optimization

I don't understand how, given an optimization problem, the optimal solution can sometimes be found by minimizing a function, while at other times a simple reformulation turns it into a maximization. Can you please give me an example of this?

Best Answer

Consider a function $f$, which, for the purposes of this discussion, I am going to take to be positive. The aim of optimization is to find the value(s) of the argument(s) that make the function achieve its 'best' value (if you're maximizing, to find the argmax).

Here's an example showing an $f$ we want to maximize, along with $\log f$, $-f$ and $1/f$. Now, any time you move from an $x$ with some value of $f$ to another $x$ with a higher value of $f$, any strictly monotonic increasing transformation of $f$ (e.g. $\log$) will also increase, and any monotonic decreasing transformation of $f$ will decrease (e.g. $-f$, $1/f$). That is, if $f(x_2)>f(x_1)$, then $\log f(x_2)> \log f(x_1)$, and $-f(x_2)< -f(x_1)$, and so on (as long as they're all finite).

As long as all the transformed values are also finite, if $f$ achieves its maximum at $x^*$, any monotonic increasing transformation of $f$ will achieve its maximum there too, and any monotonic decreasing transformation will achieve its minimum there. So $f$ and $\log f$ will have maxima at the same $x$-value, and $-f$ and $1/f$ will have minima there:
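If it helps to see this numerically, here is a minimal sketch (Python with NumPy; it uses the logistic density with mean 5 from the figure below, which is my own choice of illustration, not the only possibility). It checks on a grid that the argmax of $f$ and $\log f$ and the argmin of $-f$ and $1/f$ all land on the same $x$:

```python
import numpy as np

# Logistic density with mean 5 and scale 1 -- a positive function of x
def f(x, mu=5.0, s=1.0):
    z = np.exp(-(x - mu) / s)
    return z / (s * (1.0 + z) ** 2)

x = np.linspace(0.0, 10.0, 10001)   # fine grid over the region of interest
fx = f(x)

print(x[np.argmax(fx)])             # argmax of f      -> ~5
print(x[np.argmax(np.log(fx))])     # argmax of log f  -> same x
print(x[np.argmin(-fx)])            # argmin of -f     -> same x
print(x[np.argmin(1.0 / fx)])       # argmin of 1/f    -> same x
```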

[Figure: plots of $f$, $\log f$, $-f$ and $1/f$, with the optimum at $x=5$ marked in each panel.]

The value of $x$ at which $f$ attains its maximum is marked ($x=5$), and the corresponding position is marked for $\log f$, $-f$ and $1/f$; the same value of $x$ ($x=5$) is the argmax or argmin, as appropriate.

(In this example, $f$ happened to be a logistic density, with mean $5$.)
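This is also why software optimizers that only minimize can still be used to maximize: you just hand them $-f$ (or $-\log f$) instead. A short sketch of that idea, again assuming the logistic-density example and using SciPy's bounded scalar minimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Same logistic density with mean 5 as above
def f(x, mu=5.0, s=1.0):
    z = np.exp(-(x - mu) / s)
    return z / (s * (1.0 + z) ** 2)

# Maximize f by minimizing -f, or equivalently by minimizing -log f;
# both searches land at (approximately) x = 5.
res_neg = minimize_scalar(lambda x: -f(x), bounds=(0.0, 10.0), method="bounded")
res_nll = minimize_scalar(lambda x: -np.log(f(x)), bounds=(0.0, 10.0), method="bounded")
print(res_neg.x, res_nll.x)   # both approximately 5.0
```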