[Math] Logarithmic scale vs. plotting the log on linear scale

logarithms

I have a rather trivial question. Recently I started working with log-log scales, and I am confused about one thing. Suppose you have a power function, say $y = x^n$. To obtain a linear plot, you have two options:

  • (1) plot the $\log(x)$ and $\log(y)$ values on a normal linear scale (i.e., the ordinary number line), or
  • (2) plot the original $x$ and $y$ values on a log-log scale.

I have tried both methods and obtained the exact same plot from both (as expected). The only difference (again as expected) was the numbers on the axes.
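As a concrete illustration of method (1), here is a quick NumPy sketch (the data are made up: a power law $y = 2x^3$, so $n = 3$). Taking logs turns $y = cx^n$ into $\log y = \log c + n \log x$, a straight line whose slope is the exponent:

```python
import numpy as np

# Hypothetical power-law data: y = 2 * x**3 (the exponent n = 3 and
# prefactor 2 are arbitrary choices for this example)
x = np.linspace(1, 100, 50)
y = 2 * x**3

# Method (1): transform the data, then treat the transformed values as
# ordinary linear-scale numbers.  Since log10(y) = log10(2) + 3*log10(x),
# a straight-line fit to (log10(x), log10(y)) recovers the exponent as
# its slope and the prefactor from its intercept.
slope, intercept = np.polyfit(np.log10(x), np.log10(y), 1)
print(slope)           # ≈ 3, the exponent n
print(10**intercept)   # ≈ 2, the prefactor c
```

The same straight line is what you would see on a log-log plot of the raw data; only the axis labels differ.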

So my question is this: what is the real difference between these two methods? Why do we have two methods for linearizing functions? Does the log-log scale exist only to allow us to preserve the original ($x$,$y$) values?

Best Answer

The point of plotting the data on a log scale on both the $x$- and $y$-axes is that, as you mentioned, the original $(x, y)$ values are preserved on the tick labels, and you can tell at a glance whether the data follow a power law: on log-log axes, $y = cx^n$ appears as a straight line with slope $n$. For these reasons, plotting the data on a log-log scale is usually preferred.
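A minimal matplotlib sketch of method (2), assuming made-up power-law data $y = 2x^3$ and an invented output filename: the raw $(x, y)$ values are passed to the plot, and the axes themselves are switched to logarithmic spacing, so the tick labels show the original data values while the curve appears straight.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

# Hypothetical power-law data for illustration
x = np.linspace(1, 100, 50)
y = 2 * x**3

fig, ax = plt.subplots()
ax.plot(x, y)          # raw values, no manual log-transform
ax.set_xscale("log")   # the axes do the logarithmic spacing
ax.set_yscale("log")
# Tick labels now read 1, 10, 100, ... -- the original data values --
# and the power law shows up as a straight line.
fig.savefig("loglog.png")  # hypothetical filename
```

`ax.loglog(x, y)` is a shorthand for the plot-then-set-scales combination above.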