This is somewhat of an art, but there are some standard, straightforward things one can always attempt.
The first thing to do is re-express the dependent variable ($y$) to make the residuals normal. That's not really applicable in this example, where the points appear to fall along a smooth nonlinear curve with very little scatter. So we proceed to the next step.
The next thing is to re-express the independent variable ($r$) to linearize the relationship. There is a simple way to do this. Pick three representative points along the curve, preferably near both ends and in the middle. From the first figure I read off the ordered pairs $(r,y)$ = $(10,7)$, $(90,0)$, and $(180,-2)$. Without any information other than that $r$ appears always to be positive, a good choice is to explore the Box-Cox transformations $r \to (r^p-1)/p$ for various powers $p$, usually chosen to be multiples of $1/2$ or $1/3$ and typically between $-1$ and $1$. (The limiting value as $p$ approaches $0$ is $\log(r)$.) Such a transformation will create an approximately linear relationship provided that, in the transformed variable, the slope between the first and second points equals the slope between the second and third.
For example, the slopes of the untransformed data are $(0-7)/(90-10) = -0.088$ and $(-2-0)/(180-90) = -0.022$. These are quite different: one is about four times the other. Trying $p=-1/2$ gives slopes of $(0-7)/\left(\frac{90^{-1/2}-1}{-1/2}-\frac{10^{-1/2}-1}{-1/2}\right)$, etc., which work out to $-16.6$ and $-32.4$: now one is only about twice the other, an improvement. Continuing in this fashion (a spreadsheet, or the short script below, is convenient), I find that $p \approx 0$ works well: with base-10 logarithms the slopes are $-7.3$ and $-6.6$, almost the same value. (The base of the logarithm merely rescales both slopes by the same factor, so it does not affect the comparison.) Consequently, you should try a model of the form $y = \alpha + \beta \log(r)$. Then repeat: fit a line, examine the residuals, identify a transformation of $y$ to make them approximately symmetric, and iterate.
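Here is a minimal Python sketch of that sweep; the grid of powers and the output formatting are my own choices, but the numbers reproduce the slopes quoted above:

```python
import math

def box_cox(r, p):
    # Box-Cox transform (r^p - 1)/p; the limit as p -> 0 is log(r).
    # Base-10 logs are used at p = 0 to match the slopes quoted above;
    # the base only rescales both slopes equally, so it does not
    # affect the comparison.
    return math.log10(r) if p == 0 else (r ** p - 1) / p

# The three representative points (r, y) read off the figure.
(r1, y1), (r2, y2), (r3, y3) = (10, 7), (90, 0), (180, -2)

# The transformation linearizes the curve when the two chord slopes
# (first-to-second and second-to-third point) are about equal.
for p in (-1, -1/2, -1/3, 0, 1/3, 1/2, 1):
    t1, t2, t3 = box_cox(r1, p), box_cox(r2, p), box_cox(r3, p)
    s12 = (y2 - y1) / (t2 - t1)
    s23 = (y3 - y2) / (t3 - t2)
    print(f"p = {p:6.3f}: slopes {s12:8.2f}, {s23:8.2f}, ratio {s12/s23:5.2f}")
```

At $p = 1$ the slope ratio is about $4$, at $p = -1/2$ about $1/2$, and at $p = 0$ about $1.1$, which is what singles out the log transformation.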
John Tukey provides details and many examples in his classic book Exploratory Data Analysis (Addison-Wesley, 1977). He gives similar (but slightly more involved) procedures to identify variance-stabilizing transformations of $y$. One sample dataset he supplies as an exercise concerns century-old data about mercury vapor pressures measured at various temperatures. Following this procedure enables one to rediscover the Clausius-Clapeyron relation; the residuals to the final fit can be interpreted in terms of quantum-mechanical effects occurring at atomic distances!
Not only do distributions of untransformed ratios have odd shapes that do not match the assumptions of traditional statistical analysis, but there is also no good interpretation of a difference of two ratios. As an aside, if you can find an example where the difference of two ratios is meaningful when the ratios do not represent proportions of a whole, please describe it.
As variables used in statistical analysis, ratios have the significant problem of being asymmetric measures; that is, it matters greatly which value is in the denominator. This asymmetry makes it almost meaningless to add or subtract ratios. Log ratios are symmetric (swapping the numerator and denominator merely flips the sign) and can be added and subtracted, as the small demonstration below shows.
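A minimal illustration with made-up numbers: swapping numerator and denominator turns a ratio into its reciprocal, but only flips the sign of the log ratio:

```python
import math

a, b = 4.0, 2.0  # arbitrary illustrative values

print(a / b, b / a)                      # 2.0 vs 0.5: reciprocals, not negatives
print(math.log(a / b), math.log(b / a))  # 0.693... vs -0.693...: sign flip only
```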
One can spend a good deal of time worrying about what distribution a test statistic has, or correcting for the distribution's "strangeness", but it is more important to first choose an effect measure with the right mathematical and practical properties. Ratios are almost always meant to be compared by taking the ratio of ratios, or its log (i.e., the double difference in logs of the original measurements).
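To make the "double difference" concrete, here is a sketch with made-up measurements for two groups:

```python
import math

# Illustrative measurements (made-up values):
a1, b1 = 12.0, 3.0   # group 1 ratio: a1/b1 = 4
a2, b2 = 10.0, 5.0   # group 2 ratio: a2/b2 = 2

# Comparing the two ratios as a ratio of ratios...
ratio_of_ratios = (a1 / b1) / (a2 / b2)

# ...is equivalent to a double difference in logs of the
# original measurements:
double_diff = (math.log(a1) - math.log(b1)) - (math.log(a2) - math.log(b2))

print(ratio_of_ratios, math.exp(double_diff))  # both print 2.0
```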
Best Answer
The answer depends on your analysis. If your goal is to treat the bins as an ordinal variable, then there is no point in transforming the data. However, if you wish to treat the variable as interval or ratio (perhaps you wish to use it as the dependent variable in a regression), you could convert the variable into the midpoint of each range and then log-transform. For example, an observation in the 500,000-749,999 range would become $\log((500{,}000+749{,}999)/2)$. In that case, the log transformation might help make the residuals more nearly normal, which is an assumption of regression.
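A minimal sketch of that conversion; the bin labels and edges here are hypothetical, and the log base is a free choice that only rescales the variable:

```python
import math

# Hypothetical bins (labels and edges are illustrative, not from the question).
bins = {
    "250,000-499,999": (250_000, 499_999),
    "500,000-749,999": (500_000, 749_999),
    "750,000-999,999": (750_000, 999_999),
}

def log_midpoint(label):
    # Replace each ordinal bin label by the log of the bin's midpoint.
    lo, hi = bins[label]
    return math.log10((lo + hi) / 2)

print(log_midpoint("500,000-749,999"))  # log10(624999.5) ≈ 5.80
```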