WHY does the difference between the fractions get smaller?


$\frac{51}{101} - \frac{50}{100} > \frac{52}{102} - \frac{51}{101}$

I understand that adding 1 to both the numerator and the denominator of a fraction whose numerator is smaller than its denominator makes the fraction bigger, since proportionally the numerator grows faster than the denominator.

But why does the difference between the fractions get smaller? I stress the word "why": the focus here is on the logical principle, not on examples (though those can be useful as a supplement to explain the principle).

Best Answer

$\frac{50}{100},\frac{51}{101},\frac{52}{102}$

From one term to the next, the numerator grows by a larger proportion of itself than the denominator does: $\frac{51}{50} > \frac{101}{100}$. In other words, the numerator is multiplied by a bigger factor than the denominator, so the fractions increase as we move along the sequence.

However, that growth factor itself shrinks at each step, for example from $\frac{51/101}{50/100} \approx 1.0099$ to $\frac{52/102}{51/101} \approx 1.0096$, because adding one becomes less significant as the numbers get bigger.
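To make the shrinking difference explicit, here is a short worked computation (a supplement, not part of the original answer), writing the terms in the form $\frac{n}{n+50}$ used in the bonus below:

$$\frac{n+1}{n+51}-\frac{n}{n+50}=\frac{(n+1)(n+50)-n(n+51)}{(n+51)(n+50)}=\frac{50}{(n+50)(n+51)}.$$

The numerator stays fixed at $50$ while the denominator $(n+50)(n+51)$ grows with $n$, so each difference is smaller than the one before it. For $n=50$ this gives $\frac{50}{10100}\approx 0.00495$, and for $n=51$ it gives $\frac{50}{10302}\approx 0.00485$, matching the inequality in the question.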

Bonus: the sequence approaches one. I can't include the plot here, but for this specific sequence, plot $f(x)=\dfrac{x}{x+50}$ and see what $\displaystyle \lim_{x \to \infty} f(x)$ is.
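For completeness, the limit can be read off without the plot (a one-line check of the claim above):

$$\lim_{x \to \infty}\frac{x}{x+50}=\lim_{x \to \infty}\frac{1}{1+50/x}=1.$$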

In addition, if you check the slope of the graph, either by calculus or by inspection, you'll see that it becomes less steep for larger values of $x$, which also answers your question.
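For the calculus route, here is a quick quotient-rule sketch of that slope (again a supplement to the answer):

$$f'(x)=\frac{(x+50)\cdot 1-x\cdot 1}{(x+50)^2}=\frac{50}{(x+50)^2},$$

which is positive, so the fractions keep increasing, but it decreases as $x$ grows, so the graph flattens and consecutive differences shrink.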
