While finding the length of a curve, we assume an infinitesimal right triangle, of width $dx$ and height $dy$, so the arc length is ${\sqrt{dx^2 + dy^2}}$. But my question is: the curve does not actually contain such a triangle; the curve is continuously changing according to the function, not linearly. So there will be an error in the length of a small part of the arc. Though this error may be negligible for a small part, when we integrate over the whole curve the error should add up to something finite. So why don't we encounter the error that arises from this? Please explain in easy language; I am just a beginner in calculus.
Length of a curve and calculus
Related Solutions
In his book Infinite Powers, Steven Strogatz mentions the exact issue you are describing. He considers the difference between 'completed infinity' and 'potential infinity'. I will get to your particular issue soon, but first I will describe a similar problem that helps pinpoint what is going on.
As you know, division by zero is not allowed in mathematics. However, many people share the false belief that $$ 1/0 = \infty $$ Why is this? A trained mathematician might recoil at the above statement, but actually, it is very plausible; wrong, but plausible. Take a look at this: \begin{align} 1/0.1&=10 \\ 1/0.01&=100 \\ 1/0.001&=1000 \\ 1/0.0001&=10000 \\ 1/0.00001&=100000 \\ &\,\,\,\vdots \end{align}

As $x$ approaches $0$, $1/x$ approaches infinity. Therefore, $1/0=\infty$. But wait, that's not what we have shown. What we have shown is that as $x$ gets closer and closer to $0$, $1/x$ gets larger and larger. There is a crucial distinction between these two statements that is often glossed over in introductory calculus courses. Once you get to the case $1/0$, all kinds of paradoxes emerge, and so it is with good reason that $1/0$ is left undefined. However, considering what $1/0$ may or may not be is not an entirely unfruitful exercise. On the contrary, it can reveal to us one of the most important tools in calculus: the limit.

Take a look at the graph of $y=1/x$: as you can see, $1/x$ shoots off into the distance as $x$ gets closer and closer to $0$. We can't say what $1/0$ is; what we can say is that as $x$ gets closer and closer to $0$, $1/x$ gets larger and larger. This is formally written as $$ \lim_{x \to 0}\frac{1}{x}=\infty $$ But wait, that's not quite right either! $1/x$ only approaches infinity* when $x$ approaches $0$ from the positive end. What if $x$ is a negative number that is getting closer and closer to $0$? Then $1/x$ approaches $-\infty$. Perhaps we could write that $$ \lim_{x \to 0}\frac{1}{x}=\pm\infty $$ but mathematicians like it if limits have a single, definite value.
Therefore, both of the above statements are incorrect, and what we should be writing is this: $$ \lim_{x \to 0^+}\frac{1}{x}=\infty \text{ and } \lim_{x \to 0^-}\frac{1}{x}=-\infty $$ (The little '$+$' right next to the $0$ indicates that we are considering the case where $x$ is a positive number that is approaching $0$. Likewise, the '$-$' means $x$ is a negative number that is approaching $0$.)
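If you would like to see the one-sided behaviour for yourself, here is a quick numerical sketch in Python. (This is an illustration only; a computation like this is evidence of a trend, not a proof of a limit.)

```python
# 1/x for x approaching 0 from the positive side grows without
# bound, while from the negative side it becomes ever more negative.
for k in range(1, 6):
    x = 10 ** (-k)          # x -> 0 from above: 0.1, 0.01, ...
    print(f"1/{x} = {1 / x}")

for k in range(1, 6):
    x = -(10 ** (-k))       # x -> 0 from below: -0.1, -0.01, ...
    print(f"1/{x} = {1 / x}")
```

Each successive value of $1/x$ is ten times further from zero than the last, in opposite directions depending on the side of approach.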
Don't worry if all of the little details don't make complete sense to you. Just try to remember these two key facts:
- In calculus, mathematicians work with limits.
- When trying to compute limits such as that of $1/x$ as $x$ approaches $0$, the fact that $1/0$ is undefined is neither here nor there. We are not trying to work out what happens when we 'get to zero'. Rather, we are looking at what happens as we move closer and closer to $0$, from both the positive and the negative directions.**
Now let's compare what we have learnt about limits with what we think we know about infinitesimals. The biggest problem with the concept of an infinitesimal, in my mind, is that it suggests there is a 'smallest possible number'. Actually, when we are working with the standard real numbers, there is no such thing. This should be intuitively obvious: however low you go, you can always go lower. You might also be sympathetic to the idea that $$ 1/\text{infinitesimal}=\infty $$ Again, this is problematic, not least because it treats infinity as if it were a number. Therefore, we should be extremely cautious when someone mentions the words 'infinitesimal' or 'infinitely small'. Often when they do, they are using these terms as a mere shorthand for the limits we worked with earlier. For instance, if I write 'as $x$ becomes infinitely small, $1/x$ becomes infinitely large', then this would be rather sloppy, but it would also be generally understood by those well versed in the fundamentals of calculus. (I would warn against using such language though.)
Other times when people mention infinitesimals they are talking about nonstandard analysis, in which the idea of an infinitesimal is formalised. But let's not get sidetracked. As far as I am concerned, 'infinitesimals' do not exist. This should be your view too. While infinitesimals may be intuitively appealing, we should always confirm that our intuitions are in line with reality. Otherwise, we are asking for trouble.
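As an aside, the claim above that there is no smallest positive number is easy to play with on a computer, provided you use exact arithmetic. Here is a tiny sketch using Python's `fractions` module. (Floating-point numbers, by contrast, do have a smallest positive value, so they would be misleading for this particular point.)

```python
from fractions import Fraction

# However small a positive number you pick, halving it gives a
# strictly smaller number that is still positive: there is no
# smallest positive rational (or real) number.
x = Fraction(1, 10)
for _ in range(5):
    x = x / 2
    print(x, "is still greater than zero:", x > 0)
```

The loop could run forever: halving never reaches zero, which is exactly why 'the smallest possible number' is not a coherent notion among the standard reals.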
Finally, we get to your question. If I understand correctly, you are asking about $$ \int_a^b f(x) \, dx $$ As you have already correctly pointed out, integrals are based upon splitting up the region under a curve into many small rectangles, each with a certain width. Let's call this width $\Delta x$. We can approximate the area under the graph as $$ \sum_{i=1}^{n} f(x_i) \, \Delta x $$ where $x_i = a + i\,\Delta x$ and $n$ is the number of rectangles of width $\Delta x$ that fit between $a$ and $b$. Do not despair if the above expression looks unfamiliar to you. All it means is that each rectangle has a fixed width $\Delta x$, while the height of each rectangle depends on the height of the curve at that point, hence the height $f(x_i)$, where the points $x_i$ run from $a$ across to $b$. And of course, the approximation comes from summing the areas of the rectangles. To visualise this, there is a helpful animation on Wikipedia. As the animation rightly suggests, the approximations become better as $\Delta x$ approaches $0$.

This is where $dx$ steps in. You can imagine that if the rectangles have an infinitely small width, then the rectangles get the area exactly right. Historically, $dx$ was indeed used in this way to represent an infinitesimal change in $x$. However, modern standards of rigour have rendered this interpretation of $dx$ largely obsolete. Because of all the paradoxes they can create, infinitesimals are best avoided in formal mathematics, at least within the context of 'standard' calculus. Instead, $dx$ should be seen as part of a shorthand for a limit expression. For example, if $y=f(x)$, then $dy/dx$ is a shorthand for $$ \lim_{\Delta x \to 0}\frac{\Delta y}{\Delta x}=\lim_{\Delta x \to 0}\frac{f(x+\Delta x)-f(x)}{\Delta x} $$ In the case of integrals, we could imagine that $$ \int_a^b f(x) \, dx $$ represents the sum of infinitely many infinitely small rectangles of width $dx$. But even just writing this makes me cringe.
The fact that the number of rectangles gets bigger and bigger does not mean that there are ever infinitely many rectangles (hence the distinction between 'potential infinity' and 'completed infinity'). And judging by the sentiments you expressed in your question, infinitesimals may not be the right approach for you either. You are absolutely right about the apparent paradox created if we interpret the width of each rectangle as truly 'infinitesimal'. The formal definition of an integral sidesteps this issue entirely by defining $\int_a^b f(x) \, dx$ as the limit of the sum of the areas of the rectangles as $\Delta x$ approaches $0$: $$ \lim_{\Delta x \to 0}\sum_{i=1}^{n} f(a + i\,\Delta x) \, \Delta x $$ A pattern seems to have emerged: every time you catch yourself thinking about infinitesimals, think about limits! You don't need to throw your intuition out of the window. If you find infinitesimals useful for building up your mental picture of calculus, then it would be unwise to do away with them. Equally though, never forget what is really going on.
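You can watch this limit in action numerically. Below is a small Python sketch (the function name is mine, and it uses right endpoints purely for concreteness): it computes the rectangle sum for the specific example $\int_0^1 x^2 \, dx = 1/3$ and shows the error shrinking as $\Delta x$ does. At no point are infinitely many rectangles involved; we only ever observe the trend.

```python
def riemann_sum(f, a, b, n):
    """Approximate the integral of f on [a, b] using n rectangles
    of equal width (right endpoints)."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

# The exact value of the integral of x^2 on [0, 1] is 1/3.
for n in (10, 100, 1000, 10000):
    approx = riemann_sum(lambda x: x * x, 0, 1, n)
    print(f"n={n:5d}  sum={approx:.6f}  error={abs(approx - 1/3):.2e}")
```

Each tenfold increase in the number of rectangles shrinks the error by roughly a factor of ten, which is the numerical shadow of the limit being taken.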
*Be careful with the phrase 'approaches infinity'. This has a very different meaning from 'approaches $5$', say. If I say $x$ is approaching infinity, then all I mean is that $x$ is getting bigger and bigger. That's it.
**In this particular case, the limit 'does not exist', as we end up with two different answers: $+\infty$ and $-\infty$, depending on whether we approach $0$ from 'above' or 'below'. Therefore, we have to restrict ourselves to considering the case where $x$ approaches $0$ solely from the positive end, or solely from the negative end.
Best Answer
I think it's best not to think of $dx$ as infinitesimally small, but instead as just small but finite. Then there will be some error, as you correctly observed. Now, what happens if you instead choose a $dx$ which is only half as big? Well, intuitively, the error gets smaller. Now choose a $dx$ which is again half as big, and then again and again. The error gets smaller and smaller and, in the limit, approaches zero. And this limit is how you define the length of the curve, with no infinitesimals involved.
There are of course some simplifications in the description above: when you make $dx$ smaller, the error does not necessarily become smaller at every single step. However, it is sufficient that the error "eventually" becomes "sufficiently small" as you continue to decrease $dx$. Also, of course, you want the error to approach zero regardless of whether you split $dx$ into two, three or $\pi$ parts at every step...
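To make the halving argument concrete, here is a small numerical sketch in Python (an illustration of the idea, not the formal definition; the function names are mine). It approximates the curve $y = x^2$ on $[0, 1]$ by straight chords, halves the step at each iteration, and compares against the exact length $\sqrt{5}/2 + \operatorname{asinh}(2)/4$ obtained from the arc-length integral:

```python
import math

def polyline_length(f, a, b, n):
    """Approximate the arc length of y = f(x) on [a, b] by joining
    n + 1 sample points with straight line segments (chords)."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(
        math.hypot(xs[i + 1] - xs[i], f(xs[i + 1]) - f(xs[i]))
        for i in range(n)
    )

# Exact length of y = x^2 on [0, 1], from the arc-length integral
# of sqrt(1 + 4x^2).
exact = math.sqrt(5) / 2 + math.asinh(2) / 4

n = 1
for _ in range(6):
    approx = polyline_length(lambda x: x * x, 0.0, 1.0, n)
    print(f"n={n:3d}  length={approx:.6f}  error={exact - approx:.2e}")
    n *= 2   # halve dx at every step, as described above
```

The chord approximations always undershoot the true length (a straight segment is the shortest path between its endpoints), but the error shrinks towards zero as $dx$ is halved, which is exactly the limit that defines the length of the curve.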