This is, in my opinion, a common feeling after a "lifetime of approaching math the wrong way". People are taught math as a rigid, rule-based collection of formulas and patterns, and then when they contrast this with mathematical proofs they have a knee-jerk reaction against anything that looks even remotely like what they did before. The fact of the matter is, however, that you will need to be able to do some of this without the aid of a computer.
When reading a proof, it is easy to take for granted that you can fill in the details between steps, when in fact you can fill them in precisely because you understand how to solve equations (basic algebra) and how to work with inequalities (basic arithmetic), for example. The same holds true for proofs involving integration and derivatives.
Perhaps it's best to leave it to those who really know what they're talking about - Spivak writes in his chapter on integration that our motivation should be that:
- Integration is a standard topic in calculus, and everyone should know about it.
- Every once in a while you might actually need to evaluate an integral, under conditions which do not allow you to consult any of the standard integral tables.
- The most useful "methods" of integration are actually very important theorems (that apply to all functions, not just elementary ones).
He emphasizes that the last reason is the most crucial.
I would personally advocate that students should be wary of falling into the trap of thinking that such pedantic methods are beneath them. It is often easy to think you understand something at a high level, but you don't truly learn what it is all about until you really get your hands dirty with it.
We are teaching math wrong in many, many ways.
For some reason, there has been a historical backlash against abstraction in secondary and pre-secondary mathematics. "I'll never need to use this," say so many students and parents.
As a result, what we learn is a hackneyed attempt to tie mathematical problems to "real world examples."
So when we learn trig, we start talking about things like leaning ladders against houses, and we give students the impression that in order to do such a thing, they need to compute the inverse sine of the height over the length of the ladder.
Of course, by the time students are 14, 15, or 16 years old, they've probably seen someone lean a ladder against a house and do no such thing.
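(To make that computation concrete, with illustrative numbers of my own: for, say, a 5 m ladder reaching 4 m up the wall, the intended calculation is just $\theta = \arcsin(4/5) \approx 53^\circ$, a one-line application of the definition of sine.)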
Pre-calculus mathematics is very interesting because it is the first opportunity one has to apply relationships and definitions to problems and to learn how to transform complicated problems into simpler ones. This technique is fundamental not only in mathematics, but also in the real world.
At the same time, real-time discovery is often haphazard and chaotic. We don't teach biology in the order it was discovered, principally because we assumed a whole lot of extremely wrong nonsense for most of human history.
The historical motivating factors are not the same as the present ones. Newton and Leibniz were trying to solve specific problems, and they needed new math. But we've solved those problems now, so those motivating factors have perhaps lost their edge.
Instead, we should look at their process, not their imperative. We should be teaching students to ask whether there exists a meaningful relationship between a function and its slope, and whether this can have an effect on a real-world problem. We should teach students about how we can infer new properties from a handful of well-defined conditions. We should teach students to explore "what if..."
Instead, we teach students about billiard balls, ladders, and two-column proofs, as if this bears any relevance to the real world, the mathematical world, or any world. In short, we waste their time. So yes, we're teaching it wrong.
A side story, as a partial counter-example: when I substitute-taught math courses, I often got classes full of "Level 2" students, which was a way of saying "remedial". They hated their class, the lessons, the work, everything. So I used to put the Navier-Stokes equations on the board, and I'd tell them that whoever could solve them would win not just a million dollars, but eternal fame.
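For the curious, the incompressible form of those equations, the version behind the Clay Millennium Prize, reads
$$\frac{\partial \mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla p+\nu\,\nabla^{2}\mathbf{u},\qquad \nabla\cdot\mathbf{u}=0,$$
and the prize is actually for proving existence and smoothness of solutions, not for producing a closed-form one.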
I asked the students to name the most famous people they could think of. They'd respond with "Avril Lavigne" or "Britney Spears" or some such.
I asked them if they knew who Sophia Loren was. Or Dom DeLuise. Or Patsy Cline. They had never heard of them.
I then asked them if they had ever heard of Newton. Of Einstein. Of Riemann. Of Euler. They did, in fact, know their names. I told them that mathematics is one of the few fields in which your mark can be left on the world permanently, and that if you did something truly great, high school students 300 years from now would be hearing your name.
Of course, none of them really thought they could do that. But it resonated with them, because it made them think, "wow, yeah, I have heard of Newton." It made them realize that maybe there was something important here, and that unlike being a famous world leader, it was almost purely a product of self-actualization.
This context gave them a glimpse into a world they didn't know existed. None of them wanted to compute how to lean a ladder against a house. They wanted to know why math was important -- not microcosmically, but in the context of great things. These were young kids with great thoughts, not computing machines.
I told those students that "yes, it does not matter if you can compute these numbers, but it does matter if you know how." That concept alone motivated them more than anything else, because it showed them that they controlled their own destiny.
For a teenage kid frustrated with school, that's a wonderful feeling. Math is one of the few fields where we can actually deliver that consistently.
Best Answer
Things become clearer if you look at Barrow's original proof, which can be found in The Geometrical Lectures of Isaac Barrow (Proposition 11 of Lecture X).
Working from his figure, Barrow (Newton's advisor) proved that, given two curves $ZGE$ and $AIF$ (with $AZ$, $PG$, $DE$, etc. increasing), if the area of the region $ADEZ$ is $DF\cdot R$ (for every $D$, with $R$ a fixed constant), then $TF$ is tangent to $AIF$ at $F$, where $T$ is defined by the relation $\frac{DE}{DF}=\frac{R}{DT}$.
This is Barrow's Fundamental Theorem of Calculus. In modern notation, if we take $R=1$, let $ZGE$ be the graph of $g(x)$ with $f(x)=-g(x)$, let $AIF$ be the graph of $F(x)$, and let $x$ and $a$ be the abscissas of $D$ and $A$ respectively, then Barrow's result implies that
$$\begin{aligned} \frac{d}{dx}\left[\int_a^x f(s)\,ds\right]&=\frac{d}{dx}\left[\text{area}(ADEZ)\right]&&\text{(modern meaning of integral)}\\ &=\frac{d}{dx}\left[F(x)\right]&&\text{(Barrow's hypothesis)}\\ &=\frac{DF}{DT}&&\text{(modern meaning of derivative)}\\ &=DE&&\text{(Barrow's hypothesis)}\\ &=f(x),&& \end{aligned}$$ which is one part of the modern Fundamental Theorem of Calculus.
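As a quick sanity check of this chain, with an illustrative choice of my own (not in Barrow): take $R=1$, $a=0$, and $f(s)=s$. Then $DF=F(x)=x^2/2$ and $DE=f(x)=x$, so Barrow's relation $\frac{DE}{DF}=\frac{1}{DT}$ gives $DT=\frac{DF}{DE}=\frac{x}{2}$, and the tangent slope is $\frac{DF}{DT}=\frac{x^2/2}{x/2}=x=f(x)$, exactly as the chain predicts. (This recovers the classical fact that the subtangent of a parabola is half the abscissa.)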
The other part can be found in Proposition 19 of Lecture XI: if $FT$ is tangent, then $\text{area}(APGZ)=PI\cdot R$. In modern notation, taking $R=1$, using the fact that $F(a)=0$ in the case considered, and writing $p$ for the abscissa of $P$, we obtain the usual result $$\int_a^p f(x)\,dx=F(p)-F(a).$$
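If you like to see such identities numerically, here is a minimal sketch, not part of the original argument, that checks both parts of the theorem with a trapezoidal approximation; the choice $f(x)=\cos x$ (so $F(x)=\sin x$), the interval $[0,2]$, and the grid size are all arbitrary:

```python
# Numerical sanity check of both parts of the Fundamental Theorem of
# Calculus, with the illustrative choice f(x) = cos(x), F(x) = sin(x).
import numpy as np

a, p, n = 0.0, 2.0, 100_001
x = np.linspace(a, p, n)
f = np.cos(x)
dx = x[1] - x[0]

# Cumulative trapezoidal area A(x_k) = integral of f from a to x_k.
area = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * dx)))

# "Evaluation" part: the integral of f over [a, p] should equal F(p) - F(a).
print(area[-1], np.sin(p) - np.sin(a))  # both ~ 0.909297

# "Differentiation" part: d/dx of the area function should recover f.
deriv = np.gradient(area, dx)
print(np.max(np.abs(deriv - f)))  # small discretization error
```

Both printed comparisons agree up to the discretization error of the trapezoidal rule and the finite-difference derivative.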
In a broad sense: