Integration is used widely in advanced fields such as rocket science. Since integration is just an approximation, even a $0.01$ pascal error in pressure might cause a major disaster, and so might an error in computed fuel consumption. So why does the approximation in integration not seem to affect anything (since no major disasters occur in reality), and can anybody say where I am wrong in the examples given above?
Why is integration used so widely even though it is just an approximation?
applications · approximation · calculus
Related Solutions
You ask "why Cauchy's definition of infinitesimal, along with his 'basic approach' was superseded?"
The answer is that Cantor, Dedekind, Weierstrass and others developed a foundation for analysis to deal with certain difficulties related to Fourier series, uniform continuity, and uniform convergence. This development resulted in a formalisation that was a decisive moment in the history of analysis and was a great accomplishment. This is all well-known and rivers of ink have been spilt on the subject.
Yet the great accomplishment masked a significant failure that is less frequently spoken about. Namely, these 19th century giants failed to formalize an aspect of the procedures of calculus and analysis that was ubiquitous until and including Cauchy, namely the notion of infinitesimal. Instead, they provided infinitesimal-free paraphrases of the traditional definitions. For example, Cauchy's lucid definition of continuity of $y=f(x)$ ("an infinitesimal change in $x$ always leads to an infinitesimal change in $y$") got replaced by the familiar jargon ("for every epsilon there exists a delta such that, if $|x-c|$ is less than delta, then $|f(x)-f(c)|$, etc.").
Not only did they fail to formalize it but, unable to do so, some of them became convinced that there was something wrong with the notion of infinitesimal itself, and from this jumped to the conclusion that infinitesimals must be inconsistent or self-contradictory. Cantor went so far as to publish an article claiming to "prove" that infinitesimals were inconsistent. In correspondence Cantor referred to infinitesimals as "paper numbers", the "cholera bacillus of mathematics", and even an "abomination"; the details can be found in
Dauben, Joseph Warren. Georg Cantor. His mathematics and philosophy of the infinite. Princeton University Press, Princeton, NJ, 1990.
and
Ehrlich, Philip. The rise of non-Archimedean mathematics and the roots of a misconception. I. The emergence of non-Archimedean systems of magnitudes. Arch. Hist. Exact Sci. 60 (2006), no. 1, 1–121.
A solid set-theoretic formalisation for infinitesimals did not emerge until around 1960 and by then Weierstrassian paraphrases were solidly in place, making it difficult to overcome institutional inertia.
Cauchy's idea of representing an infinitesimal by a sequence tending to zero is basically valid, but needs some polishing. Cauchy's infinitesimal specifically is dealt with in a number of articles that you can find here.
The book used the second expression and got a tighter result (stronger bounds) than you did. Why?
This is because, using the first expression, you bound the numerator and denominator separately, and that is not the optimal thing to do. Namely, you know $m \leq u_n \leq M$, and then bound $$ \frac{8 m-8}{M+2} \leq \frac{8 u_{n}-8}{u_n+2} \leq \frac{8 M-8}{m+2} \tag{1} $$ This is correct, but not optimal. You can't see it immediately from the expression you use, but using the second expression is equivalent to bounding it as $$ \frac{8 m-8}{m+2} \leq \frac{8 u_{n}-8}{u_n+2} \leq \frac{8 M-8}{M+2} \tag{2} $$ which is better (but, again, it is not obvious you can do this based on the first expression$^{\dagger}$).
${}^{(\dagger)}$ A way to realize you can indeed do this without using the second expression (as the book did) is to notice that the function $f\colon x\mapsto \frac{8x-8}{x+2}$ is increasing, so $f(m)\leq f(u_n) \leq f(M)$ whenever $m\leq u_n\leq M$.
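A quick numeric sketch of this point, using hypothetical sample bounds $m = 1.5$ and $M = 3$ (any values with $m \leq M$ would do): the interval from $(2)$ sits strictly inside the interval from $(1)$.

```python
# Compare the bounds (1) and (2) for f(x) = (8x - 8)/(x + 2),
# with hypothetical sample values m = 1.5, M = 3.

def f(x):
    return (8 * x - 8) / (x + 2)

m, M = 1.5, 3.0

# Bounds from (1): numerator and denominator bounded separately.
lo1, hi1 = (8 * m - 8) / (M + 2), (8 * M - 8) / (m + 2)

# Bounds from (2): since f is increasing, f(m) <= f(u_n) <= f(M).
lo2, hi2 = f(m), f(M)

print("bounds (1):", lo1, hi1)  # the wider interval
print("bounds (2):", lo2, hi2)  # the narrower interval
assert lo1 <= lo2 and hi2 <= hi1  # (2) is contained in (1)
```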
Best Answer
This question appears based on a misunderstanding:
Integration is in fact a fully exact technique. It may feel like an approximation since $\int_a^b f(x)dx$ is defined as the limit of a family of approximations,$^*$ but that's not the case: the whole magic of limits is that by aggregating a bunch of approximations in a particular way we can in fact get the exactly correct result.
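As an illustration of this (not part of the original answer), here is a small sketch: the Riemann sums of $\int_0^1 x^2\,dx$ are each only approximations, but they converge to the exact value $1/3$.

```python
# Riemann sums for the integral of x^2 over [0, 1]: each sum is an
# approximation, but as n grows they approach the exact value 1/3.

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
# Each printed value is approximate; the limit, 1/3, is exact.
```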
For example, remember that $0.999999...=1$ even though by definition $0.9999999...=\lim_{n\rightarrow\infty}\sum_{i=1}^n 9\cdot 10^{-i}$ and each partial sum $\sum_{i=1}^n 9\cdot 10^{-i} $ is strictly $<1$. Integration is more complicated than this, but the underlying logic is the same.
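The partial sums can be checked directly; a minimal sketch (the exact gap between the $n$-th partial sum and $1$ is $10^{-n}$):

```python
# Partial sums of 0.999... : each one is strictly below 1,
# but the gap shrinks as 10**(-n), so the limit is exactly 1.
for n in (1, 5, 10, 15):
    s = sum(9 * 10 ** (-i) for i in range(1, n + 1))
    print(n, s)  # strictly less than 1 for every finite n
```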
Now there are techniques which do yield only approximate results, and which are employed when the actual integral is difficult or impossible to calculate - namely, numerical integration. However, these techniques also provide error bounds (and proofs of their adequacy), so they still let us produce answers satisfying a given accuracy requirement.
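To illustrate such an error bound (a sketch, not from the original answer): the composite trapezoidal rule applied to $\int_0^1 e^x\,dx$ comes with the standard a-priori bound $|E| \leq (b-a)^3 \max|f''| / (12n^2)$, so the approximation is guaranteed to land within a known distance of the true value $e - 1$.

```python
import math

# Composite trapezoidal rule for the integral of e^x over [0, 1],
# together with its a-priori error bound:
#   |E| <= (b - a)^3 * max|f''| / (12 * n^2),
# and here f'' = e^x, so max|f''| on [0, 1] is e.

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

n = 100
approx = trapezoid(math.exp, 0.0, 1.0, n)
exact = math.e - 1                    # the true value of the integral
bound = math.e / (12 * n ** 2)        # guaranteed maximum error

print(approx, exact, bound)
assert abs(approx - exact) <= bound   # the bound really does hold
```

This is the sense in which numerical integration, while approximate, still lets us meet any stated accuracy requirement: pick $n$ large enough that the bound falls below the tolerance.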
$^*$Note that the characterization of the integral via the fundamental theorem of calculus as $\int_a^bf(x)dx=F(b)-F(a)$ for $F$ an antiderivative of $f$ also involves a limit - namely in the definition of the derivative. We can't escape the role of limits in calculus, which is why it's important to understand that they in fact do provide exact results (despite what the language we use to describe them may suggest).