I think this (interesting!) question is yet another instance of the occasional mismatch between science (and human perception) and formal mathematics. For example, the arguments used by the questioner, common throughout science and engineering, were also those used by Euler and other mathematicians for 150 years prior to the "rigorization" of calculus by Abel, Cauchy, Weierstrass, and others. The point is that the extreme usefulness and power of calculus and differential equations was demonstrated well before epsilon-delta proofs existed!
Similarly, c. 1900, Heaviside's (and others') use of derivatives of non-differentiable functions, of the "Dirac delta" function and its derivatives, and so on, brought considerable ridicule on him from the mathematical establishment, but his mathematics worked well enough to engineer the transatlantic telegraph cable. Rigorous justification came only with the work of Sobolev (1930s) and Schwartz (1940s).
And I think there are still severe problems with Feynman diagrams, even tho' he and others could immediately use them to obtain correct answers to previously-thorny quantum computations.
One conclusion I have reached from such stories is that we have less obligation to fret, if we have a reasonable physical intuition, than undergrad textbooks would make us believe.
But, back to the actual question: depending on one's tastes, non-standard analysis can be pretty entertaining to study, especially if one does not worry about the "theoretical" underpinnings. However, to be "rigorous" in use of non-standard analysis requires considerable effort, perhaps more than that required by other approaches. For example, the requisite model theory itself, while quite interesting if one finds such things interesting, is non-trivial.
In the early 1970s, some results in functional analysis were obtained first by non-standard analysis, raising the possibility that such methods would, indeed, provide means otherwise unavailable. However, people found "standard" proofs soon after, and nothing comparable seems to have happened since, with regard to non-standard analysis.
With regard to model theory itself, the recent high-profile proof of the "fundamental lemma" in Langlands' program did make essential use of serious model theory... and there is no immediate prospect of replacing it. That's a much more complicated situation, tho'.
With regard to "intuitive analysis", my current impression is that learning an informal version of L. Schwartz' theory of distributions is more helpful. True, there are still issues of underlying technicalities, but, for someone in physics or mechanics or PDE... those underlying technicalities themselves have considerable physical sense, as opposed to purely mathematical content.
Strichartz' nice little book "A guide to distribution theory and Fourier transforms" captures the positive aspect of this, to my taste, altho' the Sobolev aspect is equally interesting. And, of course, beware the ponderous, lugubrious sources that'll make you sorry you ever asked... :) That is, anything can be turned into an ugly, technical mess in the hands of someone who craves it! :)
So, in summary, I think some parts of "modern analysis" (done lightly) more effectively fulfill one's intuition about "infinitesimals" than does non-standard analysis.
Because, from my understanding, in order for it to be a tangent line, it must intersect the curve at one point only. But as Δx approaches zero, it never actually reaches zero, so Δx must be greater than zero, however infinitesimally small. Correct?
You're right. We don't ever reach that point. We take a limit.
The colloquialism, "reaching the point" is a good anthropomorphic description. Limits allow us to stretch the constraints of the real numbers by pushing towards the infinite and infinitesimal. Technically, though, to venture into such territory, we need to properly define limits. This is often introduced with the epsilon-delta formalization.
Suppose the limit $f'(x)=\lim_{\Delta x\rightarrow0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$ exists. Then for every $\epsilon>0$, there exists some $\delta>0$ such that whenever $0<|\Delta x|<\delta$, we find $|f'(x) - \frac{f(x+\Delta x)-f(x)}{\Delta x}|<\epsilon$.
We can read the definition above heuristically: the derivative exists if, for every positive number $\epsilon$, including the most ridiculously small numbers you can ever imagine, we can find a matching $\delta>0$ so that whenever $\Delta x$ is trapped within $\delta$ of zero (without touching zero), the difference between our derivative and the difference quotient is smaller than $\epsilon$, effectively imperceptible.
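As an informal numerical sanity check (not a substitute for the $\epsilon$-$\delta$ argument; the function and point here are just illustrative choices, not from the discussion above), one can watch the difference quotient of $f(x)=x^2$ at $x=3$ approach $f'(3)=6$ as $\Delta x$ shrinks:

```python
def diff_quotient(f, x, dx):
    """Slope of the secant line through (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda t: t**2  # f'(t) = 2t, so f'(3) = 6

# As dx shrinks, the secant slopes close in on the tangent slope.
for dx in (1.0, 0.1, 0.01, 0.001):
    print(dx, diff_quotient(f, 3.0, dx))
# 1.0    -> 7.0
# 0.1    -> 6.1...
# 0.01   -> 6.01...
# 0.001  -> 6.001...
```

Of course, no finite table of values proves the limit; that is exactly what the definition above is for.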
But wait a minute, you say:

> ...Δx must be greater than zero, however infinitesimally small, correct?
The epsilon-delta definition seems to hint that as well, but there's a catch: $$|f'(x) - \frac{f(x+\Delta x)-f(x)}{\Delta x}|<\epsilon$$
This is not less than some real positive number $\epsilon$. This is less than ANY POSSIBLE real positive number $\epsilon$. Such a concept only exists within the formalism of a limit, and is by no means a measurable quantity. That's what is meant by infinitesimal.
Because of the limit, then, the derivative does not represent the slope of any particular secant line: no pair of distinct points $x$ and $x+\Delta x$ produces it. The value we reach is the one the secant slopes converge to, namely the slope of the tangent.
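To make this concrete (a standard worked example, not part of the argument above): for $f(x)=x^2$ the difference quotient simplifies exactly, so we can exhibit $\delta$ in terms of $\epsilon$ directly:

$$\frac{f(x+\Delta x)-f(x)}{\Delta x}=\frac{(x+\Delta x)^2-x^2}{\Delta x}=2x+\Delta x,$$

so

$$\left|\frac{f(x+\Delta x)-f(x)}{\Delta x}-2x\right|=|\Delta x|<\epsilon \quad\text{whenever } 0<|\Delta x|<\delta=\epsilon.$$

The error is below ANY positive $\epsilon$ once $\Delta x$ is small enough, yet $\Delta x$ itself is never zero, which is exactly the point.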
Added note:
$\Delta x\rightarrow 0$ doesn't just imply that $\Delta x$ is running through the positive numbers towards zero. For the limit to exist, we typically require it to be two-sided, meaning that $\Delta x\rightarrow0^+$ and $\Delta x\rightarrow0^-$ must produce the same result. In either case, the difference between $\Delta x$ and zero becomes vanishingly small.
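A quick numerical illustration of why two-sidedness matters (again just a sketch, with $f(x)=|x|$ as the standard counterexample): at $x=0$ the right-hand quotients are all $+1$ and the left-hand quotients are all $-1$, so the two-sided limit, and hence the derivative, does not exist there.

```python
def diff_quotient(f, x, dx):
    """Slope of the secant line through (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = abs  # f(x) = |x| has a corner at x = 0

# Approach zero from the right (dx > 0) and from the left (dx < 0).
right = [diff_quotient(f, 0.0, dx) for dx in (0.1, 0.01, 0.001)]
left = [diff_quotient(f, 0.0, -dx) for dx in (0.1, 0.01, 0.001)]
print(right)  # [1.0, 1.0, 1.0]
print(left)   # [-1.0, -1.0, -1.0]
```

The two one-sided results disagree, so $\lim_{\Delta x\to0}$ fails, even though each one-sided limit exists on its own.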
Best Answer
Your perception is wrong. Non-standard analysis is grounded in logic, and it is as solid as any other field of mathematics. I suggest that you read Abraham Robinson's *Non-standard Analysis*.