No. Compactness merely guarantees that $\mathsf{PA}$ (as is now standard, I'll write simply "$\mathsf{PA}$" for first-order Peano arithmetic) has multiple models up to isomorphism. It does not prevent those models from all looking the same from the perspective of their internal first-order theories — that is, from being elementarily equivalent. Indeed, compactness applies to all first-order theories, including complete ones. True arithmetic $\mathsf{TA}$, for example, has $2^{\aleph_0}$-many non-isomorphic countable models, but is by definition complete.
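To see concretely what compactness does (and does not) give, here is a sketch of the standard argument that even the complete theory $\mathsf{TA}$ has a nonstandard model — elementarily equivalent to $\mathbb{N}$, but not isomorphic to it:

```latex
% Add a fresh constant symbol c and let \underline{n} denote the numeral for n. Consider
T \;=\; \mathsf{TA} \,\cup\, \{\, c > \underline{n} \;:\; n \in \mathbb{N} \,\}.
% Every finite subset of T is satisfied in the standard model by interpreting c as a
% sufficiently large number, so by compactness T has a model M. Since M \models \mathsf{TA},
% we get M \equiv \mathbb{N}; but c^M lies above every standard numeral, so M \not\cong \mathbb{N}.
```

So compactness manufactures non-isomorphic models freely, yet all of them satisfy exactly the same first-order sentences — it says nothing about incompleteness.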
The incompleteness of $\mathsf{PA}$ is a much stronger result, and does not follow from such coarse considerations alone. Compactness shows that $\mathsf{PA}$ is not categorical (and indeed no first-order theory with infinite models is categorical). But even that is limited: it takes more than mere compactness even to show that $\mathsf{PA}$ isn't $\aleph_0$-categorical (we have to bring in Löwenheim–Skolem, or use Gödel).
Yes, your understanding is basically right.
In more detail:
The term "Peano arithmetic" is used by different communities to refer to either first- or second-order Peano arithmetic (which I'll denote by "$\mathsf{PA}_1$" and "$\mathsf{PA}_2$" respectively). Within the modern mathematical logic community, "Peano arithmetic" almost exclusively refers to $\mathsf{PA}_1$; however, in older texts and among philosophers and historians of mathematics, it often refers to $\mathsf{PA}_2$.
The theory $\mathsf{PA}_1$ is usually formulated in the language $\{0,1,+,\times,<\}$ or something similar; roughly speaking, the only crucial point is that we have both addition and multiplication (arithmetic with only addition, for example, is quite a different matter). The axioms of $\mathsf{PA}_1$ consist of:

a finite set of basic axioms $\mathsf{P}^-$ (essentially, the theory of discretely ordered semirings), and

the first-order induction scheme, with one instance per first-order formula.
Weakening the induction scheme to apply only to certain first-order formulas results in subtheories of $\mathsf{PA}_1$; e.g. $\mathsf{I\Sigma_{17}}$ is the fragment of $\mathsf{PA_1}$ consisting of $\mathsf{P}^-$ together with induction for $\Sigma_{17}$ formulas.
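For reference (this gloss is not in the original answer): a $\Sigma_{17}$ formula is one in prenex form with seventeen alternating blocks of quantifiers, beginning with $\exists$, in front of a matrix whose quantifiers are all bounded:

```latex
\exists \bar{x}_1\, \forall \bar{x}_2\, \exists \bar{x}_3 \cdots Q\bar{x}_{17}\;
\theta(\bar{x}_1,\dots,\bar{x}_{17},\bar{y}),
\qquad \theta \in \Delta_0 \text{ (all quantifiers bounded)}.
```

Varying the subscript gives the usual arithmetical hierarchy of fragments $\mathsf{I\Sigma}_n$.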
Meanwhile, $\mathsf{PA}_2$ is usually formulated in the much smaller language $\{0,\mathsf{succ}\}$ with a very sparse set of axioms ... followed by the full second-order induction axiom. Again, there is some flexibility in the details but the power of the underlying logic is such that it ultimately doesn't matter.
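In one common presentation (the details vary, as noted, but the variations don't matter), the full second-order induction axiom is a single sentence quantifying over all subsets of the domain:

```latex
\forall X \,\Big( \big( X(0) \;\wedge\; \forall n\, \big(X(n) \to X(\mathsf{succ}(n))\big) \big)
\;\to\; \forall n\, X(n) \Big).
```

Because $X$ here ranges over genuinely all subsets, this single axiom pins the model down up to isomorphism — in sharp contrast to the first-order induction scheme.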
(As an aside, if we build a direct analogue of $\mathsf{PA_1}$ with second-order logic replacing first-order logic in the setup of the induction scheme we get something appropriately equivalent to $\mathsf{PA}_2$; see here.)
"Second-order arithmetic" meanwhile refers to the two-sorted, first-order theory $\mathsf{Z_2}$. The two sorts are generally called "numbers" and "sets," and the axiom of extensionality (which is part of $\mathsf{Z_2}$) ensures that we can WLOG restrict attention to models whose sets sort literally consists of subsets of the numbers sort and whose elementhood relation is actual elementhood. The language of $\mathsf{Z_2}$ consists of the language of $\mathsf{PA}_1$ on the numbers sort, and the new binary relation "$\in$" connecting the numbers and sets sorts. Its axioms consist of:
$\mathsf{P}^-$ for the numbers sort,
extensionality for the sets sort, and
the full induction and comprehension schemes.
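Spelled out, these last two schemes are (one instance for each formula $\varphi$ of the two-sorted language, possibly with number and set parameters, and with $X$ not free in $\varphi$ in the comprehension case):

```latex
\text{Comprehension:}\quad \exists X\, \forall n\, \big( n \in X \leftrightarrow \varphi(n) \big)

\text{Induction:}\quad \big( \varphi(0) \;\wedge\; \forall n\,\big(\varphi(n) \to \varphi(n+1)\big) \big)
\;\to\; \forall n\, \varphi(n)
```

Note that $\varphi$ is a first-order formula of the two-sorted language — there is no genuine second-order quantification here, which is why $\mathsf{Z_2}$ behaves like a first-order theory (compactness, Löwenheim–Skolem, etc.) despite its name.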
Whereas the interesting subtheories of $\mathsf{PA}_1$ essentially arise from restricting the induction scheme, $\mathsf{Z_2}$ is "two-dimensional:" we get interesting subtheories by restricting either the induction or comprehension schemes.
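For example (standard terminology, not spelled out in the original answer): the subsystem $\mathsf{ACA}_0$ restricts comprehension to arithmetical formulas, i.e. formulas with no set quantifiers (set parameters allowed):

```latex
\exists X\, \forall n\, \big( n \in X \leftrightarrow \varphi(n) \big)
\qquad \text{for } \varphi \text{ arithmetical},
```

and restricts induction to the single set-based induction axiom, illustrating both "dimensions" of restriction at once.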
For further reading about $\mathsf{PA}_1$ and $\mathsf{Z_2}$, and their interesting subtheories specifically, I strongly recommend Hájek/Pudlák and Simpson respectively. See also my recommendations here and here.
Best Answer
Just in historical terms, "first order logic comes from Peano Arithmetic" is not correct. Really the two ideas, of extending axiomatic reasoning beyond (Euclidean) geometry into algebra and arithmetic, and of formalising logic to express more than Aristotelian syllogisms could, proceeded hand in hand.
Progress in logic beyond the syllogism's handling of quantifiers was gradual through the 19th century. Important names include Augustus De Morgan and J.S. Mill. Gottlob Frege essentially nailed the problem in 1879, introducing first-order quantification (now written $\forall, \exists$, though Frege's own notation was different) and hence codifying logic with the syntax still used today.
Meanwhile mathematicians including De Morgan again, Grassmann, and Peirce, had been formalising the theory of arithmetic. Frege's syntax was obviously helpful for putting sentences like Bolzano's $\epsilon$-$\delta$ definition of continuity into a precise form, and equally for the laws of arithmetic. Peirce published a set of axioms in 1881.
But the important part of the project was to get beyond the "deductive" reasoning of syllogisms to all the "inductive" rules which mathematicians use in natural proofs. These require writing down, not just true first-order statements about numbers (etc), but reasoning about collections of numbers together. Frege (and other philosophers, going way back) called such a collection the extension of a concept. The extension of a mathematical predicate $P(x)$ is just $\{x: P(x)\}$, the collection of everything for which $P$ is true. In particular, with Frege's new logical notation, any first-order formula $\phi(x)$ in the notation is a concept, so $\{x: \phi(x)\}$ should be an extension. Frege also introduced the notation $x \in A$, to mean "$x$ satisfies the concept defining $A$".
Obviously these extensions are now recognisable as (naive) sets. (I can't recall offhand when this word was introduced). Dedekind in 1888 and then more neatly Peano in 1889 gave axioms for arithmetic including reasoning with these sets. Peano's point is that all the use one needs to make of sets in arithmetic comes in the familiar rule of arithmetic induction, which (as had been obvious for ages) can't be expressed in syllogisms. "If the extension of any predicate includes 0, and includes $x+1$ whenever it includes $x$, then it includes all numbers"---Peano was certainly thinking in terms of full Fregean logic.
Then in 1901 Bertrand Russell discovered his paradox of the set $\{x: x \not \in x\}$, which he communicated to Frege in a 1902 letter. This horrified Frege, Dedekind, and everybody else involved with the project of formalising mathematics: it showed that some of the formulas expressible in Frege's syntax aren't consistent concepts. Or possibly it showed that the whole formalism was wrong; but it was obviously too useful for real mathematics to throw away entirely.
So Russell, Zermelo, and all the rest, put in a lot of effort salvaging Fregean logic from the paradox. The obvious cause of the problem with $\{x: x \not \in x\}$ is that it's self-referential (like the ancient Liar paradox), and the natural way to avoid self-reference is to stratify your reasoning. Philosophically the paradoxical set is failing to distinguish between an object and a predicate. Russell's solution was, from the top down, to impose a Byzantine structure of orders and degrees on the sets and quantifiers, to ensure that objects and predicates never conflicted. From this the language of first-order and second-order logic emerged.
The downside of this is that you can obviously do first-order reasoning about (well-behaved) sets themselves---De Morgan's laws, for example---and so you need to replicate all of first-order logic in the second-order case, and so on for a towering hierarchy of identical proofs with different subscripts. The symbol $\in$ becomes very difficult to work with.
Zermelo introduced a bottom-up construction for well-behaved sets which could be guaranteed not to need Russell's structure, to let $\in$ be an ordinary mathematical relation, and yet still to avoid the paradox.
Fortunately Peano's axioms can be made to work in either context. You don't need the full self-referential behaviour of $\in$ to talk about arithmetic, so "second-order" formulas in Russell's sense are adequate. In Zermelo's set theory you can construct a set satisfying the Peano second-order axiom whenever the predicate being inducted over is expressible as a set, so (relative to the set theory) Peano arithmetic works there as well.
Much of the work on Zermelo sets was done, in the 1920s, by Skolem; he refined the Zermelo axioms, especially distinguishing the Axiom of Choice. Meanwhile a whole school of "Intuitionistic Logic" had been developed, to try to encapsulate more accurately the reasoning mathematicians really need, rather than trying to row backwards from the excessive and inconsistent Fregean system, in which Peano arithmetic seems to work but only haphazardly. This wasn't successful in its own terms (though it has proved useful in other contexts, category theory and so forth). Primitive recursive arithmetic was Skolem's formulation of an intuitionistically valid part of arithmetic. It didn't supplant full Peano reasoning, but is genuinely interesting in its own right.
Finally in 1931 Gödel published his Incompleteness Theorems, showing that neither second-order Peano reasoning nor arithmetic in the Zermelo universe can prove its own consistency. At this point mathematicians heaved a sigh of relief and stopped worrying that they might be proved inconsistent. And that is where the story ends.
(All dates off Wikipedia.)