There are indeed many proofs of the Compactness theorem. Leo Harrington once told me that he used a different method of proof every time he taught the introductory graduate logic course at UC Berkeley. There is, of course, the proof via the Completeness Theorem, as well as proofs using ultrapowers, reduced products, Boolean-valued models and so on. (In my day, he used Boolean valued models, but that was some time ago, and I'm not sure if he was able to keep this up since then!)
Most model theorists today appear to regard the Compactness theorem as the significant theorem, since the focus is on the models---on what is true---rather than on what is provable in some syntactic system. (Proof-theorists, in contrast, may focus on the Completeness theorem.) So it is not because Completeness is too hard that Marker omits it, but rather just that Compactness is the important fact. Surely it is the Compactness theorem that has deep applications (or at least pervasive applications) in model theory. I don't think formal deductions appear in Marker's book at all.
But let's get to your question. Since the exact statement of the Completeness theorem depends on which syntactic proof system you set up---and there are a huge variety of such systems---any proof of the Completeness theorem will have to depend on those details. For example, you must specify which logical axioms are formally allowed, which deduction rules, and so on. The truth of the Completeness theorem depends very much on the details of how you set up your proof system, since if you omit an important rule or axiom, then your formal system will not be complete. But the Compactness theorem has nothing to do with these formal details. Thus, there cannot be a hands-off proof of Completeness from Compactness that does not engage in the details of the formal syntactic proof system. Any proof must establish some formal properties of the formal system, and once you are doing this, then the Henkin proof is not difficult (surely it fits on one or two pages). When I prove Completeness in my logic courses, I often remark to my students that the fact of the theorem is a foregone conclusion, because at any step of the proof, if we need our formal system to be able to make a certain kind of deduction or have a certain axiom, then we will simply add it if it isn't there already, in order to make the proof go through.
Nevertheless, Compactness can be viewed as an abstract Completeness theorem. Namely, Compactness is precisely the assertion that if a theory is not satisfiable, then this is because of a finite obstacle: a finite subtheory that is not satisfiable. If we were to regard these finite obstacles as abstract formal "proofs of contradiction", then it would be true that if a theory has no proofs of contradiction, then it is satisfiable.
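In symbols (my paraphrase of this abstract reading), the contrapositive form of Compactness says exactly that unsatisfiability always has a finite witness:

```latex
% Compactness, in contrapositive form: an unsatisfiable theory
% contains a finite unsatisfiable subtheory, an abstract
% "proof of contradiction".
T \text{ is unsatisfiable}
  \iff
\exists\, T_0 \subseteq T \ \bigl( T_0 \text{ is finite and unsatisfiable} \bigr)
```

The right-to-left direction is trivial; the left-to-right direction is the content of Compactness.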
The difference between this abstract understanding and the actual Completeness theorem is that all the usual deduction systems are highly effective in the sense of being computable. That is, we can computably enumerate all the finite inconsistent theories by searching for formal syntactic proofs of contradiction. This is the new part of Completeness that the abstract version from Compactness does not provide. But it is important, for example, in the subject of Computable Model Theory, where one proves computable analogues of the Completeness theorem. For example, any consistent decidable theory (in a computable language) has a decidable model, since the usual Henkin proof of Completeness is effective when the theory is decidable.
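As a toy illustration of this effectiveness (my own propositional analogue, not part of the answer above): in propositional logic, satisfiability of a finite theory is outright decidable by truth tables, so the finite inconsistent theories there form a computable set, not merely a computably enumerable one.

```python
from itertools import product

# Toy propositional analogue: decide satisfiability of a finite
# theory by brute-force truth tables. Formulas are nested tuples:
# ('var', name), ('not', f), ('and', f, g), ('or', f, g).

def evaluate(formula, assignment):
    """Evaluate a formula under a truth assignment (dict name -> bool)."""
    op = formula[0]
    if op == 'var':
        return assignment[formula[1]]
    if op == 'not':
        return not evaluate(formula[1], assignment)
    if op == 'and':
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == 'or':
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown connective: {op}")

def variables(formula):
    """Collect the variable names occurring in a formula."""
    if formula[0] == 'var':
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def satisfiable(theory):
    """Decide whether some assignment satisfies every formula in the theory."""
    vs = sorted(set().union(*(variables(f) for f in theory)) or {'p'})
    return any(all(evaluate(f, dict(zip(vs, vals))) for f in theory)
               for vals in product([False, True], repeat=len(vs)))

p = ('var', 'p')
q = ('var', 'q')
# {p, p -> q, not q}, writing p -> q as (not p) or q, is inconsistent:
print(satisfiable([p, ('or', ('not', p), q), ('not', q)]))  # False
print(satisfiable([p, q]))                                  # True
```

For first-order logic no such decision procedure exists, of course; there one only gets the computable enumeration of proofs that the Completeness theorem provides.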
Edit: I found in Arnold Miller's lecture notes an entertaining account of an easy proof of (a fake version of) Completeness from Compactness (see page 58). His system amounts to the abstract formal system I describe above. Namely, he introduces the MM proof system (for Mickey Mouse), where the axioms are all logical validities and the only rule of inference is Modus Ponens. In this system, one can prove Completeness from Compactness easily as follows: We want to show that $T$ proves $\varphi$ if and only if every model of $T$ is a model of $\varphi$. The forward direction is Soundness, which is easy. Conversely, suppose that every model of $T$ is a model of $\varphi$. Thus, $T+\neg\varphi$ has no models. By Compactness, there are finitely many axioms $\varphi_0,\ldots,\varphi_n$ in $T$ such that there is no model of them plus $\neg\varphi$. Thus, $(\varphi_0\wedge\cdots\wedge\varphi_n)\to\varphi$ is a logical validity. And from this, one can easily make a proof of $\varphi$ from $T$ in MM. QED!
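To make that last step explicit (my elaboration of the sketch), the validity can be curried into iterated implications and then discharged by Modus Ponens:

```latex
% A logical validity, hence an axiom of MM:
\varphi_0 \to \bigl(\varphi_1 \to \bigl(\cdots \to (\varphi_n \to \varphi)\cdots\bigr)\bigr)
% Each \varphi_i is an axiom of T, so n+1 applications of
% Modus Ponens yield \varphi, a proof of \varphi from T in MM.
```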
But of course, it is a joke proof system, since the collection of validities is not computable, and Miller uses this example to illustrate the point as follows:
The poor MM system went to the Wizard of OZ and said, “I want to be more like all the other proof systems.” And the Wizard replied, “You’ve got just about everything any other proof system has and more. The completeness theorem is easy to prove in your system. You have very few logical rules and logical axioms. You lack only one thing. It is too hard for mere mortals to gaze at a proof in your system and tell whether it really is a proof. The difficulty comes from taking all logical validities as your logical axioms.” The Wizard went on to give MM a subset Val of logical validities that is recursive and has the property that every logical validity can be proved using only Modus Ponens from Val.
And he then goes on to describe how one might construct Val, and give what amounts to a traditional proof of Completeness.
Here is a result along the lines you are requesting, which I find beautifully paradoxical.
Theorem. Every model of ZFC has an element that is a model of ZFC. That is, every $M\models ZFC$ has an element $m$, which $M$ thinks is a structure in the language of set theory---a set $m$ with a binary relation $e$ on $m$---such that if we consider externally the set of objects $\bar m=\{\ a\ \mid\ M\models a\in m\ \}$ with the relation $a\mathrel{\bar e} b\leftrightarrow M\models a\mathrel{e} b$, then $\langle \bar m,\bar e\rangle\models ZFC$.
Many logicians instinctively object to the theorem, on the grounds of the Incompleteness theorem, since we know that $M$ might model $ZFC+\neg\text{Con}(ZFC)$. And it is true that this kind of $M$ can have no model that $M$ thinks is a ZFC model. The paradox is resolved, however, by the kind of issues mentioned in your question and the other answers: the theorem does not claim that $M$ agrees that $m$ is a model of the ZFC of $M$, but only that it externally is a model of the (actual) ZFC. After all, when $M$ is nonstandard, it may be that $M$ does not agree that $m$ satisfies ZFC, even though $m$ actually is a model of ZFC, since $M$ may have many non-standard axioms that it insists upon.
Proof of theorem. Suppose that $M$ is a model of ZFC. Thus, in particular, ZFC is consistent. If it happens that $M$ is $\omega$-standard, meaning that it has only the standard natural numbers, then $M$ has all the same proofs and axioms in ZFC that we do in the meta-theory, and so $M$ agrees that ZFC is consistent. In this case, by the Completeness theorem applied in $M$, it follows that there is a model $m$ which $M$ thinks satisfies ZFC, and so it really does.
The remaining case occurs when $M$ is not $\omega$-standard. In this case, let $M$ enumerate the axioms of what it thinks of as ZFC in the order of their Gödel numbers. An initial segment of this ordering consists of the standard axioms of ZFC. Every finite collection of those axioms is true in some $(V_\alpha)^M$ by an instance of the Reflection theorem. Thus, since $M$ cannot identify the standard cut of its natural numbers, it follows (by overspill) that there is some nonstandard initial segment of this enumeration that $M$ thinks is true in some $m=(V_\alpha)^M$. Since this initial segment includes all actual instances of the ZFC axioms, it follows that $m$ really is a model of ZFC, even if $M$ does not agree, since it may think that some nonstandard axioms fail in $m$. $\Box$
I first learned of this theorem from Brice Halimi, who was visiting in New York in 2011, and who subsequently published his argument in:
Halimi, Brice, Models as universes, Notre Dame J. Formal Logic 58, No. 1, 47-78 (2017). ZBL06686417.
Note that in the case that $M$ is $\omega$-nonstandard, then we actually get that a rank initial segment $(V_\alpha)^M$ is a model of ZFC. This is a very nice transitive set from $M$'s perspective.
There are other paradoxical situations that occur with countable computably saturated models of ZFC. First, every such $M$ contains a rank initial segment $(V_\alpha)^M$ such that, externally, $M$ is isomorphic to $(V_\alpha)^M$. Second, every such $M$ contains an element $m$ which $M$ thinks is an $\omega$-nonstandard model of a fragment of set theory, but externally, we can see that $M\cong m$. Switching perspectives, every such $M$ can be placed into another model $N$, to which it is isomorphic, but which thinks $M$ is nonstandard.
Your proof strategy via (1) and (2) is impossible. If $PA\cup\Sigma$ proves that Goodstein's theorem is false, then the proof will have finite length, and so there will be some finite $\Sigma_0\subset\Sigma$ such that $PA\cup\Sigma_0$ proves that Goodstein's theorem is false. This would imply by (1) that Goodstein's theorem is false in the standard model. But Goodstein's theorem holds in the standard model, as Goodstein proved.
A second point is that you may find that there are no specific "natural" models of PA at all other than the standard model. For example, Tennenbaum proved that there are no computable nonstandard models of PA; that is, one cannot exhibit a nonstandard model of PA so explicitly that the addition and multiplication of the model are computable functions. (See this related MO question.) But I do not rule out that there could be natural nonstandard models in other senses.