Your question makes sense to me, but it's a bit hard to answer concisely. The short answer is that for the most part, when mathematicians decide to accept anything as true, it is because it has been proven rigorously. And as you allude to, rigorous reasoning lends itself to an analysis where we convert the reasoning into a formal proof in some system (say PA, or ZFC). When you come up with a semantic proof for the truth of something by considering models (or truth tables for propositional logic), you are using reasoning that can be formalized in ZFC or in rare cases, some somewhat stronger system. In fact 'mathematically accepted fact' roughly correlates with 'has a proof formalizable in ZFC.' Thus, formal proofs are always there... it's really the only way we know to clearly establish mathematical truth.
Some of your thinking seems to stem from overgeneralizing from the example of propositional logic. Frankly, I'm a bit surprised your instinct was that formal proofs are the more convenient method here. A formal proof can be tedious to find, even in the case of propositional logic, especially if you're using an inconvenient Hilbert-style system. On the other hand, what could be easier than just plugging in some values to a truth table? (conceptually... computationally, the problem does not scale well).
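To make the comparison concrete, here is a minimal sketch of the truth-table method in Python; the formula checked (Peirce's law) is a stock example of my choosing, not something from the question:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

def is_tautology(formula, variables):
    """Semantic check: evaluate the formula under every truth assignment."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p: a classical tautology whose
# Hilbert-style proof is notoriously fiddly, while the semantic check
# is just four rows of a truth table.
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
print(is_tautology(peirce, ["p", "q"]))  # True
```

The exhaustive loop is conceptually trivial, but it runs through $2^n$ assignments for $n$ variables, which is the scaling problem noted above.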
However, this is an artifact of the fact that propositional logic is decidable. If we move on to first-order logic, we're faced with a difficulty: how do we decide whether every $m\in M$ satisfies $\phi(m)$, for $M$ some structure and $\phi$ some formula of its language? We no longer necessarily have a decidable problem. The "brute force" solution would be checking $\phi(m)$ for every $m\in M,$ which cannot be done if $M$ is infinite (and who knows if $\phi(m)$ is decidable itself). It turns out that this basic difficulty is not surmountable in general, and Gödel's incompleteness theorem and the undecidability of the halting problem are two ways of seeing this.
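For a finite structure, the brute-force check poses no problem; here is a small sketch, where the structure ($\mathbb Z/7\mathbb Z$) and the formula are illustrative choices of mine:

```python
# Brute-force semantic check over a *finite* structure: test phi(m) for
# every element m. Here M is Z/7Z and phi(m) says "m has an additive inverse".
M = range(7)
phi = lambda m: any((m + k) % 7 == 0 for k in M)

print(all(phi(m) for m in M))  # True

# If M were infinite (say, all of N), this loop would never terminate on a
# true universal statement -- the basic obstacle described above.
```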
To show that a sentence is true in the structure $\mathbb N,$ we must have some way of proving things about $\mathbb N.$ Finding a proof formalizable in some system like PA or ZFC is the only accepted way of doing so, although the incompleteness theorem tells us we can't decide every statement in any one fixed system.
So what of Gödel's "true but unprovable" statements? How can we determine that a statement is true if it isn't provable? Well, if we're considering the Gödel sentence $G$ for PA, we haven't shown it isn't provable, only that it isn't provable in PA. The proof that the Gödel sentence is true is carried out in a system stronger than PA. The basic argument is this: recall that $G$ essentially says of itself that it is not provable in PA. Now, assuming PA only proves correct things (i.e. it is sound), we can see that $G$ can't be provable in PA, and thus that it is true (since it says it is not provable in PA). It turns out the premise of soundness can be weakened to mere consistency.
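The soundness argument can be laid out schematically (this is just the reasoning above in symbols, writing $\operatorname{Prov}_{\mathrm{PA}}$ for PA's provability predicate and $\ulcorner G\urcorner$ for the Gödel number of $G$): the sentence $G$ is constructed so that PA proves the fixed-point equivalence
$$G \;\leftrightarrow\; \lnot\operatorname{Prov}_{\mathrm{PA}}(\ulcorner G\urcorner).$$
Assuming PA is sound, if $\operatorname{Prov}_{\mathrm{PA}}(\ulcorner G\urcorner)$ held then $G$ would be true, and hence by the fixed point
$$\operatorname{Prov}_{\mathrm{PA}}(\ulcorner G\urcorner) \;\Longrightarrow\; G \;\Longrightarrow\; \lnot\operatorname{Prov}_{\mathrm{PA}}(\ulcorner G\urcorner),$$
a contradiction. So $G$ is not provable in PA, and therefore, again by the fixed point, $G$ is true.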
Gödel's first theorem, "if PA is consistent, then $G$ is not provable in PA," can be shown in a fairly weak system. The first step toward proving $G$ by the logic above is showing that PA is consistent, and a closer look reveals this must be the step at which PA fails (this is Gödel's second theorem). However, you can prove PA is consistent in ZFC, and the easiest way to do this is to show that $\mathbb N$ is a model of PA. Then the rest of the reasoning falls into place, and we can proceed to prove that $G$ is true. You believe this since the reasoning is formalizable in ZFC and you believe ZFC (or the fragment you used, anyway).
So semantics are not some black box that generates truths that you cannot get from formal proof. Philosophical worries notwithstanding, the truths are there, but there's no way to see what they are without proof. Instead, ZFC is a formal system that proves some things that you cannot prove in PA. And ZFC is also incomplete: in order to believe that its 'true but unprovable statements' (like its Godel sentence or Con(ZFC)) are in fact true, we need to believe assumptions beyond those made in ZFC.
We can easily turn an indirect proof of a theorem (or proof of a theorem by contradiction) into an ordinary direct syntactic proof. This is a special case of the Deduction Theorem which tells us that if $\Sigma \cup \{A\} \vdash B$ for some collection of statements $\Sigma$ and specific statements $A$ and $B$, then $\Sigma \vdash A \rightarrow B$. Specifically, if we let $A$ be $\lnot F$ and $B$ be some contradiction $C \land \lnot C$ derived from $\Sigma \cup \{\lnot F\}$, then the deduction theorem tells us that we have a proof of $\lnot F \rightarrow (C \land \lnot C)$ from $\Sigma$. But since $F$ is a tautological consequence of this last statement, we can extend such a proof of this $\Sigma$-theorem to a proof of $F$. This direct syntactic proof of $F$ then gives us a direct semantic proof by Gödel's completeness theorem for first-order logic.
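The final tautological-consequence step can itself be checked semantically. Here is a sketch in Python using the answer's letters $F$ and $C$ as propositional variables (the helper names are my own):

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

def entails(premise, conclusion, variables):
    """Tautological consequence: no assignment makes the premise true
    and the conclusion false."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if premise(v) and not conclusion(v):
            return False
    return True

# premise: not-F -> (C and not C); conclusion: F
premise = lambda v: implies(not v["F"], v["C"] and not v["C"])
conclusion = lambda v: v["F"]
print(entails(premise, conclusion, ["F", "C"]))  # True
```

Since $C \land \lnot C$ is false under every assignment, the premise is true exactly when $F$ is, which is why the entailment holds.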
Note that the Deduction Theorem itself is a metatheorem that can be proven by induction on the length of the proof of $B$ from $\Sigma \cup \{A\}$, and its proof provides us with a constructive way of getting the actual proof of $A \rightarrow B$ from $\Sigma$.