Your question makes sense to me, but it's a bit hard to answer concisely. The short answer is that for the most part, when mathematicians decide to accept anything as true, it is because it has been proven rigorously. And as you allude to, rigorous reasoning lends itself to an analysis where we convert the reasoning into a formal proof in some system (say PA, or ZFC). When you come up with a semantic proof for the truth of something by considering models (or truth tables for propositional logic), you are using reasoning that can be formalized in ZFC or in rare cases, some somewhat stronger system. In fact 'mathematically accepted fact' roughly correlates with 'has a proof formalizable in ZFC.' Thus, formal proofs are always there... it's really the only way we know to clearly establish mathematical truth.
Some of your thinking seems to stem from overgeneralizing from the example of propositional logic. Frankly, I'm a bit surprised your instinct was that formal proofs are the more convenient method here. A formal proof can be tedious to find, even in the case of propositional logic, especially if you're using an inconvenient Hilbert-style system. On the other hand, what could be easier than just plugging in some values to a truth table? (conceptually... computationally, the problem does not scale well).
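To make "plugging in some values" concrete, here is a minimal brute-force tautology check in Python (encoding formulas as Boolean functions is just one illustrative choice). It also shows why the method doesn't scale: the loop runs over all $2^n$ assignments.

```python
# Semantic checking by brute force: evaluate a propositional formula
# under every truth assignment.  A formula is a Python function of its
# atoms; the names here are illustrative.
from itertools import product

def is_tautology(formula, n_atoms):
    """True iff formula evaluates to True under all 2**n_atoms assignments."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_atoms))

# Peirce's law ((P -> Q) -> P) -> P, with x -> y encoded as (not x) or y:
peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p
print(is_tautology(peirce, 2))                      # True
print(is_tautology(lambda p, q: (not p) or q, 2))   # False: P -> Q is not a tautology
```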
However, this is an artifact of the fact that propositional logic is decidable. If we move on to first-order logic, we're faced with a difficulty: how do we decide whether every $m\in M$ satisfies $\phi(m),$ for $M$ some structure and $\phi$ some formula of its language? We no longer necessarily have a decidable problem. The "brute force" solution would be checking $\phi(m)$ for every $m\in M,$ which cannot be done if $M$ is infinite (and who knows if $\phi(m)$ is decidable itself). It turns out that this basic difficulty is not surmountable in general, and Godel's theorem and the halting problem are two ways of seeing this.
To show that a sentence is true in the structure $\mathbb N,$ we must have some way of proving things about $\mathbb N.$ Finding a proof formalizable in some system like PA or ZFC is the only accepted way of doing so, although the incompleteness theorem tells us that no fixed system can decide every statement.
So what of these "true but unprovable" statements of Godel? How can we determine it's true if it isn't provable? Well, if we're considering the Godel sentence $G$ for PA, we haven't shown it isn't provable, only that it isn't provable in PA. The proof that the Godel sentence is true is done in a stronger system than PA. The basic argument is this: Recall that $G$ essentially says of itself that it is not provable in PA. Now, assuming PA only proves correct things (i.e. it is sound), we can see that it can't be provable in PA, and thus that it is true (since it says it is not provable in PA). It turns out the premise of soundness can be weakened to just consistency.
Godel's first theorem "if PA is consistent, then $G$ is not provable in PA" can be shown in a fairly weak system. The first step toward proving $G$ by the logic above is showing that PA is consistent, and a closer look reveals that this must be the step PA itself cannot carry out (this is Godel's second theorem). However, you can prove PA is consistent in ZFC, and the easiest way to do this is to show that $\mathbb N$ is a model of PA. Then the rest of the reasoning falls into place, and we can proceed to prove that $G$ is true. You believe this since the reasoning is formalizable in ZFC and you believe ZFC (or the fragment you used, anyway).
So semantics are not some black box that generates truths that you cannot get from formal proof. Philosophical worries notwithstanding, the truths are there, but there's no way to see what they are without proof. Instead, ZFC is a formal system that proves some things that you cannot prove in PA. And ZFC is also incomplete: in order to believe that its 'true but unprovable statements' (like its Godel sentence or Con(ZFC)) are in fact true, we need to believe assumptions beyond those made in ZFC.
The two premises of the argument are: $A \cap C \subset B$ and $a \in C$.
Then we have two cases:
(i) $a \in A$. Then $a \in A \cap C \subset B$, and thus $a \in B$. By $\lor$-intro: $a \notin A \lor a \in B$.
(ii) $a \notin A$. Then by $\lor$-intro again: $a \notin A \lor a \in B$.
In conclusion, since the two cases are exhaustive, we can conclude that, under the assumptions above: $a \notin A \lor a \in B$.
The logical rule of inference used is Disjunction Elimination (a.k.a. proof by cases):
$\alpha \to \varphi, \lnot \alpha \to \varphi \vDash \varphi$,
together with the tautology $\alpha \lor \lnot \alpha$.
Here $\alpha$ is $a \in A$ and $\varphi$ is $a \notin A \lor a \in B$, and the argument has shown that:
$a \in C, A \cap C \subset B \vDash a \notin A \lor a \in B$.
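For the record, the case split above can be formalized. Here is a sketch in Lean 4, staying in the core library by encoding sets as predicates, so that $A \cap C \subset B$ becomes a universally quantified implication; the exhaustive case split is `Classical.byCases`:

```lean
-- A sketch of the argument in Lean 4 (core library only).
example {α : Type} (A B C : α → Prop) (a : α)
    (h₁ : ∀ x, A x ∧ C x → B x)    -- A ∩ C ⊆ B
    (h₂ : C a) :                    -- a ∈ C
    ¬ A a ∨ B a :=                  -- a ∉ A ∨ a ∈ B
  Classical.byCases
    (fun h : A a => Or.inr (h₁ a ⟨h, h₂⟩))   -- case (i): a ∈ A
    (fun h : ¬ A a => Or.inl h)              -- case (ii): a ∉ A
```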
Natural deduction offers a system for proving sequents, but to show that $\Gamma\not\vdash \phi$ you need an interpretation of the formulae under which every formula in $\Gamma$ is true and $\phi$ is false.
In your example you can just interpret $P$ as true and $Q$ as false; since $P \land Q$ is then false, we see that $P \not\vdash P \land Q$.
If you are looking for a more systematic method for finding counterexamples we have truth trees (semantic tableaux).
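A cruder systematic method, workable for small propositional examples, is brute-force search over all valuations; here is a sketch in Python (the tuple encoding of formulas is an illustrative choice, not a standard one):

```python
# Brute-force search for a falsifying interpretation: a valuation making
# every premise true and the conclusion false.  Atoms are strings;
# compound formulas are tuples like ("and", A, B).
from itertools import product

def eval_formula(f, v):
    if isinstance(f, str):                    # atom
        return v[f]
    op = f[0]
    if op == "not": return not eval_formula(f[1], v)
    if op == "and": return eval_formula(f[1], v) and eval_formula(f[2], v)
    if op == "imp": return (not eval_formula(f[1], v)) or eval_formula(f[2], v)
    raise ValueError(op)

def countermodel(premises, conclusion, atoms):
    """A valuation witnessing premises |/= conclusion, or None if there is none."""
    for vals in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(eval_formula(p, v) for p in premises) and not eval_formula(conclusion, v):
            return v
    return None

print(countermodel(["P"], ("and", "P", "Q"), ["P", "Q"]))  # {'P': True, 'Q': False}
```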
Edit: a proof theoretic approach using cut-free sequent calculus.
Suppose $\Rightarrow P \to \, (P \land Q)$ were derivable with $P$ and $Q$ as atomic formulae.
The last rule used must have been $R \to$ (introduction of $\to$ on the right). That is, we have:
$$\frac {P \Rightarrow P \land Q} {\Rightarrow P \to (P \,\land \,Q)}$$
And before that we must have had an instance of $R \, \land$ (introduction of $\land$ on the right):
\begin{align*} \overline{ P \Rightarrow P} \; \;\;\;\; P\Rightarrow Q \\ \hline P \Rightarrow P \land Q \ \ \ \ \ \\ \hline \Rightarrow P \to (P \,\land \,Q) \end{align*}
$P \Rightarrow P$ is an axiom, which is fine, but with $P$ and $Q$ atomic there is no rule with which we can derive $P \Rightarrow Q$, so the original sequent is underivable.
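This backward search is mechanical enough to program. Below is a tiny Python sketch covering only the rules needed here (axiom, $R\to$, $R\land$, plus $L\land$); it decomposes the goal sequent bottom-up exactly as in the derivation above, and fails precisely when an atomic goal is missing from the left side:

```python
# Backward proof search for cut-free sequents in the {->, /\} fragment.
# Formulas are tuples: ("atom", name), ("and", A, B), ("imp", A, B).

def derivable(left, goal):
    """Return True iff the sequent  left => goal  is derivable."""
    for i, f in enumerate(left):              # L/\: split conjunctions on the left
        if f[0] == "and":
            return derivable(left[:i] + left[i+1:] + [f[1], f[2]], goal)
    if goal[0] == "imp":                      # R->: move antecedent to the left
        return derivable(left + [goal[1]], goal[2])
    if goal[0] == "and":                      # R/\: prove both conjuncts
        return derivable(left, goal[1]) and derivable(left, goal[2])
    return goal in left                       # axiom: atomic goal in the context

P, Q = ("atom", "P"), ("atom", "Q")
print(derivable([], ("imp", P, ("and", P, Q))))  # False: P => Q has no derivation
print(derivable([], ("imp", P, ("and", P, P))))  # True
```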