We can't whip up truth predicates for arbitrary "small" sets of sentences, but when we bound the complexity - in terms of the Levy hierarchy in set theory - we can. The reason we can't "glue" these definitions together to get a full truth predicate is that the definitions themselves get increasingly complex; since there's no bound on their complexity, we can't find a single formula which does the job.
This phenomenon also happens - and may be easier to understand - in the context of arithmetic; here we use the arithmetical hierarchy instead of the Levy hierarchy, but the abstract idea is the same.
On the arithmetic side, the book Metamathematics of First-Order Arithmetic by Hajek and Pudlak has a good explanation. On the set theory side, I believe Kunen's book is a good source (but I don't have it on hand to check); Jech's book also probably covers it.
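To make the arithmetic version concrete (a sketch from memory; the details are in Hajek–Pudlak): for each $n$ there is a $\Sigma_n$ formula $\mathrm{Tr}_n$ such that for every $\Sigma_n$ sentence $\sigma$, $$\mathrm{Tr}_n(\ulcorner\sigma\urcorner)\iff\sigma$$ provably in PA. The catch is that $\mathrm{Tr}_n$ is itself genuinely $\Sigma_n$, so the complexity of these partial truth predicates is unbounded - and by Tarski's undefinability theorem no single arithmetic formula can play the role of all the $\mathrm{Tr}_n$ at once.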
I think a substantial chunk of your question can be rephrased as follows (and the rest is clarified by the answer to this rephrasing):
In forcing, how are "global" statements about $G$ - which a priori are only determined once $G$ is "completed" - determined by "local" information (namely individual conditions)?
Roughly speaking, the point is that they aren't - individual conditions only determine such statements in the presence of a genericity assumption on $G$, which is itself a "global" fact about $G$.
Incidentally, this is closely related to the "Generic Comments" section of my answer to an earlier question, to which I've just made some minor edits for readability (and corrected one major typo).
A good first step towards demystifying this is to think about rather concrete properties - e.g. if we force with finite binary sequences in the usual way, just from the definition of genericity it's clear that we'll have infinitely many $1$s in $G$: for each $k$, the set $D_k$ of conditions which already have at least $k$ many $1$s is dense, so by genericity $G$ has to meet each $D_k$ and hence have infinitely many $1$s.
Phrased in terms of the forcing relation, we've shown that $$\emptyset\Vdash\forall k(\vert G^{-1}(1)\vert\ge k).$$ So there's an example of some "local" information - in this case, no information at all! - determining some "global" fact about $G$.
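A quick sketch of the density argument in code (representing conditions as Python strings is of course just my own illustration, not anything canonical):

```python
# Conditions in this forcing are finite binary sequences, modeled here
# as Python strings; q extends p iff q has p as an initial segment.
# D_k = set of conditions with at least k ones.  Density: any condition
# p can be extended into D_k, e.g. by appending k ones.

def ones(p):
    """Number of 1s in the condition p."""
    return p.count("1")

def extend_into_D_k(p, k):
    """Return an extension of p that lies in D_k."""
    return p + "1" * k

# Spot-check density on a few sample conditions:
for p in ["", "0", "0100", "000000"]:
    for k in [1, 3, 5]:
        q = extend_into_D_k(p, k)
        assert q.startswith(p)  # q extends p
        assert ones(q) >= k     # q is in D_k
print("D_k is dense (on these samples)")
```

Since a generic $G$ meets every $D_k$, it has at least $k$ ones for every $k$ - which is exactly the display above.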
The above example probably feels like cheating at first: it wasn't really an individual condition, but rather the genericity requirement, which was doing the heavy lifting. But this is exactly the point! When we say $p\Vdash\varphi$ we don't really mean that the "local" fact that $p\in G$ on its own tells us that $\varphi$ will be true, but rather that this local fact together with the "global" fact that $G$ is sufficiently generic tells us that $\varphi$ will be true.
So we're not magically deducing "global" information from "local" information; rather, we're identifying a certain kind of global information which reduces all information to local information, in the following rough sense:
Suppose $P$ is some "global" question about filters. Then for any generic filter $G$, whether $P$ holds or fails of $G$ is determined entirely by some "local" fact about $G$ (namely, some condition $p\in G$) together with the fact that $G$ is generic.
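In symbols, this is (one direction of) the forcing theorem: for $G$ generic over $M$ and $\varphi$ a sentence of the forcing language, $$M[G]\models\varphi\iff\exists p\in G\,(p\Vdash\varphi).$$ (This is the standard statement; see e.g. Kunen for the precise formulation.)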
This kind of "local-to-global-given-global" mechanism is actually something we see all the time - once we replace "global" with "future". For example:
If I'm playing chess, the "local" fact that I have a king and a rook against a king and it's my turn tells me the "global/future" fact that I'm going to win - given the "global/future" fact that I'm going to play optimally.
Suppose I'm seeing, digit-by-digit, the decimal expansion of some number $\theta$. Then I know right away (the "trivial" amount of "local" information) the "global/future" fact that I'll eventually see a digit which is not $3$ ... given the "global/future" fact that $\theta$ is guaranteed to be irrational.
The "global-from-local" principle in forcing (which is one of the two forcing theorems) is really just another example of this phenomenon. It is more mysterious at first for two reasons:
The relevant kind of "global guarantee" is $(i)$ rather technical (genericity) and $(ii)$ surprisingly uniform (it works for all appropriately-expressible global questions).
Related to point $(ii)$ above, the global facts we're reducing to local facts via a global guarantee (genericity) are in general very complicated. In the example above, it was obvious how even a little genericity guaranteed that $G$ had infinitely many $1$s; the connection between genericity and the continuum hypothesis is much less clear.
But the underlying nature of the situation is the same.
The above directly answers your first question. It also points the way towards the answer to the second: it's hidden in my observation that
The relevant kind of "global guarantee" is [...] surprisingly uniform (it works for all appropriately-expressible global questions)
(changed emphasis mine). The point is that the question of whether the generic literally is a given specific thing is not so expressible, so the paradox you describe doesn't occur: for general $X$ (e.g. $X\not\in M$), the fact "$G$ is fully $M$-generic" is not enough to reduce the question "Is $G=X$?" to a local question about $G$.
The forcing theorem doesn't say that genericity reduces all global information to local information; it only applies to some things, namely those expressible in the forcing language.
Let me end with a largely-unrelated but perhaps worthwhile minor point: when you write
...determine all expressions about $G$ before $G$ itself is fully known,
the phrasing is ambiguous in a way which might be adding confusion (and even if you're not having an issue at this point, another reader might). So let me clarify: each individual fact about $G$ winds up being determined at some stage during the construction of $G$ (and in particular before $G$ is "completed"), but there is no stage during the construction where all facts about $G$ have been determined.
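One way to make this precise: for each sentence $\varphi$ of the forcing language, the set $$D_\varphi=\{p: p\Vdash\varphi\text{ or }p\Vdash\neg\varphi\}$$ is dense, so a generic $G$ meets $D_\varphi$ and $\varphi$ gets decided at some "stage" $p\in G\cap D_\varphi$ - but since different sentences get decided by different conditions, there is no single stage at which everything has been decided.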
Best Answer
Just found this:
Is weak forcing a semantic relation?
and the solution is in the associated comments and Carl Mummert's answer.
So based on the above post an example would be :
Assume it is not the case that $p \Vdash A$, i.e. $p \nVdash A$.
Cohen's forcing clause for negation, applied to $\neg A$, gives: $$ p \Vdash \neg (\neg A) \iff \forall Q \supseteq p\,:\, Q \nVdash \neg A \tag{1}$$
Since Cohen forcing is intended to be negation complete (Cohen, Lemma 3, p. 119) and consistent (Cohen, Lemma 1, p. 118), suppose in addition that the conditions forcing $A$ are dense below $p$: every $Q \supseteq p$ has an extension $q \supseteq Q$ with $q \Vdash A$. (Any such $q$ properly extends $p$, since we are assuming $p \nVdash A$.)
Then no $Q \supseteq p$ can force $\neg A$: such a $Q$ would have an extension forcing $A$, which the negation clause forbids.
So using (1), $p \Vdash \neg(\neg A)$, or equivalently $p \Vdash \neg\neg A$ - even though $p \nVdash A$.
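As a sanity check, clause (1) can be simulated on a toy finite fragment of a forcing poset. The poset and the atomic assignment below are hypothetical choices of mine (not Cohen's definitions), picked only so that the root condition forces $\neg\neg A$ without forcing $A$:

```python
# Toy finite fragment of Cohen-style forcing: conditions are binary
# strings of length <= 2, and q extends p iff q has p as an initial
# segment.  (Poset and atomic assignment are illustrative only.)

CONDITIONS = ["", "0", "1", "00", "01", "10", "11"]

def extends(q, p):
    """q is an extension of p in the forcing order."""
    return q.startswith(p)

def forces_atomic(q):
    """Toy atomic statement A, stipulated to be forced exactly by the
    length-2 conditions.  This is persistent: every extension of a
    condition forcing A also forces A."""
    return len(q) == 2

def forces_not(p, forces):
    """Negation clause: p forces ~B iff no extension of p forces B."""
    return not any(forces(q) for q in CONDITIONS if extends(q, p))

def forces_not_not(p):
    """Clause (1): p forces ~~A iff no extension of p forces ~A."""
    return forces_not(p, lambda q: forces_not(q, forces_atomic))

root = ""
print(forces_atomic(root))              # False: root does not force A
print(forces_not(root, forces_atomic))  # False: root does not force ~A
print(forces_not_not(root))             # True:  root forces ~~A
```

Here the conditions forcing $A$ (the length-2 strings) are dense below the root, so every condition fails to force $\neg A$, and (1) makes the root force $\neg\neg A$ - while the root itself does not force $A$.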