If this doesn't completely answer your question, I can expand it later if you ask a follow-up in a comment. The short answer is that there's a difference between letting a formula have infinite length and letting the interpretation (and truth value) of a formula depend on infinitely many assignments. You make a clear split between (i) the formal language and its deductive system, viewed as mechanical manipulations of meaningless strings of symbols, (ii) the structures that you interpret the language as talking about, and (iii) the interpretations themselves (the mappings from (i) to (ii)). Very roughly, restricting things to be finite keeps them simpler, so there may be advantages to confining the infinite bits to one or two of the steps above. It turns out that this choice does make a difference, and confining the infinity to the semantic side is exactly what introducing quantifiers lets you do.
To answer your questions more directly:
(1) Yes, in some sense, but where these infinite things are and how precisely they are infinite (i.e., what their structure is) is another matter. In FOL, they are relegated to the metalanguage and metatheory.
(2) This seems like a reasonable way to think of things, but it depends on the details of what you mean. De Morgan's laws are about relationships between the logical operators, and those operators are exactly the same in propositional and predicate/first-order logic, so De Morgan's laws hold in FOL exactly as they do in propositional logic. If what you want to point out is a relationship among the concepts of negation, conjunction, and disjunction as they appear in the different steps above (in the object language and in the metalanguage), and how they relate to quantification, then yes, you have the right idea. The existential and universal quantifiers are related in the way you point out precisely because of how negation, conjunction, and disjunction are related in the metatheory. You might even be able to trace this idea back to Aristotle and company.
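To make that relationship concrete for a finite domain, here is a small Python sketch (the predicate `P` is a made-up example, not from the question): over a finite domain, $\lnot\forall x.P(x)$ and $\exists x.\lnot P(x)$ come out the same, and the "metatheoretic" step is just De Morgan applied to a finite conjunction/disjunction.

```python
# Illustrative sketch: over a finite domain, the quantifier duality
# reduces to De Morgan's law on a finite conjunction/disjunction.
domain = [-2, -1, 0, 1, 2]

def P(x):
    return x >= 0  # an arbitrary example predicate

not_forall = not all(P(x) for x in domain)  # ¬∀x.P(x)
exists_not = any(not P(x) for x in domain)  # ∃x.¬P(x)
print(not_forall, exists_not)
```

Over an infinite domain the formula stays finite; it is the metatheory (the semantics of `all`/`any`, so to speak) that quantifies over infinitely many elements.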
Cheers,
Rachel
Edit to answer comment: Yes, the distinction between finitary and infinitary logics usually has to do with the formal language and formal deduction part. You can allow formulas or proofs (or both) of infinite length.
The same distinction between formal, syntactic types of things and meaningful, semantic types of things is also applied to propositional logic. It is not always brought to everyone's attention when studying propositional logic because it's a simpler system, and the pieces are related in a way that makes the distinction not of much consequence. The thing to note is that the same steps are happening in propositional logic, but humans don't need to be told how to do them because using natural language has already taught them how to do this mapping between form and meaning. It's more like natural language has made the steps invisible, and they need to be pointed out. Well, also, people don't consciously know how they use natural language, and that is where the work is -- to make these things explicit.
If you're interested in logic and a bit of model theory, two very good books are Hodges' *Logic* and Machover's *Set theory, logic, and their limitations*. Hodges is incredibly funny and insightful, and Machover is very good about pointing out clearly and thoroughly the relationships just touched on.
The last two should have the same answer: the second one, $\lnot\forall x.P(x)$, says that it is not true that $P(x)$ holds for every $x$, while the third one, $\exists x.\lnot P(x)$, says that there exists an $x$ for which $P(x)$ does not hold. These mean exactly the same thing. "Not every crow is black" is the same as "There is a crow that is not black."
But your answer for the second one is not correct.
Mouse over for hint:
$\lnot\forall x.P(x)$ says that it is not true that every element of the domain satisfies $P$. But $P(-2)\lor P(-1)\lor\ldots\lor P(2)$ says that $P(x)$ is true for -2, or for -1, or…
EDIT: Now your second one is correct, if you and I have the same idea for the part you indicated by "…", but it could be much simpler.
EDIT: Now your second one is incorrect again. $\lnot\forall x.P(x)$ says that $P(x)$ is not true for every $x$. It does *not* say that $P(x)$ is true for any $x$; it might be false for all $x$.
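Here is a tiny Python check of that last point (the always-false predicate is a made-up example): $\lnot\forall x.P(x)$ can hold even when $P(x)$ is true for no $x$ at all.

```python
# Made-up example: P is false for every element of the finite domain,
# so ¬∀x.P(x) holds, and yet P(x) is true for no x.
domain = [-2, -1, 0, 1, 2]
P = lambda x: False

not_forall = not all(P(x) for x in domain)   # ¬∀x.P(x)
true_somewhere = any(P(x) for x in domain)   # ∃x.P(x)
print(not_forall, true_somewhere)
```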
Best Answer
A and B are correct. C can be read as "There is at least one $x$ for which $P(x)$ is not true." This is the negation of part B, which says "$P(x)$ is true for all $x$."
Therefore part C is
$\neg(P(-2) \wedge P(-1) \wedge P(0) \wedge P(1) \wedge P(2))$
This can also be written as
$\neg P(-2) \vee \neg P(-1) \vee \neg P(0) \vee \neg P(1) \vee \neg P(2)$
These two statements are equivalent, and are both correct.
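If you want to convince yourself mechanically, a brute-force check over all $2^5$ truth-value assignments to $P(-2),\ldots,P(2)$ (a quick sketch, not part of the original argument) confirms that the two forms always agree:

```python
from itertools import product

domain = [-2, -1, 0, 1, 2]
checked = 0
# Try every possible assignment of truth values to P(-2), ..., P(2).
for values in product([False, True], repeat=len(domain)):
    P = dict(zip(domain, values))
    lhs = not all(P[x] for x in domain)   # ¬(P(-2) ∧ ... ∧ P(2))
    rhs = any(not P[x] for x in domain)   # ¬P(-2) ∨ ... ∨ ¬P(2)
    assert lhs == rhs
    checked += 1
print(f"equivalent in all {checked} cases")
```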