Gosh. I wonder if those recommending Bourbaki have actually ploughed through the volume on set theory, for example. For a sceptical assessment, see the distinguished set theorist Adrian Mathias's very incisive talk https://www.dpmms.cam.ac.uk/~ardm/bourbaki.pdf
Bourbaki really isn't a good source on logical foundations. Indeed, elsewhere, Mathias quotes from an interview with Pierre Cartier (an associate of the Bourbaki group) which reports him as admitting
'Bourbaki never seriously considered logic. Dieudonné himself was very vocal against logic'
-- Dieudonné being very much the main scribe for Bourbaki. And Leo Corry and others have pointed out that Bourbaki in their later volumes don't use the (in fact too weak) system they so laboriously set out in their Volume I.
Amusingly, Mathias has computed that (in the later editions of Bourbaki) the term in the official primitive notation defining the number 1 will have
2409875496393137472149767527877436912979508338752092897
symbols. It is indeed a nice question what cognitive gain in "security of foundations" (for, e.g., our belief that 1 + 1 = 2) could come from defining numbers in such a system!
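For contrast, here is a small sketch of the far more economical von Neumann coding of the naturals as pure sets (0 is the empty set, and n + 1 = n ∪ {n}). This is the coding standard in ZFC-based treatments, not Bourbaki's epsilon-calculus primitive notation, whose terms blow up as Mathias computed; the function name is my own.

```python
def von_neumann(n):
    """Return the natural number n in the von Neumann coding,
    as a nested frozenset: 0 = {} and n + 1 = n ∪ {n}."""
    current = frozenset()              # 0 is the empty set
    for _ in range(n):
        current = current | {current}  # successor: n ∪ {n}
    return current

one = von_neumann(1)
two = von_neumann(2)
print(len(one))     # 1: the set {∅} has one element
print(len(two))     # 2: the set {∅, {∅}}
print(one in two)   # True: in this coding, m < n iff m ∈ n
```

Note the pleasant side effect of the coding: each natural n is literally a set with n elements, namely all the smaller naturals.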
Many students (myself included, some years ago) are confused about the aim of logic. It is certainly not the aim of logic to provide a rigorous framework for doing all of mathematics in some pure manner, free of any form of intuition or axiomatisation. In particular, it is not the aim of logic to define the meaning of phrases such as "for all" or "there exists" at any level other than the intuitively obvious one. It is simply of little interest to do so. (That is actually a big fat lie: there are various flavours of logic in which the meaning of, e.g., "there exists" is defined differently from the classical one, and some mathematicians object to the classical definition for many plausible reasons. Such issues obviously fall under the umbrella of foundations, since they concern the study of different variants at a very low level: the very basic building blocks of all arguments.)
So, what is logic about, then? Well, logic comprises several fields, so let us look at just one: model theory. Model theory can be roughly described as the study of the expressive power of formal languages.

For a long time, mathematicians debated the correctness of claims subjectively. Axioms were treated as self-evident truths, indisputably so. Famously, Euclid's fifth postulate was long argued to be an evident axiom of geometry, by people who were literally walking on a counterexample. Developments in analysis and other areas demanded a much more intricate understanding of the role of axioms, definitions, and the structure of proof. The old axiomatic, platonistic approach was replaced by a formalist one: the axioms are just sentences chosen to hold, rather than externally obvious truths, and the practice of mathematics is the study of what flows from the axioms. Of course, when Hilbert said that mathematics is just a formal game, he (I presume) did not mean that when we actually do mathematics we randomly choose some axioms and see what happens; that would be futile. Instead, the formality is there, practised informally by human beings. Still, it cannot be disputed that there is a formal game underneath, and that formal aspect can be studied.

So along comes model theory: a formal definition of languages, predicates, statements, etc., together with their semantics. This sets the ground for studying the relationship between the language we use to study something and the actual properties of that something. Few things are more foundational than understanding the strengths and limitations of the language we use in our quest to understand mathematical objects.
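The syntax/semantics split described above can be made concrete even in the smallest case, propositional logic: formulas are syntactic objects, truth assignments play the role of models, and evaluation is the semantics. A minimal sketch (the tuple encoding and names are my own):

```python
from itertools import product

def evaluate(formula, assignment):
    """Truth value of a formula in a model, where a model is
    just a dict mapping variable names to booleans."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "implies":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(f"unknown connective: {op}")

# p -> (q -> p): a formula that is true in EVERY model, i.e. a tautology.
f = ("implies", ("var", "p"), ("implies", ("var", "q"), ("var", "p")))
models = [dict(zip("pq", values)) for values in product([True, False], repeat=2)]
print(all(evaluate(f, m) for m in models))  # True
```

Nothing in `evaluate` knows that `f` is a tautology; that is a fact about the relationship between the syntax and all models at once, which is exactly the kind of fact model theory studies.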
Gödel's completeness and incompleteness theorems are remarkable answers to such foundational questions as the following. If I have found a proof of a property from some axioms, does the property hold in every model of the axioms? (Answer: yes, by a straightforward induction following the definitions of the semantics; this is soundness.) If a property is true (semantics!) in every model of some axioms, does there exist a proof (a syntactic thing!) of the property from the axioms? (Answer: yes; this is the completeness theorem.) Given a model of interest, is it possible to capture its semantics (that is, precisely the set of statements that hold in it) as exactly the theorems provable from a recognisable set of axioms? (Answer: it depends on the language, but if the model is interesting enough and the language is rich enough, then no; this is the incompleteness theorem.)
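The semantic side of these questions, "true in every model of the axioms", can be checked by brute force in the propositional case, where a model is just a truth assignment. A self-contained sketch (encoding formulas as functions from assignments to booleans; the names are my own):

```python
from itertools import product

# Formulas as functions from a model (dict: variable -> bool) to bool.
axiom   = lambda m: (not m["p"]) or m["q"]   # p -> q
premise = lambda m: m["p"]                   # p
goal    = lambda m: m["q"]                   # q

def entails(assumptions, conclusion, variables):
    """Gamma |= phi: the conclusion holds in every model
    in which all the assumptions hold."""
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(a(model) for a in assumptions) and not conclusion(model):
            return False  # found a countermodel
    return True

print(entails([axiom, premise], goal, ["p", "q"]))  # True: {p -> q, p} |= q
print(entails([axiom], goal, ["p", "q"]))           # False: p -> q alone does not force q
```

In first-order logic there are infinitely many models and no such exhaustive check is possible; the content of the completeness theorem is that the syntactic notion of proof nonetheless captures this semantic relation exactly.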
I will stop here, though there is much more to say. Perhaps the discussion can continue in the comments, if you have further questions.
Best Answer
There are different ways to build a foundation for mathematics, but I think the closest to being the current "standard" is:
1. Philosophy (optional)
2. Propositional logic
3. First-order logic (a.k.a. "predicate logic")
4. Set theory (specifically, ZFC)
5. Everything else
When rigorously followed (e.g., in a Hilbert system), classical logic does not depend on set theory in any way (rather, it is the other way around), and I believe the only use of such languages in the low-level theories is to prove things about the theories (e.g., the deduction theorem) rather than in them. (While proving such metatheorems can make your work easier afterwards, it is not strictly necessary.)
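To illustrate that a Hilbert-system derivation needs no set theory at all, here is the classic five-line proof of p -> p from the axiom schemes K and S plus modus ponens, checked mechanically. The tuple encoding is my own; the point is only that everything here is finite symbol manipulation.

```python
def K(a, b):
    # Axiom scheme K: A -> (B -> A)
    return ("->", a, ("->", b, a))

def S(a, b, c):
    # Axiom scheme S: (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    return ("->", ("->", a, ("->", b, c)),
                  ("->", ("->", a, b), ("->", a, c)))

def mp(implication, premise):
    # Modus ponens: from A -> B and A, conclude B.
    op, antecedent, consequent = implication
    assert op == "->" and antecedent == premise, "modus ponens does not apply"
    return consequent

p = "p"
step1 = K(p, ("->", p, p))     # p -> ((p -> p) -> p)
step2 = S(p, ("->", p, p), p)  # step1 -> ((p -> (p -> p)) -> (p -> p))
step3 = mp(step2, step1)       # (p -> (p -> p)) -> (p -> p)
step4 = K(p, p)                # p -> (p -> p)
step5 = mp(step3, step4)       # p -> p
print(step5)                   # ('->', 'p', 'p')
```

Every step is either an instance of an axiom scheme or a modus ponens application, and `mp` refuses to fire unless the shapes match, so the final line really is a theorem of the system.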