It is more helpful to view "predicate logic" as a taxonomic term (the same goes for the term "logic" itself). So the question becomes: what properties of a logic cause us to call it a "predicate logic"?
That's a hard question partially because "logic" itself is so broad. We can identify at least a few common properties, but not every predicate logic will have all of them. The basic examples, of course, are the logics that are called "first-order logic" in the literature. But there are also higher-order logics, modal predicate logics, temporal predicate logics, etc.
Here are a few common traits:
- Predicate logics may have variables that range over "individual" objects. There may be more than one sort of "individual".
- Predicate logics may have variables that range over higher types or predicates, with syntax to match.
- Predicate logics often have quantifiers over the individuals and other sorts of objects.
- Predicate logics often come with semantics in which the predicate symbols in formulas are interpreted as relations on a set of "individuals".
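As a concrete illustration of these traits, consider the first-order sentence

$$\forall x\, \exists y\; R(x,y).$$

Here $x$ and $y$ are variables ranging over individuals, $\forall$ and $\exists$ are quantifiers over those individuals, and a semantics interprets the predicate symbol $R$ as a binary relation on a set of individuals. Taking the individuals to be the natural numbers and interpreting $R(x,y)$ as $x < y$ makes the sentence true (every number has a strictly larger one), while interpreting $R(x,y)$ as $x > y$ makes it false, since $0$ has no strictly smaller natural number.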
Many students (myself, some years ago, included) are often confused about the aim of logic. It is certainly not the aim of logic to provide a rigorous framework for doing all of mathematics in some pure manner, without invoking any form of intuition or axiomatisation. In particular, it is not the aim of logic to define the meaning of such phrases as *for all* or *there exists* at any level other than the intuitively obvious one. It is simply of little interest to do so (that is actually a big fat lie, as there are various flavours of logic in which the meaning of, e.g., *there exists* is defined differently from the classical one, and some mathematicians object to the classical definition for many plausible reasons; such issues obviously fall under the umbrella of foundations, since they concern a study of different variants at a very low level, the very basic building blocks of all arguments).
So, what is logic about, then? Well, logic comprises several fields, so let's just look at one: model theory. Model theory can be described, roughly, as the study of the expressive power of formal languages.

For a long time, mathematicians debated the correctness of certain objects subjectively. Axioms were treated as self-evident truths, indisputably true statements. Famously, Euclid's fifth postulate in geometry was debated, and argued by many to be an evident axiom of geometry, while they were literally walking on a counterexample. Developments in analysis and other realms demanded a much more intricate understanding of the role of axioms, definitions, and the structure of proof. The old axiomatic, Platonistic approach was replaced by a formalist approach: the axioms are just sentences chosen to hold, rather than somehow being externally obvious, and the practice of mathematics is the study of that which flows from the axioms. Of course, when Hilbert said that mathematics is just a formal game, he (I presume) did not mean that when we actually do mathematics, we just randomly choose some axioms and see what happens. That would be futile. Instead, the formality is there, practiced informally by human beings. However, it cannot be disputed that it is a formal game, and that formal aspect can be studied.

So, along comes model theory: a formal definition of language, predicates, statements, etc., together with their semantics. This sets the ground for studying the relationship between the language we use in order to study something and the actual properties of that something. Few things are more foundational than understanding the strengths and limitations of the language we use in our quest to understand mathematical objects.
Gödel's completeness and incompleteness theorems are remarkable examples of amazing answers to such foundational questions as the following. If I found a proof of a property from some axioms, does it mean that the property holds in every model of the axioms? (Answer: yes, by a simple induction following the definitions of the semantics; this is soundness.) If a property is true (semantics!) in every model of some axioms, does it follow that there exists a proof (a syntactic thing!) of the property from the axioms? (Answer: yes, this is the completeness theorem.) Given a model of interest, is it possible to capture its semantics (i.e., to capture precisely all of the statements that hold in it) as precisely those theorems provable from a recognisable set of axioms? (Answer: this depends on the language, but if the model is interesting enough and the language is rich enough, then the answer is no; this is the incompleteness theorem.)
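To state the first two questions compactly: writing $T \vdash \varphi$ for "$\varphi$ is provable from the axioms $T$" and $T \models \varphi$ for "$\varphi$ holds in every model of $T$", soundness and completeness together say

$$T \vdash \varphi \quad\Longleftrightarrow\quad T \models \varphi.$$

The incompleteness theorem, in one standard formulation, then says that no recursively enumerable, consistent set of axioms extending basic arithmetic proves exactly the sentences true in the standard model $\mathbb{N}$.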
I will stop here, though there is much more to say. Perhaps the discussion can continue in the comments, if you have further questions.
Best Answer
Once you have a syntax for your formal logic, you know what the rules are for putting together symbols in your language, so you know what the valid strings are.
Next we need to know

- what the strings mean, and
- what we can do with these strings.
The first one is given by the satisfaction relation $\models$, which tells you when a string is satisfied, or in other words, when a string is "true".
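To make the $\models$ side concrete, here is a minimal sketch of a satisfaction relation for propositional logic, a deliberately simple special case. The tuple encoding of formulas below is my own illustrative choice, not anything fixed by the discussion above.

```python
# A toy satisfaction relation for propositional logic.
# Formulas are nested tuples: ("var", name), ("not", f), ("and", f, g), ("or", f, g).

def satisfies(assignment, formula):
    """Return True when the truth assignment satisfies the formula."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not satisfies(assignment, formula[1])
    if op == "and":
        return satisfies(assignment, formula[1]) and satisfies(assignment, formula[2])
    if op == "or":
        return satisfies(assignment, formula[1]) or satisfies(assignment, formula[2])
    raise ValueError(f"unknown connective: {op}")

# The formula p ∧ ¬q is satisfied by {p: True, q: False} but not by {p: True, q: True}.
f = ("and", ("var", "p"), ("not", ("var", "q")))
print(satisfies({"p": True, "q": False}, f))  # True
print(satisfies({"p": True, "q": True}, f))   # False
```

The point is only that "truth" here is a precise mathematical relation between a structure (the assignment) and a string (the formula), defined by recursion on syntax.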
The second one is given by the rules of inference of the system, also called the axioms, deduction rules, or proof calculus. These describe the relation $\vdash$, and tell you what are the valid rules for proving new statements in the logic. This is the axiomatization of the logic.
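The $\vdash$ side can be sketched the same way: purely syntactic manipulation of strings, with no reference to truth. Below is a minimal example that closes a set of formulas under a single deduction rule, modus ponens; the `("imp", A, B)` encoding of "$A$ implies $B$" is again an illustrative assumption.

```python
# A toy provability relation: close a set of axioms under modus ponens
# (from A and ("imp", A, B), infer B). Everything here is syntax:
# we never evaluate formulas, we only rewrite and collect strings.

def derive(axioms, max_rounds=10):
    """Return every formula derivable from the axioms by modus ponens."""
    theorems = set(axioms)
    for _ in range(max_rounds):
        new = set()
        for f in theorems:
            if isinstance(f, tuple) and f[0] == "imp" and f[1] in theorems:
                new.add(f[2])
        if new <= theorems:
            break  # fixed point reached: nothing new is derivable
        theorems |= new
    return theorems

# Axioms: p, p → q, q → r.  Modus ponens yields q, and then r.
axioms = {"p", ("imp", "p", "q"), ("imp", "q", "r")}
print("r" in derive(axioms))  # True
```

Soundness and completeness are then claims relating this kind of syntactic closure to the semantic relation $\models$.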
So on the one hand we have a notion of truth and on the other hand we have a notion of provability, and of course we would like these two to correspond to each other, i.e. we want the system to be sound (meaning that if something can be proven, then it is true) and complete (meaning that if something is true, then it can be proven).
So to axiomatize a logic means to start with a formal language for your logic and a satisfaction relation that tells you what those strings mean, and from this to create a set of axioms or deduction rules such that the logic is at least sound, and hopefully also complete.