About intuition.
Aristotle introduced a notion of consequence along the following lines. Suppose $A$ and $B$ are sentences, each of them either true or false. We study the relation :
(1) 'If A is true, then B has to be true too'.
Many logicians reckoned that the main use of logic is to derive conclusions from premises, with the obvious goal that if we start from true premises, and we use “logical rules”, we will derive true conclusions. This idea of logic gives rise to the relation :
(2) 'From A, we can prove B'.
Now we try to translate all this into modern terms.
(1) translates into the relation of logical consequence $A \vDash B$ :
'For every interpretation $\mathcal I$, if A is true under $\mathcal I$, then B is true under $\mathcal I$.'
We “generalize” it by putting, in place of the sentence A, a set $\Gamma$ of sentences :
$\Gamma \vDash \varphi$
which means that :
'For every interpretation $\mathcal I$, if all of $\Gamma$ are true under $\mathcal I$, then $\varphi$ is true under $\mathcal I$.'
Lastly, we have the “special case” when $\Gamma = \emptyset$ : $\vDash \varphi$ means that the sentence $\varphi$ is valid (in propositional logic, a tautology), i.e. it is true under every interpretation.
In this case, we also call it a logically (or universally) valid sentence, meaning that it is true “under all possible circumstances”.
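As a concrete illustration, in propositional logic the relation $\Gamma \vDash \varphi$ can be checked by brute force over all $2^n$ valuations of the atoms. The following sketch tests the definition above; the formula encoding and the helper name `entails` are my own illustrative choices, not standard notation :

```python
from itertools import product

# Formulas are encoded as functions from a valuation (a dict mapping
# atom names to truth values) to a bool.

def entails(gamma, phi, atoms):
    """Check Gamma |= phi by brute force over all 2^n valuations."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(g(v) for g in gamma) and not phi(v):
            return False  # counterexample: Gamma true, phi false
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
imp = lambda a, b: lambda v: (not a(v)) or b(v)  # material implication

# |= p -> (q -> p) : a tautology (the case Gamma = empty set)
print(entails([], imp(p, imp(q, p)), ["p", "q"]))   # True

# {p, p -> q} |= q : modus ponens is semantically valid
print(entails([p, imp(p, q)], q, ["p", "q"]))       # True

# p |= q fails : the valuation with p true and q false is a counterexample
print(entails([p], q, ["p", "q"]))                  # False
```

The last check fails precisely because one interpretation makes all the premises true and the conclusion false, which is exactly what the definition of $\vDash$ forbids.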
For (2) [see Neil Tennant, Natural Logic (1978), page 5] : in order to demonstrate the validity of an argument, one needs a proof.
A proof of an argument is a sequence of steps leading from its premises, taken as starting points, to its conclusion as end. Within a proof, each step, or 'inference', must be obviously valid.
A system of proof is a codification of these obviously valid kinds of inference, which we call rules of inference. A proof in accordance with these rules must, in order to meet the demands of certainty, satisfy the following conditions :
(i) It must be of finite size.
(ii) Every one of its steps must be effectively checkable.
A system of this type is a logical calculus (i.e. a proof system) formalizing the relation of derivability :
$A \vdash B$
which is :
'B is derivable, using only the logical rules of the calculus, starting from A as premise'.
We “generalize” it by putting, in place of the sentence A, a set $\Gamma$ of sentences :
$\Gamma \vdash \varphi$.
Lastly, we have the “special case” when $\Gamma = \emptyset$ : $\vdash \varphi$ means that the sentence $\varphi$ is a theorem of the calculus (it is provable), i.e. it is derivable from no premises.
Obviously there must be no invalid proofs in our system : the system must be sound.
Moreover we wish to be able to prove any valid argument expressible in the language. The system, that is, must be complete (or adequate).
Soundness : if $\Gamma \vdash \varphi$, then $\Gamma \vDash \varphi$ (with the particular case : if $\vdash \varphi$, then $\vDash \varphi$).
Completeness : if $\Gamma \vDash \varphi$, then $\Gamma \vdash \varphi$ (with the particular case : if $\vDash \varphi$, then $\vdash \varphi$).
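Soundness is usually proved by induction on the length of derivations : the base case checks that every axiom instance is a tautology, and the induction step checks that the rules preserve truth. The sketch below carries out those checks by truth table for a Hilbert-style calculus; the choice of axiom schemas K and S with modus ponens as sole rule, and all helper names, are my own illustrative assumptions :

```python
from itertools import product

def tautology(f, atoms):
    """True iff f is true under every valuation of the given atoms."""
    return all(f(dict(zip(atoms, vs)))
               for vs in product([False, True], repeat=len(atoms)))

imp = lambda a, b: lambda v: (not a(v)) or b(v)
p = lambda v: v["p"]; q = lambda v: v["q"]; r = lambda v: v["r"]

# Base case, schema K : A -> (B -> A) is a tautology
print(tautology(imp(p, imp(q, p)), ["p", "q"]))          # True

# Base case, schema S : (A -> (B -> C)) -> ((A -> B) -> (A -> C))
s = imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))
print(tautology(s, ["p", "q", "r"]))                      # True

# Induction step : modus ponens preserves truth under every valuation —
# whenever A and A -> B are both true, B is true as well.
print(all(not (p(v) and imp(p, q)(v)) or q(v)
          for v in ({"p": a, "q": b}
                    for a in (False, True) for b in (False, True))))  # True
```

Since every line of a derivation is then true under any interpretation satisfying the premises, $\Gamma \vdash \varphi$ implies $\Gamma \vDash \varphi$.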
You are right in your definitions; but note that both symbols are used in the metatheory, i.e. they describe properties of, and relations between, objects of the theory (axioms, theorems) and between the theory and its models.
With a formal theory, like $\mathsf {ZFC}$ or $\mathsf {PA}$, based on first-order logic, it is useful to separate symbols of the language, like :
$\lnot, \rightarrow, \land, \lor$, all belonging to the common first-order language
$\in$, specific to set theory
$+$, specific to arithmetic,
and symbols used in the meta-language, like :
$\vdash$, for the relation of derivability in a theory, meaning there is a formal derivation of a theorem from the axioms of the theory
$\vDash$, for the relation of logical consequence, meaning that a sentence in the language of the theory is true in all models of the theory, i.e. in all interpretations satisfying the axioms.
The two relations are linked through Gödel's completeness theorem.
Expanded edition
The above is a précis of the "standard view", dating back to the 1950s and '60s; see :
We introduce, by a metamathematical definition, the notion of "formal deducibility under assumptions". Given a list $D_1, \ldots, D_l$ of (occurrences of) formulas, a finite sequence of one or more (occurrences of) formulas is called a (formal) deduction from the assumption formulas $D_1, \ldots, D_l$, if each formula of the sequence is either one of the formulas $D_1, \ldots, D_l$, or an axiom, or an immediate consequence of preceding formulas of the sequence. A deduction is said to be a deduction of its last formula $E$; and this formula is said to be deducible from the assumption formulas (in symbols, $D_1, \ldots, D_l \vdash E$), and is called the conclusion of the deduction. (The symbol "$\vdash$" may be read "yields".)
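Kleene's definition makes conditions (i) and (ii) above concrete : a deduction is a finite sequence, and each step can be checked mechanically. A minimal sketch of such a checker, assuming a Hilbert-style calculus with axiom schemas K and S and modus ponens as the only rule of immediate consequence (the tuple encoding and function names are illustrative, not Kleene's) :

```python
def imp(a, b):
    """Encode the formula a -> b as a tuple."""
    return ("->", a, b)

def is_axiom(f):
    """Instances of schema K : A -> (B -> A), or S :
    (A -> (B -> C)) -> ((A -> B) -> (A -> C))."""
    if not (isinstance(f, tuple) and len(f) == 3 and f[0] == "->"):
        return False
    a, rest = f[1], f[2]
    if isinstance(rest, tuple) and rest[0] == "->" and rest[2] == a:
        return True  # schema K
    try:
        (_, (_, a1, (_, b1, c1)), (_, (_, a2, b2), (_, a3, c2))) = f
        return a1 == a2 == a3 and b1 == b2 and c1 == c2  # schema S
    except (TypeError, ValueError):
        return False

def check_deduction(assumptions, sequence):
    """Kleene's clause : each formula is an assumption, an axiom, or an
    immediate consequence (modus ponens) of preceding formulas."""
    for i, f in enumerate(sequence):
        if f in assumptions or is_axiom(f):
            continue
        if any(sequence[j] == ("->", sequence[k], f)
               for j in range(i) for k in range(i)):
            continue  # from E and E -> F, infer F
        return False
    return True

p, q = "p", "q"
# p |- q -> p : assumption, axiom K, modus ponens
proof = [p, imp(p, imp(q, p)), imp(q, p)]
print(check_deduction([p], proof))        # True
print(check_deduction([], [imp(q, p)]))   # False: not deducible from nothing
```

The point of the sketch is condition (ii) : validating a candidate deduction requires only finitely many purely syntactic comparisons, with no appeal to interpretations.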
See also :
we define : $B$ is a valid consequence of $A_1, \ldots, A_m$, or in symbols $A_1, \ldots, A_m \vDash B$, if, [...] the formula $B$ is true in all those [structures] in which $A_1, \ldots, A_m$ are simultaneously true. The symbol $\vDash$ may be read "entails".
And see footnote, page 36 :
The symbol "$\vdash$" goes back to Frege (Begriffsschrift, 1879); the present use of it to Rosser (1935) and Kleene (1934). [...] The parallel use of "$\vDash$" is perhaps original with Kleene (1956).
See Rob's comment below ; the sequent calculus (Gentzen, 1934) is a formalization of the derivability relation. Gentzen's original syntax for sequents :
$\Gamma \rightarrow \Delta$
is often written with $\vdash$ in place of the auxiliary symbol $\rightarrow$ (or sometimes : $\Longrightarrow$).