An argument is a linguistic "object":
In logic and philosophy, an argument is a series of statements (in a natural language), called the premises (or premisses: both spellings are acceptable), intended to determine the degree of truth of another statement, the conclusion. The logical form of an argument in a natural language can be represented in a symbolic formal language.
The concept of a valid (deductive) argument was first defined by Aristotle:
A deduction is speech (logos) in which, certain things having been supposed, something different from those supposed results of necessity because of their being so. (Prior Analytics, I.2, 24b18–20)
Each of the “things supposed” is a premise (protasis) of the argument, and what “results of necessity” is the conclusion (sumperasma).
Aristotle's key discovery is that, in order to assess the validity of an argument, we have to consider its logical form.
To do this, it is useful to "formalize" an argument using variables (i.e., to reduce the linguistic argument to its "schematic" structure); see Syllogism:
Major premise: All $M$ are $P$.
Minor premise: All $S$ are $M$.
Conclusion: All $S$ are $P$.
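As an aside, this syllogistic schema can be checked by brute force over a small finite universe, reading "All $X$ are $Y$" as set inclusion. The sketch below is illustrative (the names `universe` and `subsets` are mine, not part of any standard library):

```python
from itertools import product

# Brute-force check of the syllogism "Barbara" over a small universe:
# if all M are P and all S are M, then all S are P.
universe = [0, 1, 2]

def subsets(xs):
    """Yield every subset of xs as a frozenset."""
    for mask in product([False, True], repeat=len(xs)):
        yield frozenset(x for x, keep in zip(xs, mask) if keep)

valid = all(
    S <= P                      # conclusion: All S are P
    for M in subsets(universe)
    for P in subsets(universe)
    for S in subsets(universe)
    if M <= P and S <= M        # both premises hold
)
print(valid)  # True: no counterexample exists
```

Of course, a finite check is no substitute for a proof; it only illustrates what "validity of the schema" means.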
Modern mathematical logic has improved this "formalization", using the modern symbolic notation developed for algebra.
Propositional logic is useful because it gives us a simplified model of language: it stands in for statements of natural language with propositional symbols (or variables). Thus, propositional logic provides a simple model for deductive arguments.
In propositional logic we define a formal counterpart of entailment (or: logical consequence) : $Γ⊨φ$.
The symbol reads: "formula $φ$ is a logical (or, in the case of propositional logic, tautological) consequence of the set of formulas $Γ$", and it is defined in terms of a semantic concept: truth assignments (or interpretations).
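For propositional logic, this definition can be checked mechanically by enumerating all truth assignments. A minimal sketch, in which formulas are modeled as Python functions of an assignment dict (the name `entails` is illustrative):

```python
from itertools import product

# Gamma |= phi iff no truth assignment makes every formula in Gamma
# true while making phi false.
def entails(gamma, phi, variables):
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(g(v) for g in gamma) and not phi(v):
            return False  # premises true, conclusion false: not entailed
    return True

# {p} |= p or q, but {p or q} does not entail p:
print(entails([lambda v: v["p"]], lambda v: v["p"] or v["q"], ["p", "q"]))  # True
print(entails([lambda v: v["p"] or v["q"]], lambda v: v["p"], ["p", "q"]))  # False
```

The second example fails because the assignment $p = \text{false}, q = \text{true}$ makes the premise true and the conclusion false.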
The semantic concepts are related to the syntactic ones: in setting up the logical calculus, we introduce rules of inference that allow us to infer a formula (the conclusion) from an initial set of formulas (the premises).
With them we define the relation of derivability: "$Γ ⊢ φ$ iff there is a derivation with conclusion $φ$ and with all hypotheses (or assumptions) in $Γ$."
A derivation, in turn, is a finite sequence of applications of rules of inference.
The two sides, semantic and syntactic, are linked by the properties of soundness and completeness.
In propositional logic, $p \to q$ is a formula: it is a conditional with $p$ as antecedent and $q$ as consequent.
$p → q,p ⊢ q$ is the formal counterpart of a valid argument (modus ponens), where $p → q$ and $p$ are the premises and $q$ is the conclusion.
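The semantic counterpart of modus ponens can be verified by checking all four truth assignments directly (encoding $p \to q$ as $\lnot p \lor q$, per the classical truth table):

```python
from itertools import product

# Modus ponens is semantically valid: in every assignment where both
# p -> q and p are true, q is also true.
valid = all(
    q
    for p, q in product([False, True], repeat=2)
    if ((not p) or q) and p   # both premises true
)
print(valid)  # True
```

Only the assignment $p = q = \text{true}$ makes both premises true, and there the conclusion holds.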
a) $1+1=2$ is not a tautology
A tautology in propositional logic is a formula that is true in every truth assignment.
In a broad sense, we can call "tautology" also a formula of predicate logic that is valid, i.e. true in every interpretation, like e.g. $\forall x (x=x)$.
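In the propositional case, being a tautology is decidable by enumerating assignments. A brute-force sketch (the function name `is_tautology` is mine):

```python
from itertools import product

# A formula (modeled as a Python function over an assignment dict)
# is a tautology iff it is true under every truth assignment.
def is_tautology(formula, variables):
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# (p -> q) or (q -> p) is a tautology; p or q is not:
print(is_tautology(lambda v: ((not v["p"]) or v["q"]) or ((not v["q"]) or v["p"]),
                   ["p", "q"]))  # True
print(is_tautology(lambda v: v["p"] or v["q"], ["p", "q"]))  # False
```

No such finite check exists for predicate-logic validity in general, which is one reason the propositional case is a useful simplified model.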
b) (and c) and d)) An argument is valid when, in every interpretation where all the premises are true, the conclusion is also true.
Equivalently, an argument is valid if there is no interpretation where all the premises are true and the conclusion is false.
Thus, for an argument with contradictory premises, there is no interpretation where the premises are all true, and a fortiori there is no interpretation where all the premises are true and the conclusion is false.
Conclusion: an argument with contradictory premises is valid.
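This "vacuous validity" can be seen concretely by searching for a counterexample to the argument with premises $p$ and $\lnot p$ and an arbitrary conclusion $q$ (a brute-force sketch):

```python
from itertools import product

# A counterexample would be an assignment making both premises
# (p and not p) true and the conclusion q false. There is none.
counterexamples = [
    (p, q)
    for p, q in product([False, True], repeat=2)
    if p and (not p) and not q   # premises true, conclusion false
]
print(counterexamples)  # [] -- the argument is (vacuously) valid
```

The condition `p and (not p)` can never hold, so the list of counterexamples is empty, exactly as the a fortiori step above states.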
Best Answer
An argument, as intended in the page you mentioned, consists of a collection of premises used to establish the truth of one (or more) conclusions.
If you were to model this in, say, propositional logic, you would call the premises $p_1, \dotsc, p_n$ and the conclusion $c$. Then, the argument would be encoded by the formula $$ p_1 \land \dotsb \land p_n \implies c $$ To attach a semantic meaning to this formula, i.e. if we want to establish whether it is true or false, we need two ingredients: an interpretation, which assigns a truth value to each variable, and the truth tables of the logical connectives.
If we call our interpretation $I$, we say that a formula is satisfied by $I$ (or true under that interpretation) if by assigning the truth values of all the variables as specified in $I$ and then computing the truth values of the logical connectives, the output is true.
As a mathematical convention (this is how implication is defined), a formula of the form $A \implies B$ is false when $A$ is true and $B$ is false; in all other cases, it is true. This means that, if the premise $A$ is false, the overall formula is true, no matter the value of $B$. But if $A$ is assumed to be true, then $B$ must be true for the argument to be true.
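The convention can be tabulated directly, encoding $A \implies B$ as $\lnot A \lor B$ (a small sketch):

```python
# Truth table of implication A -> B, computed as (not A) or B.
rows = [(A, B, (not A) or B)
        for A in (True, False)
        for B in (True, False)]
for A, B, result in rows:
    print(f"{A!s:5} {B!s:5} {result}")
```

The only row yielding `False` is the one with $A$ true and $B$ false, matching the definition above.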
This means that for an argument to be valid, you must be free to give any possible value to each of your variables and still obtain a true formula. This can be generalized to arbitrary formulas (not only those in argument form), and that is what the concept of tautology is about.
As an example, the formula $p \lor \neg p$ is a tautology: here, you only have two possible interpretations, one that makes $p$ true, the other makes $p$ false. You can choose any, and the formula turns out to be true.
Another example of a valid argument is $p \implies p$: assume that something is true; then, that thing is true. Here, you can again choose between two interpretations and no matter what your choice is, the formula is true.
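Both of these examples can be confirmed by checking the two possible interpretations (implication again encoded as $\lnot p \lor p$):

```python
# p or (not p) and p -> p are true under both interpretations of p.
excluded_middle = [(p, p or (not p)) for p in (True, False)]
self_implication = [(p, (not p) or p) for p in (True, False)]
print(excluded_middle)    # [(True, True), (False, True)]
print(self_implication)   # [(True, True), (False, True)]
```

In each case, the second component is true no matter which interpretation you choose, so both formulas are tautologies.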
Depending on the language you are using, there are different ways of defining formulas and truth values. You can distinguish between propositional formulas (the ones described above), first-order formulas (for example, $\exists{x}. p(x) \implies q(x)$), modal formulas and many others. You can also choose how many truth values there are: true and false; true, false and unknown; or infinitely many. Depending on the choices that you make here, the notions of truth and validity change. Above, I introduced the ones related to classical propositional logic.