In proof theory, a proof system such as the one presented in Ebbinghaus et al.'s book must strike a balance between at least two features:
It should be handy to use to derive valid formulas; let us call this feature object manageability.
It should be handy to use to derive properties of the proof system itself and its derivations; let us call this feature meta-manageability.
Unfortunately, the two features may conflict with each other. The following is a schematic and slightly superficial picture of the issue, but it helps to give you an idea.
On the one hand, object manageability pushes towards adding as many inference rules as possible. Indeed, if we can choose among many inference rules, such as $\land$ introduction, excluded middle, contraposition, explosion, chain syllogism, disjunctive syllogism, substitution, transitivity, etc., we have many tools for finding a formal derivation of a valid formula $A$, and this derivation is likely to be a natural formalization of our intuitive way of proving $A$. The more rules the proof system has, the handier it is for finding a derivation of a valid formula.
On the other hand, meta-manageability pushes towards adding as few inference rules as possible. Indeed, when you want to prove a property of the proof system, the fewer rules it has, the easier it is to check that property. For instance, a typical desired property of a proof system is soundness: the proof system can derive only valid formulas (tautologies, in propositional logic). To prove soundness, you essentially have to show that validity is preserved by each inference rule of the proof system: for each inference rule, if its premises are valid formulas, then its conclusion is a valid formula too. Of course, the proof of soundness is easier if there are few inference rules to check.
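To make the per-rule check concrete in the propositional case, here is a small sketch of my own (not from the book; `eval_formula` and `rule_is_sound` are hypothetical helper names). It brute-forces all truth assignments and checks that every assignment making the premises true also makes the conclusion true; this per-assignment condition is sufficient for the rule to preserve validity.

```python
from itertools import product

def eval_formula(f, v):
    """Evaluate formula f under assignment v (a dict from variable names to booleans).
    Formulas are variable names or tuples: ('not', f), ('and', f, g), ('or', f, g)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not eval_formula(f[1], v)
    if f[0] == 'and':
        return eval_formula(f[1], v) and eval_formula(f[2], v)
    if f[0] == 'or':
        return eval_formula(f[1], v) or eval_formula(f[2], v)
    raise ValueError(f"unknown connective: {f[0]}")

def rule_is_sound(premises, conclusion, variables):
    """Check that every assignment making all premises true also makes
    the conclusion true; this implies the rule preserves validity."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(eval_formula(p, v) for p in premises) and not eval_formula(conclusion, v):
            return False
    return True

# "and" introduction: from A and B, infer A ∧ B -- sound
print(rule_is_sound(['A', 'B'], ('and', 'A', 'B'), ['A', 'B']))   # True
# a bogus rule: from A ∨ B, infer A -- not sound
print(rule_is_sound([('or', 'A', 'B')], 'A', ['A', 'B']))         # False
```

With two variables there are only four assignments to check, which is why soundness proofs in propositional logic reduce to routine truth-table inspections, one per rule.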
How can we find a synthesis between object manageability and meta-manageability? A possible balance is given by the approach that Ebbinghaus essentially follows.
First, let us keep our proof system as small as possible, in the sense that it contains as few inference rules as possible (the proof system cannot be shrunk too far, though, because it has to be complete, i.e. strong enough to derive all valid formulas). These rules may seem clumsy to use; never mind, what matters is that they are few in number. This way, meta-manageability is assured. Let us call these inference rules the core rules. In Ebbinghaus's proof system, the core rules are Ass, Ant, PC, Ctr, $\lor$A, $\lor$S.
Second, to ensure object manageability, let us prove that many other inference rules are derivable in the proof system. An inference rule with premises $P_1, \dots, P_n$ and conclusion $C$ is derivable if there is a derivation with hypotheses $P_1, \dots, P_n$ and conclusion $C$ that only uses the core rules of the proof system. These derivable rules, such as $\land$ introduction, excluded middle, contraposition, explosion, chain syllogism, disjunctive syllogism, substitution, transitivity, etc., make the task of finding a derivation for a valid formula much easier, in the sense that they are closer to our intuitive reasoning. But each of them is just a "macro" for a chain of core rules. You can freely use the derivable rules to derive a valid formula, but the properties of the proof system are "encapsulated" in the core rules.
For practical purposes, there is no definitive answer to the question "How should one approach the proof that an inference rule is derivable?", given a set of core inference rules. In particular, this is true for the core rules of Ebbinghaus' proof system, which are really clumsy (in the "First digression" section here you can find a brief discussion). Being able to use the given inference rules in a smart way is a matter of practice. In general, the absence of a definitive answer to your question is the reason why proving something is a difficult task!
Since Ebbinghaus' core rules are really clumsy, my suggestion is the following: as soon as you want to use an inference scheme that seems natural and general enough, check whether it is derivable. This way, in your next derivations you can use this derivable inference rule instead of a clumsy chain of core rules.
For instance, are you able to prove that the following (very handy) inference rule is derivable in Ebbinghaus' system?
$$\begin{array}{l}
\Gamma \vdash A
\\
\Gamma, \, A \vdash B
\\
\hline
\Gamma \vdash B
\end{array}$$
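Before hunting for a derivation, it is a useful sanity check (not a proof of derivability, of course) to confirm the rule semantically: if every assignment satisfying $\Gamma$ satisfies $A$, and every assignment satisfying $\Gamma$ and $A$ satisfies $B$, then every assignment satisfying $\Gamma$ satisfies $B$. The following Python sketch of my own (the helper names `ev` and `entails` are hypothetical) brute-forces this on one concrete instance.

```python
from itertools import product

def ev(f, v):
    """Tiny evaluator: variables are strings; tuples are ('->', f, g) or ('and', f, g)."""
    if isinstance(f, str):
        return v[f]
    op, a, b = f
    if op == '->':
        return (not ev(a, v)) or ev(b, v)
    if op == 'and':
        return ev(a, v) and ev(b, v)
    raise ValueError(f"unknown connective: {op}")

def entails(gamma, phi, variables):
    """Semantic counterpart of a sequent: Γ ⊨ φ iff every assignment
    satisfying all formulas of Γ also satisfies φ."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(ev(g, v) for g in gamma) and not ev(phi, v):
            return False
    return True

# A concrete instance: Γ = {P, P -> Q}, A = Q, B = P ∧ Q
gamma, A, B = ['P', ('->', 'P', 'Q')], 'Q', ('and', 'P', 'Q')
variables = ['P', 'Q']
print(entails(gamma, A, variables))        # True:  Γ ⊨ A
print(entails(gamma + [A], B, variables))  # True:  Γ, A ⊨ B
print(entails(gamma, B, variables))        # True:  Γ ⊨ B, as the rule predicts
```

If a candidate rule failed such a semantic check, it could not be derivable in any sound system, so the check is a cheap filter before attempting the syntactic derivation.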
Best Answer
Natural deduction doesn't need semantics in order to "tick" - it can be thought of as a purely formal and meaningless "symbol game." But that's missing the point, of course: why is that symbol game actually interesting, while many (most?) others aren't? (This is really in my opinion the fundamental criticism of formalism; while I don't think it's unanswerable, it's certainly extremely difficult.)
The key point here is that in motivating natural deduction, all we really need is some vague ideas about semantics, not an actual semantic theory. One of our common intuitions about truth is that in order for "$a$ AND $b$" to be true, both $a$ and $b$ need to be true. On the basis of this, the rule that "$a\wedge b\vdash a$" is a "correct sequent" is reasonable. But note that we did not need to settle on a semantics to justify this: all we had to agree is that we'll never find a pair of sentences $a$ and $b$ such that $a\wedge b$ is true but $a$ is false.
The "story" I like to tell about formal systems vs. semantics begins with the following:
Of course, that's only a first approximation. The trouble with naive reasoning about truth is that those intuitions are often vague and self-contradictory; even worse is when you get two people in a room and they disagree (how about that law of the excluded middle?). So the story continues as:
Now let's go a bit deeper, and talk about how syntax and semantics interact.
General intuitions about truth are problematic, as per above. However, that doesn't mean that we need to completely abandon semantics itself as a silly thing. Broadly speaking, a semantics $\mathfrak{S}$ describes $(i)$ a class of "structures" and $(ii)$ a notion of "satisfaction" (= when a given sentence is true in a given structure). Every notion of semantics has a corresponding deductive system: the set of sequents $$\mathfrak{D}_\mathfrak{S}=\{\Gamma\vdash\varphi:\mbox{ $\varphi$ is true in every structure in which each sentence in $\Gamma$ is true}\},$$ where "true in" and "structure" are interpreted in the sense of the semantics $\mathfrak{S}$.
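The definition of $\mathfrak{D}_\mathfrak{S}$ can be computed outright for a toy semantics. In the following sketch of my own (not from the answer; the names `ev`, `subsets`, and `D` are made up for illustration), the "structures" are classical truth assignments over two variables, "satisfaction" is truth-table evaluation, and $\mathfrak{D}_\mathfrak{S}$ is enumerated by brute force over a small stock of formulas.

```python
from itertools import product, chain, combinations

def ev(f, v):
    """Satisfaction: formulas are variable names or ('not', f) / ('or', f, g) tuples."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'or':
        return ev(f[1], v) or ev(f[2], v)
    raise ValueError(f"unknown connective: {f[0]}")

variables = ['p', 'q']
formulas = ['p', 'q', ('not', 'p'), ('or', 'p', 'q')]
# the "structures" of this toy semantics: all four truth assignments
assignments = [dict(zip(variables, values))
               for values in product([False, True], repeat=len(variables))]

def subsets(xs):
    """All subsets of xs, as tuples."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

# D_S = { (Γ, φ) : φ is true in every structure in which each sentence of Γ is true }
D = {(gamma, phi)
     for gamma in subsets(formulas)
     for phi in formulas
     if all(ev(phi, v) for v in assignments
            if all(ev(g, v) for g in gamma))}

print((('p',), ('or', 'p', 'q')) in D)        # True:  p ⊨ p ∨ q
print(((), 'p') in D)                         # False: ⊭ p
print((('p', ('not', 'p')), 'q') in D)        # True:  a contradictory Γ entails anything
```

The last line already exhibits explosion as a fact *about* this semantics, before any deductive system is chosen; a proof system corresponds to the semantics exactly when it derives precisely the sequents in `D`.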
The interesting feature is that the converse also holds: "interesting" deductive systems (like natural deduction) have "interesting" semantics which "correspond" to them - where we say that a semantics $\mathfrak{S}$ corresponds to a deductive system $\mathfrak{D}$ iff $\mathfrak{D}_\mathfrak{S}=\mathfrak{D}$ (in the terminology of the field: iff $\mathfrak{D}$ is sound and complete with respect to $\mathfrak{S}$). Note that while each semantics corresponds to exactly one deductive system, a deductive system may correspond (in fact, will always correspond) to many different notions of semantics.
This fleshes out the story above, by adding yet another direction:
This puts a new spin on the old story by emphasizing that the semantic side need not be problematic all the time; it's only inherently problematic if we're too ambitious. The right story, in my opinion, is: