There's a bit of subtlety with what the HoTT book is saying. When it leaves out the eta rule, it is leaving out a judgmental equality rule:
$$\frac{p : A × B}{(\mathsf{fst}\ p , \mathsf{snd}\ p) \equiv p : A × B}$$
These are the rules that are used for syntactic equality of terms, and generate what it means for two things to be considered 'exactly' equal (for the purpose of checking that things are well-typed and so on). So, by omitting this rule, you cannot freely substitute $p$ for $(\mathsf{fst}\ p, \mathsf{snd}\ p)$, because they are considered distinct terms. And this means that (for example) a variable term $p : A × B$ is not exactly equal to a term built from a constructor $(M , N)$ for any $M$ and $N$. This also has semantic consequences, in that there can be semantic values of $A × B$ that are not produced by the constructor, provided it is possible to say how the various operations on products act on them.
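As a concrete aside (not from the book, and hedged as a sketch): in Agda, this is precisely the difference between defining products as a `data` type, which lacks judgmental eta, and as a `record`, which has it. The names with the `ʳ` marks are illustrative:

```agda
-- A sketch: in Agda, record types enjoy judgmental eta, while
-- data types do not. (Type stands in for a universe, e.g. Set.)
record _×ʳ_ (A B : Type) : Type where
  constructor _,ʳ_
  field
    fstʳ : A
    sndʳ : B

open _×ʳ_

-- For p : A ×ʳ B, the terms p and (fstʳ p ,ʳ sndʳ p) are
-- judgmentally equal, so an eta proof goes through by refl alone.
```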
The principle that to define a function $f : A×B → C$, it is sufficient to specify what it does on $(x,y)$, is the analogue of recursion on products (although products are not actually recursive/self-referential). Similarly, a dependently typed version for $f : Π_{p:A×B}\ C(p)$ is analogous to induction. One can instead specify that there is an eliminator $\mathsf{uncurry}$ that takes functions $g : A → B → C$ to functions $\mathsf{uncurry}\ g : A×B → C$, with the computation rule $\mathsf{uncurry}\ g\ (x, y) \equiv g\ x\ y$.
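For an ordinary (non-higher) product type, this eliminator is a one-liner in Agda-like notation (a sketch; the type and constructor are assumed in scope):

```agda
-- A sketch of the recursor for products. The computation rule
-- uncurry g (x , y) ≡ g x y holds by definition of the clause.
uncurry : {A B C : Type} → (A → B → C) → A × B → C
uncurry g (x , y) = g x y
```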
It is typical to justify induction to new students by saying that it asserts, "every product value is built from a constructor." However, since you're reading the HoTT book, you should be suspicious of this explanation, because not long from now you will learn about the identity induction principle:
$$\frac{x:A\quad R : Π_{y:A}\ (x=y) → \mathcal{U}\quad r : R\ x\ \mathsf{refl}\quad y:A\quad p : x = y}{\mathsf{J}_{x,R}\ r\ y\ p : R\ y\ p}$$
with computation rule:
$$\mathsf{J}_{x,R}\ r\ x\ \mathsf{refl} \equiv r$$
This is analogous to saying that to define a function:
$$f : \prod_{y:A,p:x = y} R\ y\ p$$
it suffices to give the case for $f\ x\ \mathsf{refl}$ (since $\mathsf{refl}$ is the only constructor). But, the whole point of HoTT is that not every value of the identity type need be $\mathsf{refl}$. There are additional values introduced by the univalence principle and higher inductive types (when you get to those).
So, you should not necessarily think of induction principles as supposing that every value is exactly equal to a constructed value. Rather, for inductive types, explaining how a function works on constructors is sufficient to derive how it works on any values that are not presented by constructors, should they happen to exist.
In the case of products, induction allows proving that there is an identity (as in a value of the identity type) between every value and one given by a constructor. This is what is proved later. But this still doesn't mean that every term is exactly/judgmentally equal to such a term, or that semantically every value is given by a constructor.
In fact, we can even create little models of this within HoTT using the aforementioned higher inductive types. For instance, in Agda I can write (note, Agda uses the opposite convention for $=$ vs $\equiv$ compared to what I was writing above):
data _×_ (A B : Type) : Type where
  _,_ : A -> B -> A × B
  _,,_ : B -> A -> A × B
  twist : ∀ x y → (x , y) ≡ (y ,, x)
So, this type has the usual $(x , y)$, and also a backwards $(y ,, x)$, plus a "higher" constructor declaring that there is an identity between forwards and backwards pairs. But, I can still define the normal-looking induction principle:
uncurry : ((x : A) -> (y : B) -> C (x , y)) -> (p : A × B) -> C p
uncurry f (x , y) = f x y
-- wrong: C (y ,, x) is not exactly equal to C (x , y)
-- uncurry f (y ,, x) = f x y
uncurry {C = C} f (y ,, x) = subst C (twist x y) (f x y)
uncurry {C = C} f (twist x y i) =
  transp (λ j → C (twist x y (i ∧ j))) (~ i) (f x y)
(Don't worry too much about the twist case.) I've included a (commented) wrong case where we try to define the $(y ,, x)$ case as $f\ x\ y$. This doesn't work, because $C (x , y)$ is not exactly equal to $C (y ,, x)$. But $C$ must respect identities, not just exact equality, so we can transport the $C (x , y)$ value along the $\mathsf{twist}$ identity to get a corresponding $C (y ,, x)$ value. So, even though not every value of $A × B$ here is exactly equal to some $(x , y)$, it suffices to explain what to do on $(x , y)$, because that completely determines what to do on other values. And in fact, we can define projections, and prove the eta identity, even though exact eta equality is simply false:
fst : A × B -> A
fst = uncurry (λ x _ → x)
snd : A × B -> B
snd = uncurry (λ _ y → y)
eta : ∀(p : A × B) → p ≡ (fst p , snd p)
eta = uncurry λ x y → refl
If this part didn't make much sense, don't worry about it. The book doesn't cover this until chapter 6.
Best Answer
It depends on what exactly you have in mind.
If you mean that you can deduce something like: $$Π(s : A + B). (Σ(a : A). \mathsf{Id}\ (\mathsf{inl}\ a)\ s) + (Σ(b : B). \mathsf{Id}\ (\mathsf{inr}\ b)\ s)$$ then it should follow from the induction principle. If this isn't in the HoTT book, then it probably just means that it wasn't a lemma they needed for anything else. The induction principle is sort of a generalization of this. To establish $P(s)$, it suffices to establish $P(\mathsf{inl}\ a)$ and $P(\mathsf{inr}\ b)$.
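In Agda-like notation, that deduction is just induction on $s$ (a sketch, assuming the usual $Σ$, sum, and identity types; the name `allCases` is illustrative):

```agda
-- A sketch: every s : A + B is identified (propositionally) with
-- some inl a or some inr b. Each clause holds by refl.
allCases : {A B : Type} (s : A + B)
         → Σ A (λ a → inl a ≡ s) + Σ B (λ b → inr b ≡ s)
allCases (inl a) = inl (a , refl)
allCases (inr b) = inr (b , refl)
```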
If you mean that there is some sort of judgmental, eta-like rule for sums, then this is usually not included in type theories for any inductive type, for technical reasons. Sums would be less problematic than most because they aren't recursive (so there'd be no worry about expanding forever), but eta rules involving inductive eliminators are kind of unwieldy.
If you mean that for any closed term $s : A + B$ there must either be a closed $a : A$ or closed $b : B$ such that $s$ is judgmentally equal to $\mathsf{inl}\ a$ or $\mathsf{inr}\ b$ respectively, then this property is called "canonicity." It is a desirable property when using the type theory as a programming language. However, postulates without a computational interpretation generally invalidate it. So, e.g. excluded middle causes it to fail, because there is no mechanism for deciding arbitrary propositions. And in fact, the univalence axiom in the book also breaks it, since there is no (complete) computational interpretation given for it (this can in principle be fixed, though).
If you define binary sums in terms of $Σ$ and $2$, then these scenarios reduce to analogous ones for those types. Judgmental eta rules are frequently used for $Σ$ (although I think the HoTT book does not include one), but the first, second, and third scenarios above will be similar for $2$ as for $+$.
The stuff about groups is not really relevant. It essentially means that groups do not form a model of type theory. You can actually encode the phenomenon in HoTT, though, because identity/path types act like groups. So, if you encode a group $G$ as a type with one point $\mathsf{base}$ such that the loop space $\mathsf{Id\ base\ base}$ is isomorphic to $G$, then I think their coproduct as groups is encoded by the wedge sum over their base points (page 257 of the book). If you have groups $G$ and $H$, then $G$ embeds as left paths, and $H$ embeds as right paths, and the added path between the left and right base points lets you compose them together.
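For concreteness, the wedge sum can be sketched as a higher inductive type in cubical Agda-like notation (the names `winl`, `winr`, `wglue` are illustrative, and `Type∙` is assumed to be a type of pointed types with projections `typ` and `pt`):

```agda
-- A sketch of the wedge sum A∙ ⋁ B∙: two pointed types glued at
-- their base points. Loops in A∙ land via winl, loops in B∙ via
-- winr, and wglue lets paths cross from one side to the other.
data _⋁_ (A∙ B∙ : Type∙) : Type where
  winl  : typ A∙ → A∙ ⋁ B∙
  winr  : typ B∙ → A∙ ⋁ B∙
  wglue : winl (pt A∙) ≡ winr (pt B∙)
```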