Here is a possible line of approach, lacking only the insight why
$(d+1)\frac{n}{b^d-1}$ must be $1$.
If the base $b$ representation of a number $n$ is cyclic of (exact) length $d\geq\lceil{\log_b n}\rceil$ (with strict inequality when the representation has leading zeros),
then the first $d$ consecutive multiples of $n$, $\{kn|1\leq k\leq d\}$,
exhaust (i.e. are in bijection with) all the cycles, which in turn
correspond bijectively with the repeating base-$b$ expansions of
$\frac{kn}{b^d-1}\in(0,1)$.
Their sum satisfies
$$
\frac{b^d-1}{b-1}s=
\frac{d(d+1)}{2}n
\qquad\text{where}\qquad
s=d\frac{b-1}{2}\frac{(d+1)n}{b^d-1}\in\mathbb{N}
$$
is the sum of the base-$b$ digits of $n$.
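As a numerical sanity check of this identity, here is a short sketch using the classic base-10 cyclic number $n=142857$ with $d=6$ (an illustrative choice; the text fixes no particular $n$):

```python
# Check the digit-sum identity (b^d - 1)/(b - 1) * s = d(d+1)/2 * n
# for the classic base-10 cyclic number n = 142857, d = 6.
b, d, n = 10, 6, 142857

s = sum(int(c) for c in str(n))          # digit sum of n in base 10

# Summing the d multiples kn column by column: each digit of n visits
# every position once, giving s times the repunit of length d.
lhs = (b**d - 1) // (b - 1) * s          # 111111 * 27
rhs = d * (d + 1) // 2 * n               # 21 * 142857
assert lhs == rhs
print(lhs, rhs)                          # 2999997 2999997
```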
Since each digit is between $0$ and $b-1$,
$s$ must lie between $0$ and $d(b-1)$ inclusive.
However, the bounds on $s$ must be strict (for $b>2$): if $s=0$ or $s=d(b-1)$,
the digits would all be the same ($0$ or $b-1$, respectively),
and $d$ would have to be $1$. Thus we have
$$0<\frac{s}{d}=\frac{b-1}{2}\frac{(d+1)n}{b^d-1}<b-1$$
$$0<(d+1)\cdot 0.\overline{a_{d-1}\dots a_0}=\frac{(d+1)n}{b^d-1}<2$$
where the middle quantity in the first line is
the average value $\frac{1}{d}\sum a_i$ of the base-$b$ digits of $n$,
and the second line follows by dividing through by $\frac{b-1}{2}$,
using the fact that $0.\overline{a_{d-1}\dots a_0}=\frac{n}{b^d-1}$ is
the fraction obtained by repeating the digits of
$n=\sum_0^{d-1}a_ib^i$ after the base-$b$ decimal point.
These quantities are rational numbers, but
not guaranteed to be integers. What we want to show, however, as we shall see,
is that the quantity $\frac{(d+1)n}{b^d-1}$ is in fact an integer, and therefore $1$.
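For concreteness, here is that quantity evaluated on the classic example $n=142857$ (an assumption of this sketch, not fixed by the text):

```python
# For n = 142857 (b = 10, d = 6), the quantity (d+1) n / (b^d - 1)
# is indeed the integer 1.
b, d, n = 10, 6, 142857
num = (d + 1) * n                        # 7 * 142857 = 999999
den = b**d - 1                           # 999999
assert num == den                        # so the ratio is exactly 1
print(num // den)                        # 1
```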
Let's assume that $d>1$, and that $d$ is minimal, in the
sense that $n$ is not a repeated or decomposable cycle:
$$
\nexists c\vert\;d,
\quad 1<c<d
\quad(c\;\text{a proper divisor of}\;d)
\quad\text{with}
\quad\frac{b^{d}-1}{b^c-1}\vert\;n.
$$
We should also note that $s\equiv n\pmod{b-1}$,
i.e. that $\frac{n-s}{b-1}\in\mathbb{Z}$,
since $b\equiv 1\pmod{b-1}$, so each base-$b$ digit of $n$ remains fixed
modulo $b-1$ when multiplied by its respective nonnegative power of $b$.
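This congruence is the familiar "casting out nines" rule; a quick check on the example $n=142857$ (illustrative choice only):

```python
# Digit-sum congruence s ≡ n (mod b-1), checked on the classic
# base-10 cyclic number n = 142857.
b, n = 10, 142857
s = sum(int(c) for c in str(n))      # digit sum in base b = 10
assert (n - s) % (b - 1) == 0        # (142857 - 27) / 9 is an integer
print(s, (n - s) // (b - 1))
```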
We actually want to show that
$$n=\frac{b^d-1}{d+1}
\qquad\text{and that}\qquad
t=\frac{b^d-1}{n}=d+1\in\mathbb{N}$$
is in fact prime with $b$ as primitive root.
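On the running example $n=142857$ (again an assumed illustration), this target statement can be verified directly:

```python
# For n = 142857 (b = 10, d = 6) we get t = 7, which is prime, and
# 10 is a primitive root mod 7: its multiplicative order is 6 = t - 1.
b, d, n = 10, 6, 142857
t = (b**d - 1) // n
assert t == d + 1 == 7

# multiplicative order of b modulo t: first exponent e with b^e ≡ 1 (mod t)
order = next(e for e in range(1, t) if pow(b, e, t) == 1)
assert order == t - 1                    # b generates (Z/tZ)*
print(t, order)                          # 7 6
```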
Perhaps there is a good argument why $s=d\frac{b-1}{2}$ lies exactly in the middle (the expected value) of the prescribed range or, equivalently,
that the average value of the digits of $n$ base $b$ is $\frac{b-1}{2}$, or
that the first noncyclic multiple of $n$ (which we know is the $(d+1)^\text{st}$) satisfies $(d+1)n=b^d-1$ (which is $b-1$ times the repunit of length $d$).
Certainly we know that the sequence $\{kn\}_{k=1}^d$ is increasing and bounded by $b^d$ (since each term has at most $d$ digits base $b$). Therefore, they must correspond with the lexicographic ordering of cyclically shifted length-$d$ strings starting with $n$, zero-padded if necessary (i.e. if $b^{d-1}>n$). And since $(d+1)n=dn+n$ is the sum of the smallest and largest of these cyclic shifts, its leading digit must also be the sum of the smallest and largest digits of $n$ (unless a carry increases the sum to $\geq b^d$).
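The correspondence between increasing multiples and lexicographically ordered shifts can be checked directly; a sketch assuming the classic cyclic number $n=142857$ (not fixed by the text):

```python
# The increasing multiples kn, k = 1..d, are exactly the d cyclic
# shifts of n's zero-padded digit string in lexicographic order.
b, d, n = 10, 6, 142857
digits = str(n).zfill(d)                              # zero-pad to d digits
shifts = sorted(digits[k:] + digits[:k] for k in range(d))
multiples = [str(k * n).zfill(d) for k in range(1, d + 1)]
assert shifts == multiples
print(multiples)   # ['142857', '285714', '428571', '571428', '714285', '857142']
```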
If we need to examine the products $\{kn\}_{k=1}^d$ and their relation to base-$b$ shifts of $n$ in more detail, we need not name $n$'s digits. We can instead rely on the division algorithm, and note that if
$$
n=q_k b^k+r_k
\quad\text{with}\quad
\left\{\begin{matrix}
q_k=\lfloor{b^{-k}n}\rfloor,\\
r_k=n-b^kq_k
\end{matrix}\right.
\quad\text{for}\quad
0\leq k\leq d
\quad\text{and if}\quad
n_k=r_k b^{d-k}+q_k,
$$
then $\{n_k\}_{k=0}^{d-1}$ (with $n_0=n$)
is a permutation of $\{kn\}_{k=1}^{d}$.
Note that if $n$ and $n_k$ are identified with
strings of $d$ letters in the alphabet $\{0,\dots,b-1\}$,
then $n_k=\text{right}(n,k)+\text{left}(n,d-k)$,
where the plus symbol here indicates string concatenation and the
right and left functions are familiar from some programming languages,
since $q_k$ and $r_k$ are the left $d-k$ and
right $k$ digits of $n$ base $b$ respectively.
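These division-algorithm shifts can be verified numerically; the following sketch again assumes the example $n=142857$, and checks that the shifts $n_k$ (with $n_0=n$ itself) are a permutation of the multiples $kn$, $1\leq k\leq d$:

```python
# Compute the shifts n_k = r_k * b^(d-k) + q_k for n = 142857 and
# compare against the multiples kn.
b, d, n = 10, 6, 142857
shifts = []
for k in range(d):                        # k = 0 gives n_0 = n itself
    q, r = divmod(n, b**k)                # q_k: left d-k digits, r_k: right k digits
    shifts.append(r * b**(d - k) + q)
assert sorted(shifts) == [k * n for k in range(1, d + 1)]
print(sorted(shifts))
```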
Once we can establish that $n=\frac{b^d-1}{d+1}$,
we would have that $b^d\equiv 1\pmod t$.
From here we could argue that $b$ must have order $d$ modulo $t$
using the minimality of $d$: if $1<c=\text{ord}_t(b)<d$,
then we would have a nontrivial repunit factorization
$$
\frac{b^c-1}{t}\cdot\frac{b^d-1}{b^c-1}=n
$$
so that $n$ is a repetition of a shorter cycle of length $c$,
contradicting our assumption.
But this would prove the result, since then
$t-1=d=\text{ord}_t(b)\leq\phi(t)\leq t-1$,
i.e. we would have sandwiched Euler's totient function
$\phi(t)$ into attaining its theoretical maximum,
which only occurs when $t$ is prime, while on the other hand,
the order of any element $b$ mod $t$ only attains $\phi(t)$
when the element $b$ is a primitive root,
i.e. a generator of $(\mathbb{Z}/t\mathbb{Z})^*$.
Finally (just for fun), note that a start at factoring $n$ for $d>1$ is
$$
n=\frac{b^d-1}{t}=\frac{1}{t}\prod_{0<c|d}\Phi_c(b)
$$
where $\Phi_c(x)$ denotes the $c$-th cyclotomic polynomial (of degree $\phi(c)$),
and the product above will be a partial factorization with $\tau(d)$ terms,
one of which must be divisible by the prime $t$,
where $\tau(d)$ is the number of positive divisors of $d$.
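On the example $n=142857$ (an assumed illustration, with $b=10$, $d=6$, $t=7$), this partial factorization can be computed without any polynomial machinery, evaluating $\Phi_c(b)$ numerically via the relation $\prod_{e\mid c}\Phi_e(b)=b^c-1$:

```python
from functools import lru_cache
from math import prod

@lru_cache(maxsize=None)
def cyclotomic_at(c, b):
    """Integer value Phi_c(b) of the c-th cyclotomic polynomial at b,
    obtained by dividing b^c - 1 by Phi_e(b) for all proper divisors e."""
    val = b**c - 1
    for e in range(1, c):
        if c % e == 0:
            val //= cyclotomic_at(e, b)
    return val

b, d, t = 10, 6, 7                       # example values for n = 142857
factors = [cyclotomic_at(c, b) for c in range(1, d + 1) if d % c == 0]
n = prod(factors) // t
assert any(f % t == 0 for f in factors)  # one of the tau(d) terms carries t
print(factors, n)                        # [9, 11, 111, 91] 142857
```

Here $\tau(6)=4$, and the term divisible by $t=7$ is $\Phi_6(10)=91=7\cdot 13$.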
I would not add the complication of making this into a proof by contradiction. Euclid did not do it that way, despite many modern authors, dating back at least to Dirichlet in the middle of the 19th century, asserting that Euclid did it that way.
Start with any finite set $S$ of prime numbers. (For example, we could have $S=\{2, 31, 97\}$.) Let $p = 1 + \prod S$, i.e. $1$ plus the product of the members of $S$.
Then $p$ cannot be divisible by any of the primes in $S$. Therefore either
$p$ is itself prime, in which case there are more primes than those in $S$, or
$p$ is divisible by some other primes not in $S$, in which case there are more primes than those in $S$.
Thus every finite set $S$ of primes can be extended to a larger finite set of primes.
(For example, if $S=\{5,7\}$, then $1+\prod S = 1 + 35 = 36 = 2\times2\times3\times 3$, and the additional primes not in $S$ are $2$ and $3$.)
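The extension step described above can be sketched as a small procedure (the function names are mine, for illustration):

```python
# Euclid-style extension: from a finite set S of primes, the number
# 1 + prod(S) yields at least one prime outside S.
from math import prod

def smallest_prime_factor(m):
    """Smallest prime factor of m > 1, by trial division."""
    f = 2
    while f * f <= m:
        if m % f == 0:
            return f
        f += 1
    return m

def extend(S):
    """Return S together with the primes dividing 1 + prod(S)."""
    p = 1 + prod(S)
    new = set()
    while p > 1:
        f = smallest_prime_factor(p)
        new.add(f)
        p //= f
    return set(S) | new

print(extend({5, 7}))   # 1 + 35 = 36 = 2^2 * 3^2, so we gain 2 and 3
```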
The initial assumption that $S$ contains all primes is at best a needless complication that serves no purpose.
Best Answer
Each even number $n$ is of the form $n=2^ku$ where $u$ is odd and $k\ge 1$. If $k=1$ then $n$ is already an E-prime. Otherwise let the first E-prime be $2u$ and take the E-prime $2$ for each of the remaining $k-1$ factors.
Note this method relies on a small part of unique factorization, and it is not the same as the way suggested to solve it in the Silverman text.
Note: For part A you may want to be more explicit and say an E-prime is any number of the form $2u$ with $u$ odd. Also it may or may not need a sign, depending on whether the E-zone includes negative integers.
Elaboration of part B argument: First, the statement that every even number $n$ can be written as $n=2^ku$ with $k\ge 1$ and $u$ odd
can be shown by induction. Base case $n=2=2\cdot 1$ is of the desired form with $k=1,u=1$. Now suppose $n>2$ is even so that $n=2k$ for some $k>1$. If $k$ is odd then we have the desired form $n=2^1u$ where $u=k$. If $k$ is even, then $k=2k'$ and then $n=2\cdot(2k').$ At this point $2k'$ is an even number less than $n$, so by the inductive hypothesis $2k'=2^tu$ with $t\ge 1$ and $u$ odd. Putting this back into $n=2 \cdot 2k'$ gives $n=2^{t+1}u$, which is of the desired form, finishing the inductive step.
Now that's done, and you want to show every even number $n$ is a product of E-primes, where these are all numbers of the form $2u$ with $u$ odd (from part A). Note first that $2=2\cdot 1$ qualifies as an E-prime using $u=1$. Since $n$ is even we can use the above shown fact and write $n=2^ku$ with $k\ge 1$ and $u$ odd. There are two cases:
case 1: $k=1$. In this case $n=2^1u=2u$ with $u$ odd, so $n$ is itself an E-prime. It is thus the product of a single E-prime and the result holds in this case.
case 2: $k>1$. Here we can write $k=1+r$ and then $$n=2^ku=(2u)\cdot 2\cdot 2\cdots 2,$$ where there are $r$ copies of the E-prime $2$ being multiplied together after the initial E-prime $(2u)$. So in case 2 also, we have $n$ as a product of E-primes.
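Both cases can be folded into one small procedure (the function name is mine, for illustration):

```python
# Part B as a procedure: write even n = 2^k * u (u odd), then emit the
# E-prime 2u followed by k - 1 copies of the E-prime 2.
def e_prime_factorization(n):
    assert n > 0 and n % 2 == 0, "n must be a positive even number"
    k, u = 0, n
    while u % 2 == 0:          # peel off factors of 2
        u //= 2
        k += 1
    return [2 * u] + [2] * (k - 1)

print(e_prime_factorization(48))    # 48 = 2^4 * 3 -> [6, 2, 2, 2]
print(e_prime_factorization(10))    # k = 1: already an E-prime -> [10]
```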
I hope this helps explain part B...