I don't have enough time to complete this answer at the moment, but I can leave you in a position from which you can likely proceed on your own.
The factoring-out is a good start: You get
$$ 2^b(2^A-1) = 3^d(3^C-1) $$
Then you rewrite
$$ {2^A-1\over 3^d} = {3^C-1 \over 2^b} $$
[lhs]: There is a little rule (see "Euler's totient theorem") for which values $A$ must assume so that the numerator on the lhs is divisible by $3$, or more precisely by exactly $3^d$:
$$A = \varphi(3^d) = 2 \cdot 3^{d-1} \qquad \text{ by Euler's totient theorem}$$
Sidenote: in this case we deal with the prime factor $3$, and $\varphi(3^d)$ is indeed the order of the cyclic subgroup generated by $2 \pmod {3^d}$, which is relevant here since we want the exponent $A$ such that the denominator divides exactly. The situation with the prime factor $3$ is easier than (and different from) that for some other prime factors, for instance $p=7$, where $\varphi(7)=6$ but the order of the cyclic subgroup $\pmod 7$ is smaller, namely $\lambda_7=3$ (note: this $\lambda$ is not the Carmichael function!).
One can multiply $A$ by additional factors, which must merely not contain the prime factor $3$: with $v$ satisfying $\gcd(v,3)=1$, the complete expression is
$$A = \varphi(3^d) \cdot v = 2 \cdot 3^{d-1} \cdot v \qquad \qquad \gcd(v,3)=1 \tag 1$$
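These facts about multiplicative orders are easy to verify numerically. A minimal Python sketch (the helpers `mult_order` and `phi` are my own, using trial division, fine only for small arguments):

```python
def mult_order(a, m):
    """Smallest i >= 1 with a**i == 1 (mod m); assumes gcd(a, m) == 1."""
    x, i = a % m, 1
    while x != 1:
        x = (x * a) % m
        i += 1
    return i

def phi(n):
    """Euler's totient by trial division."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# For the prime 3: the order of 2 mod 3^d equals phi(3^d) = 2 * 3^(d-1) ...
for d in range(1, 6):
    assert mult_order(2, 3**d) == phi(3**d) == 2 * 3**(d - 1)

# ... but for p = 7 the order of 2 (the lambda_7 of the sidenote) is only 3,
# while phi(7) = 6:
print(mult_order(2, 7), phi(7))   # -> 3 6
```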
[rhs]: Similarly for the rhs:
$$ {C = 1 \cdot w \text{ for } b=1 \qquad \qquad \gcd(w,2)=1 \tag {2.1} } $$ $$ C=2^{b-2} \cdot w \text{ for } b \ge 3 \qquad \qquad \gcd(w,2)=1 \tag {2.2}$$
A solution for $b=2$ does not exist, because any $3^C-1$ that is divisible by $2^2$ is also divisible by $2^3$; thus a factor $2$ would remain on the rhs, which could then not equal the lhs, whose value is odd. In effect, this expands our equation to
$$ {2^{2 \cdot 3^{d-1} \cdot v} - 1\over 3^d} = {3^{1 \cdot w}-1 \over 2^1} \tag {3.1 when $b=1$}$$
$$ {2^{2 \cdot 3^{d-1} \cdot v} - 1\over 3^d} = {3^{2^{b-2} \cdot w}-1 \over 2^b} \tag {3.2 when $b\ge 3$ }$$
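The claim that $b=2$ is impossible is easy to confirm by brute force (my own sketch, over an arbitrary small range of $C$; `val2` is a hypothetical helper for the 2-adic valuation):

```python
def val2(n):
    """2-adic valuation: the exponent of 2 in n."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# 3^C - 1 has 2-adic valuation 1 for odd C and >= 3 for even C,
# so the valuation is never exactly 2 (no solution with b = 2):
for C in range(1, 200):
    v = val2(3**C - 1)
    assert (v == 1) if C % 2 else (v >= 3)
    assert v != 2
```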
I'm not going further at the moment; but now you might look for the existence of additional prime factors on the lhs and the rhs for choices of $d$, $b$, $v$ and $w$. For instance, if $d \gt 1$ the lhs has additional prime factors with group orders $3,6,9,18$, which means the prime factors $7,-,73,19$, which must then also occur on the rhs. We know by a theorem of Zsigmondy that only for $A=2\cdot 3=6$ is there no additional ("primitive") prime factor. This and the "trivial" cases should come out as the only possible solutions.
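The primitive prime factors mentioned above can be listed by a short script (my own sketch; a prime is "primitive" for $2^A-1$ if it divides no earlier $2^k-1$):

```python
def prime_factors(n):
    """Set of prime factors of n, by trial division."""
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def primitive_factors(A):
    """Primes dividing 2^A - 1 but no 2^k - 1 with k < A."""
    old = set()
    for k in range(1, A):
        old |= prime_factors(2**k - 1)
    return prime_factors(2**A - 1) - old

for A in (3, 6, 9, 18):
    print(A, sorted(primitive_factors(A)))
# group orders 3, 6, 9, 18 give the primitive primes 7, none, 73, 19;
# A = 6 is exactly the Zsigmondy exception
```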
[update]
To make Jack's observation more explicit: first we observe that the exponent on the LHS unconditionally contains a factor $2$, so to improve readability a bit we write
$$ {4^{ 3^{d-1} \cdot v} - 1\over 3^d} = {3^{2^{b-2} \cdot w}-1 \over 2^b} \tag {3.2}$$
where we take the second version, assuming $b \ge 3$.
Then we observe the table of cycle lengths (multiplicative orders) modulo the first few primes for the numerators on the LHS and the RHS:
pf 4^A-1 3^C-1
-------------------
2 - 1
3 1 -
5 2 4
7 3 6
11 5 5
13 6 3
17 4 16
19 9 18
23 11 11
29 14 28
31 5 30
37 18 18
41 10 8
43 7 42
47 23 23
53 26 52
59 29 29
61 30 10
67 33 22
71 35 35
73 9 12
... ... ...
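For reference, this table can be regenerated with a few lines of Python (my own sketch: the cycle length for a prime $p$ is the multiplicative order of the base, $4$ or $3$, modulo $p$, with "-" when $p$ divides the base):

```python
def mult_order(a, p):
    """Multiplicative order of a modulo p; assumes gcd(a, p) == 1."""
    x, i = a % p, 1
    while x != 1:
        x = (x * a) % p
        i += 1
    return i

def primes_up_to(n):
    return [p for p in range(2, n + 1)
            if all(p % q for q in range(2, int(p**0.5) + 1))]

for p in primes_up_to(73):
    o4 = '-' if p == 2 else mult_order(4, p)   # column for 4^A - 1
    o3 = '-' if p == 3 else mult_order(3, p)   # column for 3^C - 1
    print(f"{p:3} {o4!s:>5} {o3!s:>5}")
```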
Now we discuss the possibility of equality, given that $d=2$. We look at the composition of $A$ and see which prime factors are involved on the lhs; we conclude that they must also be involved on the rhs (note: to the same power!). That sets conditions on $C$, which brings more prime factors into the rhs and thus also into the lhs, which in turn imposes a new composition of $A$, and so on, until we possibly reach a contradiction. Here is a short decision-path diagram. The "*" sign marks the inclusion of a prime factor due to the composition of $A$ or $C$:
A=3^1
|
* 3 1 0
* 7 3 6 ==> C=2.3.w
|
* 13 6 3 ==> A=2.3
|
* 5 2 4 ==> C=4.3.w
|
* 73 9 12 ==> A=2.3^2 #####
contradiction, because the exponent of 3 in A was assumed to be 1
So under the assumption $d=2$ there is no solution to the original problem.
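Each "==>" step in the diagram rests on a multiplicative order read off the table; the whole chain can be asserted in a few lines (my own sketch, with a hypothetical helper `order`):

```python
def order(a, p):
    """Multiplicative order of a modulo p; assumes gcd(a, p) == 1."""
    x, i = a % p, 1
    while x != 1:
        x = (x * a) % p
        i += 1
    return i

# 3 | A   forces  7 | lhs           (order of 4 mod 7  is 3)
# 7 | rhs forces  6 | C             (order of 3 mod 7  is 6)
# 3 | C   forces 13 | rhs           (order of 3 mod 13 is 3)
# 13| lhs forces  6 | A             (order of 4 mod 13 is 6)
# 2 | A   forces  5 | lhs           (order of 4 mod 5  is 2)
# 5 | rhs forces  4 | C, so 12 | C  (order of 3 mod 5  is 4)
# 12| C   forces 73 | rhs           (order of 3 mod 73 is 12)
# 73| lhs forces  9 | A             (order of 4 mod 73 is 9)
# ... so 3^2 | A, contradicting A = 3 * v with gcd(v, 3) = 1.
expected = {(4, 7): 3, (3, 7): 6, (3, 13): 3, (4, 13): 6,
            (4, 5): 2, (3, 5): 4, (3, 73): 12, (4, 73): 9}
for (base, p), o in expected.items():
    assert order(base, p) == o
print("all eight order facts confirmed")
```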
While I'm still not quite sure what you're asking, I'll go through each section of the proof in detail, in the hope that this helps. Again, I'm sorry if I go over things you already know; I just don't know which part you're having difficulty with.
The order of an element $a$ modulo our prime $p$, as a reminder, is the smallest positive exponent $i$ s.t. $a^i=1$ modulo $p$. General proof strategy, then, is as follows:
We define a function $F(m)$ as the number of elements of order $m$ modulo $p$. So as an example, if we look modulo $5$, we see that $4^2=1$ (so 4 is an element of order 2) and every other element is either 1 (so has order 1) or has order greater than 2 (as $2^2=3^2=4\neq 1$). So, modulo 5, $F(2)=1$.
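$F$ can be computed by brute force, which makes the example concrete (a sketch of my own; `order` and `F` are hypothetical helper names):

```python
def order(a, p):
    """Smallest i >= 1 with a**i == 1 (mod p), for prime p and 0 < a < p."""
    x, i = a % p, 1
    while x != 1:
        x = (x * a) % p
        i += 1
    return i

def F(m, p):
    """Number of nonzero residues of order m modulo the prime p."""
    return sum(1 for a in range(1, p) if order(a, p) == m)

print(F(2, 5))   # -> 1   (only 4 has order 2 modulo 5)
```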
We then remember that the order of an element must divide the number of non-zero elements modulo $p$.
Why is this? (feel free to skip this if you know it already) Firstly, we know by Fermat's Little Theorem that $a^{p-1}=1$ modulo $p$ for every non-zero element $a$ modulo $p$. Then, if the order of an element $a$ is $e$, we can long divide $p-1$ by $e$ so that $$p-1=e\times n+b, 0\leq b<e$$ Then we get that, modulo $p$, $$1=a^{p-1}=a^{e\times n+b}=a^{e\times n}a^b=(a^{e})^na^b=(1)^na^b=a^b$$ so $a^b=1$ but we know that $b<e$ and $e$ is the smallest positive exponent for which $a^e=1$ so $b$ cannot be positive! But we also know that $0\leq b$ so $b=0$ and so $p-1=e\times n$ so $e|p-1$.
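The divisibility $e \mid p-1$ can also be confirmed by exhaustive check for a few small primes (my own sketch):

```python
def order(a, p):
    """Multiplicative order of a modulo the prime p, for 0 < a < p."""
    x, i = a % p, 1
    while x != 1:
        x = (x * a) % p
        i += 1
    return i

# every order divides p - 1, as the long-division argument shows
for p in (5, 7, 11, 13, 101):
    for a in range(1, p):
        assert (p - 1) % order(a, p) == 0
print("order | p-1 for all tested primes")
```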
So the order of an element must divide $p-1$. But, obviously, every element has some order (given that orders are bounded above by $p-1$, i.e. we already know that $a^{p-1}=1$ modulo $p$ for every nonzero element $a$). So thus $$\sum_{k|(p-1)}F(k)=(p-1)$$This sum really doesn't say much - the number of elements that have some order is the number of nonzero elements.
Next, the proof whips out a famous identity which it says you've already seen the proof of. There are a number of other proofs online (some completely elementarily, some using the FTA, some using the multiplicative nature of the sum) but I won't give one here - you apparently have one in your textbook. The identity is the following:
$$n = \sum_{k|n}\phi(k)$$
for a natural number $n$ and we particularly care about this result when $n=(p-1)$. Now the proof says that you already know, from some previous section, that $F(k) \leq \phi(k)$ and since $$\sum_{k|n}\phi(k)=\sum_{k|(p-1)}F(k)$$ we deduce that $F(k)=\phi(k)$ for each divisor $k$. Why is this?
Suppose that one of these inequalities is strict, i.e. $F(k)<\phi(k)$ for some $k$ in the sum. Then, even if $F(j)=\phi(j)$ for every other divisor $j$ of $(p-1)$, we would have $$\sum_{k|(p-1)}F(k)<\sum_{k|(p-1)}\phi(k)$$ which is a contradiction, since both sums equal $(p-1)$.
Then if $F(k)=\phi(k)$ for each divisor $k$ of $(p-1)$ and $\phi(k)\geq 1$ for each $k$, we know that $F(p-1)=\phi(p-1) \geq 1$, i.e. there is at least one element of order $(p-1)$ (a primitive root!) and we're done!
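All the pieces of the argument can be verified numerically at once (my own sketch with naive helpers, small primes only): every divisor $k$ of $p-1$ has exactly $\phi(k)$ elements of order $k$, the counts sum to $p-1$, and in particular $F(p-1)=\phi(p-1)\geq 1$.

```python
from math import gcd

def order(a, p):
    """Multiplicative order of a modulo the prime p, for 0 < a < p."""
    x, i = a % p, 1
    while x != 1:
        x = (x * a) % p
        i += 1
    return i

def phi(n):
    """Euler's totient, by naive counting."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for p in (5, 7, 13, 31):
    counts = {}
    for a in range(1, p):
        o = order(a, p)
        counts[o] = counts.get(o, 0) + 1
    assert sum(counts.values()) == p - 1          # every element has an order
    for k, c in counts.items():
        assert (p - 1) % k == 0 and c == phi(k)   # F(k) = phi(k) for k | p-1
    assert counts[p - 1] == phi(p - 1) >= 1       # primitive roots exist!
```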
ADDENDUM:
Just because I'm not personally such a fan of using $F(m)\leq \phi(m)$ without explaining it (given that it's arguably the most important part of the proof) and that I can't find anybody else on this forum explaining this fact, I'm going to give my own explanation here. You don't have to read this but I strongly suggest it, as I think this is very nice:
Consider the polynomial $x^d-1$ for a divisor $d$ of $(p-1)$ and suppose, hypothetically, that it had some root $u$ modulo $p$. Then we can notice that $u,u^2,u^3,...,u^d$ are all roots as well.
Why? $(u^k)^d-1=(u^d)^k-1=1^k-1=0$.
But notice that $x^d-1$ is a polynomial of degree $d$ and so, by Lagrange's Theorem, can have at most $d$ roots (the proof of L's theorem literally replicates polynomial division in modular arithmetic so we don't worry about it here - you may already know a proof!). But if it can have at most $d$ roots and $u,...,u^d$ are all roots, then $u,...,u^d$ are the only roots of this equation!
But every element of order $d$ must be a root of $x^d-1$ by definition so every element of order $d$ is a power of $u$. (NOTE: not every root of $x^d-1$ is an element of order $d$ - if $u^k$ had order $d/2$, it would still be a root of the equation)
But which powers of $u$ have order $d$? If $u$ itself has order $d$ (which we may assume, since if no element has order $d$ then trivially the count is $0$), then exactly the powers $u^k$ with $\gcd(k,d)=1$ have order $d$ (for the same EXACT reason as used in the proof that if there is one primitive root modulo $p$ then there are $\phi(p-1)$ many, a theorem whose proof you say you understand).
So if every element of order $d$ is a power of $u$ and there are exactly $\phi(d)$ many powers of $u$ of order $d$, then there are $\phi(d)$ many elements of order $d$.
So, if we call the number of elements of order $d$ "$N_d$", then we have just shown that $N_d=0$ OR $N_d=\phi(d)$, i.e. $N_d \leq \phi(d)$ as required.
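A small worked instance of this root-counting argument (my own sketch, with the arbitrary choice $p=13$, $d=4$): the roots of $x^4-1$ are exactly the powers of an element $u$ of order $4$, and exactly $\phi(4)=2$ of them have order $4$.

```python
p, d = 13, 4

# all roots of x^d - 1 modulo p (there are at most d, by Lagrange's theorem)
roots = [x for x in range(1, p) if (x**d - 1) % p == 0]

def order(a, p):
    """Multiplicative order of a modulo the prime p."""
    x, i = a % p, 1
    while x != 1:
        x = (x * a) % p
        i += 1
    return i

# pick an element u of order d; its powers u, u^2, ..., u^d are ALL the roots
u = next(x for x in roots if order(x, p) == d)
assert sorted(pow(u, k, p) for k in range(1, d + 1)) == sorted(roots)

# exactly phi(4) = 2 of the roots actually have order d
assert sum(1 for x in roots if order(x, p) == d) == 2
```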
===========================================================================
Phew! That took a long time to write, so I hope it's helpful. Don't be scared to ask for further clarification if necessary!
Answer to my second question, based on the comment from @Gottfried Helms:
Suppose $n>1$ is a positive integer and $ a>b>0 $ are coprime integers. Also suppose that $a^{2n}-b^{2n}$ has a primitive prime factor $p$.
Then $p$ is not a factor of $a^n-b^n$ and is therefore a factor of $a^n+b^n$. For $k<n$, $p$ is not a factor of $a^{2k}-b^{2k}$ and is therefore not a factor of $a^k+b^k$. $p$ is therefore a primitive prime factor of $a^n+b^n$.
This means that the theorem for $a^n+b^n$ follows easily from that for $a^n-b^n$.
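This reduction can be spot-checked numerically (my own sketch, with the arbitrary choice $a=2$, $b=1$): every primitive prime factor of $2^{2n}-1$ indeed divides $2^n+1$ and is a primitive prime factor of the "+" sequence as well.

```python
def prime_factors(m):
    """Set of prime factors of m, by trial division."""
    fs, q = set(), 2
    while q * q <= m:
        while m % q == 0:
            fs.add(q)
            m //= q
        q += 1
    if m > 1:
        fs.add(m)
    return fs

def primitive(value, earlier_values):
    """Primes dividing `value` but none of the earlier terms of the sequence."""
    old = set()
    for v in earlier_values:
        old |= prime_factors(v)
    return prime_factors(value) - old

for n in range(2, 12):
    prim_minus = primitive(2**(2*n) - 1, [2**k - 1 for k in range(1, 2*n)])
    prim_plus  = primitive(2**n + 1,     [2**k + 1 for k in range(1, n)])
    # every primitive prime of 2^(2n) - 1 divides 2^n + 1 and is primitive there
    for p in prim_minus:
        assert (2**n + 1) % p == 0 and p in prim_plus
```

(Note that for $n=3$ the set `prim_minus` is empty, matching the Zsigmondy exception $2^6-1=63$.)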