When you use the Einstein summation convention you sum over repeated indices in each term. Strictly speaking, the index to be summed over must appear exactly once upstairs and once downstairs (I'll explain why in a moment).
If we insist on summing over repeated indices even though they're all downstairs, as in your expression, the second way to think about it is correct: you must treat each term independently of the others. For example, the relation $A_i B_i + A_j C_j B_i D_i$ means a summation within each term: $\sum_i A_i B_i + \sum_j \sum_i A_j C_j B_i D_i$.
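To make the term-by-term reading concrete, here is a small pure-Python sketch (the array values are made up purely for illustration):

```python
# Term-by-term reading of A_i B_i + A_j C_j B_i D_i with all indices downstairs:
# each term is summed over its own repeated indices.
A = [1.0, 2.0]
B = [3.0, 4.0]
C = [5.0, 6.0]
D = [7.0, 8.0]  # sample values chosen only for illustration
n = len(A)

first_term = sum(A[i] * B[i] for i in range(n))        # sum_i A_i B_i
second_term = sum(A[j] * C[j] * B[i] * D[i]            # sum_j sum_i A_j C_j B_i D_i
                  for j in range(n) for i in range(n))
result = first_term + second_term
```

Note that the second term factors as $(\sum_j A_j C_j)(\sum_i B_i D_i)$, which is exactly what the double loop computes.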
When I learned this notation I found it pretty confusing, so even though the question has already been addressed I'll give a quick overview of it below.
In my opinion this notation really shines only when we sum over one index upstairs and one downstairs. The point is: given a set of vectors (and I mean the vectors themselves, not their components), we can index the vectors. The convention is:
Notation: If we are given a set of $n$ vectors to index, we index the vectors downstairs, i.e. we write the set as $\{v_1, \dots, v_n\}$.
Now, the second convention is that when we index the coefficients of a linear combination, they should be written upstairs:
Notation: If we are given a set of $n$ scalars and $n$ vectors and we are to form the linear combination of the vectors with the scalars, we write each scalar with its index upstairs, so the set of scalars will be $\{a^1, \dots, a^n\}$.
Now the convention is:
Einstein Summation Convention: if in an expression an index appears exactly once upstairs and once downstairs, it is summed over without writing the summation sign explicitly.
In this case, note that the linear combination of the vectors $v_i$ with coefficients $a^i$ is denoted simply $a^iv_i$, with the summation understood. Now there's just one more thing:
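Written out with plain Python lists (the vectors and scalars below are arbitrary sample values), the implicit sum in $a^iv_i$ looks like this:

```python
# The implicit sum in a^i v_i, written out componentwise.
v = [[1.0, 0.0, 2.0],   # v_1
     [0.0, 1.0, 1.0]]   # v_2  (sample vectors, chosen arbitrarily)
a = [3.0, 4.0]          # a^1, a^2

# a^i v_i means sum_i a^i v_i; for each component k:
combo = [sum(a[i] * v[i][k] for i in range(len(a)))
         for k in range(len(v[0]))]
# combo is the vector a^1 v_1 + a^2 v_2
```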
Notation: If we are given a set of $n$ linear functionals, i.e. linear functions defined on a vector space with values in the scalar field, then the functionals are indexed with upstairs indices and the set becomes $\{\omega^1, \dots, \omega^n\}.$
And the convention to write the scalars when linearly combining functionals is just the opposite again:
Notation: If we are given a set of $n$ scalars and $n$ linear functionals and we are to form the linear combination of the linear functionals with the scalars, we write each scalar with index downstairs, so the set of scalars will be $\{a_1, \dots, a_n\}$.
The consequence is that a linear combination of linear functionals will be $a_i\omega^i$. Now the convention extends directly to tensors: covariant tensors have indices downstairs and contravariant tensors have indices upstairs, and writing linear combinations and so on becomes natural. Just to finish:
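A quick sketch of combining functionals $\omega^i$ with downstairs scalars $a_i$: each functional is represented here by its row of coefficients (the particular values are made up for illustration), so $\omega^i(v) = \sum_k \omega^i{}_k v^k$.

```python
# Each functional omega^i is represented by its coefficient row, so that
# omega^i(v) = sum_k omega[i][k] * v[k]; the values below are arbitrary.
omega = [[1.0, 0.0],   # omega^1: picks out the first component
         [0.0, 1.0]]   # omega^2: picks out the second component
a = [2.0, 5.0]         # a_1, a_2

def combined(v):
    # Evaluate (a_i omega^i)(v) = sum_i a_i * omega^i(v)
    return sum(a[i] * sum(omega[i][k] * v[k] for k in range(len(v)))
               for i in range(len(a)))
```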
Notation: When giving coordinates to a point or indexing the components of a function, we write the indices upstairs. So if $a \in \mathbb{R}^n$ is a point we write $a = (a^1, \dots, a^n)$, and if $f : \mathbb{R}^n \to \mathbb{R}^m$ we write its components as $f^i$. These notations extend to manifolds in general.
If you understand all of this and can get along without being confused, go ahead and use the notation; it makes life easier (especially in differential geometry).
Your solution is equivalent to
$$\sum_{i,j,k=1}^2g^i_{jk} $$
and no summation convention (which I take to be another way of saying "Einstein convention on repeated indices") appears in it, as pointed out by @Raskolnikov.
The aim of the exercise is to arrive at an expression with "repeated indices", i.e. an expression in which you use the summation convention. To do so, one needs to have no free indices (like your $i$, $j$ and $k$) and to contract, i.e. produce pairs of, all indices.
It is clear that the starting expression has 3 free indices, so 3 summations, or contractions, are needed. The textbook first produces a summation over the upper index, called $i$. This is done by introducing the vector
$$c=(c_1,c_2)=(1,1)$$
and realizing the sum $\sum_{i=1}^2 g^i_{j,k}$ (which, once again, uses no summation convention) as
$$g^1_{j,k}+g^2_{j,k}=\sum_{i=1}^2 g^i_{j,k}=g^i_{j,k}c_i,$$
for any $j,k$. On the rightmost r.h.s. of the above expression we use the Einstein convention, summing over $i$, the only repeated index. Please note that the length of $c$ equals $2$, i.e. the cardinality of the set $\{1,2\}$ of all possible values for $i$.
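We can verify numerically that contracting with $c=(1,1)$ reproduces the explicit sum over $i$ for every $j,k$; the values of $g$ below are arbitrary sample data, since the exercise does not fix them.

```python
# Contracting with c = (1, 1) reproduces the explicit sum over i,
# for every choice of j and k. The values of g are arbitrary sample data.
g = [[[1.0, 2.0], [3.0, 4.0]],   # g^1_{jk}
     [[5.0, 6.0], [7.0, 8.0]]]   # g^2_{jk}
c = [1.0, 1.0]

for j in range(2):
    for k in range(2):
        explicit = g[0][j][k] + g[1][j][k]                     # sum_i g^i_{jk}
        contracted = sum(g[i][j][k] * c[i] for i in range(2))  # g^i_{jk} c_i
        assert explicit == contracted
```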
We are left with 2 indices, so 2 more summations are needed. We repeat the above lines to produce the summation w.r.t. $j$ through
$$g^i_{1,k}c_i+g^i_{2,k}c_i=\sum_{j=1}^2 g^i_{j,k}c_i=g^i_{j,k}c_ic_j,$$
for any $k$. Repeating the same trick with $k$, the only free index remaining, we arrive at
$$\sum_{k=1}^2g^i_{j,k}c_ic_j=g^i_{j,k}c_ic_jc_k,$$
and
$$\sum_{i,j,k=1}^2g^i_{jk}=g^i_{jk}c_ic_jc_k.~~~(1) $$
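The full contraction $(1)$ can be checked the same way, again with arbitrary sample values for $g$:

```python
# Checking (1): summing g^i_{jk} over all i, j, k equals
# g^i_{jk} c_i c_j c_k with c = (1, 1). Sample values for g, as before.
g = [[[1.0, 2.0], [3.0, 4.0]],
     [[5.0, 6.0], [7.0, 8.0]]]
c = [1.0, 1.0]

total_explicit = sum(g[i][j][k]
                     for i in range(2) for j in range(2) for k in range(2))
total_contracted = sum(g[i][j][k] * c[i] * c[j] * c[k]
                       for i in range(2) for j in range(2) for k in range(2))
# Both equal 36 for these sample values.
```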
On the r.h.s. of $(1)$ we have all repeated indices and we are done.
Best Answer
Basically the point is that it is ambiguous to use an index as both a free index and a dummy index. $$ A_iB_{ii} \stackrel{\huge{?}}{=} A_iB_{jj} = A_i \sum_j B_{jj}$$ or $$ A_iB_{ii} \stackrel{\huge{?}}{=} A_jB_{ji} = \left(\sum_j A_jB_j\right)_i ? $$ This is a problem: we can't have both.
Added after Trapu's edit. So, to be clear, I will use non-Einstein notation on the r.h.s. of the equations below. The point here is that $A_iB_{ii}$ cannot be interpreted meaningfully by treating one pair of $i$'s as dummies (summed over) and the other as free (not summed): $$ A_iB_{ii} = \sum_{j=1}^n A_jB_{ji} = A_1B_{1i}+A_2B_{2i}+ \cdots +A_nB_{ni} \qquad (I.) $$ versus $$ A_iB_{ii} = \sum_{j=1}^n A_iB_{jj} = A_i\sum_{j=1}^n B_{jj} = A_i\left(B_{11}+B_{22}+ \cdots +B_{nn} \right) \qquad (II.) $$ Expressions (I.) and (II.) are two reasonable interpretations of $A_iB_{ii}$ if just one index is taken to be free, but they are not equal.
For example, take $B_{11}=1$, $B_{22}=-1$, $B_{12}=B_{21}=0$ and $A_1=A_2=1$. Then $$ (I.) \qquad A_1B_{1i}+A_2B_{2i} = B_{1i}+B_{2i} = \begin{cases} 1 & i=1 \\ -1 & i=2 \end{cases} $$ versus $$ (II.) \qquad A_i(B_{11}+B_{22}) = 0 = \begin{cases} 0 & i=1 \\ 0 & i=2 \end{cases} $$ As you can see, these expressions do not agree. Therefore we cannot use the same index as both a dummy and a free index.
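The disagreement is easy to confirm by computing both readings with the numbers above:

```python
# The two readings of A_i B_{ii}, computed with the numbers above.
A = [1.0, 1.0]
B = [[1.0, 0.0],
     [0.0, -1.0]]
n = 2

# (I.): first index pair as dummies, second i free: sum_j A_j B_{ji}
reading_I = [sum(A[j] * B[j][i] for j in range(n)) for i in range(n)]

# (II.): B's indices as dummies, A's i free: A_i * sum_j B_{jj}
trace_B = sum(B[j][j] for j in range(n))
reading_II = [A[i] * trace_B for i in range(n)]
```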
I would liken this problem to the one I have with my Calculus I students who insist they need not change the bounds in a $u$-substitution since they're just going to write the answer back in terms of $x$ at the end. If that practice is followed, however, some of the intermediate steps are simply wrong. We are left with a situation where what we write is insufficient to capture the precise mathematical intent of the expression. This should be avoided, since good notation ought to be unambiguous. Or, at a minimum, the ambiguity should reflect a deeper mathematical structure, as in the case of quotient spaces and the non-uniqueness of representatives. This is not that; this is just bad notation. It does lead to errors; trust me, I've made them.