If I decode your question correctly, you say that you're told that the correct answer is
$$E_1=\pmatrix{1&0&0\\1&1&0\\0&0&1},\quad E_2=\pmatrix{1&0&0\\0&1&0\\0&-1&1}$$
But you don't tell us the full story of what this is supposed to be the answer to, right?
I imagine that you must have had a $3\times 5$ matrix such as
$$A=\pmatrix{1&2&0&4&5\\-1&-2&4&1&1\\0&0&4&8&16}$$
After the first row operation you had another matrix
$$B=\pmatrix{1&2&0&4&5\\0&0&4&5&6\\0&0&4&8&16}$$
You describe this operation as
> The first row operation from part a was $R_1 + R_2$.
but that is not a complete description of a row operation; it describes how to make a new row (namely, take the sum of the first and second rows), but not what you do with that new row. You should have said
> The first row operation was to replace $R_2$ with $R_1+R_2$.
The way the elementary matrix works is that it encodes your row operation as a matrix multiplication from the left:
$$E_1 A = B$$
(Write this out and compute the entries of the product matrix to see how it works!)
The reason it works is that the rows of the elementary matrix are all equal to the corresponding rows of the 3×3 identity matrix, except for the second row, which is the one your row operation modifies. Thus the other rows will be left unchanged by the operation. The second row of $E_1$ is $(1\;1\;0)$, which has ones in the first two positions and encodes how you make the new second row, namely as $R_1+R_2$, or a bit more verbosely, $1\cdot R_1+1\cdot R_2+0\cdot R_3$. Note how the coefficients here match the row from the elementary matrix exactly.
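If you'd rather not multiply by hand, here is a quick NumPy sketch that checks $E_1A=B$ with the matrices above:

```python
import numpy as np

# E1 encodes "replace R2 with R1 + R2"; its other rows match the identity.
E1 = np.array([[1, 0, 0],
               [1, 1, 0],
               [0, 0, 1]])

A = np.array([[ 1,  2, 0, 4,  5],
              [-1, -2, 4, 1,  1],
              [ 0,  0, 4, 8, 16]])

# Multiplying by E1 from the left performs the row operation on A.
B = E1 @ A
print(B)
```

This prints exactly the matrix $B$ from above; only the second row of $A$ changes.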
In the second row operation the row of $E_2$ that corresponds to the "new" row is $(0\;-1\;1)$, which means that we're replacing $R_3$ with $0\cdot R_1+(-1)\cdot R_2+1\cdot R_3$, which is the same as $R_3-R_2$. So we get
$$\pmatrix{1&2&0&4&5\\0&0&4&5&6\\0&0&0&3&10} = E_2B = E_2 E_1 A$$
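The whole chain $E_2E_1A$ can be checked the same way, again with NumPy as a sketch:

```python
import numpy as np

E1 = np.array([[1, 0, 0], [1, 1, 0], [0,  0, 1]])  # R2 <- R1 + R2
E2 = np.array([[1, 0, 0], [0, 1, 0], [0, -1, 1]])  # R3 <- R3 - R2

A = np.array([[ 1,  2, 0, 4,  5],
              [-1, -2, 4, 1,  1],
              [ 0,  0, 4, 8, 16]])

# Each left-multiplication applies one row operation, in order.
result = E2 @ E1 @ A
print(result)
```

Note the order: $E_1$ sits next to $A$ because its row operation is performed first.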
They're not special. They're just convenient. It's relatively easy to tell what happens to a matrix when you apply an elementary row operation to it, and this isn't quite as true for more complicated types of operations.
In the language of group theory, elementary matrices form a set of generators for the group of invertible square matrices. You could choose a different set of generators if you wanted to, but again, the elementary matrices are convenient.
Best Answer
Given an $n$-dimensional vector space $V$ and an ordered basis $\mathscr B$ of $V,$ it is true that one can identify a linear operator $T : V \to V$ with an $n \times n$ matrix $A.$ Explicitly, we can compute $T(v_i)$ for each of the vectors $v_i \in \mathscr B,$ and we can subsequently form the matrix $A$ whose $i$th column is the coordinate vector of $T(v_i)$ with respect to $\mathscr B.$
Like you mentioned, if $A$ is an invertible $n \times n$ matrix, then one can compute the inverse of $A$ by a sequence of elementary row operations $E_1, \dots, E_k.$ Each elementary row operation is a linear operator $E_i : V \to V.$ Composition of linear operators corresponds to multiplication of the matrices that represent the linear operators, so as you said, we find that $E_k \cdots E_1 A = I.$ (Here, I am slightly abusing notation and using $E_i$ for both the linear operator and the matrix that represents it with respect to the ordered basis $\mathscr B.$) From this, you can see (as you have) that an invertible $n \times n$ matrix gives rise to a composition of elementary row operations.
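As a concrete sketch of $E_k \cdots E_1 A = I$ (using a small $2\times 2$ example of my own, not one from the question), the elementary matrices that reduce $A$ to $I$ multiply together to give $A^{-1}$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Elementary matrices for the row operations that reduce A to I:
E1 = np.array([[0.5, 0.0], [ 0.0, 1.0]])  # R1 <- (1/2) R1
E2 = np.array([[1.0, 0.0], [-1.0, 1.0]])  # R2 <- R2 - R1
E3 = np.array([[1.0, 0.0], [ 0.0, 2.0]])  # R2 <- 2 R2
E4 = np.array([[1.0, -0.5], [0.0, 1.0]])  # R1 <- R1 - (1/2) R2

# The product of the elementary matrices is the inverse of A.
P = E4 @ E3 @ E2 @ E1
print(P @ A)  # the identity matrix
```

This is exactly the Gauss–Jordan computation of an inverse, phrased as matrix products.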
Conversely, we may start with a composition $E_k \circ \cdots \circ E_1$ of elementary row operations $E_i : V \to V.$ Observe that for each elementary row operation $E_i,$ there exists an elementary row operation $F_i$ such that $F_i \circ E_i = I,$ from which it follows that $(F_1 \circ \cdots \circ F_k) \circ (E_k \circ \cdots \circ E_1) = I.$ (Basically, $F_i$ is the linear operator that does the "opposite" of what $E_i$ does. For instance, if $E_i$ sends $R_1$ to $3R_1 - R_2,$ then $F_i$ sends $R_1$ to $\frac{1}{3}(R_1 + R_2),$ and we have that $F_i \circ E_i = I = E_i \circ F_i$).
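The inverse-operation pairing $F_i \circ E_i = I = E_i \circ F_i$ can also be checked numerically. Here is a sketch of the author's example (treating "send $R_1$ to $3R_1 - R_2$" as a single operation, as the answer does):

```python
import numpy as np

# E encodes "replace R1 with 3*R1 - R2".
E = np.array([[3.0, -1.0, 0.0],
              [0.0,  1.0, 0.0],
              [0.0,  0.0, 1.0]])

# F encodes the reverse, "replace R1 with (1/3)*(R1 + R2)".
F = np.array([[1/3, 1/3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# F undoes E and vice versa, so both products are the identity.
print(F @ E)
print(E @ F)
```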
Consequently, we have that $T = E_k \circ \cdots \circ E_1$ is an invertible linear operator, hence there exists an invertible $n \times n$ matrix that corresponds to $T.$ (Use the construction from the first paragraph above.) From this, you can see (as you have) that a composition of elementary row operations gives rise to an invertible $n \times n$ matrix.