All you have written is correct. You only have to take care with the order of the transformations. To work it out, ask: 'What happens to $x$?' and then reverse both the order and the operations.
In the case of $e^{-(x-3)}$, $x$ is first decreased by $3$, then multiplied by $-1$. If we reverse these operations, we see that first we have to reflect the graph of $e^x$ along the $y$-axis and then shift it to the right by $3$ (shift it to the left by $-3$).
For the same function written as $e^{-x+3}$, we find that $x$ is first multiplied by $-1$ and then the resulting expression is increased by $3$; reversing these, we first shift (indeed, to the left) and then reflect.
Update:
The transformation for $e^{-(x-3)}$ corresponds to the following substitutions: let $u:=x-3$. First, from $u\mapsto e^u$ we go to $u\mapsto e^{-u}$ by reflecting the original graph across the $y$-axis. Then making the substitution $x\mapsto x-3$, i.e. replacing $u$ by $x-3$, gives us the second step. You will be convinced if you plug in (enough) concrete values of $x$:
e.g. if $x=3$ then $u=0$, and so $e^{-(x-3)}=e^{-u}=1$. If $x=4$ then $u=1$, and so on.
In general, the graph of $g(x)=f(x-3)$ is shifted to the right (to the left by $-3$) compared to the graph of $f(x)$, because
$$g(x+3)=f(x)\,.$$
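If it helps, the identity above can be checked numerically; here is a minimal sketch in Python (the names `f`, `g`, and `reflected_then_shifted` are mine, not from the answer):

```python
import math

# f(x) = e^x; g(x) = e^{-(x-3)} should be f reflected across the
# y-axis and then shifted right by 3.
f = lambda x: math.exp(x)
g = lambda x: math.exp(-(x - 3))

# reflect across the y-axis: x -> f(-x); then shift right by 3: x -> x - 3
reflected_then_shifted = lambda x: f(-(x - 3))

for x in (-2.0, 0.0, 3.0, 4.5):
    assert math.isclose(g(x), reflected_then_shifted(x))

# the general shift identity: if g(x) = f(x - 3), then g(x + 3) = f(x)
g2 = lambda x: f(x - 3)
for x in (-1.0, 0.0, 2.5):
    assert math.isclose(g2(x + 3), f(x))
```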
Revised upon reading the OP's comments
Your specific questions:
1. There is no other function like that. Only the exponential satisfies $f'=f$ nontrivially (of course $f=0$ also works, and more generally every solution has the form $f=ce^x$).
2. You are confusing exponents with repeated-differentiation notation. If $f=e^{zx}$, where $z$ is a constant, then $f'=ze^{zx}$ and $f''=z(ze^{zx})=z^2e^{zx}$: each differentiation brings down an additional factor of $z$ from the exponent and multiplies it by the existing function. That is why you get that pattern.
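To see the pattern symbolically, here is a quick check with sympy (my own sketch, not part of the original answer):

```python
import sympy as sp

x, z = sp.symbols('x z')
f = sp.exp(z * x)

# each differentiation with respect to x brings down one more factor of z
assert sp.simplify(sp.diff(f, x) - z * f) == 0
assert sp.simplify(sp.diff(f, x, 2) - z**2 * f) == 0
assert sp.simplify(sp.diff(f, x, 5) - z**5 * f) == 0
```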
After reading your comments, I think I have a better handle on where your confusion lies.
First, note that the exponential is a general solution only for linear differential equations with constant coefficients. If you have variable coefficients, then all bets are off, as you can get Bessel functions and other non-exponential solutions.
With that said, we are focusing on cases like the following: $af''+bf'+cf=0$
Now, what is this equation telling us? It says the left-hand side (LHS) must be zero: the weighted sum of the function and its first two derivatives must equal $0$ everywhere $f$ is defined. Therefore, the derivatives of $f$ must be expressible as linear combinations of each other, so that the terms can cancel for all values of $x$.
You can see that by using the above equation to solve for $f$ or one of its two derivatives in terms of the other two:
$f=-\frac{1}{c}(af''+bf')$
$f'=-\frac{1}{b}(af''+cf)$ and
$f''=-\frac{1}{a}(bf'+cf)$
As you can see, each derivative is equal to a linear combination of the other two, hence all must be expressible as linear combinations of some set of functions. If that were not the case, then there would be some values of $x$ where the above equation takes a non-zero value. As an analogy, think of the following very simple equation:
$x^2 + 3x + af(x) = 0$, where we want to solve for $f$. By simple algebra (one step!) we know $f$ must be expressed in terms of $x^2$ and $x$; otherwise there will be at least one $x$ where the left-hand side is non-zero. For example, if $f(x) = x^3 + x^2 + x$, there is no value of $a$ that satisfies the equation for all $x$.
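The analogy can be checked symbolically; a small sketch with sympy (variable names are my own):

```python
import sympy as sp

x, a = sp.symbols('x a')
f = x**3 + x**2 + x
expr = x**2 + 3*x + a * f   # = a*x^3 + (a+1)*x^2 + (a+3)*x

# for expr to vanish for all x, every coefficient must be zero;
# the resulting conditions a = 0, a = -1, a = -3 are inconsistent
coeffs = [c for c in sp.Poly(expr, x).all_coeffs() if c != 0]
assert sp.solve(coeffs, a) == []
```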
Likewise, as we saw above, linear differential equations with constant coefficients will equal $0$ for all $x$ only if the function and its derivatives can be expressed as linear combinations of a finite number of functions, which allows us to select coefficient values so that all the derivatives cancel out.
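This collapse is easy to verify for an exponential ansatz $f=e^{rx}$; here is a sympy sketch (symbols mine, assuming the equation $af''+bf'+cf=0$ from above):

```python
import sympy as sp

x, r, a, b, c = sp.symbols('x r a b c')
f = sp.exp(r * x)

# every derivative of e^{rx} is a scalar multiple of e^{rx}, so the
# LHS collapses to (a r^2 + b r + c) e^{rx} -- the characteristic polynomial
lhs = a * sp.diff(f, x, 2) + b * sp.diff(f, x) + c * f
assert sp.simplify(lhs - (a * r**2 + b * r + c) * f) == 0

# hence choosing r as a root of a r^2 + b r + c = 0 kills the LHS for all x
assert sp.simplify(lhs.subs([(a, 1), (b, -3), (c, 2), (r, 2)])) == 0
```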
This can be extended to the case where there are "repeated roots" in the characteristic equation. Here, we have $N$ roots but only $N-r$ distinct exponential functions (where $r$ counts the repetitions). Hence, we are missing some functions in our "function space", which is similar to a vector space, except that the unit vectors $\overrightarrow{\mathbf{i}},\overrightarrow{\mathbf{j}},\overrightarrow{\mathbf{k}}$, etc., are replaced by abstract "directions" represented by functions, so that an abstract "vector" in this space is $\overrightarrow{\mathbf{f}}=\langle c_1,c_2,c_3,\dots,c_n\rangle\cdot\langle f_1,f_2,f_3,\dots,f_n\rangle$. Therefore, we have a situation where $r$ of the "vectors" are collinear, just as in matrix algebra, and so the system is underdetermined, just like when you have $N$ equations and $N+M$ unknowns: you need more independent directions to get a unique solution. That is why, for a root $k$ of multiplicity $M$, we supplement $e^{kx}$ with the functions $x^ne^{kx}$ for $n=1,\dots,M-1$. These functions have the unique property that their derivatives are expressible in terms of lower values of $n$, and hence remain expressible in this expanded "basis" of functions.
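For a concrete repeated-root check (a sketch with sympy, not from the original): a double root $k$ of the characteristic polynomial $(r-k)^2 = r^2-2kr+k^2$ corresponds to the equation $f''-2kf'+k^2f=0$, and both $e^{kx}$ and $xe^{kx}$ solve it:

```python
import sympy as sp

x, k = sp.symbols('x k')

# both basis functions for a double root k satisfy f'' - 2k f' + k^2 f = 0
for f in (sp.exp(k * x), x * sp.exp(k * x)):
    lhs = sp.diff(f, x, 2) - 2 * k * sp.diff(f, x) + k**2 * f
    assert sp.simplify(lhs) == 0

# differentiating x*e^{kx} stays inside the span of {e^{kx}, x*e^{kx}}
d = sp.diff(x * sp.exp(k * x), x)
assert sp.simplify(d - (sp.exp(k * x) + k * x * sp.exp(k * x))) == 0
```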
Overall, that is the key property of the exponential in linear differential equations with constant coefficients: derivatives of the exponential, or of $x^n$ times the exponential, produce functions that do not stray outside the set of functions forming the basis for the abstract space of solutions. Other functions will not do this, which means that the higher derivatives produce functions that are not present in the proposed solution, and therefore cannot be cancelled for all $x$. As an example, take $f(x) = \sum_{n=1}^{N} c_n x^{-n}$. No matter what you choose for the $c_n$, the derivatives will produce functions (ever higher negative powers) that are not among the terms of $f$, and hence there is no set of constant coefficients that allows the derivatives to cancel out to zero for all $x$.
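For contrast, differentiating a negative power always produces a strictly new negative power, so the family never closes; a one-line sympy check (my own illustration):

```python
import sympy as sp

x = sp.symbols('x')

# d/dx of x^{-n} is -n * x^{-(n+1)}: a power not among x^{-1}, ..., x^{-n}
for n in range(1, 5):
    assert sp.diff(x**-n, x) == -n * x**-(n + 1)
```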
Hope that was a bit clearer than before.
Best Answer
There is a completely satisfying abstract explanation of why the exponential function shows up so much but it requires some familiarity with linear algebra. Once you understand that language, the explanation is: $e^{\lambda x}$ is the unique (up to scale) eigenvector for the differentiation operator $\frac{d}{dx}$ with eigenvalue $\lambda$. This says concretely that $e^{\lambda x}$ is the unique (up to scale) solution of
$$\frac{d}{dx} y(x) = \lambda y(x).$$
This turns out to completely explain the importance of exponentials for solving linear homogeneous constant-coefficient ODEs; abstractly and in one sentence, it's because exponentials diagonalize any differential operator which is a polynomial in $\frac{d}{dx}$. But one has to know quite a lot of linear algebra to make sense of this. For that you can consult any textbook on linear algebra; Axler's Linear Algebra Done Right might be a good place to start.
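As a sanity check of the eigenvalue equation above, sympy's `dsolve` recovers the exponential (a sketch; the symbol `lam` stands in for $\lambda$):

```python
import sympy as sp

x, lam = sp.symbols('x lam')
y = sp.Function('y')

# solve y'(x) = lam * y(x); the general solution is C1 * exp(lam*x)
sol = sp.dsolve(sp.Eq(y(x).diff(x), lam * y(x)), y(x))
assert sol.rhs.has(sp.exp(lam * x))

# and the solution indeed satisfies the eigenvalue relation
assert sp.simplify(sp.diff(sol.rhs, x) - lam * sol.rhs) == 0
```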