This is a common problem with Leibniz's notation; people often treat the "$dx$" as something that can be moved around as though it were a variable, and $\frac{dy}{dx}$ as a fraction, rather than as a symbol denoting one singular quantity. The other issue I have with this is the (arguably) premature use of $\int f(x)\,dx$ to denote antiderivatives (i.e. indefinite integrals) before the notion of $\int_{a}^{b}f(x)\,dx$ is introduced as the limit of Riemann sums (i.e. the definite integral). These similar notations are used to denote seemingly disparate ideas in calculus that are only truly reconciled through the Fundamental Theorem(s) of Calculus. That being said, I'm not surprised you're a bit confused. Let me try to clarify things a bit more...
Derivatives:
First off, let's start with the derivative. This is commonly referred to as an "instantaneous rate of change", which is a complete oxymoron. It is actually the limit of the average rate of change, as taken over ever-shrinking intervals of "time" (or position, etc.) Formally, we say the derivative of the function $f$ at the value $a$ is the limit
$$
f'(a) := \lim_{x\to a}\frac{f(x)-f(a)}{x-a},
$$
and is notated $f'(a)$ for convenience. Since the function $f$ depends on a variable (here we call that variable $x$), the Leibniz notation for the derivative at $x=a$ is written
$$
\frac{df}{dx}(a).
$$
The "$df$" part is to tell the reader that the function being derived is $f$; while the "$dx$" part is to remind us what variable the derivative is being taken with respect to -- in this case $x$. The "$(a)$" part is telling you that the derivative is being taken at the real number $a$, in the domain of $f$.
Since the derivative can (usually) be taken at many different values of the domain, we naturally extend the idea of the derivative at a point to the notion of the derivative as a function. This gives rise to the notation
$$
f'(x) \quad \text{or} \quad \frac{df}{dx}
$$
which is used to represent a new function whose value at $x$ is given by the derivative of $f$ at the point $x$; in this way we derive (in the English sense of the word) an entirely new function from the old one.
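To make the limit concrete, here is a small numerical sketch (Python is just my choice of illustration; the function $f(x) = x^2$ and the shrinking step sizes are assumptions for the example), approximating $f'(3)$ by difference quotients:

```python
# Approximate f'(3) for f(x) = x**2 via the limit definition:
# f'(a) = lim_{x -> a} (f(x) - f(a)) / (x - a).
def f(x):
    return x ** 2

a = 3.0
for h in (0.1, 0.01, 0.001, 0.0001):
    quotient = (f(a + h) - f(a)) / h
    print(h, quotient)  # quotients approach f'(3) = 6 as h shrinks
```

For this particular $f$, each quotient works out to essentially $6 + h$, so the convergence to $6$ is visible directly in the output.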
You also might occasionally see "operator" notation thrown around, which usually looks like $\frac{d}{dx}(f)$. Again, "$d$" and "$dx$" are not to be thought of as separate pieces of a fraction, but rather $\frac{d}{dx}$ is an operation (much like squaring or square-rooting) to be performed on a function. In this way you'd see equations written like
$$
\frac{d}{dx}(\sin x) = \cos x.
$$
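The operator point of view can even be mirrored in code (a rough numerical sketch of my own, not a standard library facility): $\frac{d}{dx}$ takes in a function and hands back a new function, here approximated with a symmetric difference quotient.

```python
import math

# A numerical stand-in for the operator d/dx: it accepts a function
# and returns a new (approximate) derivative function.
def d_dx(func, h=1e-6):
    def derivative(x):
        return (func(x + h) - func(x - h)) / (2 * h)
    return derivative

sin_prime = d_dx(math.sin)
print(sin_prime(1.2), math.cos(1.2))  # nearly equal: d/dx (sin x) = cos x
```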
Note: The common notation of $\frac{dy}{dx}$ is nothing special either. It means the same thing as $\frac{df}{dx}$, but with the added confusion of identifying the $y$ variable of the coordinate plane with the function $f(x)$, via the graph $y = f(x)$. I prefer to leave $y$'s out of the picture unless we are explicitly graphing something.
Antiderivatives:
The notation of
$$
\int f(x)\,dx
$$
is a bit awkward, especially when you do not yet have the connections between definite and indefinite integrals. You should interpret these symbols as standing in for a new function (much like how $\frac{df}{dx}$ does) defined in terms of $x$ with the following property:
$$
F(x) = \int f(x)\,dx \iff F'(x) = f(x).
$$
That is, if you take the derivative of $F(x)$ as a function, then you get back the original function $f(x)$. Hence the more appropriate name of "antiderivative".
One curious property of antiderivatives is that they are not unique, unlike derivatives. This is because if $F'(x) = f(x)$, then $\frac{d}{dx}(F(x)+C) = f(x)$ as well, for any constant $C$.
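A quick numerical illustration of this (a Python sketch; the choices $f(x) = x^2$, $C = 5$, and the evaluation point are mine): shifting an antiderivative by a constant leaves its difference quotients unchanged.

```python
# F and G = F + 5 are both antiderivatives of f(x) = x**2;
# their symmetric difference quotients agree.
def F(x):
    return x ** 3 / 3

def G(x):
    return F(x) + 5  # shifted by the constant C = 5

h = 1e-6
x0 = 2.0
dF = (F(x0 + h) - F(x0 - h)) / (2 * h)
dG = (G(x0 + h) - G(x0 - h)) / (2 * h)
print(dF, dG)  # both close to f(2) = 2**2 = 4
```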
Again, the "$dx$" part of the antiderivative isn't really a "part" at all. It is simply there to tell the reader what variable is being "anti-differentiated". The elongated "S" part has no place in our discussion before we've covered the connection between definite integrals (a.k.a. areas under the graphs of functions) and indefinite integrals (a.k.a. antiderivatives). Sadly it is the conventional symbol, however, and so we will need to deal with it whether we want to or not.
Chain Rule:
When you have a composition of functions (one function shoved inside of another one) you need the chain rule to evaluate derivatives. Notationally, we write
$$
h(x) = (f\circ g)(x), \quad \text{if }\, h(x) = f(g(x)).
$$
The chain rule then tells us
\begin{align}
h'(x) &= (f'\circ g)(x) \cdot g'(x) \\
&= f'(g(x))\cdot g'(x).
\end{align}
In words: if $h$ is the composition of $f$ and $g$, then the derivative of $h$ is the derivative of $f$, composed with $g$, and then multiplied by the derivative of $g$.
A quick example of the chain rule can be demonstrated using nothing more than the "power rule" of differentiation for polynomials:
\begin{align}
\frac{d}{dx}(x^6) &= \frac{d}{dx}((x^2)^3)\\
&= \frac{d}{dx}(f(g(x))), \quad \text{where $f(x) = x^3$ and $g(x) = x^2$}\\
&= 3(x^2)^2\cdot(2x), \quad \text{where $f'(x) = 3x^2$ and $g'(x) = 2x$}\\
&= (3\cdot2)x^4\cdot x \\
&= 6 x^5.
\end{align}
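The same computation can be sanity-checked numerically (a Python sketch; the evaluation point $x_0 = 1.5$ is arbitrary): a symmetric difference quotient of $x \mapsto (x^2)^3$ should match $3(x^2)^2 \cdot 2x = 6x^5$.

```python
# Check the chain-rule result d/dx (x**2)**3 = 6 x**5 at x0 = 1.5.
def h_func(x):
    return (x ** 2) ** 3  # x**6 written as a composition

x0 = 1.5
step = 1e-6
numeric = (h_func(x0 + step) - h_func(x0 - step)) / (2 * step)
chain_rule = 3 * (x0 ** 2) ** 2 * (2 * x0)  # f'(g(x0)) * g'(x0)
print(numeric, chain_rule)  # both close to 6 * 1.5**5 = 45.5625
```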
In Leibniz notation this would be written as
$$
\frac{dh}{dx} = \frac{d(f\circ g)}{dx} = \frac{df}{dx}(g(x))\cdot \frac{dg}{dx}(x),
$$
where one must be careful to indicate that the derivative of $f$ is being evaluated at the number $g(x)$ and NOT the number $x$. Remember,
$$
\frac{df}{dx}(g(x))\cdot \frac{dg}{dx}(x) \neq \frac{df}{dx} \cdot \frac{dg}{dx}.
$$
Since, in a composition of functions, the entire function of $g$ is replacing every instance of the variable $x$ in the function $f$, it's not uncommon to "abuse notation" and simply write $f(g)$, where it is implicitly understood that since $g$ is a function of $x$, then the composition must be a function of $x$ as well. In this way, the Leibniz notation can be abused as well, by writing
$$
\frac{d(f\circ g)}{dx} = \frac{df}{dg}\cdot \frac{dg}{dx},
$$
where $\frac{df}{dg}$ is understood to mean "take the derivative of $f$ as though it were a function of a variable called $g$". It should now be clear why this suggestive "fractional" notation has prevailed for so long: it's as if the $dg$'s "cancel" via multiplication, leaving only the desired $\frac{df}{dx}$. Again, this is not what is actually happening, but it is the reason the notation is the way it is.
$u$-Substitution:
$u$-substitution is merely the reverse of the chain rule, the way antiderivatives are the reverse of derivatives. Using the conventional "integral" notation for antiderivatives, we simply look to the previous section to see how to reverse the chain rule:
$$
\int (f\circ g)'(x) \,dx = (f\circ g)(x) + C.
$$
The key idea when using $u$-substitution to integrate (i.e. anti-differentiate) is to isolate a part of the function (the "$u$" part) that:
- is "inside" a separate function (like $f(u(x))$), and
- whose derivative, $u'(x)$, appears as a factor multiplying the rest of the integrand.
It is nothing more than simply calling $g$ from the chain rule by a different name, $u$:
$$
\int \big( f'(u(x))\cdot u'(x) \big) \,dx = f(u(x)) + C.
$$
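As a numerical sanity check of this formula (a Python sketch; the choices $f(u) = \sin u$, $u(x) = x^2$, and the trapezoidal sum are mine), the definite version over $[0, 1]$ says $\int_0^1 \cos(x^2)\cdot 2x\,dx = \sin(1^2) - \sin(0^2) = \sin 1$:

```python
import math

# Integrand f'(u(x)) * u'(x) with f(u) = sin(u) and u(x) = x**2.
def integrand(x):
    return math.cos(x ** 2) * 2 * x

# Trapezoidal rule on [0, 1].
n = 100_000
a, b = 0.0, 1.0
width = (b - a) / n
total = 0.5 * (integrand(a) + integrand(b))
total += sum(integrand(a + i * width) for i in range(1, n))
approx = total * width
print(approx, math.sin(1.0))  # nearly equal
```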
The idea of replacing "$dx$" with "$du$" is again just an abuse of notation.
By thinking of the composition $f(u(x))$ as just $f(u)$ -- that is, just treat $f$ as a function of a variable named $u$, rather than of a variable named $x$ -- we can abuse the Leibniz notation to write the equation above as
$$
\int \frac{df}{du} \cdot \frac{du}{dx} \,dx = f(u(x)) + C,\tag{*}
$$
where again $\frac{df}{du}$ is interpreted to be the derivative of $f$, treated as a function of $u$. But notice that if $f$ were simply a function of a variable called $u$ (and there were no mention whatsoever of $x$'s) then we would have
$$
\int \frac{df}{du} \,du = f(u) + C.
$$
And so, yet again, we are tempted to abuse the notation even further in (*) by "canceling" the "$dx$" in the derivative (thinking of it like a fraction) with the "$dx$" in the integral (whatever that means) to "leave behind" the "$du$" part.
At face value this is patently absurd: we are pulling apart things that were never disjointed in the first place, and canceling odd symbols as though they were ordinary numbers. However, as a mnemonic the Leibniz notation gives quite a concise way of remembering (not explaining) how these formulas work.
Best Answer
Breaking up the integral is a good way to start. Proceed as follows:
$$\int_{0}^{3}f(x)\,dx=\int_{0}^{1}f(x)\,dx+\int_{1}^{3}f(x)\,dx=1+\int_{1}^{3}f(1+(x-1))\,dx$$
$$=1+\int_{0}^{2}f(1+u)\,du$$
by the change of variables theorem and by identity $1$. Then we have
$$=1+\frac{1}{2}\int_{0}^{2}f(u)\,du=1+\frac{1}{2}\int_{0}^{1}f(u)\,du+\frac{1}{2}\int_{1}^{2}f(u)\,du=\frac{3}{2}+\frac{1}{2}\int_{1}^{2}f(1+(u-1))\,du$$
by identities $1$ and $3$. Now, by the change of variables theorem and identities $1$ and $3$, we get
$$\frac{3}{2}+\frac{1}{2}\int_{0}^{1}f(1+z)\,dz=\frac{3}{2}+\frac{1}{4}=\frac{7}{4}.$$