First of all we will show that $\int_A f \, d\lambda = 0$ for all closed sets $A \subset [0, 1]$.
Consider the function $h: x \mapsto d(x, A) = \inf \limits_{y \in A} d(x, y)$. It is easy to prove that $h$ is (Lipschitz-)continuous and that $h(x) = 0 \iff x \in A$ (for the "$\Rightarrow$" direction you actually need that $A$ is closed). Now define a sequence of (uniformly) bounded functions via $g_n(x) = \min\{1, n h(x)\}$. It is easy to verify that $g_n$ converges pointwise to the indicator function of $A^c$. Since the $g_n$ are uniformly bounded and $f$ is integrable, the dominated convergence theorem implies
$$\int_A f \, d\lambda = \int_{[0, 1]} f \, d\lambda - \int_{[0, 1]} f I_{A^c} \, d\lambda = -\lim \limits_{n \to \infty} \int_{[0, 1]} f g_n \, d\lambda = 0,$$
where $\int_{[0,1]} f \, d\lambda = 0$ and each $\int_{[0,1]} f g_n \, d\lambda = 0$ by the hypothesis applied to the continuous functions $g \equiv 1$ and $g_n$.
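To spell out the pointwise convergence: if $x \in A$ then $h(x) = 0$, so $g_n(x) = 0$ for all $n$; if $x \notin A$ then $h(x) > 0$, so
$$g_n(x) = \min\{1, n h(x)\} = 1 \quad \text{for all } n \ge 1/h(x).$$
Hence $g_n \to I_{A^c}$ pointwise.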
Now consider the set $S = \{A \subset [0, 1] \mid A \text{ is measurable and }\int_A f \, d\lambda = 0\}$. By the dominated convergence theorem, $S$ is a $\lambda$-system: it contains $[0,1]$, is closed under complements, and is closed under increasing countable unions. It also contains all closed sets, which form a $\pi$-system generating $\mathcal{B}([0, 1])$, so Dynkin's $\pi$-$\lambda$ theorem gives $S = \mathcal{B}([0, 1])$. But now we can simply choose $A = \{f > 0\}$ and we get $\int_A f \, d\lambda = 0$, which implies that $f = 0$ almost surely on $A$. Choosing $B = \{f < 0\}$ implies $f = 0$ almost surely on $B$, and both together imply $f = 0$ almost surely.
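For the last step, one way to see that $\int_A f \, d\lambda = 0$ with $f > 0$ on $A = \{f > 0\}$ forces $\lambda(A) = 0$: each set $\{f > 1/n\}$ is Borel, hence lies in $S$, so
$$\lambda(\{f > 1/n\}) \le n \int_{\{f > 1/n\}} f \, d\lambda = 0 \quad \text{for every } n, \qquad \lambda(\{f > 0\}) = \lim_{n \to \infty} \lambda(\{f > 1/n\}) = 0.$$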
Assume
- $g$ is absolutely continuous on $[a,b]$ and strictly increasing. Since $g$ is continuous and increasing, its image is the interval $[c,d] = [g(a), g(b)]$, so that $g:[a,b]\to [c,d]$.
- Assume that $f:[c,d]\to \mathbb R$ is absolutely continuous. That means the composition $f\circ g:[a,b]\to \mathbb R$ is defined. (You won't need that $f$ is absolutely continuous on all of $(-\infty,\infty)$, which is a stronger statement.)
- Now do your thing. Let $\epsilon=\epsilon_1>0$ and use the absolute continuity of $f$ (the definition is recalled after this list) to get your $\delta_1$, etc.
- Now do it for $g$. Using $\epsilon_2=\delta_1$, find a $\delta_2$, etc.
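For reference, here is the definition both steps invoke, with the quantifiers explicit: $f$ is absolutely continuous on $[c,d]$ iff for every $\epsilon>0$ there is a $\delta>0$ such that for every finite collection of nonoverlapping intervals $(x_i,y_i)\subset[c,d]$,
$$\sum_i (y_i-x_i) < \delta \implies \sum_i |f(y_i)-f(x_i)| < \epsilon.$$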
I think you have the idea here. You want to end up with something like this:
$$\sum (b_i-a_i) <\delta_2 \implies \sum [g(b_i)-g(a_i)] < \delta_1$$
Write $[c_i,d_i] = [g(a_i),g(b_i)]$; then
$$\sum (d_i-c_i) <\delta_1 \implies \sum |f(d_i)-f(c_i)| < \epsilon$$
so that
$$\sum (b_i-a_i) <\delta_2 \implies
\sum |f\circ g(b_i)-f\circ g(a_i)| < \epsilon$$
That's the vague goal. Just write it up formally as a proof. Make it very clear where you are using the fact that $g$ is strictly increasing.
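One remark that may help with that last instruction: if the intervals $(a_i,b_i)$ are nonoverlapping, then because $g$ is strictly increasing (hence injective and order-preserving), the image intervals
$$(c_i,d_i) = (g(a_i),g(b_i))$$
are again nonoverlapping subintervals of $[c,d]$, which is exactly the hypothesis needed to apply the absolute continuity of $f$ in the second implication.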
Now there is a lot more to learn about compositions of absolutely continuous functions, as this special case is very elementary. See S. Saks, Theory of the Integral, (1937), pp. 286–289.
https://archive.org/details/theoryoftheinteg032192mbp/page/n9/mode/2up
It was known for a long time that the composition of two absolutely continuous functions need not be absolutely continuous, except in special cases like this.
Two famous Russian mathematicians, Nina Bary and D. Menchoff, in the early 20th century completely solved the problem of precisely what functions can be expressed as the composition of two absolutely continuous functions.
My main motivation for answering this simple question is to encourage you to read Saks' excellent account of this interesting research. I would guess most analysis students may never have heard of it. Most know about Lusin's condition (N), but probably not about Banach's conditions (T${}_1$), (T${}_2$), and (S).
Who are Nina Bary and D. Menchoff? Here is a photo of the Moscow State University mathematicians from the 1950s. The marvelous Nina sits between a rather sour-looking Menchoff and an overly cheerful Tolstov. Tolstov, in spite of looking like a KGB colonel ordering a group photo, was a good mathematician himself.
Best Answer
Of course this is trivial given the theorem that $f$ is the integral of $f'$. More interesting is proving it directly from the definition. Hint for that:
Let $\epsilon>0$. Choose $\delta>0$ as in the definition of "$f$ is absolutely continuous". By inner regularity of Lebesgue measure (using that $f'=0$ almost everywhere), choose a compact set $K\subset[a,b]$ with $m([a,b]\setminus K)<\delta$ such that $f'=0$ at every point of $K$.
Now if $x\in K$ then there is an open interval $(a_x,b_x)$ containing $x$ with $$|f(b_x)-f(a_x)|<\epsilon(b_x-a_x).$$
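To see why such an interval exists: since $f'(x)=0$, there is an $h>0$ with $|f(y)-f(x)|<\tfrac{\epsilon}{2}|y-x|$ whenever $0<|y-x|<h$; then for any $a_x<x<b_x$ with $b_x-a_x<h$,
$$|f(b_x)-f(a_x)|\le|f(b_x)-f(x)|+|f(x)-f(a_x)|<\tfrac{\epsilon}{2}(b_x-x)+\tfrac{\epsilon}{2}(x-a_x)=\tfrac{\epsilon}{2}(b_x-a_x)<\epsilon(b_x-a_x).$$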
$K$ is covered by finitely many such intervals. With a little finagling you can show that $K$ is covered by a finite collection of disjoint such intervals, possibly closed or half-open. So $$\sum|f(b_j)-f(a_j)|<\epsilon\sum(b_j-a_j)\le\epsilon(b-a).$$
But $[a,b]\setminus\bigcup(a_j,b_j)$ is a finite union of intervals $[\alpha_k,\beta_k]$ with $\sum(\beta_k-\alpha_k)<\delta$; hence the choice of $\delta$ shows that $$\sum|f(\beta_k)-f(\alpha_k)|<\epsilon.$$
Putting it all together, the triangle inequality shows that $$|f(b)-f(a)|<\epsilon(1+b-a).$$
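Spelled out: the intervals $[a_j,b_j]$ and $[\alpha_k,\beta_k]$ together tile $[a,b]$, so telescoping $f(b)-f(a)$ along their shared endpoints and applying the two estimates above gives
$$|f(b)-f(a)|\le\sum_j|f(b_j)-f(a_j)|+\sum_k|f(\beta_k)-f(\alpha_k)|<\epsilon(b-a)+\epsilon=\epsilon(1+b-a).$$
Since $\epsilon>0$ was arbitrary, $f(b)=f(a)$; running the same argument on any subinterval shows that $f$ is constant.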