The book you started reading is good. However, although it treats some of the more involved topics (such as a proof, developed through exercises, of Sharkovskii's theorem, and the stable and unstable manifold theorem), I still think it is too wordy at the initial stage and skips a few very relevant points later. As a first introduction it is perfectly fine, but you will probably want something more comprehensive afterwards.
There is another introductory book that is quite rigorous yet still accessible, and that is very patient in introducing the relevant concepts and notions: Hale and Kocak, Dynamics and Bifurcations. I think this book gives an ideal exposition to prepare for reading graduate texts. It does, however, assume prior exposure to differential equations (which is not a prerequisite of the book you are currently reading).
No. Any zero of $f(y)$ is an equilibrium, so the hypothesis that the only equilibria are $0$ and $1$ forces the continuous function $f(y)$ to keep a single sign on $(0, 1)$; here that sign is negative, so $f(y) < 0$ on $(0, 1)$. This in turn implies that for any $0 < \epsilon < 1/2$ we have $f(y) < 0$ on the closed interval
$I_\epsilon = [\epsilon, 1 - \epsilon], \tag 1$
and since $I_\epsilon$ is compact, $f(y)$ attains a global maximum $m < 0$ on $I_\epsilon$; then for
$y(t_0) = y_0 \in I_\epsilon, \tag 2$
$y(t)$ obeys
$\dot y = f(y) \le m < 0 \tag 3$
on $I_\epsilon$; therefore, $y(t)$ satisfying (3) will reach the value $\epsilon$ within time
$\Delta t = \displaystyle \int_{y_0}^\epsilon \dfrac{dy}{f(y)} = -\int_\epsilon^{y_0} \dfrac{dy}{f(y)} \le -\int_\epsilon^{y_0} \dfrac{dy}{m} = -\dfrac{y_0 - \epsilon}{m} = \dfrac{\epsilon - y_0}{m}; \tag 4$
since this holds for every $0 < \epsilon < 1/2$, a solution with $y_0 \in (0, 1)$ eventually falls below any given $\epsilon$; moreover, since $f < 0$ on all of $(0, 1)$, $y(t)$ is strictly decreasing there and, being unable to cross the equilibrium at $0$, remains below $\epsilon$ once it reaches it; but this implies
$\displaystyle \lim_{t \to \infty} y(t) = 0. \tag 5$
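As a numerical sanity check of (5) (not part of the proof), one can integrate $\dot y = f(y)$ for a sample $f$ with equilibria only at $0$ and $1$ and $f < 0$ on $(0, 1)$; the choice $f(y) = y(y - 1)$ below is an illustrative assumption, not taken from the problem:

```python
def f(y):
    return y * (y - 1.0)  # vanishes only at y = 0 and y = 1; negative on (0, 1)

def integrate(y0, dt=1e-3, t_max=50.0):
    """Forward-Euler integration of y' = f(y) starting from y0."""
    y = y0
    for _ in range(int(t_max / dt)):
        y += dt * f(y)
    return y

for y0 in (0.25, 0.5, 0.99):
    print(y0, "->", integrate(y0))  # each trajectory decays toward 0
```

Every starting point in $(0, 1)$, even one close to the equilibrium at $1$, is driven to $0$, as the argument above predicts.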
The result is false if we remove the condition that $f(y)$ have only two equilibria in $[0, 1]$; consider
$f(y) = y \left (y - 1 \right ) \left ( y - \dfrac{1}{2} \right )^2, \tag 6$
and set
$y(0) = \dfrac{3}{4}; \tag 7$
$f(y)$ satisfies the requisite criteria, but
$\displaystyle \lim_{t \to \infty} y(t) = \dfrac{1}{2}, \tag 8$
which may be proved in a manner similar to the above.
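The counterexample (6)–(8) can also be checked numerically; the trajectory from $y(0) = 3/4$ stalls at the extra double equilibrium $1/2$ instead of reaching $0$ (convergence is only algebraic because the root at $1/2$ is double, so a long integration time is needed):

```python
def f(y):
    return y * (y - 1.0) * (y - 0.5) ** 2  # equilibria at 0, 1/2 (double), 1

def integrate(y0, dt=1e-2, t_max=2000.0):
    """Forward-Euler integration of y' = f(y) starting from y0."""
    y = y0
    for _ in range(int(t_max / dt)):
        y += dt * f(y)
    return y

print(integrate(0.75))  # slightly above 0.5, and nowhere near 0
```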
Best Answer
My favorite dynamical-systems proof is that almost every $x \in [0,1]$ is normal in base $2$. That is, if $x = .a_1a_2\dots$ in binary expansion (which is unique for all $x$ outside a set of measure $0$), then $$\lim_{N \to \infty} \frac{\#\{n \le N : a_n = 1\}}{N} = \frac{1}{2}.$$ The proof is to consider the map $T: [0,1] \to [0,1]$ given by $Tx = 2x \pmod{1}$ together with $f(x) = \lfloor 2x \rfloor$. Then $a_1 = f(x)$, $a_2 = f(Tx)$, $a_3 = f(T^2x)$, etc. One can check that $T$ is ergodic with respect to Lebesgue measure. Then, by the pointwise ergodic theorem, for a.e. $x \in [0,1]$, $$\lim_{N \to \infty} \frac{1}{N}\sum_{n=0}^{N-1} f(T^nx) = \int_0^1 f(x)\,dx = \frac{1}{2}.$$ This generalizes easily to any other base $b$: taking $f(x) := 1_{\{j\}}(\lfloor bx \rfloor)$ shows that each digit $j \in \{0, \dots, b - 1\}$ occurs a proportion $\frac{1}{b}$ of the time.
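As a quick numerical illustration of the statement (not of the ergodic-theoretic proof), one can iterate the doubling map and count digits for a randomly chosen $x$. Exact rational arithmetic is used because plain floats lose all digit information after about 53 doublings; the sample size and seed below are arbitrary choices:

```python
from fractions import Fraction
import random

def digit_frequency(x, n_digits):
    """Fraction of 1s among the first n_digits binary digits of x in [0, 1)."""
    ones = 0
    for _ in range(n_digits):
        x = 2 * x          # apply the doubling map T (before reducing mod 1)
        if x >= 1:         # the next digit a_n = floor(2x) equals 1
            ones += 1
            x -= 1         # reduce mod 1
    return ones / n_digits

random.seed(0)
n = 4096
x = Fraction(random.getrandbits(n), 2 ** n)  # x with n uniformly random binary digits
print(digit_frequency(x, n))  # close to 1/2
```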