Of course $\Psi(x,t)\to\frac{\partial\varphi}{\partial t}(x,s)$ as $t \to s$ uniformly on $[a,b]$. You have shown that there exists $u$ between $t$ and $s$ such that
$$\left|\Psi(x,t)-\frac{\partial\varphi}{\partial t}(x,s) \right|=\left|\frac{\partial\varphi}{\partial t}(x,u)-\frac{\partial\varphi}{\partial t}(x,s) \right|$$
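(For readers coming to this midway: I'm assuming, as in the usual setup for this exercise, that $f(t) = \int_a^b \varphi(x,t)\,dx$ and that $\Psi$ is the difference quotient
$$\Psi(x,t) = \frac{\varphi(x,t)-\varphi(x,s)}{t-s},$$
so the displayed equality is just the Mean Value Theorem applied to $t \mapsto \varphi(x,t)$ for each fixed $x$.)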
Since $|u-s| \leqslant |t-s|$, it follows from condition (3) that, given $\varepsilon > 0$, there exists $\delta > 0$ such that if $|t-s| < \delta$, then for all $x \in [a,b]$ we have
$$\left|\Psi(x,t)-\frac{\partial\varphi}{\partial t}(x,s) \right|=\left|\frac{\partial\varphi}{\partial t}(x,u)-\frac{\partial\varphi}{\partial t}(x,s) \right|< \varepsilon$$
From here, proving that interchanging the derivative and the integral is valid is straightforward, with no need to invoke any machinery about uniformly convergent sequences. Indeed, when $|t-s| < \delta$ we have
$$\left|\frac{f(t)-f(s)}{t-s}- \int_a^b \frac{\partial\varphi}{\partial t}(x,s) \, dx\right| \leqslant \int_a^b \left|\frac{\partial\varphi}{\partial t}(x,u)-\frac{\partial\varphi}{\partial t}(x,s) \right| dx \leqslant \varepsilon (b-a),$$
and it follows that
$$f'(s) = \lim_{t \to s} \frac{f(t)-f(s)}{t-s} = \int_a^b \frac{\partial\varphi}{\partial t}(x,s) \, dx$$
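For completeness, the first inequality in the estimate above combines the (assumed) definition of $f$ with the standard bound $\left|\int_a^b g\right| \leqslant \int_a^b |g|$:
$$\frac{f(t)-f(s)}{t-s} = \int_a^b \Psi(x,t)\,dx, \qquad \left|\int_a^b \Psi(x,t)\,dx - \int_a^b \frac{\partial\varphi}{\partial t}(x,s)\,dx\right| \leqslant \int_a^b \left|\Psi(x,t)-\frac{\partial\varphi}{\partial t}(x,s)\right| dx.$$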
All of this depends on the fact that $x \mapsto \frac{\partial \varphi}{\partial t}(x,t)$ is Riemann integrable on $[a,b]$, which you are also asked to prove; I leave that part for you to consider.
Bravo on worrying about the details here.
Your proofs seem correct, though for the second one it might be better to treat the cases $b<a$ and $b>a$ separately.
Here are some equivalent arguments that you might find more intuitive:
We actually know that $f$ is decreasing on some interval $(a-\delta, a]$ to the left of $a$, with $a$ itself included, though Spivak doesn't explicitly say so.
So, for all $x$ in the open interval $(a-\delta, a)$, $f(x) > f(a)$.
Why?
Suppose instead $f(x_0) \leq f(a)$ for some $x_0$ in this interval.
Since $f$ is decreasing, there must be some $x_1$ in $(x_0, a)$ such that $f(x_1) < f(x_0) \leq f(a)$.
But then continuity of $f$ and the IVT tell us there must be some $x_2$ with
$$x_1 < x_2 < a \text{ and } f(x_1) < f(x_2) < f(a),$$
but this contradicts the fact that $f$ is decreasing to the left of $a$, since $x_1 < x_2$ yet $f(x_1) < f(x_2)$. (For example, the IVT gives some $x_2$ with $f(x_2) = \frac {f(x_1) + f(a)}{2}$.)
As such, we can safely say that $f$ takes on its minimum value on $(a-\delta, a]$ at $a$.
Similar arguments can apply to the right side interval.
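Putting the two one-sided statements together (and shrinking $\delta$ if necessary so that one value works on both sides), we get
$$f(x) > f(a) \quad \text{for all } x \in (a-\delta, a) \cup (a, a+\delta),$$
which is exactly the statement that $a$ is a (strict) local minimum point of $f$.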
Edit: We can generalize this idea as follows:
Theorem: If $f$ is continuous on $[a,b]$ and decreasing on $(a,b)$ then $f$ takes on its minimum value on $[a,b]$ at $b$ (and its maximum value at $a$).
The proof is very similar to what we did above. Furthermore, a nearly identical theorem applies to $f$ increasing.
Note: this result doesn't depend on $f$ being differentiable.
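A concrete illustration of that last remark (my own example, not from Spivak):
$$f(x) = \sqrt{1-x} \quad \text{on } [0,1]$$
is continuous on $[0,1]$ and decreasing on $(0,1)$, but not differentiable at $x = 1$; the theorem still gives its minimum at $b = 1$ and its maximum at $a = 0$.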
Edit: Another approach, similar to your second proof. We know there is some interval $[a-\delta, a+\delta]$ with $f'(x) < 0$ for all $x$ in the part of the interval to the left of $a$, and $f'(x) > 0$ for all $x$ in the part to the right of $a$.
We know $f$ takes on a minimum value at some $y$ in $[a-\delta, a+\delta]$, since $f$ is continuous on this closed interval.
$f'(a-\delta) < 0$ indicates that $f(a-\delta) > f(x)$ for all $x$ just right of $a-\delta$.
Similar considerations apply to the right endpoint, and taken together we see that neither endpoint can be the minimum of $f$ on $[a-\delta, a+\delta]$. Thus, $f$ must have a local minimum point somewhere in $(a-\delta, a+\delta)$.
Since $f$ is differentiable and has a local minimum somewhere in $(a-\delta, a+\delta)$, $f' = 0$ at this minimum. But $a$ is the only point with zero derivative on $(a-\delta, a+\delta)$, so $a$ must be the minimum point.
Edit 2
A third take, which is a slight modification of Spivak's Corollary 3:
Theorem: If $f$ is continuous on $[a,b]$ with $f'(x) < 0$ for all $x$ in $(a,b)$, $f$ is [strictly] decreasing on $[a,b]$.
Proof:
If $a \leq x < y \leq b$, the MVT gives
$$ \frac{f(y) - f(x)}{y-x} = f'(x_0) \text{ for some $x_0$ in $(x,y) \subseteq (a,b)$},$$
but $f'(x_0) < 0$ and $y - x > 0$, so we must have $f(y) < f(x)$.
Notice that the above theorem applies at the endpoints $a$ and $b$, even though we know nothing about $f'$ at those points (or even whether it exists there).
We can apply this theorem (and a nearly identical version for $f' > 0$) to the second derivative scenario to see that $a$ must be a minimum point.
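Spelling that application out (under the same setup as before, with $f'(a) = 0$, $f' < 0$ on $(a-\delta, a)$ and $f' > 0$ on $(a, a+\delta)$):
$$f \text{ is decreasing on } [a-\delta, a] \quad \text{and} \quad f \text{ is increasing on } [a, a+\delta],$$
so $f(x) \geqslant f(a)$ for every $x$ in $[a-\delta, a+\delta]$, i.e. $a$ is a local minimum point.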
Best Answer
Per the part in red:
We have $f''(c) > 0$, or
$$\lim_{x\to c} \frac{f'(x) - f'(c)}{x-c} = f''(c) > 0$$
Let's use $\frac{f''(c)}{2}$ as our $\varepsilon$. There is some $\delta > 0$ such that for all $x$,
$$0 < |x - c| < \delta \implies \left|\frac{f'(x) - f'(c)}{x-c} - f''(c)\right| < \frac{f''(c)}{2},$$ that is, $$-\frac{f''(c)}{2} < \frac{f'(x) - f'(c)}{x-c} - f''(c)< \frac{f''(c)}{2}, $$ and the left-hand inequality gives $$\frac{f'(x) - f'(c)}{x-c} > f''(c) - \frac{f''(c)}{2} = \frac{f''(c)}{2} > 0.$$
Since this last quotient is positive, for $x > c$ (where $x - c > 0$) the numerator must be positive, so $f'(x) > f'(c) = 0$.
Similarly, for $x < c$ we have $f'(x) < 0$.
Thus $f$ is decreasing on some interval just to the left of $c$ and increasing on some interval just to the right, and so $c$ is a local minimum.
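As a quick sanity check with a concrete function (my own example, not from the question): take
$$f(x) = x^2 + x^3, \qquad c = 0, \qquad f'(0) = 0, \qquad f''(0) = 2 > 0.$$
Then $f'(x) = 2x + 3x^2 = x(2 + 3x)$ is negative for $-\tfrac{2}{3} < x < 0$ and positive for $x > 0$, so $f$ decreases just to the left of $0$ and increases just to the right, and $0$ is indeed a local (but not global) minimum.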