Let me first say that I think Tobias Kienzler has done a great job of discussing the intuition behind your question in going from finite to infinite dimensions.
I'll, instead, attempt to address the mathematical content of Jackson's statements. My basic claim will be that
Whether you are working in finite or infinite dimension, writing the Schrodinger equation in a specific basis only involves making definitions.
To see this clearly without having to worry about possible mathematical subtleties, let's first consider
Finite dimension
In this case, we can be certain that there exists an orthonormal basis $\{|n\rangle\}_{n=1, \dots N}$ for the Hilbert space $\mathcal H$. Now for any state $|\psi(t)\rangle$ we define the so-called matrix elements of the state and the Hamiltonian as follows:
\begin{align}
\psi_n(t) = \langle n|\psi(t)\rangle, \qquad H_{mn} = \langle m|H|n\rangle
\end{align}
Now take the inner product of both sides of the Schrodinger equation with $\langle n|$, and use linearity of the inner product and of the derivative to write
\begin{align}
\langle n|\frac{d}{dt}|\psi(t)\rangle=\frac{d}{dt}\langle n|\psi(t)\rangle=\frac{d\psi_n}{dt}(t)
\end{align}
The fact that our basis is orthonormal tells us that we have the resolution of the identity
\begin{align}
I = \sum_{m=1}^N|m\rangle\langle m|
\end{align}
So that after taking the inner product with $\langle n|$, the right-hand side of the Schrodinger equation can be written as follows:
\begin{align}
\langle n|H|\psi(t)\rangle
= \sum_{m=1}^N\langle n|H|m\rangle\langle m|\psi(t)\rangle
= \sum_{m=1}^N H_{nm}\psi_m(t)
\end{align}
Putting this all together gives the Schrodinger equation in the $\{|n\rangle\}$ basis:
\begin{align}
i\hbar\frac{d\psi_n}{dt}(t) = \sum_{m=1}^NH_{nm}\psi_m(t)
\end{align}
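As a quick numerical sanity check of this component equation, here is a minimal sketch (assuming units with $\hbar=1$ and a randomly chosen $3\times 3$ Hermitian matrix standing in for $H_{nm}$):

```python
import numpy as np

hbar = 1.0  # units with hbar = 1 (an assumption for this sketch)

# A random 3x3 Hermitian matrix stands in for the matrix elements H_{nm}
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (M + M.conj().T) / 2

psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)  # components psi_n(0)

def psi(t):
    """psi(t) = exp(-i H t / hbar) psi(0), via the spectral decomposition of H."""
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t / hbar) * (v.conj().T @ psi0))

# Verify i*hbar d(psi_n)/dt = sum_m H_{nm} psi_m(t) with a central difference
t, dt = 0.7, 1e-6
lhs = 1j * hbar * (psi(t + dt) - psi(t - dt)) / (2 * dt)
rhs = H @ psi(t)
print(np.max(np.abs(lhs - rhs)))  # small (limited by the finite difference)
```

The propagator here is built from the eigendecomposition rather than a series expansion, so the check is limited only by the finite-difference step.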
Infinite dimension
With an infinite number of dimensions, we can choose to write the Schrodinger equation either in a discrete (countable) basis for the Hilbert space $\mathcal H$ (such a basis always exists, since the Hilbert spaces of quantum mechanics are separable and therefore possess countable orthonormal bases), or in a continuous "basis" like the position "basis". I put basis in quotes here because the position eigenkets $|x\rangle$ are not actually elements of the Hilbert space: their would-be wavefunctions are Dirac deltas, which are not square-integrable.
In the case of a countable orthonormal basis, the computation performed above for writing the Schrodinger equation in a basis carries through in precisely the same way, with $N$ replaced by $\infty$ everywhere.
In the case of the "basis" $\{|x\rangle\}_{x\in\mathbb R}$, the computation above carries through in almost exactly the same way (as your question essentially shows), except the definitions we made in the beginning change slightly. In particular, we define functions $\psi:\mathbb R^2\to\mathbb C$ and $h:\mathbb R^2\to\mathbb C$ by
\begin{align}
\psi(x,t) = \langle x|\psi(t)\rangle, \qquad h(x,x') = \langle x|H|x'\rangle
\end{align}
Then the position space representation of the Schrodinger equation follows by taking the inner product of both sides of the equation with $\langle x|$ and using the resolution of the identity
\begin{align}
I = \int_{-\infty}^\infty dx'\, |x'\rangle\langle x'|
\end{align}
The only real mathematical subtleties you have to worry about in this case are exactly what sorts of objects the symbols $|x\rangle$ represent (since they are not in the Hilbert space) and in what sense one can write a resolution of the identity for such objects. But once you have taken care of these issues, the conversion of the Schrodinger equation into its expression in a particular "representation" is just a matter of making the appropriate definitions.
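To make the continuous resolution of the identity concrete, one can discretize it: on a grid, $\int dx'\,|x'\rangle\langle x'|$ becomes a Riemann sum with weight $\Delta x$. In the sketch below the Gaussian kernel $h(x,x')$ is a purely hypothetical choice for illustration:

```python
import numpy as np

# Discretize the position "basis": integrals over x' become Riemann sums with weight dx
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# A normalized Gaussian wave packet psi(x) = <x|psi(t)> at some fixed time
psi = (1 / np.pi) ** 0.25 * np.exp(-x**2 / 2)

# <psi|psi> = int dx |<x|psi>|^2: the discretized resolution of the identity
norm = np.sum(np.abs(psi) ** 2) * dx
print(norm)  # ~ 1.0

# Inserting I = sum_j dx |x_j><x_j| turns <psi|H|psi> into a double sum
# over the kernel h(x, x'); the Gaussian kernel here is purely hypothetical.
h = np.exp(-(x[:, None] - x[None, :]) ** 2)
expval = np.real(np.sum(psi.conj()[:, None] * h * psi[None, :]) * dx**2)
print(expval)  # analytically 2*sqrt(pi/5) for this kernel and packet
```

The point is only that, once a kernel $h(x,x')$ is defined, the "matrix multiplication" structure of the finite-dimensional case survives as an integral.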
By the simple form of equation (1) you wrote down, Allan really meant
$$ \frac{\partial}{\partial t} \Psi(x,t)|_{t=0} = \frac{E}{i\hbar}\Psi(x,0) $$
He just used notation in which $t=0$ is substituted from the beginning, but he clearly did mean that $\Psi(x,t)$ is first considered as a general function of $t$, then differentiated, and only then is $t=0$ substituted.
This equation says that the time derivative of $\Psi(x,t)$ at $t=0$ is proportional to the same wave function. By itself, it does not imply that $\Psi(x,t)$ for an arbitrary later $t$ will be given by equation (2): if we only constrain the derivative at one moment $t=0$, the wave function may do whatever it wants at later (or earlier) moments $t$.
However, we may generalize (1) to an arbitrary moment $t$, which is what you wrote down:
$$ \frac{\partial}{\partial t} \Psi(x,t) = \frac{E}{i\hbar}\Psi(x,t) $$
and this equation does imply (2). If the $t$-derivative of $\Psi(x,t)$ is proportional to the same $\Psi(x,t)$, then $\Psi(x,t)$ and $\Psi(x,t')$ are proportional to each other for any $t,t'$. That implies that $\Psi(x,t)$ must factorize as $\Psi(x)f(t)$, and solving the equation with the right coefficient shows that $f(t)$ is a simple complex exponential.
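This factorization is easy to verify numerically; the following sketch (assuming $\hbar=1$ and an arbitrary value of $E$) checks both the differential equation and the $x$-independence of the ratio $\Psi(x,t)/\Psi(x,t')$:

```python
import numpy as np

hbar, E = 1.0, 2.5  # illustrative units and energy value

x = np.linspace(-5, 5, 101)
psi0 = np.exp(-x**2)  # any spatial profile Psi(x, 0)

def Psi(t):
    # Separable solution Psi(x, t) = Psi(x, 0) * exp(-i E t / hbar)
    return psi0 * np.exp(-1j * E * t / hbar)

# Check d/dt Psi = (E / (i*hbar)) * Psi via a central difference
t, dt = 1.3, 1e-6
lhs = (Psi(t + dt) - Psi(t - dt)) / (2 * dt)
rhs = (E / (1j * hbar)) * Psi(t)
print(np.max(np.abs(lhs - rhs)))  # small

# Psi(x, t) and Psi(x, t') are proportional: the ratio is x-independent
ratio = Psi(2.0) / Psi(0.5)
print(np.max(np.abs(ratio - ratio[0])))  # ~ 0
```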
Because you are more or less writing the same things, I find it plausible that you are not missing anything.
I) The solution to the time-dependent Schrödinger equation (TDSE) is
$$ \Psi(t_2) ~=~ U(t_2,t_1) \Psi(t_1),\tag{A}$$
where the (anti)time-ordered exponentiated Hamiltonian
$$\begin{align} U(t_2,t_1)~&=~\left\{\begin{array}{rcl} T\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right] &\text{for}& t_1 ~<~t_2 \cr\cr AT\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right] &\text{for}& t_2 ~<~t_1 \end{array}\right.\cr\cr ~&=~\left\{\begin{array}{rcl} \underset{N\to\infty}{\lim} \exp\left[-\frac{i}{\hbar}H(t_2)\frac{t_2-t_1}{N}\right] \cdots\exp\left[-\frac{i}{\hbar}H(t_1)\frac{t_2-t_1}{N}\right] &\text{for}& t_1 ~<~t_2 \cr\cr \underset{N\to\infty}{\lim} \exp\left[-\frac{i}{\hbar}H(t_1)\frac{t_2-t_1}{N}\right] \cdots\exp\left[-\frac{i}{\hbar}H(t_2)\frac{t_2-t_1}{N}\right] &\text{for}& t_2 ~<~t_1 \end{array}\right.\end{align}\tag{B} $$
is formally the unitary evolution operator, which satisfies its own two TDSEs
$$ i\hbar \frac{\partial }{\partial t_2}U(t_2,t_1) ~=~H(t_2)U(t_2,t_1),\tag{C} $$ $$i\hbar \frac{\partial }{\partial t_1}U(t_2,t_1) ~=~-U(t_2,t_1)H(t_1),\tag{D} $$
along with the boundary condition
$$ U(t,t)~=~{\bf 1}.\tag{E}$$
II) The evolution operator $U(t_2,t_1)$ has the group-property
$$ U(t_3,t_1)~=~U(t_3,t_2)U(t_2,t_1). \tag{F}$$
The (anti)time-ordering in formula (B) is instrumental for the (anti)time-ordered exponential (B) to factorize according to the group property (F).
III) The group property (F) plays an important role in the proof that formula (B) is a solution to the TDSE (C):
$$\begin{array}{ccc} \frac{U(t_2+\delta t,t_1) - U(t_2,t_1)}{\delta t} &\stackrel{(F)}{=}& \frac{U(t_2+\delta t,t_2) - {\bf 1} }{\delta t}U(t_2,t_1)\cr\cr \downarrow & &\downarrow\cr\cr \frac{\partial }{\partial t_2}U(t_2,t_1) && -\frac{i}{\hbar}H(t_2)U(t_2,t_1).\end{array}\tag{G}$$
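Both the group property (F) and the limit (G) can be checked numerically by approximating the time-ordered exponential (B) with a finite product of short-time propagators. The two-level Hamiltonian below, $H(t)=\sigma_z+\cos(t)\,\sigma_x$ with $\hbar=1$, is a toy choice for illustration only:

```python
import numpy as np

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # Toy time-dependent Hamiltonian with [H(t), H(t')] != 0
    return sz + np.cos(t) * sx

def step(h, dt):
    """exp(-i h dt / hbar) for a Hermitian h, via its spectral decomposition."""
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w * dt / hbar)) @ v.conj().T

def U(t2, t1, N=4000):
    """Finite-N approximation of the time-ordered product (B), for t1 < t2:
    later times act to the left."""
    dt = (t2 - t1) / N
    out = np.eye(2, dtype=complex)
    for k in range(N):
        out = step(H(t1 + (k + 0.5) * dt), dt) @ out
    return out

# Group property (F): U(t3, t1) == U(t3, t2) U(t2, t1)
t1, t2, t3 = 0.0, 0.8, 2.0
err_F = np.max(np.abs(U(t3, t1) - U(t3, t2) @ U(t2, t1)))
print(err_F)  # small

# Difference quotient (G): [U(t2 + d, t1) - U(t2, t1)] / d -> -(i/hbar) H(t2) U(t2, t1)
d = 1e-4
quotient = (U(t2 + d, t1) - U(t2, t1)) / d
err_G = np.max(np.abs(quotient + 1j / hbar * H(t2) @ U(t2, t1)))
print(err_G)  # small (limited by the finite d)
```

The left-multiplication inside `U` is exactly the time ordering of formula (B): each new factor carries a later time and stands to the left of all earlier ones.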
Remark: Often the (anti)time-ordered exponential formula (B) does not make mathematical sense directly. In such cases, the TDSEs (C) and (D) along with boundary condition (E) should be viewed as the indirect/descriptive defining properties of the (anti)time-ordered exponential (B).
IV) If we define the unitary operator without the (anti)time-ordering in formula (B) as
$$ V(t_2,t_1)~=~\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right],\tag{H}$$
then the factorization (F) will in general not take place,
$$ V(t_3,t_1)~\neq~V(t_3,t_2)V(t_2,t_1). \tag{I}$$
There will in general appear extra contributions, cf. the BCH formula. Moreover, the unitary operator $V(t_2,t_1)$ will in general not satisfy the TDSEs (C) and (D). See also the example in section VII.
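This failure is easy to exhibit numerically. The sketch below uses a toy two-level choice, $H(t)=\sigma_z+\cos(t)\,\sigma_x$ with $\hbar=1$, for which the integral of $H$ is available in closed form:

```python
import numpy as np

hbar = 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def V(t2, t1):
    """Eq. (H) without time-ordering, for the toy choice H(t) = sz + cos(t) sx:
    int_{t1}^{t2} H(t) dt is computed in closed form and then exponentiated."""
    G = (t2 - t1) * sz + (np.sin(t2) - np.sin(t1)) * sx
    w, v = np.linalg.eigh(G)
    return v @ np.diag(np.exp(-1j * w / hbar)) @ v.conj().T

t1, t2, t3 = 0.0, 0.8, 2.0
diff = np.max(np.abs(V(t3, t1) - V(t3, t2) @ V(t2, t1)))
print(diff)  # NOT small: without time-ordering the factorization (I) fails
```

The leftover discrepancy is exactly the BCH commutator contribution mentioned above, since the integrated exponents over $[t_1,t_2]$ and $[t_2,t_3]$ do not commute.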
V) In the special (but common) case where the Hamiltonian $H$ does not depend explicitly on time, the time-ordering may be dropped. Then formulas (B) and (H) reduce to the same expression
$$ U(t_2,t_1)~=~\exp\left[-\frac{i}{\hbar}\Delta t~H\right]~=~V(t_2,t_1), \qquad \Delta t ~:=~t_2-t_1.\tag{J}$$
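In this case the factorization can be confirmed directly, since all the exponents commute; a minimal sketch with an arbitrary $2\times 2$ Hermitian $H$ and $\hbar=1$:

```python
import numpy as np

hbar = 1.0
# A fixed (time-independent) Hermitian H; no time-ordering is needed
H = np.array([[1.0, 0.3], [0.3, -0.5]], dtype=complex)

def U(t2, t1):
    """U(t2, t1) = exp(-i (t2 - t1) H / hbar), via the spectral decomposition."""
    w, v = np.linalg.eigh(H)
    return v @ np.diag(np.exp(-1j * w * (t2 - t1) / hbar)) @ v.conj().T

# The exponents commute, so the group property (F) holds exactly here
err = np.max(np.abs(U(2.0, 0.0) - U(2.0, 0.8) @ U(0.8, 0.0)))
print(err)  # ~ machine precision
```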
VI) Emilio Pisanty advocates in a comment that it is interesting to differentiate eq. (H) w.r.t. $t_2$ directly. If we Taylor expand the exponential (H) to second order, we get
$$ \frac{\partial V(t_2,t_1)}{\partial t_2} ~=~-\frac{i}{\hbar}H(t_2) -\frac{1}{2\hbar^2} \left\{ H(t_2), \int_{t_1}^{t_2}\! dt~H(t) \right\}_{+} +\ldots,\tag{K} $$
where $\{ \cdot, \cdot\}_{+}$ denotes the anti-commutator. The problem is that we would like to have the operator $H(t_2)$ ordered to the left [in order to compare with the TDSE (C)]. But resolving the anti-commutator may in general produce unwanted terms. Intuitively, without the (anti)time-ordering in the exponential (H), the $t_2$-dependence is scattered all over the place, so after differentiating w.r.t. $t_2$ we must rearrange the various contributions to bring $H(t_2)$ to the left, and that process generates non-zero terms that spoil the TDSE (C). See also the example in section VII.
VII) Example. Let the Hamiltonian be just an external time-dependent source term
$$ H(t) ~=~ \overline{f(t)}a+f(t)a^{\dagger}, \qquad [a,a^{\dagger}]~=~\hbar{\bf 1},\tag{L}$$
where $f:\mathbb{R}\to\mathbb{C}$ is a function. Then according to Wick's Theorem
$$ T[H(t)H(t^{\prime})] ~=~ : H(t) H(t^{\prime}): ~+ ~C(t,t^{\prime}), \tag{M}$$
where the so-called contraction
$$ C(t,t^{\prime})~=~ \hbar\left(\theta(t-t^{\prime})\overline{f(t)}f(t^{\prime}) +\theta(t^{\prime}-t)\overline{f(t^{\prime})}f(t)\right) ~{\bf 1}\tag{N}$$
is a central element proportional to the identity operator. For more on Wick-type theorems, see also e.g. related Phys.SE posts. (Let us for notational convenience assume that $t_1<t_2$ in the remainder of this answer.) Let
$$ A(t_2,t_1)~=~-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t) ~=~-\frac{i}{\hbar}\overline{F(t_2,t_1)} a -\frac{i}{\hbar}F(t_2,t_1) a^{\dagger} ,\tag{O}$$
where
$$ F(t_2,t_1)~=~\int_{t_1}^{t_2}\! dt ~f(t). \tag{P}$$
Note that
$$ \frac{\partial }{\partial t_2}A(t_2,t_1)~=~-\frac{i}{\hbar}H(t_2), \qquad \frac{\partial }{\partial t_1}A(t_2,t_1)~=~\frac{i}{\hbar}H(t_1).\tag{Q} $$
Then the unitary operator (H) without (anti)time-order reads
$$\begin{align} V(t_2,t_1)~&=~e^{A(t_2,t_1)} \\ ~&=~\exp\left[-\frac{i}{\hbar}F(t_2,t_1) a^{\dagger}\right]\exp\left[\frac{-1}{2\hbar}|F(t_2,t_1)|^2\right]\exp\left[-\frac{i}{\hbar}\overline{F(t_2,t_1)} a\right].\tag{R} \end{align}$$
Here the last expression in (R) displays the normal-ordered form of $V(t_2,t_1)$. It is a straightforward exercise to show that formula (R) does not satisfy the TDSEs (C) and (D). Instead, the correct unitary evolution operator is
$$\begin{align} U(t_2,t_1)~&\stackrel{(B)}{=}~T\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right] \\~&\stackrel{(M)}{=}~:\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right]:~ \exp\left[\frac{-1}{2\hbar^2}\iint_{[t_1,t_2]^2}\! dt~dt^{\prime}~C(t,t^{\prime})\right] \\ ~&=~ e^{A(t_2,t_1)+D(t_2,t_1)}~=~V(t_2,t_1)e^{D(t_2,t_1)}\tag{S}, \end{align}$$
where
$$ D(t_2,t_1)~=~\frac{{\bf 1}}{2\hbar}\iint_{[t_1,t_2]^2}\! dt~dt^{\prime}~{\rm sgn}(t^{\prime}-t)\overline{f(t)}f(t^{\prime})\tag{T}$$
is a central element proportional to the identity operator. Note that
$$\begin{align} \frac{\partial }{\partial t_2}D(t_2,t_1)~&=~\frac{{\bf 1}}{2\hbar}\left(\overline{F(t_2,t_1)}f(t_2)-\overline{f(t_2)}F(t_2,t_1)\right) \\ ~&=~\frac{1}{2}\left[ A(t_2,t_1), \frac{i}{\hbar}H(t_2)\right]~=~\frac{1}{2}\left[\frac{\partial }{\partial t_2}A(t_2,t_1), A(t_2,t_1)\right].\tag{U} \end{align}$$
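The first equality in (U) follows by differentiating the double integral (T) w.r.t. its upper limits; only the two boundary terms contribute, and on the integration range ${\rm sgn}(t'-t_2)=-1$ while ${\rm sgn}(t_2-t)=+1$:

$$\begin{align} \frac{\partial }{\partial t_2}D(t_2,t_1)~&=~\frac{{\bf 1}}{2\hbar}\int_{t_1}^{t_2}\! dt^{\prime}~{\rm sgn}(t^{\prime}-t_2)\overline{f(t_2)}f(t^{\prime}) ~+~\frac{{\bf 1}}{2\hbar}\int_{t_1}^{t_2}\! dt~{\rm sgn}(t_2-t)\overline{f(t)}f(t_2) \\ ~&=~\frac{{\bf 1}}{2\hbar}\left(\overline{F(t_2,t_1)}f(t_2)-\overline{f(t_2)}F(t_2,t_1)\right). \end{align}$$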
One may use identity (U) to check directly that the operator (S) satisfies the TDSE (C).