In general, suppose $B$ and $Y=[y_1,\ldots,y_n]$ are two $n\times n$ matrices, where $y_1,\ldots,y_n$ are the columns of $Y$, and let $\{e_1,\ldots,e_n\}$ be the standard basis. When $Y$ is nonsingular, we have
$$\begin{aligned}
&\det\left(Y^{-1}BYe_1,e_2,\ldots,e_n\right)
+\det\left(e_1,Y^{-1}BYe_2,\ldots,e_n\right)
+\cdots
+\det\left(e_1,\ldots,e_{n-1},Y^{-1}BYe_n\right)\\
&=\operatorname{tr}(Y^{-1}BY)=\operatorname{tr}(B).
\end{aligned}$$
Each summand on the left is the determinant of the identity matrix with its $j$th column replaced by $Y^{-1}BYe_j$, i.e. the $j$th diagonal entry of $Y^{-1}BY$; this gives the first equality. Multiplying every term by $\det(Y)$ and noting that $Y\left[e_1,\ldots,Y^{-1}BYe_j,\ldots,e_n\right]=\left[y_1,\ldots,By_j,\ldots,y_n\right]$, we obtain
$$
\det\left(By_1,y_2,\ldots,y_n\right)
+\det\left(y_1,By_2,\ldots,y_n\right)
+\cdots
+\det\left(y_1,\ldots,y_{n-1},By_n\right)=\operatorname{tr}(B)\det(Y)\tag{1}
$$
and since both sides of $(1)$ are polynomials in the entries of $Y$, by a continuity argument $(1)$ is true for singular $Y$ as well.
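As a quick sanity check (a sketch of my own, not needed for the proof; it assumes NumPy), identity $(1)$ can be verified numerically for random matrices:

```python
# Numerical check of (1): sum_j det(y_1, ..., B y_j, ..., y_n) = tr(B) det(Y).
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))

total = 0.0
for j in range(n):
    Yj = Y.copy()
    Yj[:, j] = B @ Y[:, j]      # replace the j-th column y_j by B y_j
    total += np.linalg.det(Yj)

print(total, np.trace(B) * np.linalg.det(Y))   # the two values agree
```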
Now let us return to your question. By the product rule,
$$\begin{aligned}
\dot{Z}
&=
\det\left(\dot{\phi}_1,\phi_2,\ldots,\phi_{n-1},x\right)
+\det\left(\phi_1,\dot{\phi}_2,\ldots,\phi_{n-1},x\right)
+\cdots
+\det\left(\phi_1,\phi_2,\ldots,\phi_{n-1},\dot{x}\right)\\
&=
\det\left(A\phi_1,\phi_2,\ldots,\phi_{n-1},x\right)
+\det\left(\phi_1,A\phi_2,\ldots,\phi_{n-1},x\right)
+\cdots
+\det\left(\phi_1,\phi_2,\ldots,\phi_{n-1},Ax+b\right)\\
&=
\det\left(A\phi_1,\phi_2,\ldots,\phi_{n-1},x\right)
+\det\left(\phi_1,A\phi_2,\ldots,\phi_{n-1},x\right)
+\cdots
+\det\left(\phi_1,\phi_2,\ldots,\phi_{n-1},Ax\right)\\
&\phantom{=}+\det\left(\phi_1,\phi_2,\ldots,\phi_{n-1},b\right).
\end{aligned}$$
Putting $B=A$ and $[y_1,\ldots,y_n]=[\phi_1,\ldots,\phi_{n-1},x]$ into $(1)$, the first $n$ summands add up to $\operatorname{tr}(A)\det\left(\phi_1,\ldots,\phi_{n-1},x\right)=\operatorname{tr}(A)Z$, and therefore
$$
\dot{Z}=\operatorname{tr}(A)Z+\det\left(\phi_1,\ldots,\phi_{n-1},b\right),
$$
as desired.
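Again as a numerical sanity check (my own sketch, assuming SciPy; not part of the argument), one can integrate random data and compare $\dot Z$ with $\operatorname{tr}(A)Z+\det(\phi_1,\ldots,\phi_{n-1},b)$:

```python
# Check Z' = tr(A) Z + det(phi_1, ..., phi_{n-1}, b) along numerically
# integrated trajectories.  The state M is the n x n matrix whose first
# n-1 columns solve phi' = A phi and whose last column solves x' = A x + b.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
M0 = rng.standard_normal((n, n))

def rhs(t, m):
    M = m.reshape(n, n)
    dM = A @ M            # every column satisfies v' = A v ...
    dM[:, -1] += b        # ... and the last column additionally gets + b
    return dM.ravel()

def M_at(t):
    sol = solve_ivp(rhs, (0.0, t), M0.ravel(), rtol=1e-11, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

t, h = 0.7, 1e-5
Z = lambda s: np.linalg.det(M_at(s))
dZ = (Z(t + h) - Z(t - h)) / (2 * h)              # numerical Z'(t)
M = M_at(t)
formula = np.trace(A) * np.linalg.det(M) \
        + np.linalg.det(np.column_stack([M[:, :-1], b]))
print(dZ, formula)    # agree up to integration/differencing error
```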
Assume there is a solution with $x(t_1)=1$. Let $t_0=\sup\{t>0:x(t)=0\}$. Then for the time difference one gets
$$
\frac{dx}{dt}=f(x)\implies t_1-t_0=\lim_{ε\to0}\int_ε^1\frac{dx}{f(x)}.
$$
Of course this only works if the limit of the integral is finite. If the integral diverges, then no such solution exists and the zero function is the unique solution.
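To make the dichotomy concrete (a standard pair of examples, not from the original question): for $f(x)=\sqrt{x}$ the integral converges, so a nonzero solution exists, while for $f(x)=x$ it diverges, so the zero solution is unique. A short sketch assuming SciPy:

```python
# The criterion in action: int_eps^1 dx/sqrt(x) stays bounded as eps -> 0,
# while int_eps^1 dx/x blows up like -log(eps).
import numpy as np
from scipy.integrate import quad

for eps in (1e-2, 1e-4, 1e-8):
    I_sqrt, _ = quad(lambda x: 1/np.sqrt(x), eps, 1)   # -> 2 (finite)
    I_lin,  _ = quad(lambda x: 1/x, eps, 1)            # -> -log(eps) (diverges)
    print(f"eps={eps:.0e}:  sqrt: {I_sqrt:.4f}   linear: {I_lin:.4f}")
```

The limit value $2=\int_0^1 x^{-1/2}\,dx$ is exactly the travel time of the nonzero solution $x(t)=\left(\frac{t-t_0}{2}\right)^2$ of $\dot x=\sqrt{x}$, which reaches $1$ at $t_1=t_0+2$.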
Remarks: To properly apply the existence theorems, one could extend $f$ to an odd function via $f(-x)=-f(x)$, so that $f$ is defined and continuous on all of $\Bbb R$ and $(t_0,x_0)=(0,0)$ is an interior point of the domain.
For any $ε>0$, $f$ has a positive minimum on the interval $[ε,1]$, so any solution that deviates from the zero solution is strictly increasing after leaving the zero axis.
As a consequence, if a solution with $x(t_1)=1$ exists, then for every $ε>0$ there is a unique $δ>0$ so that $x(t_0+δ)=ε$. Now by the substitution rule one can integrate
$$
t_1-(t_0+δ)=\int_{t_0+δ}^{t_1}dt=\int_{t_0+δ}^{t_1}\frac{\dot x(t)}{f(x(t))}dt=\int_{ε}^1\frac{dx}{f(x)}.
$$
In view of the last two points, it is perhaps better to characterize $t_0$ as the start of the positive values, $t_0=\inf\{t>0:0<x(t)\}$.
If $f$ is Lipschitz on $[0,1]$ with constant $L$, then any solution is unique. This agrees with the integral criterion: since $f(0)=0$, the Lipschitz condition gives $|f(x)|=|f(x)-f(0)|\le L|x|$, which implies
$$
\int_{ε}^1\frac{dx}{f(x)}\ge\frac{-\ln(ε)}{L},
$$
which is unbounded for $ε\to0$.
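For completeness, the displayed bound is just a comparison with the linear majorant: $0<f(x)\le Lx$ on $(0,1]$ gives
$$
\frac{1}{f(x)}\ge\frac{1}{Lx},\qquad\text{so}\qquad\int_ε^1\frac{dx}{f(x)}\ge\frac1L\int_ε^1\frac{dx}{x}=\frac{-\ln(ε)}{L}.
$$
Thus the integral criterion recovers the classical Lipschitz uniqueness statement.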
Best Answer
The idea is to use the Osgood criterion (see "Osgood criterion", Encyclopedia of Mathematics), which states: if $f$ is continuous and satisfies $|f(t,x_1) - f(t,x_2)| \leq \omega(|x_1-x_2|)$ for some continuous $\omega$ with $\omega(0)=0$, $\omega(s)>0$ for $s>0$, and $\int_0^{\delta} \frac{ds}{\omega(s)} = \infty$ for some $\delta>0$, then the IVP $\dot x = f(t,x)$, $x(t_0) = x_0$ has a unique solution. It remains to check that $\int_0^{\delta} \frac{ds}{s|\log(s)|} = \infty$ for $0<\delta<1$, which is quite well known.
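Indeed, the substitution $u=-\log(s)$, $du=-\frac{ds}{s}$, turns the integral into a divergent harmonic tail:
$$
\int_0^{\delta}\frac{ds}{s|\log(s)|}=\int_{-\log(\delta)}^{\infty}\frac{du}{u}=\infty.
$$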