Take a simple system $\dot{x} = x$, then $A=1$ and $\phi(t,t_0) = e^{t-t_0}$.
That is, $A$ is constant, but the system response is time varying. In general this
will be the case unless $A=0$.
If you have $\phi(t,t_0)$, then in general $\dot{\phi}(t,t_0) = A(t) \phi(t,t_0)$, so $A(t) = \dot{\phi}(t,t_0)\, \phi(t,t_0)^{-1} = \dot{\phi}(t,t_0)\, \phi(t_0,t)$ (note the order of the factors, since these are matrices); from this you can determine whether $A$ is time varying or not.
Addendum: It is not too hard to show that the system is time invariant iff
$\phi(t,t_0) = \phi(t-t_0,0)$ for all $t,t_0$. A little more work shows that in this case $A$ is essentially constant.
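The recovery formula $A(t) = \dot{\phi}(t,t_0)\,\phi(t,t_0)^{-1}$ is easy to sanity-check numerically. Below is a small sketch (my own illustration, not part of the question): I pick the commuting case $A(t) = tB$ with an arbitrary skew matrix $B$, for which $\phi(t,t_0) = \exp\!\big(\tfrac{t^2-t_0^2}{2}B\big)$ is known in closed form, and recover $A(t)$ by finite differencing.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative choice: A(t) = t*B with B constant, so that
# phi(t, t0) = expm((t^2 - t0^2)/2 * B) is available in closed form.
B = np.array([[0.0, 1.0], [-1.0, 0.0]])

def phi(t, t0):
    return expm((t**2 - t0**2) / 2 * B)

t, t0, h = 1.3, 0.5, 1e-6
phidot = (phi(t + h, t0) - phi(t - h, t0)) / (2 * h)  # central difference
A_rec = phidot @ np.linalg.inv(phi(t, t0))            # A(t) = phidot phi^{-1}
assert np.allclose(A_rec, t * B, atol=1e-6)           # recovers A(t) = t*B
```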
The basic problem is that $A(t)$ and $\int_{t_0}^t A(s) \; ds$ won't commute for general $A(t)$:
$\left [A(t), \displaystyle \int_{t_0}^t A(s) \; ds \right ] \ne 0; \tag 1$
this issue manifests itself when forming
$\dfrac{d}{dt} \left [ \exp \left ( \displaystyle \int_{t_0}^t A(s) \; ds \right ) \right ]; \tag 2$
with the matrix exponential defined by the series expansion
$\exp \left (\displaystyle \int_{t_0}^t A(s) \; ds \right ) = \displaystyle \sum_{n=0}^\infty \dfrac{1}{n!} \left ( \displaystyle \int_{t_0}^t A(s) \; ds \right )^n, \tag 3$
we encounter difficulties with the terms of degree $2$ and higher, as is illustrated by
$\dfrac{d}{dt} \left ( \displaystyle \int_{t_0}^t A(s) \; ds \right )^2 = A(t) \left ( \displaystyle \int_{t_0}^t A(s) \; ds \right ) + \left ( \displaystyle \int_{t_0}^t A(s) \; ds \right ) A(t); \tag 4$
in the light of (1), we cannot interchange the factors in the second term to bring $A(t)$ to the front, and the same issue evidently pertains to every power of the integral occurring in the sum on the right of (3). Without this commutation, there is no way to validate
$\dfrac{d}{dt} \left [ \exp \left ( \displaystyle \int_{t_0}^t A(s) \; ds \right ) \right ] = A(t) \exp \left ( \displaystyle \int_{t_0}^t A(s) \; ds \right ) \tag 5$
as there is in the case of a one-dimensional variable $a(t)$, for which
$\dfrac{d}{dt} e^{a(t)} = a'(t) e^{a(t)} \tag 6$
follows easily from the ordinary chain rule of single-variable calculus.
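The obstruction (1) is easy to exhibit concretely. Here is a short symbolic check (the particular $A(t)$ is an arbitrary choice of mine, purely for illustration):

```python
import sympy as sp

t, t0, s = sp.symbols('t t0 s', real=True)
# An arbitrary time-varying 2x2 matrix, chosen only to illustrate (1)
A = sp.Matrix([[0, 1], [t, 0]])
IntA = sp.integrate(A.subs(t, s), (s, t0, t))  # entrywise integral of A
comm = sp.simplify(A * IntA - IntA * A)        # the commutator in (1)
# comm = diag(-(t - t0)^2/2, (t - t0)^2/2), nonzero whenever t != t0
assert not comm.is_zero_matrix
```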
For matrices $A(t)$ such that
$\left [A(t), \displaystyle \int_{t_0}^t A(s) \; ds \right ] = 0 \tag 7$
for all $t$ and $t_0$, however, the formula (5) applies. One class of such matrices which has some proven utility is
$A(t) = f(t) B, \tag 8$
where $B$ is a constant matrix; it is easy to see that (7), and hence (5), holds for such $A(t)$.
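A numerical sketch of (5) for the class (8) follows; the scalar function $f$ and the matrix $B$ are arbitrary choices of mine for illustration:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

# Illustrative choices for the class A(t) = f(t) B of (8)
B = np.array([[1.0, 2.0], [0.0, -1.0]])
f = lambda s: np.cos(s)
F = lambda t, t0: quad(f, t0, t)[0]   # integral of f over [t0, t]

t, t0, h = 2.0, 0.0, 1e-6
E = lambda t: expm(F(t, t0) * B)      # exp of the integral of A
dE = (E(t + h) - E(t - h)) / (2 * h)  # central difference in t
# Formula (5): d/dt exp(int A) = A(t) exp(int A), valid since (7) holds here
assert np.allclose(dE, f(t) * B @ E(t), atol=1e-5)
```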
Nota Bene: I have encountered this situation so many times in my own work with time-dependent, linear ordinary differential equations that I can only say I wish (5) were true; it would certainly make many things hella' easier! End of Note.
Best Answer
This is an Euler-Cauchy differential equation. Multiply by $t^2$ to obtain:
$$t^2y''+4ty'+2y=t^2u.$$
You can determine the solutions of the homogeneous part/fundamental solution by the ansatz $y_h=t^n$.
$$t^2n(n-1)t^{n-2}+4tnt^{n-1}+2t^{n}=0 \quad \text{; if } t\neq 0\implies n^2+3n+2=0 $$ $$\implies (n+2)(n+1)=0 \implies n_1=-1 \qquad n_2 = -2.$$
After using the ansatz and determining the two possible values for $n_{1,2}$ you can assemble the fundamental system
$$x_1=y=c_1t^{-1}+c_2t^{-2} \implies \boldsymbol{X}_{\text{scalar}}=\begin{bmatrix}t^{-1} & t^{-2} \end{bmatrix}$$ $$x_2=\dot{y}=-c_1t^{-2}-2c_2t^{-3}$$
$$\implies \boldsymbol{x}(t)=c_1\begin{bmatrix}t^{-1} \\-t^{-2} \end{bmatrix}+c_2\begin{bmatrix}t^{-2}\\-2t^{-3} \end{bmatrix}$$
The fundamental solution (columns are vectors associated with the constants of integration $c_1$ and $c_2$) is given by:
$$\boldsymbol{X}(t)=\begin{bmatrix}t^{-1}& t^{-2}\\ -t^{-2}&-2t^{-3}\end{bmatrix} \implies \boldsymbol{X}^{-1}(t)= \dfrac{1}{\det\boldsymbol{X}(t)}\begin{bmatrix}-2t^{-3}& -t^{-2}\\ t^{-2}&t^{-1}\end{bmatrix}, \qquad \det\boldsymbol{X}(t)=-t^{-4}.$$
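One can confirm symbolically that this $\boldsymbol{X}(t)$ is indeed a fundamental matrix, i.e. that it satisfies $\dot{\boldsymbol{X}} = \boldsymbol{A}(t)\boldsymbol{X}$, where $\boldsymbol{A}(t)$ is the companion matrix of the state-space form $x_1 = y$, $x_2 = \dot{y}$ (a SymPy sketch):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
# Companion matrix of y'' + (4/t) y' + (2/t^2) y = u in the state x = (y, y')
A = sp.Matrix([[0, 1], [-2 / t**2, -4 / t]])
X = sp.Matrix([[1 / t, 1 / t**2], [-1 / t**2, -2 / t**3]])
assert sp.simplify(X.diff(t) - A * X) == sp.zeros(2, 2)  # Xdot = A(t) X
assert sp.simplify(X.det()) == -1 / t**4                 # matches det X(t)
```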
An alternative approach for determining the fundamental system of the matrix equation would be to use the scalar fundamental system $\boldsymbol{X}_{\text{scalar}}$ and the following relationship.
$$\boldsymbol{X}(t)=\begin{bmatrix}\boldsymbol{X}_{\text{scalar}}\\\dot{\boldsymbol{X}}_{\text{scalar}}\end{bmatrix} .$$
The state transition matrix $\boldsymbol{\Phi}(t,\tau)$ is given by:
$$\boldsymbol{\Phi}(t,\tau)=\boldsymbol{X}(t)\boldsymbol{X}^{-1}(\tau).$$
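As a quick check of the defining properties of the state transition matrix, $\boldsymbol{\Phi}(\tau,\tau)=\boldsymbol{I}$ and $\tfrac{\partial}{\partial t}\boldsymbol{\Phi}(t,\tau)=\boldsymbol{A}(t)\boldsymbol{\Phi}(t,\tau)$, a SymPy sketch:

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
Xm = lambda s: sp.Matrix([[1 / s, 1 / s**2], [-1 / s**2, -2 / s**3]])
A = sp.Matrix([[0, 1], [-2 / t**2, -4 / t]])
Phi = sp.simplify(Xm(t) * Xm(tau).inv())                   # Phi(t, tau)
assert sp.simplify(Phi.subs(t, tau)) == sp.eye(2)          # Phi(tau, tau) = I
assert sp.simplify(Phi.diff(t) - A * Phi) == sp.zeros(2, 2)  # dPhi/dt = A Phi
```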
An alternative method to directly obtain the state transition matrix is the Peano-Baker series:
$${\boldsymbol {\Phi }}(t,\tau )={\boldsymbol{I}}+\int _{\tau }^{t}{\boldsymbol {A}}(\sigma _{1})\,d\sigma _{1}+\int _{\tau }^{t}{\boldsymbol {A}}(\sigma _{1})\int _{\tau }^{\sigma _{1}}{\boldsymbol {A}}(\sigma _{2})\,d\sigma _{2}\,d\sigma _{1}$$ $$+\int_{\tau }^{t}{\boldsymbol {A}}(\sigma _{1})\int _{\tau }^{\sigma _{1}}{\boldsymbol {A}}(\sigma _{2})\int _{\tau }^{\sigma _{2}}{\boldsymbol {A}}(\sigma _{3})\,d\sigma _{3}\,d\sigma _{2}\,d\sigma _{1}+\cdots$$
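The Peano-Baker series can be evaluated numerically via the equivalent fixed-point iteration $\boldsymbol{\Phi}_{k+1}(t,\tau) = \boldsymbol{I} + \int_\tau^t \boldsymbol{A}(s)\boldsymbol{\Phi}_k(s,\tau)\,ds$, whose $k$-th iterate is exactly the $k$-th partial sum. A rough sketch (grid, term count, and the constant-$\boldsymbol{A}$ sanity check are my own choices):

```python
import numpy as np
from scipy.linalg import expm

def peano_baker(A, tau, t, n_terms=8, n_grid=2001):
    """Truncated Peano-Baker series via Phi_{k+1} = I + int_tau^t A(s) Phi_k ds,
    with the integrals done by the trapezoidal rule on a uniform grid."""
    s = np.linspace(tau, t, n_grid)
    ds = s[1] - s[0]
    As = np.array([A(si) for si in s])                 # A evaluated on the grid
    d = As.shape[1]
    Phi = np.broadcast_to(np.eye(d), (n_grid, d, d)).copy()
    for _ in range(n_terms):
        integrand = As @ Phi                           # batched A(s) @ Phi(s, tau)
        cum = np.zeros_like(Phi)                       # cumulative trapezoid
        cum[1:] = np.cumsum(0.5 * (integrand[1:] + integrand[:-1]), axis=0) * ds
        Phi = np.eye(d) + cum
    return Phi[-1]                                     # Phi(t, tau)

# Sanity check against a constant-A system, where Phi(t, tau) = expm(A (t - tau))
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
assert np.allclose(peano_baker(lambda s: B, 0.0, 1.0), expm(B), atol=1e-4)
```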
The general solution is then given by:
$${\boldsymbol {x}}(t)={\boldsymbol {\Phi }}(t,t_{0}){\boldsymbol {x}}(t_{0})+\int _{{t_{0}}}^{t}{\boldsymbol {\Phi }}(t,\tau ){\boldsymbol {B}}(\tau ){\boldsymbol {u}}(\tau )d\tau .$$
As Kwin van der Veen already mentioned, your $\boldsymbol{B}(t)$ should be different. In your case it is:
$$\boldsymbol{B}(t)=\begin{bmatrix}0 \\1\end{bmatrix}.$$
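Putting the pieces together, the variation-of-constants formula can be checked against a direct numerical integration of $\dot{\boldsymbol{x}} = \boldsymbol{A}(t)\boldsymbol{x} + \boldsymbol{B}u$; the input $u \equiv 1$, the interval, and the initial state are test choices of mine:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

def A(t):
    return np.array([[0.0, 1.0], [-2.0 / t**2, -4.0 / t]])

def X(t):  # fundamental matrix from the Euler-Cauchy solutions t^{-1}, t^{-2}
    return np.array([[1 / t, 1 / t**2], [-1 / t**2, -2 / t**3]])

def Phi(t, tau):
    return X(t) @ np.linalg.inv(X(tau))

Bvec = np.array([0.0, 1.0])
u = lambda tau: 1.0                   # constant test input
t0, tf = 1.0, 2.0
x0 = np.array([1.0, 0.0])

# x(tf) = Phi(tf, t0) x0 + int_{t0}^{tf} Phi(tf, tau) B u(tau) dtau, componentwise
forced = np.array([quad(lambda tau: (Phi(tf, tau) @ Bvec * u(tau))[i], t0, tf)[0]
                   for i in range(2)])
x_vc = Phi(tf, t0) @ x0 + forced

# Reference: direct integration of the state equation
sol = solve_ivp(lambda t, x: A(t) @ x + Bvec * u(t), (t0, tf), x0,
                rtol=1e-10, atol=1e-12)
assert np.allclose(x_vc, sol.y[:, -1], atol=1e-6)
```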