You need $x$ to be the state (a vector of position and velocity, generally) at some time $t$. Usually when people use the notation $x_f$ it means $x(t_f)$, or the state at time $t_f$.
Then, $\dot{x}_f$ is the time derivative of that state: the instantaneous velocity, and the acceleration (usually due to gravity) evaluated at the position given in $x_f$.
So, $A =\frac{\partial \dot{x}_f}{\partial x_f}$ is going to be a Jacobian. For orbital problems it usually looks like $A = \begin{bmatrix}0_{3\times3} & I_{3\times3} \\ G(r) & 0_{3\times3}\end{bmatrix}$, where $r$ is the position component of $x$ at whatever time you're interested in and $G(r)$ is the gravity gradient Jacobian, $\frac{\partial\ddot{r}}{\partial r}$ evaluated at $x$.
You can find a formula for the gravity gradient in a lot of places, but I just happen to have my copy of Battin open (An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition), and it's given at the bottom of page 454, just under problem 9-11.
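As a minimal sketch of how that $A$ matrix can be assembled (my own illustration, not from the question; the point-mass gravity gradient is $G(r) = \mu\left(3\,r\,r^T/\|r\|^5 - I/\|r\|^3\right)$, and the value of $\mu$ here is an assumed Earth value in km$^3$/s$^2$):

```python
import numpy as np

MU_EARTH = 398600.4418  # Earth's gravitational parameter [km^3/s^2] (assumed value)

def gravity_gradient(r, mu=MU_EARTH):
    """Point-mass gravity gradient G(r) = d(r_ddot)/dr
    = mu * (3 r r^T / |r|^5 - I / |r|^3)."""
    rn = np.linalg.norm(r)
    return mu * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)

def A_matrix(r, mu=MU_EARTH):
    """Jacobian A = d(x_dot)/dx for the state x = [r; v],
    i.e. [[0, I], [G(r), 0]]."""
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)                # d(r_dot)/dv = I
    A[3:, :3] = gravity_gradient(r, mu)  # d(v_dot)/dr = G(r)
    return A
```

A handy sanity check: $G(r)$ is symmetric and trace-free, since the point-mass potential satisfies Laplace's equation away from the origin.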
If $A(t_1)$ and $A(t_2)$ commute for all $t_1$ and $t_2$, so $A(t_1)\,A(t_2) = A(t_2)\,A(t_1)$, then you could use
$$
\Phi(t,0) = e^{\int_0^t A(\tau)\,d\tau} = e^{B(t)}
$$
which indeed reduces to $e^{A\,t}$ in the time-invariant case. To see why, one can use the Taylor expansion as a way of calculating the matrix exponential
$$
e^{B(t)} = I + B(t) + \frac{1}{2!} B(t)^2 + \frac{1}{3!} B(t)^3 + \cdots
$$
The time derivative of this expression should be equal to $A(t)\,\Phi(t,0)$. Taking the time derivative of this Taylor expansion gives
$$
\frac{d}{dt}e^{B(t)} = \dot{B}(t) + \frac{1}{2!} \left(\dot{B}(t)\,B(t) + B(t)\,\dot{B}(t)\right) + \frac{1}{3!} \left(\dot{B}(t)\,B(t)^2 + B(t)\,\dot{B}(t)\,B(t) + B(t)^2\,\dot{B}(t)\right) + \cdots
$$
Using $\dot{B}(t) = A(t)$, and noting that $B(t)$ can be seen as a linear combination of $A(\tau)$ for all $\tau\in[0,t]$, it follows that if $A(t_1)$ and $A(t_2)$ commute for all $t_1$ and $t_2$, then $A(t)$ and $B(t)$ commute as well. This allows $\dot{B}(t) = A(t)$ to be factored out of every term of the derivative of the Taylor series
$$
\frac{d}{dt}e^{B(t)} = A(t)\left(I + \frac{1}{2!} 2\,B(t) + \frac{1}{3!} 3\,B(t)^2 + \cdots\right) = A(t)\left(I + B(t) + \frac{1}{2!} B(t)^2 + \cdots\right)
$$
which is just equal to $A(t)\,e^{B(t)}$.
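To build some confidence in this, the commuting case can be checked numerically. The example below is my own sketch: with $A(t) = \cos(t)\,M$ for a constant $M$, every pair $A(t_1)$, $A(t_2)$ commutes (each is a scalar multiple of the same matrix), and $B(t) = \int_0^t A(\tau)\,d\tau = \sin(t)\,M$. The formula $\Phi(t,0) = e^{B(t)}$ is compared against direct numerical integration of $\dot{\Phi} = A(t)\,\Phi$, $\Phi(0,0) = I$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Example (my choice): A(t) = cos(t) * M with a constant M, so the
# commutation condition holds and B(t) = sin(t) * M.
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def rhs(t, phi_flat):
    # Phi_dot = A(t) Phi, with Phi stored as a flattened 2x2 matrix
    return (np.cos(t) * M @ phi_flat.reshape(2, 2)).ravel()

t_end = 2.0
sol = solve_ivp(rhs, (0.0, t_end), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
phi_numeric = sol.y[:, -1].reshape(2, 2)
phi_formula = expm(np.sin(t_end) * M)  # e^{B(t)} with B(t) = sin(t) M
print(np.max(np.abs(phi_numeric - phi_formula)))  # small integration error only
```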
However, your $A(t)$ does not satisfy this condition, so this solution for the state transition matrix cannot be used. I am not aware of any other way to find an analytical solution for the state transition matrix besides the Peano-Baker series. In this document it is mentioned on page 40 that the state transition matrix is almost always obtained via numerical integration. That document is from 1991, so if numerical integration was preferred over analytical solutions back then, it should probably still be the preferred method nowadays, with much more computational power at our disposal.
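The same numerical-integration approach also shows concretely what goes wrong in the non-commuting case. In this sketch (the example $A(t)$ is my own choice, picked so that $A(t_1)\,A(t_2) \neq A(t_2)\,A(t_1)$), the numerically integrated $\Phi(t,0)$ visibly disagrees with $e^{B(t)}$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Example (my choice): A(t) = [[0, 1], [-(1 + 0.5 t), 0]], which does not
# commute with itself at different times.
def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.5 * t), 0.0]])

def rhs(t, phi_flat):
    return (A(t) @ phi_flat.reshape(2, 2)).ravel()

t_end = 2.0
sol = solve_ivp(rhs, (0.0, t_end), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
phi_numeric = sol.y[:, -1].reshape(2, 2)

# B(t) = int_0^t A(tau) dtau; here the exponential formula does NOT hold.
B = np.array([[0.0, t_end],
              [-(t_end + 0.25 * t_end**2), 0.0]])
print(np.max(np.abs(phi_numeric - expm(B))))  # clearly nonzero
```

Note that $\operatorname{tr} A(t) = 0$ here, so by Liouville's formula $\det \Phi(t,0) = 1$ regardless, which makes a useful check on the integration itself.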
However, it can be noted that once you have obtained a solution for $\Phi(t,0)\ \forall\,t\in[0,T]$, with $T$ the period of $A(t)$, all further solutions can be derived from it by applying the following expression recursively
$$
\Phi(t+T,0) = \Phi(t,0)\,\Phi(T,0).
$$
From this you can also say something about stability, namely
$$
\Phi(n\,T,0) = \Phi(T,0)^n\quad \forall\ n\in\mathbb{Z}
$$
so if $\Phi(T,0)$ is a Schur matrix (all eigenvalues strictly inside the unit circle) then the system will be asymptotically stable.
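This Floquet-style check can be sketched as follows (the periodic system below is my own example: a damped oscillator with a weak parametric term, $A(t+T) = A(t)$ with $T = 2\pi$). One integrates over a single period to get the monodromy matrix $\Phi(T,0)$, inspects its eigenvalues, and can verify $\Phi(2T,0) = \Phi(T,0)^2$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example periodic system (my choice), period T = 2*pi:
T = 2.0 * np.pi

def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.2 * np.cos(t)), -0.1]])

def stm(t0, t1):
    """State transition matrix Phi(t1, t0) via Phi_dot = A(t) Phi, Phi(t0)=I."""
    rhs = lambda t, p: (A(t) @ p.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

monodromy = stm(0.0, T)                    # Phi(T, 0)
multipliers = np.linalg.eigvals(monodromy)
print(np.abs(multipliers))  # if all < 1, the system is asymptotically stable

# Periodicity of A(t) gives Phi(2T, 0) = Phi(T, 0)^2:
print(np.max(np.abs(stm(0.0, 2.0 * T) - monodromy @ monodromy)))
```

Liouville's formula also gives $\det \Phi(T,0) = e^{\int_0^T \operatorname{tr} A(\tau)\,d\tau}$, which for this example is $e^{-0.1\,T}$ and makes another easy sanity check.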
Best Answer
There is definitely a connection to something close to (2): the adjoint system.
For a system $$ \dot{x}(t) = A(t)\,x(t) $$ the adjoint system is given by $$ \dot{y}(t) = -A(t)^T\,y(t) $$
And if $\Phi_A(t, t_0)$ is the fundamental solution / state transition matrix for (1), then the adjoint system has state transition matrix $$ \Phi_{-A^T}(t, t_0) = \Phi_A(t_0, t)^T $$
That is, the transpose of the original, with the time arguments reversed.
This matches up with your LTI case of a constant $A$: there, $\Phi_A(t, 0) = e^{A(t-0)} = e^{At}$, and for the adjoint system we get $\Phi_{-A^T}(0, t) = e^{-A^T(0-t)} = e^{tA^T} = \Phi_A^T(t, 0)$.
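This identity is easy to verify numerically in the constant-$A$ case (the matrix and the time below are arbitrary choices of mine): the adjoint state transition matrix $\Phi_{-A^T}(t,0) = e^{-A^T t}$ should equal $\Phi_A(0,t)^T = \left(e^{-A t}\right)^T$.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])  # arbitrary constant A (my choice)
t = 1.3

phi_adjoint = expm(-A.T * t)            # Phi_{-A^T}(t, 0) for y_dot = -A^T y
phi_transposed = expm(A * (0.0 - t)).T  # Phi_A(0, t)^T = (e^{-A t})^T
print(np.max(np.abs(phi_adjoint - phi_transposed)))  # zero up to rounding
```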
For more, see Definition 2.2 in https://www.ece.umd.edu/class/enee664.S2012/andre_tits_enee664_2011_version.pdf
As for your system (2), I'm not sure you'll get as clean a connection between just the transpose and the original system without the negative sign.