You need $x$ to be the state (a vector of position and velocity, generally) at some time $t$. Usually when people use the notation $x_f$ it means $x(t_f)$, or the state at time $t_f$.
Then, $\dot{x}_f$ is the vector of instantaneous velocity and acceleration (usually due to gravity at the position given in $x_f$).
So, $A =\frac{\partial \dot{x}_f}{\partial x_f}$ is going to be a Jacobian. For orbital problems it usually looks like $A = \begin{bmatrix}0_{3\times3} & I_{3\times3} \\ G(r) & 0_{3\times3}\end{bmatrix}$, where $r$ is the position component of $x$ at whatever time you're interested in and $G(r)$ is the gravity gradient Jacobian, $\frac{\partial\ddot{r}}{\partial r}$ evaluated at $x$.
You can find a formula for the gravity gradient in a lot of places, but I just happen to have my copy of Battin open (An Introduction to the Mathematics and Methods of Astrodynamics, Revised Edition), and it's given at the bottom of page 454, just under problem 9-11.
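For the point-mass two-body problem that gradient works out to $G(r) = \mu\left(3\,\frac{r\,r^T}{|r|^5} - \frac{I}{|r|^3}\right)$. A minimal sketch of assembling $G(r)$ and $A$ (the value of $\mu$ and the function names are my own illustrative choices, not from Battin):

```python
import numpy as np

def gravity_gradient(r, mu=398600.4418):
    """G(r) = d(rddot)/dr for point-mass gravity rddot = -mu r / |r|^3,
    i.e. G = mu * (3 r r^T / |r|^5 - I / |r|^3)."""
    rn = np.linalg.norm(r)
    return mu * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)

def A_matrix(r, mu=398600.4418):
    """A = [[0, I], [G(r), 0]] for the state x = [position; velocity]."""
    A = np.zeros((6, 6))
    A[:3, 3:] = np.eye(3)
    A[3:, :3] = gravity_gradient(r, mu)
    return A
```

A quick sanity check is that $G$ is symmetric and traceless (gravity is the gradient of a harmonic potential), and that it matches a finite-difference Jacobian of the acceleration.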
If $A(t_1)$ and $A(t_2)$ commute for all $t_1$ and $t_2$, so $A(t_1)\,A(t_2) = A(t_2)\,A(t_1)$, then you could use
$$
\Phi(t,0) = e^{\int_0^t A(\tau)\,d\tau} = e^{B(t)}
$$
which for the time-invariant case indeed reduces to $e^{A\,t}$. To see why this works, one can use the Taylor expansion as a way of calculating the matrix exponential
$$
e^{B(t)} = I + B(t) + \frac{1}{2!} B(t)^2 + \frac{1}{3!} B(t)^3 + \cdots
$$
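As a sanity check, this truncated series can be compared against a library matrix exponential. A quick sketch (the truncation order and test matrix are my own choices; production code should use `scipy.linalg.expm`, which relies on scaling-and-squaring rather than a raw Taylor series):

```python
import numpy as np
from scipy.linalg import expm

def expm_taylor(B, terms=30):
    """Matrix exponential via the truncated Taylor series
    e^B = I + B + B^2/2! + ... (adequate for small ||B||)."""
    result = np.eye(B.shape[0])
    term = np.eye(B.shape[0])
    for k in range(1, terms):
        term = term @ B / k       # term = B^k / k!
        result = result + term
    return result
```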
The time derivative of this expression should be equal to $A(t)\,\Phi(t,0)$. Taking the time derivative of this Taylor expansion gives
$$
\frac{d}{dt}e^{B(t)} = \dot{B}(t) + \frac{1}{2!} \left(\dot{B}(t)\,B(t) + B(t)\,\dot{B}(t)\right) + \frac{1}{3!} \left(\dot{B}(t)\,B(t)^2 + B(t)\,\dot{B}(t)\,B(t) + B(t)^2\,\dot{B}(t)\right) + \cdots
$$
Using that $\dot{B}(t) = A(t)$ and that $B(t)$ can be seen as a linear combination of $A(\tau)$ for all $\tau\in[0,t]$, it follows that if $A(t_1)$ and $A(t_2)$ commute for all $t_1$ and $t_2$, then $A(t)$ and $B(t)$ commute as well. This allows $\dot{B}(t) = A(t)$ to be factored out of every term of the derivative of the Taylor series
$$
\frac{d}{dt}e^{B(t)} = A(t)\left(I + \frac{1}{2!} 2\,B(t) + \frac{1}{3!} 3\,B(t)^2 + \cdots\right) = A(t)\left(I + B(t) + \frac{1}{2!} B(t)^2 + \cdots\right)
$$
which is just equal to $A(t)\,e^{B(t)}$.
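A simple family for which the commuting condition holds is $A(t) = f(t)\,M$ with $M$ a fixed matrix, since $A(t_1)A(t_2) = f(t_1)f(t_2)\,M^2 = A(t_2)A(t_1)$. A sketch (the choices of $M$ and $f(t)=\cos t$ are my own example) verifying numerically that $e^{B(t)}$ indeed solves $\dot{\Phi} = A(t)\,\Phi$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# A(t) = cos(t) * M commutes with itself at different times:
# A(t1) A(t2) = cos(t1) cos(t2) M^2 = A(t2) A(t1).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])

def A(t):
    return np.cos(t) * M

def phi_closed_form(t):
    """Phi(t,0) = exp(B(t)) with B(t) = int_0^t A(tau) dtau = sin(t) * M."""
    return expm(np.sin(t) * M)

def phi_numerical(t):
    """Phi(t,0) by direct integration of Phi' = A(t) Phi, Phi(0) = I."""
    def rhs(tau, y):
        return (A(tau) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)
```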
However, your $A(t)$ does not satisfy this condition, so this solution for the state transition matrix cannot be used. I am not aware of another way of finding an analytical solution for the state transition matrix besides the Peano-Baker series. In this document it is mentioned on page 40 that the state transition matrix is almost always obtained via numerical integration. That document is from 1991, so if numerical integration was already preferred over analytical solutions back then, it should still be the preferred method nowadays, with much more computational power at our disposal.
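Concretely, the numerical route just integrates the matrix differential equation $\dot{\Phi}(t,0) = A(t)\,\Phi(t,0)$ with initial condition $\Phi(0,0) = I$. A sketch with an arbitrary (non-commuting) periodic $A(t)$ of my own:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # Arbitrary example; A(t1) and A(t2) do not commute in general.
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3 * np.cos(t)), -0.1]])

def stm(t_end, n=2):
    """Phi(t_end, 0) by integrating the matrix ODE
    Phi' = A(t) Phi with Phi(0, 0) = I."""
    def rhs(t, y):
        return (A(t) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (0.0, t_end), np.eye(n).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)
```

Liouville's formula, $\det\Phi(t,0) = \exp\int_0^t \operatorname{tr}A(\tau)\,d\tau$, gives a cheap consistency check on the result.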
However it can be noted that once you have obtained a solution for $\Phi(t,0)\,\forall\,t\in[0,T]$ with $T$ the period of $A(t)$, then all further solutions can be derived from this using the following expression recursively
$$
\Phi(t+T,0) = \Phi(t,0)\,\Phi(T,0).
$$
From this you can also say something about stability, namely
$$
\Phi(n\,T,0) = \Phi(T,0)^n\quad \forall\ n\in\mathbb{Z}
$$
so if $\Phi(T,0)$ is a Schur matrix (all eigenvalues inside the unit circle) then the system will be asymptotically stable.
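Both the recursion and the stability criterion can be checked numerically on a periodic example. The damped Mathieu-type $A(t)$ below (period $T = 2\pi$) is my own illustration; $\Phi(T,0)$ is the monodromy matrix whose spectral radius decides stability:

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 2.0 * np.pi  # period of A(t)

def A(t):
    # Damped Mathieu-type system, periodic with period 2*pi.
    return np.array([[0.0, 1.0],
                     [-(2.0 + 0.2 * np.cos(t)), -0.3]])

def stm(t0, t1, n=2):
    """Phi(t1, t0) by integrating Phi' = A(t) Phi from Phi(t0, t0) = I."""
    def rhs(t, y):
        return (A(t) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (t0, t1), np.eye(n).ravel(),
                    rtol=1e-11, atol=1e-13)
    return sol.y[:, -1].reshape(n, n)

monodromy = stm(0.0, T)
spectral_radius = max(abs(np.linalg.eigvals(monodromy)))
```

In this particular example the damping keeps all Floquet multipliers (eigenvalues of $\Phi(T,0)$) inside the unit circle, so the sampled system is asymptotically stable.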
Best Answer
Let me turn my comment into an answer: if we write $x(t)=e^{At}y(t)$, then $y$ satisfies the differential equation $\dot{y}=\cos(\omega t)C(t)y$, where $C(t)=e^{-At}Be^{At}$. We thus see that $\Phi(t,0)=e^{At}\Lambda(t,0)$, where $\Lambda(t,0)$ is the state-transition matrix associated with $\cos(\omega t)C(t)$. Hence, we must prove that $\Lambda(t,0)y_0\to y_0$ when $\omega\to\infty$.
To show this, we integrate the differential equation to get $y(t)=y_0+\int_0^t\cos(\omega s)C(s)y(s)\,ds$, so we must prove that the integral term on the right goes to zero. Since $y$ depends on $\omega$, we write $y_\omega$ from now on. Integrating by parts, the integral term equals
$$ \frac{1}{\omega}\big[-\int_0^t\sin(\omega s)(C(s)y_\omega(s))'\,ds+\sin(\omega t)C(t)y_\omega(t)\big] \\ = \frac{1}{\omega}\big[-\int_0^t\sin(\omega s)(C'(s)y_\omega(s)+\cos(\omega s)C(s)^2y_\omega(s))\,ds+\sin(\omega t)C(t)y_\omega(t)\big]\hspace{0.5cm}(*) $$
We now want to let $\omega$ go to infinity, but first we need an upper bound on $y_\omega$ that is uniform in $\omega$. Taking the inner product with $y$ on both sides of the differential equation yields $\frac{1}{2}\frac{d}{dt}|y|^2=\cos(\omega t)\langle C(t)y,y\rangle\le |C(t)|\,|y|^2$. By Gronwall's inequality we get $|y(t)|\le R(t)\,|y_0|$, where $R$ is a function independent of $\omega$. Using this bound in equation (*), we conclude that $y_\omega(t)\to y_0$ as $\omega\to\infty$ for fixed $t$, so $\Lambda(t,0)\to I$ as $\omega\to \infty$.
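The limit can also be observed numerically: integrate $\dot{x} = \left(A + \cos(\omega t)\,B\right)x$ for increasing $\omega$ and compare $\Phi(t,0)$ with $e^{At}$ (equivalently, $\Lambda(t,0) = e^{-At}\Phi(t,0)$ with $I$). The matrices $A$ and $B$ below are my own example:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# x' = (A + cos(w t) B) x; the claim is Phi(t,0) -> e^{A t} as w -> infinity.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0, 0.0], [0.5, 0.0]])

def stm(t_end, w, n=2):
    """Phi(t_end, 0) for x' = (A + cos(w t) B) x, by integrating
    the matrix ODE Phi' = (A + cos(w t) B) Phi with Phi(0) = I."""
    def rhs(t, y):
        return ((A + np.cos(w * t) * B) @ y.reshape(n, n)).ravel()
    sol = solve_ivp(rhs, (0.0, t_end), np.eye(n).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

t = 1.0
errors = [np.linalg.norm(stm(t, w) - expm(A * t)) for w in (10.0, 100.0, 1000.0)]
```

The error should shrink roughly like $1/\omega$, consistent with the integration-by-parts bound above.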