Yes, the first-order approximation using the adjoint is correct.
It is easier to see if one interprets members of the Lie algebra as minimal vectors $\mathbf{x}$ instead of square matrices $\widehat{\mathbf{x}}$. Thus, we define the Lie bracket as $[\mathbf{a},\mathbf{b}]:=(\widehat{\mathbf{a}}\cdot\widehat{\mathbf{b}}-\widehat{\mathbf{b}}\cdot\widehat{\mathbf{a}})^\vee$.
In the case of $SO(3)$, it is simply the cross product: $[\mathbf{a},\mathbf{b}]=\mathbf{a}\times\mathbf{b}$. However, a Lie bracket in such a vector form exists for all other matrix Lie groups too.
Accordingly, the BCH formula is now defined as $\text{bch}(\mathbf{a},\mathbf{b}):=\log(\exp(\widehat{\mathbf{a}})\exp(\widehat{\mathbf{b}}))^\vee$.
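This definition can be checked numerically. Below is a minimal sketch for $SO(3)$ using SciPy's `expm`/`logm`; the helper names `hat`, `vee`, and `bch` are my own, not from the text.

```python
import numpy as np
from scipy.linalg import expm, logm

def hat(v):
    """Map a 3-vector to its skew-symmetric matrix (the so(3) hat operator)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def vee(M):
    """Inverse of hat: extract the 3-vector from a skew-symmetric matrix."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

def bch(a, b):
    """bch(a, b) = log(exp(a^) exp(b^))^v, computed numerically."""
    return vee(np.real(logm(expm(hat(a)) @ expm(hat(b)))))

a = np.array([0.02, -0.01, 0.03])
b = np.array([-0.01, 0.04, 0.02])

# Third-order BCH approximation, with the bracket realized as the cross product:
approx = (a + b + 0.5 * np.cross(a, b)
          + (np.cross(a, np.cross(a, b)) + np.cross(b, np.cross(b, a))) / 12.0)
print(np.max(np.abs(bch(a, b) - approx)))  # small: only the fourth-order error remains
```

The printed residual is the size of the neglected fourth-order BCH terms.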
For instance, writing $\mathbf{c}:=\log(\mathtt{A}\mathtt{B})^\vee$ and using a third-order approximation, we get:
$$\left. \frac{\partial}{\partial \mathbf{x}} \log(\mathtt{A}\exp(\mathbf{x})\mathtt{B})^\vee\right|_{\mathbf{x}=\mathbf{0}} $$
$$ =\left.\frac{\partial}{\partial \mathbf{x}} \log\left(\exp(\widehat{\mathtt{Ad}_\mathtt{A}\mathbf{x}})\mathtt{A}\mathtt{B}\right)^\vee\right|_{\mathbf{x}=\mathbf{0}}$$
$$=\left.\frac{\partial}{\partial \mathbf{x}} \log\left(\exp(\widehat{\mathtt{Ad}_\mathtt{A}\mathbf{x}})\exp(\widehat{\mathbf{c}})\right)^\vee\right|_{\mathbf{x}=\mathbf{0}}$$
$$= \left.\frac{\partial}{\partial \mathbf{x}} \text{bch}(\mathtt{Ad}_\mathtt{A}\mathbf{x},\mathbf{c})\right|_{\mathbf{x}=\mathbf{0}}$$
$$\approx \frac{\partial}{\partial \mathbf{x}}\left( \mathtt{Ad}_\mathtt{A}\mathbf{x}+\mathbf{c}
+ \frac{1}{2}[\mathtt{Ad}_\mathtt{A}\mathbf{x},\mathbf{c}]+\frac{1}{12}([\mathtt{Ad}_\mathtt{A}\mathbf{x},[\mathtt{Ad}_\mathtt{A}\mathbf{x},\mathbf{c}]]+ [\mathbf{c},[\mathbf{c},\mathtt{Ad}_\mathtt{A}\mathbf{x}]])\right)_{\mathbf{x}=\mathbf{0}}$$
$$ =\left.\left(\frac{\partial \mathbf{y}}{\partial\mathbf{y}} + \frac{1}{2}\cdot\frac{\partial [\mathbf{y},\mathbf{c}]}{\partial \mathbf{y}}\right|_{\mathbf{y}=\mathtt{Ad}_\mathtt{A}\mathbf{0}}
+\frac{1}{12}\left(\frac{\partial[\mathbf{y},[\mathbf{y},\mathbf{c}]]}{\partial\mathbf{y}}-\frac{\partial[\mathbf{c},[\mathbf{y},\mathbf{c}]]}{\partial \mathbf{y}}\right)_{\mathbf{y}=\mathtt{Ad}_\mathtt{A}\mathbf{0}}\right)
\left.\frac{\partial \mathtt{Ad}_\mathtt{A}\mathbf{x}}{\partial \mathbf{x}}\right|_{\mathbf{x}=\mathbf{0}}$$
$$= \left(\mathtt{I}
+ \left.\frac{1}{2}\cdot\frac{\partial [\mathbf{y},\mathbf{c}]}{\partial \mathbf{y}}\right|_{\mathbf{y}=\mathbf{0}} + \frac{1}{12}
\left(\frac{\partial [\mathbf{y},[\mathbf{0},\mathbf{c}]]}{\partial \mathbf{y}}+
\frac{\partial [\mathbf{0},[\mathbf{y},\mathbf{c}]]}{\partial \mathbf{y}}
-\left.\frac{\partial [\mathbf{c},\mathbf{w}]}{\partial \mathbf{w}}\right|_{\mathbf{w}=[\mathbf{0},\mathbf{c}]}
\left.\frac{\partial [\mathbf{y},\mathbf{c}]}{\partial \mathbf{y}}
\right|_{\mathbf{y}=\mathbf{0}}
\right)\right)
\mathtt{Ad}_\mathtt{A}$$
$$= \left(\mathtt{I}
+ \left.\frac{1}{2}\cdot\frac{\partial [\mathbf{y},\mathbf{c}]}{\partial \mathbf{y}}\right|_{\mathbf{y}=\mathbf{0}} + \frac{1}{12}\left.\frac{\partial [\mathbf{w},\mathbf{c}]}{\partial \mathbf{w}}\right|_{\mathbf{w}=\mathbf{0}}
\left.\frac{\partial [\mathbf{y},\mathbf{c}]}{\partial \mathbf{y}}\right|_{\mathbf{y}=\mathbf{0}}
\right)\mathtt{Ad}_\mathtt{A}$$
$$= \left(\mathtt{I}
+ \left.\frac{1}{2}\cdot\frac{\partial [\mathbf{y},\mathbf{c}]}{\partial \mathbf{y}}\right|_{\mathbf{y}=\mathbf{0}} + \frac{1}{12}\left(
\left.\frac{\partial [\mathbf{y},\mathbf{c}]}{\partial \mathbf{y}}\right|_{\mathbf{y}=\mathbf{0}} \right)^2
\right)\mathtt{Ad}_\mathtt{A}$$
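As a sanity check, for $SO(3)$ we have $\mathtt{Ad}_\mathtt{A}=\mathtt{A}$ and $\partial[\mathbf{y},\mathbf{c}]/\partial\mathbf{y}=-\widehat{\mathbf{c}}$, so the result reads $\left(\mathtt{I}-\frac{1}{2}\widehat{\mathbf{c}}+\frac{1}{12}\widehat{\mathbf{c}}^2\right)\mathtt{A}$. A numerical sketch comparing this against a central-difference Jacobian (helper names `hat`, `vee`, `f` are mine):

```python
import numpy as np
from scipy.linalg import expm, logm

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def vee(M):
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

rng = np.random.default_rng(0)
A = expm(hat(0.1 * rng.normal(size=3)))
B = expm(hat(0.1 * rng.normal(size=3)))
c = vee(np.real(logm(A @ B)))  # c = log(AB)^v

def f(x):
    return vee(np.real(logm(A @ expm(hat(x)) @ B)))

# Central-difference Jacobian of f at x = 0.
eps = 1e-5
J_num = np.column_stack([(f(eps * e) - f(-eps * e)) / (2 * eps)
                         for e in np.eye(3)])

# Closed-form third-order approximation: (I - 1/2 c^ + 1/12 c^ c^) Ad_A.
J_approx = (np.eye(3) - 0.5 * hat(c) + hat(c) @ hat(c) / 12.0) @ A
print(np.max(np.abs(J_num - J_approx)))  # small: fourth-order terms in c remain
```

The residual is dominated by the neglected higher-order terms of the BCH series, so it shrinks as the rotations get smaller.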
This answer is based on the observation that a form of complex analysis can be done in the matrix algebra.
Let $\mathfrak{A}$ denote the set of all $d\times d$ complex matrices. Then an $\mathfrak{A}$-valued function $F(z) = [f_{ij}(z)]_{i,j=1}^{d}$ on an open subset $U$ of $\mathbb{C}$ is holomorphic if each entry $f_{ij}(z)$ is holomorphic on $U$. Now it is easy to note:
Lemma 1. Let $F$ and $G$ be two $\mathfrak{A}$-valued functions on a domain $U$ of $\mathbb{C}$. If $F = G$ on some subset $S$ of $U$, where $S$ has an accumulation point in $U$, then $F = G$ on all of $U$.
Proof. Apply the identity theorem entry-wise. $\square$
Lemma 2. Let $F$ be an $\mathfrak{A}$-valued holomorphic function on an open set $U$ of $\mathbb{C}$. Then $\exp(F(z))$ is also holomorphic on $U$.
Proof. The series $\exp(F(z)) = \sum_{n=0}^{\infty} \frac{1}{n!} F(z)^n$ converges locally uniformly on $U$, hence so does each entry, and a locally uniform limit of holomorphic functions is holomorphic. $\square$
Now let $X \in \mathfrak{A}$ be such that $S_N = \sum_{n=1}^{N}\frac{(-1)^{n-1}}{n}X^n$ converges to some $S \in \mathfrak{A}$ as $N \to \infty$. Then
\begin{align*}
\sum_{n=1}^{N} \frac{(-1)^{n-1}}{n}X^n z^n
&= \sum_{n=1}^{N} (S_n - S_{n-1}) z^n \\
&= \sum_{n=1}^{N} S_n z^n - \sum_{n=1}^{N-1} S_n z^{n+1} \\
&= S_N z^N + (1 - z) \sum_{n=1}^{N-1} S_n z^n.
\end{align*}
(Of course, this is nothing but a consequence of summation by parts.) Assuming $|z| < 1$ and letting $N \to \infty$, both sides converge and the following equality holds:
$$ \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n}X^n z^n = (1 - z) \sum_{n=1}^{\infty} S_n z^n. \tag{1} $$
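Identity $(1)$ can be verified numerically by truncating both series at a large $N$; a small sketch (variable names mine):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 3))
X *= 0.8 / np.linalg.norm(X, 2)  # spectral norm below 1, so S_n converges
z = 0.7

N = 300
S = np.zeros_like(X)      # running partial sum S_n
P = np.eye(3)             # running power X^n
lhs = np.zeros_like(X)    # sum of (-1)^{n-1}/n X^n z^n
rhs = np.zeros_like(X)    # sum of S_n z^n
for n in range(1, N + 1):
    P = P @ X
    term = (-1) ** (n - 1) / n * P
    lhs += term * z ** n
    S += term
    rhs += S * z ** n
rhs *= (1 - z)
print(np.max(np.abs(lhs - rhs)))  # the truncation mismatch is of size ~ z^N
```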
Let us denote the left-hand side of $\text{(1)}$ by $f(z)$. Since $(S_n)$ converges, each entry of $S_n$ is bounded. So, if we write $S_n = [S_{n;ij}]_{i,j=1}^{d}$, then each $\sum_{n=1}^{\infty} S_{n;ij} z^n$ has radius of convergence at least $1$. From this, we find that $f$ is holomorphic on $\mathbb{D} = \{z \in \mathbb{C} : |z| < 1\}$. Then by Lemma 2, $\exp(f(z))$ is also holomorphic on $\mathbb{D}$. Moreover, we already know that if $\|zX\| < 1$, then
$$ \exp(f(z)) = 1 + zX . $$
By Lemma 1, this equality extends to all of $\mathbb{D}$. Finally, by Abel's theorem, $f(r) \to S$ as $r \to 1^-$. (Although the linked article only covers the case of $\mathbb{C}$-valued coefficients, the case of $\mathfrak{A}$-valued coefficients can be proved mutatis mutandis.) So,
$$ \exp(S) = \lim_{r \to 1^-} \exp(f(r)) = \lim_{r \to 1^-} (1 + rX) = 1 + X. $$
This is enough to establish the desired claim.
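A quick numerical illustration of the conclusion, for a random $X$ with spectral norm below $1$ (so the partial sums $S_N$ certainly converge); names are mine:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 4))
X *= 0.5 / np.linalg.norm(X, 2)  # force spectral norm 1/2

# Partial sums S_N = sum_{n=1}^{N} (-1)^{n-1} X^n / n (the Mercator series).
S = np.zeros_like(X)
P = np.eye(4)
for n in range(1, 200):
    P = P @ X
    S += (-1) ** (n - 1) / n * P

# exp(S) should reproduce 1 + X up to truncation and round-off.
print(np.max(np.abs(expm(S) - (np.eye(4) + X))))
```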
Best Answer
Using the facts that $\log(\exp(A))=A$ and $\exp(\log(A))=A$, and assuming that $\log(A)$ and $\log(B)$ commute (so that $\exp(X+Y)=\exp(X)\exp(Y)$ applies), we have $$\exp(\log(A)+\log(B))=\exp(\log(A))\exp(\log(B)),$$ which reduces to $$\exp(\log(A)+\log(B))=AB.$$ Taking the log of both sides gives $$\log(\exp(\log(A)+\log(B)))=\log(AB)$$ and hence $$\log(A)+\log(B)=\log(AB).$$
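A minimal numerical illustration of the commuting case; polynomials in the same matrix $M$ always commute, and the variable names below are mine:

```python
import numpy as np
from scipy.linalg import expm, logm

# Two commuting matrices: both are built from the same matrix M.
M = np.array([[0.2, 0.1],
              [0.0, 0.3]])
A = expm(M)          # log(A) = M
B = expm(0.5 * M)    # log(B) = M/2, which commutes with M

lhs = logm(A) + logm(B)
rhs = logm(A @ B)
print(np.max(np.abs(lhs - rhs)))  # ~ 0: the identity holds here
```

For non-commuting $A$ and $B$, this residual is generally nonzero, which is exactly where the BCH correction terms come in.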