To answer your stated questions: Yes, you've now done the computations correctly, and no, that's about as simple as you can have the expression. One may take advantage of the tensorial nature of the covariant derivative and write
$$ \nabla_i \omega_j = \partial_i \omega_j - \Gamma^k_{ij}\omega_k $$
using a somewhat sloppy mixture of abstract index notation with coordinate index notation, but that is about it.
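As a concrete sanity check (my example, not part of the question: the flat Euclidean metric in polar coordinates on $\mathbb{R}^2\setminus\{0\}$, whose only nonzero Christoffel symbols are $\Gamma^r_{\theta\theta}=-r$ and $\Gamma^\theta_{r\theta}=\Gamma^\theta_{\theta r}=1/r$), applying the formula to the one-form $\omega = d\theta$, i.e. $\omega_r = 0$, $\omega_\theta = 1$, gives

$$ \nabla_r \omega_\theta = \partial_r \omega_\theta - \Gamma^k_{r\theta}\,\omega_k = 0 - \Gamma^\theta_{r\theta}\cdot 1 = -\frac{1}{r}, \qquad \nabla_\theta \omega_\theta = \partial_\theta \omega_\theta - \Gamma^k_{\theta\theta}\,\omega_k = -\Gamma^r_{\theta\theta}\cdot 0 = 0. $$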
To answer your question in the comments, we introduce the abstract notion of a derivation on the tensor fields of a (smooth) manifold $M$. Let $\mathfrak{T}^{r,s}(M)$ denote the $\mathbb{R}$-vector space of smooth $(r,s)$-tensor fields over $M$. A tensor derivation is defined to be an $\mathbb{R}$-linear map $\mathscr{D}:\mathfrak{T}^{r,s}(M)\to \mathfrak{T}^{r,s}(M)$, defined for all pairs $(r,s)\in \mathbb{Z}_{\geq 0}^2$, satisfying the following conditions:
- If $A$ and $B$ are smooth $(r,s)$ and $(t,u)$ tensor fields respectively, we have that $\mathscr{D}(A\otimes B) = \mathscr{D}A\otimes B + A\otimes \mathscr{D}B$ (in other words, the Leibniz rule holds for tensor products).
- If $A$ is a smooth $(r,s)$ tensor field, and $\mathfrak{C}:\mathfrak{T}^{r,s}(M) \to \mathfrak{T}^{r-1,s-1}(M)$ is a tensor contraction, then we have $\mathscr{D}(\mathfrak{C}A) = \mathfrak{C}(\mathscr{D}A)$. (It commutes with tensor contractions.)
It is a theorem that for $r = s = 0$, any $\mathbb{R}$-linear map on $\mathfrak{T}^{0,0}(M)$ (the space of smooth functions) that satisfies the Leibniz rule (the second condition is vacuous here, since no tensor contractions apply to pure functions) can be represented as the directional derivative along a vector field. So we can add the inessential third condition:
- There exists a smooth vector field $V$ such that for all smooth functions $f$ over $M$, $V(f) = \mathscr{D}f$.
Remark: This natural one-to-one correspondence between derivations and smooth vector fields allows us to also interpret a derivation as a map from $\mathfrak{T}^{1,0}(M)$, the space of smooth vector fields, into the space of $\mathbb{R}$-linear maps on smooth tensor fields satisfying conditions 1 and 2. This manifests in the notation $\nabla_X Y$ for covariant differentiation and $\mathcal{L}_X Y$ for Lie differentiation.
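For orientation (a standard fact, stated here for concreteness): on functions both of these derivations reduce to the directional derivative of their underlying vector field, and they only begin to differ once applied to higher-rank tensors,

$$ \nabla_X f = \mathcal{L}_X f = X(f) = X^i\,\partial_i f, \qquad \text{whereas in general } \nabla_X Y \neq \mathcal{L}_X Y = [X,Y]. $$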
In any case, the above three conditions alone are not enough to specify the derivation. We just need one more ingredient: the action of $\mathscr{D}$ on either $\mathfrak{T}^{1,0}(M)$ or $\mathfrak{T}^{0,1}(M)$. I'll sketch the case of $\mathfrak{T}^{1,0}(M)$.
Suppose we know what $\mathscr{D}X$ is for all smooth vector fields $X$. Then for a one-form $\omega$, we consider the contraction $\mathfrak{C}(\omega\otimes X)$, which we would more commonly write as $\omega(X)$; it is a function. So using the axioms of the derivation, we have
$$ \mathscr{D}[\omega(X)] = (\mathscr{D}\omega)(X) + \omega(\mathscr{D}X) $$
This is an algebraic equation in which all objects are known except one: we know what $\omega(X)$ is given $\omega$ and $X$; we know what $\mathscr{D}X$ is by assumption, and hence also $\omega(\mathscr{D}X)$; and we know what $\mathscr{D}[\omega(X)]$ is, since $\omega(X)$ is a function. Hence we can solve the algebraic equation for $\mathscr{D}\omega$, using that $\mathfrak{T}^{1,0}(M)$ and $\mathfrak{T}^{0,1}(M)$ are dual, so that to specify $\mathscr{D}\omega$ it suffices to specify $(\mathscr{D}\omega)(X)$ for all $X$.
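For the derivation $\mathscr{D} = \nabla_Y$, this duality computation is exactly what produces the coordinate formula at the top of this answer: since $\omega(X)$ is a function, $\nabla_Y[\omega(X)] = Y(\omega(X))$, and taking $Y = \partial_i$, $X = \partial_j$ with $\nabla_i \partial_j = \Gamma^k_{ij}\partial_k$,

$$ (\nabla_Y\omega)(X) = Y\big(\omega(X)\big) - \omega(\nabla_Y X) \quad\Longrightarrow\quad \nabla_i \omega_j = \partial_i \omega_j - \Gamma^k_{ij}\,\omega_k. $$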
Similarly, now given an arbitrary tensor field $\Xi\in \mathfrak{T}^{r,s}(M)$, we can do the same procedure and consider
$$ \mathscr{D}\left[ \mathfrak{C}_1\mathfrak{C}_2\cdots\mathfrak{C}_{r+s} \Xi \otimes X_1\otimes X_2\otimes\cdots\otimes X_s\otimes \omega_1\otimes\cdots\otimes \omega_r\right] $$
the full contraction of $\Xi$ against $s$ arbitrarily chosen vector fields and $r$ arbitrarily chosen one-forms. Expanding the above expression using the axioms of the derivation, we are left with only one unknown: $\mathscr{D}\Xi$. Everything else ($\mathscr{D}X_n$, $\mathscr{D}\omega_n$, etc.) is computable from the axioms, the vector field $V$ of the third axiom, and the assumed knowledge of $\mathscr{D}X$ for all vector fields $X$.
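To see the pattern in the lowest nontrivial case (a routine instance of the general procedure), for a $(1,1)$-tensor field $T$ the expansion, solved for the unknown term, reads

$$ (\mathscr{D}T)(\omega, X) = \mathscr{D}\big[T(\omega,X)\big] - T(\mathscr{D}\omega, X) - T(\omega, \mathscr{D}X), $$

with every term on the right computable from the previous steps.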
For illustration, we can verify that the above construction is indeed "tensorial". Given that
$$ (\mathscr{D}\omega)(X) = \mathscr{D}(\omega\cdot X) - \omega(\mathscr{D}X) $$
one may wonder whether $\mathscr{D}\omega$ is indeed a one-form: that is, whether $(\mathscr{D}\omega)(fX) = f\,(\mathscr{D}\omega)(X)$ for $f$ a smooth function. We can compute directly:
$$ (\mathscr{D}\omega)(fX) = \mathscr{D}(\omega\cdot fX) - \omega(\mathscr{D}(fX)) = \mathscr{D}[f(\omega\cdot X)] - \omega\big( (\mathscr{D}f)\, X + f\, \mathscr{D}X\big) = (\mathscr{D}f)\, (\omega\cdot X) + f\, \mathscr{D}(\omega\cdot X) - f\, \omega(\mathscr{D}X) - \omega\big((\mathscr{D}f)\, X\big) $$
Noting that $(\mathscr{D}f)(\omega\cdot X) = \omega\big((\mathscr{D}f)\,X\big)$, since $\mathscr{D}f$ is just a smooth function and $\omega$ is function-linear, these two terms cancel, and we are left with $f\,\mathscr{D}(\omega\cdot X) - f\,\omega(\mathscr{D}X) = f\,(\mathscr{D}\omega)(X)$. So the Leibniz rule plus the contraction rule indeed guarantee that $\mathscr{D}\omega$ (and hence $\mathscr{D}\Xi$ for an arbitrary tensor field $\Xi$) is tensorial.
Lastly, what does this have to do with connection coefficients? All this mucking about with the Christoffel symbols is just a way of saying: we know what $\mathscr{D}X$ is. That is, consider the derivation labeled $\nabla_Y$, which acts on scalar fields as the vector field $Y$ with coordinate expansion $Y^i\partial_i$. For an arbitrary vector field $X$ given in local coordinates $X^i \partial_i$, we demand that the local coordinate expression of $(\nabla_Y X)^i \partial_i$ be given by
$$ (\nabla_Y X)^i = Y^j\partial_j X^i + \Gamma^i_{jk}X^k Y^j $$
and we carry on from there.
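As a quick sanity check (a routine verification, not needed for the argument), this coordinate formula satisfies the Leibniz rule over functions, $\nabla_Y(fX) = Y(f)\,X + f\,\nabla_Y X$:

$$ \big(\nabla_Y (fX)\big)^i = Y^j\partial_j (fX^i) + \Gamma^i_{jk}\,f X^k Y^j = (Y^j\partial_j f)\, X^i + f\big(Y^j\partial_j X^i + \Gamma^i_{jk}X^k Y^j\big) = Y(f)\,X^i + f\,(\nabla_Y X)^i. $$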
By a partition-of-unity argument, one can show that any vector bundle admits an inner product, that is, a smooth $2$-tensor on the fibers that is symmetric and positive definite: for any smooth sections $v$ and $w$, the function
$$
p \mapsto \langle v,w\rangle_p
$$ is smooth.
Suppose $E$ is a vector bundle over $M$ of rank $k$, and choose an inner product on it. Suppose $E$ admits a global frame $(v_1,\ldots,v_k)$. Then
\begin{align*}
\Phi : E &\to M \times \mathbb{R}^k \\
(p,u) &\mapsto \left(p,(\langle u,v_1\rangle_p,\ldots,\langle u , v_k \rangle_p)\right)
\end{align*}
is a global trivialization.
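To see why (a routine verification filled in here): $\Phi$ is linear on each fiber, so it suffices to check fiberwise injectivity; surjectivity then follows since the fibers have the same dimension $k$. Suppose $\langle u,v_i\rangle_p = 0$ for all $i$, and expand $u$ in the frame:

$$ u = \sum_j a^j v_j(p) \quad\Longrightarrow\quad 0 = \langle u, v_i\rangle_p = \sum_j a^j \langle v_j, v_i\rangle_p \quad\text{for all } i, $$

and since the Gram matrix $\big(\langle v_j, v_i\rangle_p\big)_{i,j}$ of a frame is positive definite, hence invertible, all $a^j = 0$ and $u = 0$.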
Here is a construction of a smooth inner product on any finite rank smooth vector bundle $E$. Choose a locally finite open cover $\{U_i\}_{i\in I}$ such that $E$ is locally trivial on each $U_i$:
$$
\forall i \in I,~ \exists \Phi_i : E|_{U_i} \overset{\sim}{\to} U_i\times \mathbb{R}^k
$$
where $\Phi_i$ is smooth. Define on $E|_{U_i}$ the smooth inner product
$$
\langle u,w\rangle_i = \langle {\Phi_i}_*u,{\Phi_i}_*w\rangle_{\mathbb{R}^k}
$$
that is, $\langle \cdot,\cdot \rangle_i = (\Phi_i)^* \langle\cdot,\cdot\rangle_{\mathbb{R}^k}$. Then $\langle\cdot,\cdot \rangle_i$ is smooth over $U_i$ because both $\Phi_i$ and the standard inner product on $\mathbb{R}^k$ are smooth.
Choose a smooth partition of unity $\{\varphi_i\}_{i\in I}$ subordinate to the locally finite open cover $\{U_i\}_{i\in I}$, and define, for $u$ and $w$ sections of $E$:
$$
\langle u,w \rangle = \sum_{i\in I} \varphi_i \cdot \langle u,w\rangle_i
$$
Each summand, extended by zero outside $\operatorname{supp}\varphi_i$, is a smooth function, and the sum is locally finite, so $\langle\cdot,\cdot\rangle$ is smooth. It is clearly bilinear and symmetric. Moreover, at a point $p$:
$$
\langle u,u\rangle_p = \sum_{i \in I} \varphi_i(p)\, \langle u(p),u(p) \rangle_i
$$
As all terms are nonnegative, the sum is nonnegative. Moreover, it vanishes if and only if $u(p) = 0$: since $\sum_i \varphi_i(p) = 1$, some $\varphi_i(p) > 0$, and then $\langle u(p),u(p)\rangle_i = 0$ forces $u(p) = 0$, because $\Phi_i$ is a linear isomorphism on fibers and $\langle\cdot, \cdot \rangle_{\mathbb{R}^k}$ is positive definite. Thus $\langle\cdot,\cdot\rangle$ is an inner product on $E$.
I think that there are two things to say here: Since you are not asking for interesting examples, you have to be aware of the fact that linear connections are a very soft structure, so you can make arbitrary choices to define them. Indeed, for a local frame $\{s_j\}$ defined on $U\subset M$ you can choose an arbitrary matrix $A=(A^i_j)$ of one-forms $A^i_j\in\Omega^1(U)$ and define
$$ \nabla\Big(\sum_i f_is_i\Big):=\sum_i df_i\otimes s_i+\sum_{i,j}f_iA^j_i\otimes s_j, $$
which just means
$$ \nabla_\xi\Big(\sum_i f_is_i\Big)(x):=\sum_i df_i(\xi)(x)\,s_i(x)+\sum_{i,j}f_i(x)\,A^j_i(\xi)(x)\,s_j(x). $$
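One can check (a routine verification, spelled out here for completeness) that this prescription really does satisfy the Leibniz rule defining a connection: for $g\in C^\infty(U)$ and $\sigma=\sum_i f_is_i$,

$$ \nabla(g\sigma) = \sum_i d(g f_i)\otimes s_i + \sum_{i,j} g f_i A^j_i\otimes s_j = dg\otimes\sigma + g\,\nabla\sigma, $$

using $d(gf_i) = f_i\,dg + g\,df_i$ in the first sum.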
This also illustrates the second point I want to make, namely that the setting you are trying to move to is something of an overkill. A choice of local trivialization of a vector bundle is equivalent to a choice of local frame defined on the same subset. So if you start with a local trivialization $\phi$, you should work with the induced local frame, i.e. with $s_j$ characterized by the fact that $\phi\circ s_j$ is the constant function $e_j$, the $j$th unit vector. In the above language, this just means that the $f_i$ are the components of the function $\phi\circ s:U\to\mathbb R^k$, where $s = \sum_i f_is_i$. The formula above then gives you the components of $\phi\circ \nabla_\xi s$ as $df_i(\xi)+\sum_jf_jA^j_i(\xi)$. Of course you can then convert this into expressions for a different frame, which causes complications but in my opinion does not lead to additional insight. You can also go further and write things in terms of local coordinates, but sections have values in $E$, so they can only be expressed in local coordinates after a choice of trivialization (or of a second frame).