I came up with a proof of Artin's linear independence of characters in field theory. The usual proof uses a clever trick devised by Artin. Since I'm not as clever as him, I prefer a proof which doesn't use a clever trick. Is this proof well-known? The proof consists of a few easy steps.
Step 1.
Let $K$ be a field. Let $A \neq 0$ be a not-necessarily-commutative associative unital $K$-algebra. Let $f_1,\dotsc,f_n$ be distinct $K$-algebra homomorphisms from $A$ to $K$. Let $\phi:A \to K^n$ be the map defined by $\phi(x) = (f_1(x),\dotsc,f_n(x))$. Then $\phi$ is surjective.
The proof is an easy consequence of the Chinese remainder theorem: the kernels $\ker f_1,\dotsc,\ker f_n$ are pairwise distinct two-sided ideals of codimension $1$ (a homomorphism to $K$ is determined by its kernel), hence pairwise comaximal, so $A/\bigcap_i \ker f_i \cong \prod_i A/\ker f_i \cong K^n$.
Step 2.
Let $f_1,\dotsc,f_n$ be as above. There are elements $x_1,\dotsc,x_n$ of $A$ such that $f_j(x_i) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.
The proof is an easy consequence of Step 1: take $x_i$ to be any preimage under $\phi$ of the $i$-th standard basis vector of $K^n$.
Step 3.
Let $K$ and $A$ be as above. Let $\text{Homalg}(A, K)$ be the set of $K$-algebra homomorphisms from $A$ to $K$. Let $\text{Hom}(A, K)$ be the set of $K$-linear maps from $A$ to $K$. Then $\text{Homalg}(A, K)$ is a linearly independent subset of $\text{Hom}(A, K)$.
The proof is an easy consequence of Step 2: if $\sum_j c_j f_j = 0$, then evaluating at $x_i$ gives $c_i = 0$ for every $i$.
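Steps 1–3 can be illustrated concretely. The algebra and maps below are my own choice, not part of the argument: take the commutative algebra $A = \mathbb{Q}[x]/((x-1)(x-2)(x-3))$ and let $f_i$ be evaluation at $c_i$ for $c = 1, 2, 3$; the elements $x_i$ of Step 2 are then the Lagrange basis polynomials.

```python
from fractions import Fraction

# Assumed example: A = Q[x]/((x-1)(x-2)(x-3)), with f_i(p) = p(c_i).
nodes = [Fraction(1), Fraction(2), Fraction(3)]

def lagrange(i, t):
    """Value at t of the i-th Lagrange polynomial; it represents the
    element x_i of Step 2, since lagrange(i, c_j) = delta_ij."""
    v = Fraction(1)
    for j, c in enumerate(nodes):
        if j != i:
            v *= (t - c) / (nodes[i] - c)
    return v

# Step 2: f_j(x_i) = delta_ij, so the matrix of values is the identity.
delta = [[lagrange(i, c) for c in nodes] for i in range(3)]

# Step 1: phi is surjective -- any target tuple (t_1, t_2, t_3) is hit by
# the interpolant sum_i t_i * x_i.
target = [Fraction(5), Fraction(-1), Fraction(7, 2)]
interpolant = lambda t: sum(ti * lagrange(i, t) for i, ti in enumerate(target))
image = [interpolant(c) for c in nodes]

# Step 3: if sum_j c_j f_j = 0 as a linear functional, applying it to x_i
# returns exactly c_i, so every coefficient must vanish.
coeffs = [Fraction(4), Fraction(-7), Fraction(1, 3)]
recovered = [sum(cj * lagrange(i, c) for cj, c in zip(coeffs, nodes))
             for i in range(3)]
print(delta, image, recovered)
```

Of course this only exercises one commutative example; the proof itself handles arbitrary (possibly noncommutative) $A$.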
Step 4 (Artin's linear independence of characters)
Let $K$ be a field. $K$ is regarded as a monoid by multiplication. Let $M$ be a not-necessarily-commutative monoid. Let $\text{Hom}(M, K)$ be the set of monoid homomorphisms. Let $K^M$ be the set of maps from $M$ to $K$. $K^M$ is regarded as a vector space over $K$. Then $\text{Hom}(M, K)$ is a linearly independent subset of $K^M$.
The proof is an easy consequence of Step 3 if one considers the monoid algebra $K[M]$: every monoid homomorphism $M \to K$ extends uniquely to a $K$-algebra homomorphism $K[M] \to K$, distinct monoid homomorphisms give distinct extensions, and a linear relation among elements of $\text{Hom}(M, K)$ yields the same relation among the corresponding elements of $\text{Homalg}(K[M], K)$.
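Here is a concrete check of Step 4 with a monoid of my choosing (an assumption of this sketch, not part of the argument): $M = (\mathbb{N}, +)$ and the characters $\chi_a(n) = a^n$ for distinct $a \in \mathbb{Q}^\ast$, which are monoid homomorphisms since $a^{m+n} = a^m a^n$. Evaluating a vanishing combination at $n = 0, 1, 2$ produces a Vandermonde system, whose nonzero determinant forces all coefficients to vanish.

```python
from fractions import Fraction
from itertools import permutations

# Assumed example: characters chi_a(n) = a**n of the monoid (N, +) for
# distinct a in Q^*.
bases = [Fraction(2), Fraction(3), Fraction(5)]

# If sum_i c_i * chi_{a_i} vanishes on M, then evaluating at n = 0, 1, 2
# puts (c_i) in the kernel of the Vandermonde matrix V[n][i] = a_i**n,
# so a nonzero determinant forces c = 0, i.e. linear independence.
V = [[a ** n for a in bases] for n in range(len(bases))]

def det(m):
    """Exact determinant via the Leibniz formula (fine for a 3x3 matrix)."""
    size = len(m)
    total = Fraction(0)
    for perm in permutations(range(size)):
        sign = 1
        for i in range(size):
            for j in range(i + 1, size):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = Fraction(1)
        for i in range(size):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

print(det(V))  # Vandermonde: (3-2)*(5-2)*(5-3) = 6, nonzero
```

The same Vandermonde argument works for any finite set of distinct bases, which recovers the classical fact that $x \mapsto a^x$ for distinct $a$ are linearly independent functions.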
Best Answer
This is an approach different from yours. Let's make some things precise.
Definition Let $G$ be a group and $F$ be a field. A character of $G$ in $F$ is a group homomorphism $\sigma: G \to F^\ast$. A set of characters $\{\sigma_1,\ldots,\sigma_n\}$ is independent if the only $a_1,\ldots,a_n\in F$ with $\sum_{j=1}^n a_j\sigma_j(x) = 0$ for all $x\in G$ are $a_1=\cdots=a_n=0$.
Theorem Let $G$ be a group and $F$ be a field. For any $n\in \Bbb N$, any set $\{\sigma_1,\ldots,\sigma_n\}$ of $n$ characters from $G$ to $F$ is independent.
Proof. Proceed by induction on $n$.
If $n=1$ and $a\in F$ then $$a\sigma(x) =0\quad \forall x\in G$$ implies $a=0$ because $\sigma(G)\subseteq F^\ast$.
Suppose that the theorem holds for every $k\in\{1,\ldots,n-1\}$; this is our induction hypothesis.
Arguing by contradiction, suppose that there is a set $\{\sigma_1,\ldots,\sigma_n\}$ of $n$ characters from $G$ to $F$ for which there exist $a_1,\ldots,a_n\in F$, not all $0$, such that $$\sum_{j=1}^n a_j\sigma_j(x) = 0\quad\forall x\in G. \tag{1}$$ Notice that if some $a_j$ were $0$, we would have a dependent set of characters with fewer than $n$ elements. By our induction hypothesis this cannot happen, so all the $a_j$ are nonzero.
Dividing (1) by $a_n$, we may assume that $a_n=1$. So we have $$0=a_1\sigma_1(x)+\cdots+a_{n-1}\sigma_{n-1}(x) + \sigma_n(x)\quad \forall x\in G.\tag{2}$$
Now, $\sigma_1\neq \sigma_n$ (otherwise $\{\sigma_1,\ldots,\sigma_n\}$ would not have $n$ elements), and thus there is some $g\in G$ such that $\sigma_1(g)\neq \sigma_n(g)$. Equation (2) is valid for every element of $G$; in particular it is valid for elements of the form $gx$ with $x\in G$. Since each $\sigma_j$ is a homomorphism, $\sigma_j(gx)=\sigma_j(g)\sigma_j(x)$, so we get $$0=a_1\sigma_1(g)\sigma_1(x)+\cdots+a_{n-1}\sigma_{n-1}(g)\sigma_{n-1}(x)+\sigma_n(g)\sigma_n(x)\quad\forall x\in G.$$
Divide this last equation by $\sigma_n(g)$, which is nonzero because $\sigma_n(g)\in F^\ast$: $$0=a_1\frac{\sigma_1(g)}{\sigma_n(g)}\sigma_1(x)+\cdots+a_{n-1}\frac{\sigma_{n-1}(g)}{\sigma_n(g)}\sigma_{n-1}(x)+\sigma_n(x)\quad\forall x\in G.$$
Subtracting equation (2) from this last one, we get $$0=a_1\left[\frac{\sigma_1(g)}{\sigma_n(g)}-1\right]\sigma_1(x)+\cdots+a_{n-1}\left[\frac{\sigma_{n-1}(g)}{\sigma_n(g)}-1\right]\sigma_{n-1}(x)\quad\forall x\in G.$$
Thanks to the independence of $\{\sigma_1,\ldots,\sigma_{n-1}\}$, guaranteed by the induction hypothesis, we obtain $$a_1\left[\frac{\sigma_1(g)}{\sigma_n(g)}-1\right]=0,$$ and since $a_1\neq 0$, this implies $\sigma_1(g)=\sigma_n(g)$, which is absurd by the choice of $g$. This contradiction completes the induction.
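As a finite sanity check (not a proof), one can verify the $n=2$ case on a small example of my choosing: $G=(\mathbb{Z}/5\mathbb{Z})^\ast$, $F=\mathbb{Q}$, with the trivial character and the Legendre symbol, both genuine homomorphisms $G\to F^\ast$.

```python
# Assumed example: G = (Z/5Z)^* = {1, 2, 3, 4} and F = Q.
G = [1, 2, 3, 4]

def trivial(x):
    return 1

def legendre(x):  # +1 on the squares mod 5 (namely 1 and 4), -1 otherwise
    return 1 if x in (1, 4) else -1

# Both maps really are characters: sigma(xy) = sigma(x) * sigma(y) in G.
for s in (trivial, legendre):
    for x in G:
        for y in G:
            assert s(x * y % 5) == s(x) * s(y)

# Independence: a*trivial + b*legendre vanishes on all of G only for
# a = b = 0 (evaluating at x = 1 gives a + b = 0, at x = 2 gives a - b = 0).
# A brute-force scan over a small integer grid confirms this.
solutions = [(a, b) for a in range(-3, 4) for b in range(-3, 4)
             if all(a * trivial(x) + b * legendre(x) == 0 for x in G)]
print(solutions)
```

The scan over a finite grid of coefficients is, of course, only a consistency check; the induction above is what proves the statement for all coefficients and all $n$.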