Yes, that's true. Let $f_i : V_i \to W_i$ be two linear maps. Since $\mathrm{im}(f_1) \otimes \mathrm{im}(f_2)$ embeds into $W_1 \otimes W_2$, we may assume that $f_1,f_2$ are surjective. But then they are split, so that we can assume that $V_i = W_i \oplus U_i$ and that $f_i$ equals the projection $V_i \to W_i$, with kernel $U_i$. Then $V_1 \otimes V_2 = W_1 \otimes W_2 \oplus W_1 \otimes U_2 \oplus U_1 \otimes W_2 \oplus U_1 \otimes U_2$ and $f_1 \otimes f_2$ equals the projection of $V_1 \otimes V_2$ onto $W_1 \otimes W_2$. Hence the kernel is $W_1 \otimes U_2 \oplus U_1 \otimes W_2 \oplus U_1 \otimes U_2 = U_1 \otimes V_2 + V_1 \otimes U_2$.
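For vector spaces this description of the kernel is easy to sanity-check numerically: representing $f_1 \otimes f_2$ by the Kronecker product of the matrices of $f_1$ and $f_2$, the kernel of $f_1 \otimes f_2$ should coincide with $\ker(f_1) \otimes V_2 + V_1 \otimes \ker(f_2)$. A minimal NumPy sketch (the shapes are arbitrary choices):

```python
import numpy as np

def null_space(M, tol=1e-10):
    # Orthonormal basis for ker(M), via the SVD.
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # columns span the kernel

rng = np.random.default_rng(0)
# Two generic maps with nontrivial kernels: f1 : R^3 -> R^2, f2 : R^4 -> R^2.
f1 = rng.standard_normal((2, 3))
f2 = rng.standard_normal((2, 4))

K = null_space(np.kron(f1, f2))           # ker(f1 (x) f2) inside V1 (x) V2 = R^12
U1, U2 = null_space(f1), null_space(f2)   # ker(f1), ker(f2)

# Columns spanning  ker(f1) (x) V2  +  V1 (x) ker(f2):
S = np.hstack([np.kron(U1, np.eye(4)), np.kron(np.eye(3), U2)])

# The two subspaces coincide: equal dimensions, and joining them adds nothing.
rank = np.linalg.matrix_rank
assert rank(K) == rank(S) == rank(np.hstack([K, S]))
```

Here both spans have dimension $12 - 2 \cdot 2 = 8$, matching $1 \cdot 4 + 3 \cdot 2 - 1 \cdot 2$ from the decomposition above.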
This shows even more: The kernel is the pushout $(\ker(f_1) \otimes V_2) \cup_{\ker(f_1) \otimes \ker(f_2)} (V_1 \otimes \ker(f_2))$.
By the way, this argument is purely formal and works in every semisimple abelian $\otimes$-category. What happens when we drop semisimplicity, for example when we consider modules over some commutative ring $R$? Then we only need some flatness assumptions:
Let $f_1 : V_1 \to W_1$ and $f_2 : V_2 \to W_2$ be two morphisms in an abelian $\otimes$-category (for example, $R$-linear maps between $R$-modules). If $f_1,f_2$ are epimorphisms, then we have exact sequences $\ker(f_1) \to V_1 \to W_1 \to 0$ and $\ker(f_2) \to V_2 \to W_2 \to 0$. Applying the right exactness of the tensor product twice(!), we get that the sequence
$\ker(f_1) \otimes V_2 \oplus V_1 \otimes \ker(f_2) \to V_1 \otimes V_2 \to W_1 \otimes W_2 \to 0$
is exact. If $f_1,f_2$ are not epi, we can still apply the above to their images and get the exactness of
$\ker(f_1) \otimes V_2 \oplus V_1 \otimes \ker(f_2) \to V_1 \otimes V_2 \to \mathrm{im}(f_1) \otimes \mathrm{im}(f_2) \to 0.$
Now assume that $\mathrm{im}(f_1)$ and $W_2$ are flat. Then $\mathrm{im}(f_1) \otimes \mathrm{im}(f_2)$ embeds into $\mathrm{im}(f_1) \otimes W_2$ which embeds into $W_1 \otimes W_2$. Hence, we have still that the sequence
$\ker(f_1) \otimes V_2 \oplus V_1 \otimes \ker(f_2) \to V_1 \otimes V_2 \to W_1 \otimes W_2$
is exact. In other words, we have a sum decomposition
$$\ker(f_1 \otimes f_2) = \alpha(\ker(f_1) \otimes V_2) + \beta(V_1 \otimes \ker(f_2)),$$ where $\alpha : \ker(f_1) \otimes V_2 \to V_1 \otimes V_2$ and $\beta : V_1 \otimes \ker(f_2) \to V_1 \otimes V_2$ are the canonical morphisms. In general, these are not monic! However, this is the case, by definition, when $V_1$ and $V_2$ are flat. So in this case we can safely treat $\alpha$ and $\beta$ as inclusions and write $$\ker(f_1 \otimes f_2) = V_1 \otimes \ker(f_2) + \ker(f_1) \otimes V_2.$$
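To see that $\alpha$ can genuinely fail to be monic, here is a standard example over $R = \mathbb{Z}$: let $f_1 \colon \mathbb{Z} \to \mathbb{Z}/2$ be the quotient map and $f_2$ the identity of $V_2 = \mathbb{Z}/2$. Then $\ker(f_1) = 2\mathbb{Z} \cong \mathbb{Z}$, so $\ker(f_1) \otimes V_2 \cong \mathbb{Z}/2$ is nonzero, but $\alpha$ sends its generator $2 \otimes 1$ to $$2 \otimes 1 = 1 \otimes 2 \cdot 1 = 1 \otimes 0 = 0$$ in $V_1 \otimes V_2 = \mathbb{Z} \otimes \mathbb{Z}/2$, so $\alpha = 0$. (Of course $V_2 = \mathbb{Z}/2$ is not flat.)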
Nice question. First, note that over $\mathbb{C}$, any operator can be represented with respect to an appropriate basis by an upper triangular matrix. This implies that any operator $A$ has invariant subspaces of all possible dimensions (the span of the first $k$ vectors of such a basis is invariant for each $k$), so the question is not interesting over $\mathbb{C}$.
To construct a counterexample over $\mathbb{R}$, I will use the following observations:
- If $n$ is even and the characteristic polynomial of $A$ has no real roots, then $A$ has no odd-dimensional invariant subspaces. The reason is that if you restrict $A$ to an odd-dimensional invariant subspace, you get an operator whose characteristic polynomial has odd degree and hence a real root, i.e., an eigenvector with a real eigenvalue; since this eigenvalue is also a root of the characteristic polynomial of $A$, this contradicts the assumption that the latter has no real roots.
- If the (possibly complex) roots of the characteristic polynomial of $A$ are $(\lambda_i)_{i=1}^n$ (with multiplicity) then the roots of the characteristic polynomial of $\Lambda^k(A)$ are $(\lambda_{\alpha})$ where $\alpha = (i_1 < \dots < i_k)$ runs over all possible multi-indices and $\lambda_{\alpha} := \lambda_{i_1} \dots \lambda_{i_k}$. To see this, assume first that $A$ is a complex operator and choose an ordered basis $(e_i)_{i=1}^n$ with respect to which $A$ is represented by an upper triangular matrix with
$$Ae_i = \lambda_i e_i \mod \operatorname{span} \{ e_j \}_{j < i}. $$
Then $\Lambda^k(A)$ is represented with respect to the induced ordered basis $(e_{\alpha})$ (where the order on the multi-indices is the lexicographical one) by an upper triangular matrix with
$$ \Lambda^k(A)(e_\alpha) = \lambda_{\alpha} e_{\alpha} \mod \operatorname{span} \{ e_{\beta} \}_{\beta < \alpha}. $$
The result for real operators follows by complexification using the fact that exterior power and complexification commute.
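Concretely, $\Lambda^k(A)$ is represented by the $k$-th compound matrix of $A$ (the matrix of $k \times k$ minors), so the statement about the roots can be sanity-checked numerically. A small NumPy sketch, using a symmetric matrix so that all eigenvalues are real:

```python
import itertools
import numpy as np

def exterior_power(A, k):
    # k-th compound matrix: its entries are the k x k minors of A, with rows
    # and columns indexed by the lexicographically ordered k-element subsets.
    n = A.shape[0]
    idx = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx]
                     for r in idx])

# A symmetric (hence diagonalizable, real-spectrum) test matrix.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
lam = np.linalg.eigvalsh(A)  # eigenvalues, ascending

# Eigenvalues of Λ²(A) should be the pairwise products λ_i λ_j, i < j.
expected = np.sort([lam[i] * lam[j]
                    for i, j in itertools.combinations(range(3), 2)])
actual = np.linalg.eigvalsh(exterior_power(A, 2))
assert np.allclose(expected, actual)
```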
Now, let $\theta = \frac{2\pi}{3}$ and set $\alpha = e^{i\theta}$. Consider the operator $A \colon \mathbb{R}^6 \rightarrow \mathbb{R}^6$ which is represented with respect to the standard basis by the block diagonal matrix
$$ \begin{pmatrix} \cos \theta & -\sin \theta & 0 & 0 & 0 & 0 \\
\sin \theta & \cos \theta & 0 & 0 & 0 & 0 \\
0 & 0 & \cos \theta & -\sin \theta & 0 & 0 \\
0 & 0 & \sin \theta & \cos \theta & 0 & 0 \\
0 & 0 & 0 & 0 & \cos \theta & -\sin \theta \\
0 & 0 & 0 & 0 & \sin \theta & \cos \theta
\end{pmatrix}. $$
The characteristic polynomial of $A$ is
$$ (z - \alpha)^3(z - \overline{\alpha})^3 = (z^2 - (2 \Re{\alpha})z + |\alpha|^2)^3 = (z^2 + z + 1)^3 $$
with roots
$$ \alpha, \overline{\alpha}, \alpha, \overline{\alpha}, \alpha, \overline{\alpha}. $$
The roots aren't real, so $A$ doesn't have a three-dimensional invariant subspace. However, $\alpha^3 = 1$ is a real root of the characteristic polynomial of $\Lambda^3(A)$ of multiplicity two. Since $A$ is orthogonal, so is $\Lambda^3(A)$; in particular $\Lambda^3(A)$ is diagonalizable, so it has two linearly independent eigenvectors of eigenvalue $1$. These are necessarily indecomposable: a decomposable eigenvector $v_1 \wedge v_2 \wedge v_3$ would make $\operatorname{span}\{v_1, v_2, v_3\}$ a three-dimensional $A$-invariant subspace.
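This example can be verified numerically by building $\Lambda^3(A)$ as the third compound matrix of $A$ (the $\binom{6}{3} \times \binom{6}{3} = 20 \times 20$ matrix of $3 \times 3$ minors); a short NumPy sketch:

```python
import itertools
import numpy as np

def exterior_power(A, k):
    # k-th compound matrix: entries are the k x k minors of A.
    n = A.shape[0]
    idx = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx]
                     for r in idx])

theta = 2 * np.pi / 3
c, s = np.cos(theta), np.sin(theta)
A = np.zeros((6, 6))
for b in range(3):  # three copies of the rotation block on the diagonal
    A[2 * b:2 * b + 2, 2 * b:2 * b + 2] = [[c, -s], [s, c]]

# A has no real eigenvalues ...
assert np.all(np.abs(np.linalg.eigvals(A).imag) > 0.5)

# ... yet Λ³(A), a 20 x 20 matrix, has eigenvalue 1 with a 2-dimensional eigenspace.
L = exterior_power(A, 3)
eigenspace_dim = 20 - np.linalg.matrix_rank(L - np.eye(20))
assert eigenspace_dim == 2
```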
Remark: One can show using primary decomposition that if a real operator has a real eigenvalue, then it has invariant subspaces of all possible dimensions. Hence, counterexamples are possible only in even dimensions. It is a nice exercise to see why you can't have a counterexample in dimension four, so this is a minimal counterexample in terms of dimension.
One way to do it is to define
$$ (f_1 \wedge \dots \wedge f_k)(v_1 \wedge \dots \wedge v_k) := \sum_{\sigma \in S_k} (-1)^{\sigma} f_1(v_{\sigma(1)}) \wedge \dots \wedge f_k(v_{\sigma(k)}). $$
You can check directly that this is well-defined and that $\underbrace{f \wedge \dots \wedge f}_{k \textrm{ times}} = k! \cdot \Lambda^k(f)$. For $k = 2$, you get
$$ (f \wedge g)(v_1 \wedge v_2) = f(v_1) \wedge g(v_2) - f(v_2) \wedge g(v_1). $$
Then
$$ 2 \cdot \Lambda^2(f_1 + f_2) = (f_1 + f_2) \wedge (f_1 + f_2) = f_1 \wedge f_1 + 2 f_1 \wedge f_2 + f_2 \wedge f_2 \\= 2 \left( \Lambda^2(f_1) + f_1 \wedge f_2 + \Lambda^2(f_2) \right)$$
so
$$ \Lambda^2(f_1 + f_2) - \Lambda^2(f_1) - \Lambda^2(f_2) = f_1 \wedge f_2 $$
and your expression is just half the trace of $f_1 \wedge f_2$.
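These identities are easy to check numerically, representing $\Lambda^2(f)$ by the second compound matrix and $f \wedge g$ by its matrix in the basis $e_i \wedge e_j$, $i < j$; as a byproduct one sees $\operatorname{tr}(f \wedge g) = \operatorname{tr}(f)\operatorname{tr}(g) - \operatorname{tr}(fg)$. A sketch:

```python
import itertools
import numpy as np

def exterior_power(F, k):
    # k-th compound matrix of F: the matrix of Λ^k(f) in the basis e_α.
    n = F.shape[0]
    idx = list(itertools.combinations(range(n), k))
    return np.array([[np.linalg.det(F[np.ix_(r, c)]) for c in idx]
                     for r in idx])

def wedge(F, G):
    # Matrix of f ∧ g on Λ²(R^n) in the basis e_i ∧ e_j (i < j), from
    # (f ∧ g)(e_i ∧ e_j) = f(e_i) ∧ g(e_j) - f(e_j) ∧ g(e_i).
    n = F.shape[0]
    pairs = list(itertools.combinations(range(n), 2))
    W = np.zeros((len(pairs), len(pairs)))
    for col, (i, j) in enumerate(pairs):
        for row, (p, q) in enumerate(pairs):
            W[row, col] = (F[p, i] * G[q, j] - F[q, i] * G[p, j]
                           - F[p, j] * G[q, i] + F[q, j] * G[p, i])
    return W

rng = np.random.default_rng(1)
F, G = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

assert np.allclose(wedge(F, F), 2 * exterior_power(F, 2))   # f ∧ f = 2 Λ²(f)
assert np.allclose(wedge(F, G), exterior_power(F + G, 2)    # polarization identity
                   - exterior_power(F, 2) - exterior_power(G, 2))
# tr(f ∧ g) = tr(f) tr(g) - tr(fg):
assert np.isclose(np.trace(wedge(F, G)),
                  np.trace(F) * np.trace(G) - np.trace(F @ G))
```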
Remark: This might seem like an ad hoc definition, but it is actually quite natural from a certain perspective. Assuming $V,W$ are finite-dimensional, we have $\operatorname{Hom}(\Lambda(V), \Lambda(W)) \cong \Lambda(V^{*}) \otimes \Lambda(W)$. Both $\Lambda(V^{*})$ and $\Lambda(W)$ are graded algebras, so the tensor product inherits a natural multiplication defined by
$$ (\mu_1 \otimes \eta_1) \wedge (\mu_2 \otimes \eta_2) := (\mu_1 \wedge \mu_2) \otimes (\eta_1 \wedge \eta_2), \,\,\, \mu_i \in \Lambda(V^{*}), \eta_i \in \Lambda(W). $$
The resulting bi-graded algebra is sometimes called the mixed exterior algebra. It contains copies of $\Lambda(V^{*})$ and $\Lambda(W)$. If you identify maps $f,g \colon V \rightarrow W$ with $(1,1)$-elements of the mixed exterior algebra, take their product, and identify the resulting $(2,2)$-element with a map from $\Lambda^2(V)$ to $\Lambda^2(W)$, you get the definition I gave at the beginning of my answer.