This answer provides a scheme for constructing a constructive proof, though I'm still working to actually extract the constructive proof explicitly, so please don't accept the answer just yet. (Update: see below.) We'll prove the following statement:
Let $R$ be a reduced ring. Let $A$ be a finitely generated $R$-module and let $B$ be an arbitrary $R$-module. Let injections $\alpha : R \to A$ and $\beta : R \to B$ be given. Then the canonical map $R \to A \otimes_R B$ is injective.
The general case, where $A$ is not necessarily finitely generated, follows formally from this one, since $A$ is the directed union of those of its finitely generated submodules which contain the image of $\alpha$, and tensoring with $B$ commutes with colimits.
We'll prove this statement by working internal to the little Zariski topos of $R$, that is, the topos of sheaves on $\operatorname{Spec}(R)$, as explained in these notes. In this topos, $R$, $A$, and $B$ have mirror images $R^\sim$, $A^\sim$, and $B^\sim$ such that $R \to A \otimes_R B$ is injective if and only if $R^\sim \to A^\sim \otimes_{R^\sim} B^\sim$ is a monomorphism in the topos. In order to ultimately be able to extract a fully explicit, constructive, non-toposophic proof, the little Zariski topos needs to be defined in a constructively sensible way; but this is possible. I presume that the extracted proof will look convoluted at first, but it's possible that it could be simplified, even to the point that one wonders why one didn't see it without the help of tools.
The point is that working internal to that topos simplifies the situation to the easiest case, namely that the base ring is a field, such that the proof is almost trivial. This is because the internal universe of the Zariski topos has the following peculiarities:
- The ring $R^\sim$ is a field in the sense that $1 \neq 0$ and $\forall x {:} R^\sim. \neg(\text{$x$ invertible}) \Rightarrow x = 0$.
- From this it follows that $\forall x{:}R^\sim. \neg\neg(x = 0) \Rightarrow x = 0$. This is a huge simplification, since it's much easier to verify doubly negated statements: In order to show that $\neg\neg\varphi \Rightarrow \neg\neg\psi$, it suffices to show that $\varphi \Rightarrow \neg\neg\psi$. Note that this is really a peculiarity of the Zariski topos. The analogous statement $\forall x \in R. \neg\neg(x = 0) \Rightarrow x = 0$ is in general not intuitionistically justified.
- Any finitely generated module over $R^\sim$ is not not finite free. (There does not not exist a minimal generating family. The usual proof shows that such a family is linearly independent and therefore a basis.)
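The reduction in the second point above, from $\neg\neg\varphi \Rightarrow \neg\neg\psi$ to $\varphi \Rightarrow \neg\neg\psi$, is intuitionistically valid. As a sanity check, here is the one-line proof rendered in Lean 4 (the hypothesis names are of course my own):

```lean
-- Intuitionistically valid: to prove ¬¬q from ¬¬p, it suffices to prove p → ¬¬q.
example (p q : Prop) (h : p → ¬¬q) (hnnp : ¬¬p) : ¬¬q :=
  fun hnq => hnnp (fun hp => h hp hnq)
```

Unfolding $\neg$ as "implies falsity", the term just says: given $\neg\psi$, derive a contradiction by feeding $\neg\varphi$ (obtained from $h$ and $\neg\psi$) to $\neg\neg\varphi$.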
Without further ado, here is the internal proof. Let $r : R^\sim$ be such that $r \cdot (\alpha(1) \otimes \beta(1)) = 0$ in $A^\sim \otimes B^\sim$. We want to verify that $r = 0$, but it suffices to verify that $\neg\neg(r = 0)$. Therefore we may assume that $A^\sim$ is finite free. Let $(x_1,\ldots,x_n)$ be a basis and write $\alpha(r) = \sum_i r_i x_i$. Since $A^\sim \otimes B^\sim \cong (B^\sim)^n$, it follows that $r_i \beta(1) = 0$ for all $i$. Since $\beta$ is injective, it follows that $r_i = 0$ for all $i$. Thus $\alpha(r) = 0$, and since $\alpha$ is injective, $r = 0$.
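Spelled out, the computation in the free case is just the following (unfolding the isomorphism used in the proof):

$$r \cdot (\alpha(1) \otimes \beta(1)) = \alpha(r) \otimes \beta(1) = \sum_i r_i \, (x_i \otimes \beta(1)) \in A^\sim \otimes B^\sim \cong (B^\sim)^n,$$

and under this isomorphism the element on the left corresponds to the tuple $(r_1 \beta(1), \ldots, r_n \beta(1))$, so its vanishing forces $r_i \beta(1) = 0$ for all $i$.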
Update: Here is a fully explicit constructive proof, obtained by working with @HeinrichD in the comments to unravel the scheme sketched above. Unfortunately it's rather convoluted and not particularly memorable; I hope that it can be simplified.
Lemma 1. Let $R$ be a ring. Let $A$ be an $R$-module with generating family $(x_1,\ldots,x_n)$. Assume that the only $g \in R$ such that one of the $x_i$ is an $R[g^{-1}]$-linear combination of the others in $A[g^{-1}]$ is $g = 0$. Then $A$ is free with $(x_1,\ldots,x_n)$ as a basis.
Proof: Let $\sum_i r_i x_i = 0$. Let $i$ be arbitrary. In $A[r_i^{-1}]$, the generator $x_i$ is a linear combination of the others. By assumption it follows that $r_i = 0$.
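Explicitly, the relation $\sum_i r_i x_i = 0$ can be solved for $x_i$ once $r_i$ is inverted:

$$x_i = -\sum_{j \neq i} \frac{r_j}{r_i}\, x_j \quad \text{in } A[r_i^{-1}],$$

which exhibits $x_i$ as an $R[r_i^{-1}]$-linear combination of the other generators, as used in the proof.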
Lemma 2. Let $R$ be a reduced ring. Let $A$ be a finitely generated $R$-module. Assume that the only $f \in R$ such that $A[f^{-1}]$ is a free $R[f^{-1}]$-module is $f = 0$. Then $R = 0$.
Proof: By induction on the length $n$ of a given generating family $(x_1,\ldots,x_n)$ of $A$. Note that we'll apply the induction hypothesis not to the ring $R$ itself, but to certain localizations of $R$.
If $n = 0$, then $A = 0$, which is free; so the assumption applied to $f := 1$ gives $1 = 0$, i.e. $R = 0$.
If $n \geq 1$, then we want to verify the assumptions of Lemma 1.
Thus let $g \in R$ be given such that one of the $x_i$ is an $R[g^{-1}]$-linear combination of the others in $A[g^{-1}]$. Therefore the $R[g^{-1}]$-module $A[g^{-1}]$ can be generated by $n-1$ elements. By the induction hypothesis (applied to the reduced ring $R[g^{-1}]$ and its module $A[g^{-1}]$, which are easily seen to satisfy the assumptions of the induction hypothesis) it follows that $R[g^{-1}] = 0$ (in this step the assumption enters for many different $f$'s). Therefore $g = 0$.
Thus, by Lemma 1, $A$ is free. We can finish by using the assumption for $f := 1$.
Corollary. Let $R$ be a reduced ring. Let $A$ be a finitely generated $R$-module. Let $B$ be an arbitrary $R$-module. Let injections $\alpha : R \to A$ and $\beta : R \to B$ be given. Then the canonical map $\alpha \otimes \beta : R \to A \otimes_R B$ is injective.
Proof. Let $r \in R$ be such that $r \cdot (\alpha(1) \otimes \beta(1)) = 0$. To verify that $r = 0$, we'll apply Lemma 2 to the ring $R' := R[r^{-1}]$ and the $R'$-module $A' := A[r^{-1}]$. So let $f \in R'$ be given such that $A'[f^{-1}]$ is a free $R'[f^{-1}]$-module. The canonical map $R'[f^{-1}] \to A'[f^{-1}] \otimes B[r^{-1}][f^{-1}]$ is injective (the easy case!). Therefore $r = 0$ in $R'[f^{-1}]$. Since $r$ is invertible in $R'$, it follows that $R'[f^{-1}] = 0$ and therefore $f = 0$. Lemma 2 now yields $R' = 0$, so $r$ is nilpotent in $R$; since $R$ is reduced, $r = 0$.
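For completeness, here is the "easy case" spelled out; it is the same argument as the internal field case above. Write $(x_1,\ldots,x_n)$ for a basis of the free module $A'[f^{-1}]$ and $\alpha(1) = \sum_i a_i x_i$. If $s \cdot (\alpha(1) \otimes \beta(1)) = 0$ in $A'[f^{-1}] \otimes B[r^{-1}][f^{-1}]$, then

$$0 = \sum_i s a_i\,(x_i \otimes \beta(1)) \longleftrightarrow (s a_1 \beta(1), \ldots, s a_n \beta(1)),$$

so $s a_i \beta(1) = 0$ for all $i$. Since localization is exact, the localized maps $\beta$ and $\alpha$ remain injective; hence $s a_i = 0$ for all $i$, then $s \alpha(1) = 0$, then $s = 0$.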
Best Answer
If you are willing to assume that $0$ has a primary decomposition, then take a minimal one, say $0 = \bigcap_{i = 1}^n Q_i$, with radicals $P_i = \sqrt{Q_i}$. The minimal primes of $A$ then occur among the $P_i$.
Let me show that $P = P_1$ is associated. Let $R = \bigcap_{i = 2}^n Q_i$, $Q = Q_1$. So $0 = Q \cap R$, and
$$R = \frac{R}{Q \cap R} \cong \frac{R + Q}{Q} \subset \frac{A}{Q}.$$
So $\operatorname{Ass}(R) \subset \operatorname{Ass}(A/Q) = \{ P \}$, where the last equality uses that $Q$ is $P$-primary and that $A$ is Noetherian. Again since $A$ is Noetherian and $R \neq 0$ (by minimality of the decomposition), $\operatorname{Ass}(R) \neq \emptyset$, so $P \in \operatorname{Ass}(R) \subset \operatorname{Ass}(A)$.
Until now, the proof is non-constructive. Now, what is the element $f$ you are looking for? It all lies in the claim that, since $A$ is Noetherian, $\operatorname{Ass}(R) \neq \emptyset$.
To show this, consider the set of ideals $X = \{ \operatorname{Ann}(f) \mid f \in R,\ f \neq 0 \}$ (excluding $f = 0$, whose annihilator is all of $A$). Since $A$ is Noetherian, $X$ has a maximal element, and it is not difficult to show that such a maximal element has to be prime. Indeed, let $I \in X$ be maximal, say $I = \operatorname{Ann}(f)$, and let $ab \in I$. Then $abf = 0$, and either $bf = 0$, in which case $b \in I$, or $bf \neq 0$ and $a \in \operatorname{Ann}(bf) \supset I$ with $\operatorname{Ann}(bf) \in X$. By maximality, $\operatorname{Ann}(bf) = I$, so $a \in I$.
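As a toy illustration of this maximal-annihilator argument (entirely my own addition, taking the special case $R = A = \mathbb{Z}/12\mathbb{Z}$ purely for concreteness), a few lines of Python can enumerate the annihilators of nonzero elements and check that the inclusion-maximal ones are prime:

```python
# Toy check in the finite ring Z/12: compute Ann(f) for every nonzero f,
# take the inclusion-maximal annihilators, and verify they are prime ideals.
n = 12
ring = range(n)

def ann(f):
    """The annihilator Ann(f) = { a : a*f = 0 } as a frozenset."""
    return frozenset(a for a in ring if (a * f) % n == 0)

def is_prime_ideal(I):
    """A proper ideal I is prime if a*b in I forces a in I or b in I."""
    return 1 not in I and all(
        a in I or b in I
        for a in ring for b in ring if (a * b) % n in I
    )

# X = { Ann(f) : f != 0 }; f = 0 would give the whole ring.
X = {ann(f) for f in ring if f != 0}

# The inclusion-maximal elements of X (frozenset's `<` is proper subset).
maximal = [I for I in X if not any(I < J for J in X)]

# Every inclusion-maximal annihilator is prime.
assert all(is_prime_ideal(I) for I in maximal)
```

Here the inclusion-maximal annihilators turn out to be $\operatorname{Ann}(6) = (2)$ and $\operatorname{Ann}(4) = (3)$, which are exactly the minimal (indeed all the associated) primes of $\mathbb{Z}/12\mathbb{Z}$.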
In conclusion, your desired element $f$ is characterized by the condition that $\operatorname{Ann}(f)$ is maximal in $X$. I leave to you the choice of whether or not this is more constructive than the proofs you have seen. In any case, the Noetherian condition itself only guarantees the existence of such a maximal element non-constructively, so it is entirely possible that one cannot get anything more explicit than this.