We first factorize the two given polynomials:
$x^5-2x^2+1=(x-1)(x^4+x^3+x^2-x-1)$ and $x^3+3x^2-x-3=(x-1)(x^2+4x+3)$
They have the common root $1$. The other roots of the second polynomial are $-1$ and $-3$ (since $x^2+4x+3=(x+1)(x+3)$), but neither is a root of the first one. Hence $x-1$ is, up to constant factors, the only common divisor of the two polynomials, and therefore their greatest common divisor.
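A quick numeric sanity check of these root claims (a sketch, not a proof; coefficient lists are written highest degree first):

```python
def ev(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] at x (Horner's rule)."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

f = [1, 0, 0, -2, 0, 1]   # x^5 - 2x^2 + 1
g = [1, 3, -1, -3]        # x^3 + 3x^2 - x - 3

print(ev(f, 1), ev(g, 1))    # 0 0     -> 1 is a common root
print(ev(f, -1), ev(f, -3))  # -2 -260 -> -1 and -3 are not roots of f
```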
Keep in mind that we use the corollary given in the original post to justify this argument.
Thanks for your help
Hint $ $ gcds in a PID $\rm D$ such as $\,\Bbb Q[x]\,$ persist in extension rings because the gcd may be specified by the solvability of (linear) equations over $\rm D$ and such solutions always persist in extension rings, i.e. roots in $\rm D$ remain roots in rings $\rm\,R \supset D.\:$ More precisely, the Bezout identity for the gcd yields the following ring-theoretic equational specification for the gcd
$$\begin{eqnarray} \rm\gcd(a,b) = c &\iff&\rm (a,b) = (c)\ \ \ {\rm [equality\ of\ ideals]}\\
&\iff&\rm a\: \color{#C00}x = c,\ b\:\color{#C00} y = c,\,\ a\:\color{#C00} u + b\: \color{#C00}v = c\ \ has\ roots\ \ \color{#C00}{x,y,u,v}\in D\end{eqnarray}$$
Proof $\ (\Leftarrow)\:$ In any ring $\rm R,\:$ $\rm\:a\: x = c,\ b\: y = c\:$ have roots $\rm\:x,y\in R$ $\iff$ $\rm c\ |\ a,b\:$ in $\rm R.$ Further if $\rm\:c = a\: u + b\: v\:$ has roots $\rm\:u,v\in R\:$ then $\rm\:d\ |\ a,b$ $\:\Rightarrow\:$ $\rm\:d\ |\ a\:u+b\:v = c\:$ in $\rm\: R.\:$ Hence we infer $\rm\:c = gcd(a,b)\:$ in $\rm\: R,\:$ being a common divisor divisible by every common divisor. $\ (\Rightarrow)\ $ If $\rm\:c = gcd(a,b)\:$ in $\rm D$ then the Bezout identity implies the existence of such roots $\rm\:u,v\in D.\ $ QED
Rings with such linearly representable gcds are known as Bezout rings. As above, gcds in such rings always persist in extension rings. In particular, coprime elements remain coprime in extension rings (with same $1$). This need not be true without such Bezout linear representations of the gcd. For example, $\rm\:\gcd(2,x) = 1\:$ in $\rm\:\mathbb Z[x]\:$ but the gcd is the nonunit $\:2\:$ in $\rm\:\mathbb Z[x/2]\subset \mathbb Q[x]$.
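To see concretely why no Bezout representation exists for $2$ and $x$ over $\Bbb Z$, evaluate a putative identity at $x=0$:

```latex
2\,u(x) + x\,v(x) = 1 \;\xrightarrow{\;x\,=\,0\;}\; 2\,u(0) = 1,
```

which has no solution $u(0)\in\Bbb Z$. So although $\gcd(2,x)=1$ in $\Bbb Z[x]$, the element $1$ is not a $\Bbb Z[x]$-linear combination of $2$ and $x$, and the gcd need not persist in extensions.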
Best Answer
A greatest common divisor of $f$ and $g$ is a polynomial $d$ such that

(1) $d$ divides both $f$ and $g$;

(2) every polynomial $e$ dividing both $f$ and $g$ also divides $d$.
(A) Suppose $d$ is a greatest common divisor of $f$ and $g$; then by (1), we can write $f=df_1$ and $g=dg_1$; therefore $$ r=f-gq=d(f_1-g_1q) $$ and so $d$ satisfies property (1) with respect to $g$ and $r$. Next, suppose $e$ divides both $g$ and $r$: $g=eg_2$, $r=er_2$; then $f=gq+r=e(g_2q+r_2)$, which means that $e$ divides both $f$ and $g$, so by (2) we conclude that $e$ divides $d$. Hence $d$ also satisfies property (2) with respect to $g$ and $r$.
The converse direction is very similar.
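As a concrete instance of (A), divide the two polynomials from the question (the quotient and remainder below were obtained by ordinary long division):

```latex
x^5 - 2x^2 + 1 \;=\; (x^3 + 3x^2 - x - 3)(x^2 - 3x + 10) \;+\; (-32x^2 + x + 31),
```

so $\gcd(x^5-2x^2+1,\ x^3+3x^2-x-3)=\gcd(x^3+3x^2-x-3,\ -32x^2+x+31)$; note that $x=1$ is still a root of the remainder $-32x^2+x+31$.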
(B) At each step of the Euclidean algorithm the greatest common divisor is preserved because of (A). The last step gives a zero remainder, so the divisor is obviously a greatest common divisor of itself and the dividend. But this divisor is exactly the last nonzero remainder.
(C) Traverse the steps of the algorithm in reverse, back-substituting each remainder, to express the greatest common divisor as a combination of the two original polynomials.
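Steps (B) and (C) can be sketched in code: a plain extended Euclidean algorithm for polynomials over $\Bbb Q$, using exact arithmetic via `fractions.Fraction`. Coefficient lists are highest degree first, and the function names are my own, not from any library:

```python
from fractions import Fraction

def trim(p):
    """Strip leading zero coefficients; the zero polynomial is []."""
    while p and p[0] == 0:
        p = p[1:]
    return p

def p_sub(a, b):
    """a - b for coefficient lists (highest degree first)."""
    n = max(len(a), len(b))
    a = [Fraction(0)] * (n - len(a)) + list(a)
    b = [Fraction(0)] * (n - len(b)) + list(b)
    return trim([x - y for x, y in zip(a, b)])

def p_mul(a, b):
    """Product of two coefficient lists (convolution)."""
    if not a or not b:
        return []
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return trim(out)

def p_divmod(f, g):
    """Quotient and remainder of f by a nonzero g (Fraction coefficients)."""
    f, q = list(f), []
    while len(f) >= len(g):
        c = f[0] / g[0]
        q.append(c)
        for i in range(len(g)):
            f[i] -= c * g[i]
        f.pop(0)  # leading coefficient is now zero
    return trim(q), trim(f)

def p_gcdex(f, g):
    """Extended Euclid: returns (h, s, t) with monic h = gcd(f, g) and s*f + t*g = h."""
    r0, r1 = trim([Fraction(c) for c in f]), trim([Fraction(c) for c in g])
    s0, s1 = [Fraction(1)], []    # invariant: r0 = s0*f + t0*g
    t0, t1 = [], [Fraction(1)]    # invariant: r1 = s1*f + t1*g
    while r1:                     # (B): replace (f, g) by (g, r) until r = 0
        q, r = p_divmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, p_sub(s0, p_mul(q, s1))  # (C): back-substitution
        t0, t1 = t1, p_sub(t0, p_mul(q, t1))
    lc = r0[0]                    # normalize the last nonzero remainder to be monic
    return ([c / lc for c in r0], [c / lc for c in s0], [c / lc for c in t0])

# The two polynomials from the question:
h, s, t = p_gcdex([1, 0, 0, -2, 0, 1], [1, 3, -1, -3])
print(h)  # [Fraction(1, 1), Fraction(-1, 1)]  i.e. x - 1
```

The returned `s` and `t` give the Bezout identity $s\,f + t\,g = x-1$ discussed in the hint above.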