Gcd of two linear functions with same slope

elementary-number-theory

The general question I'm trying to answer: given $M$ integer vectors of length $N$,
where $M < N$ and the $\gcd$ of each vector's entries is $1$,
how (if it is possible at all) can I construct an $N\times N$ unimodular matrix whose first $M$ rows are the given vectors?

For example, say $N = 3$, $M=1$, and our vector is $\begin{bmatrix}6 & 10 & 15\end{bmatrix}$.
A valid matrix would be:
$\begin{bmatrix}
6 & 10 & 15\\
0 & 1 & 1\\
1 & -1 & 0\\
\end{bmatrix}$

or, if given $\begin{bmatrix}102& 190& 345\end{bmatrix}$, an acceptable answer is:
$\begin{bmatrix}
102 & 190 & 345\\
0 & 1 & 1\\
25 & -38 & 0
\end{bmatrix}$
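
For reference, a quick Julia check confirms that both example matrices are unimodular; using rational arithmetic keeps the determinant exact:

```julia
using LinearAlgebra

A1 = [6 10 15; 0 1 1; 1 -1 0]
A2 = [102 190 345; 0 1 1; 25 -38 0]

# Exact determinants over the rationals; both equal 1.
det(A1 .// 1), det(A2 .// 1)   # (1//1, 1//1)
```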

FWIW, when infinitely many solutions exist, solutions with more zero entries (and probably also smaller absolute values) are preferable.

Currently, I only have a few specific, not-so-general solutions.
E.g., if $N = 3$ and $M = 1$, we first try and find the second row.
Say we have $\begin{bmatrix}a& b& c\end{bmatrix}$, and let the second row be $\begin{bmatrix}x& y& z\end{bmatrix}$.
If we arbitrarily let $x=1$, then we can subtract $a$ times the second row from the first without changing the determinant of the final matrix, meaning unimodularity should be preserved (this can be reversed later; we're just trying to find valid values for $y$ and $z$).
If we do so, we get $\begin{bmatrix}0 & b - a\cdot y & c - a\cdot z\end{bmatrix}$ as the first row.
For the matrix to be unimodular, the $\gcd$ of each row's entries must be $1$ (a necessary condition, since a common divisor of a row divides the determinant), so we require

$\gcd(b - a\cdot y, c - a\cdot z) = 1$

From here, I'm not quite sure what to do.
Given particular values, we can try a series of checks; e.g., if $\gcd(b,c) = 1$, then $y=z=0$ is a valid solution.
But I'm not sure how to determine whether this is solvable in general, or, when it is, how to describe the family of solutions.
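
As a sanity check on this heuristic, here is a small Julia sketch (the helper name `complete3` and the search window are my own choices, not from the question) that fixes $x=1$, brute-forces small $y,z$ until the $\gcd$ condition holds, and then completes the third row with Bézout coefficients from `gcdx`:

```julia
# Sketch of the N = 3, M = 1 heuristic: fix x = 1, search small (y, z)
# until gcd(b - a*y, c - a*z) == 1, then finish via Bezout coefficients.
function complete3(a, b, c)
    gcd(a, b, c) == 1 || error("gcd of the input vector must be 1")
    for y in 0:abs(a), z in 0:abs(a)   # small search window; widen if needed
        bp, cp = b - a*y, c - a*z      # reduced first row is [0, bp, cp]
        if gcd(bp, cp) == 1
            g, s, t = gcdx(bp, cp)     # s*bp + t*cp == 1
            return [a b c; 1 y z; 0 -t s]   # cofactor expansion gives det == -1
        end
    end
    error("no (y, z) found in the search window")
end
```

For example, `complete3(6, 10, 15)` finds $(y,z)=(0,1)$ and returns a matrix with first row $\begin{bmatrix}6 & 10 & 15\end{bmatrix}$ and determinant $-1$.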

Then, I am also not sure how to generalize this to larger problems.
My idea there is to try row reduction with row and column pivoting to build diagonal blocks as we iteratively append rows, and maintain a determinant of $\pm 1$ for each of these diagonal blocks.
If the blocks can be kept small enough (e.g. $\le 3$), then a series of specific solutions for those block sizes may be enough.
I'm open to any ideas, thoughts, suggestions!

EDIT:
Here is a simple implementation of lonza leggiera's approach in the Julia programming language:

```julia
using AbstractAlgebra, LinearAlgebra

function unimod(A)
    M, N = size(A)
    MS = MatrixSpace(ZZ, M, N)
    # Smith normal form with transforms: S == T * A * U
    S, T, U = snf_with_transform(MS(A))
    S = Matrix(S)
    # An extension exists iff every elementary divisor is 1
    all(isone, @views S[diagind(S)]) || throw(ArgumentError("Not possible!"))
    Uinv = Int.(inv(U .// 1))   # exact inverse via rationals
    # First M rows: T^-1 * (first M rows of U^-1) == A; the rest completes it
    [Int.(inv(T .// 1)) * Uinv[1:M, :]; Uinv[M+1:end, :]]
end
```

A few examples:

```julia
julia> unimod([6 8 15]).//1 |> det |> Int
1

julia> unimod([6 8 15; 1 1 0]).//1 |> det |> Int
-1

julia> unimod([1 0 0; 1 2 2])
ERROR: ArgumentError: Not possible!
```

Best Answer

It's possible to extend an $\ M\times N\ $ matrix with integer entries to a unimodular matrix if and only if all its elementary divisors are $\ 1\ $; that is, if and only if its Smith normal form is $$ \begin{bmatrix} I_{M\times M}&0_{M\times(N-M)} \end{bmatrix}\ . $$ This is not necessarily true just because the greatest common divisors of all its rows are $\ 1\ $. The greatest common divisors of both rows of the matrix $$ \pmatrix{1&0&0\\1&2&2} $$ are $\ 1\ $, for example, but its Smith normal form is $$ \pmatrix{1&0&0\\0&2&0}\ , $$ so it doesn't have a unimodular extension.
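
This is easy to check with the same AbstractAlgebra.jl machinery used in the question's EDIT:

```julia
using AbstractAlgebra

# Smith normal form of the counterexample: the 2 on the diagonal
# rules out a unimodular extension.
snf(matrix(ZZ, [1 0 0; 1 2 2]))   # [1 0 0; 0 2 0]
```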

To prove the above assertion, first note that if all the elementary divisors of $\ A\ $ are $\ 1\ $, then there exist an $\ M\times M $ unimodular matrix $\ U\ $ and an $\ N\times N $ unimodular matrix $\ V\ $ such that $$ UAV=\begin{bmatrix} I_{M\times M}&0_{M\times(N-M)} \end{bmatrix}\ . $$ Let $\ W_1\ $ be the $\ M\times N\ $ matrix formed by the first $\ M\ $ rows of $\ V^{-1}\ $, and $\ W_2\ $ the $\ (N-M)\times N\ $ matrix formed by its last $\ N-M\ $ rows. Then $$ \pmatrix{U^{-1}&0_{M\times(N-M)}\\ 0_{(N-M)\times M}&I_{(N-M)\times(N-M)}}V^{-1}=\pmatrix{U^{-1}W_1\\W_2} $$ is a unimodular matrix. But $\ U^{-1}W_1=A\ $, so $\ \pmatrix{A\\W_2}\ $ is its required unimodular extension.
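
Traced in code (a sketch reusing the `snf_with_transform` idiom from the question's EDIT; that function returns $(S,T,U)$ with $S=TAU$, so its $T$ plays the role of the proof's $U$ and its $U$ the role of the proof's $V$):

```julia
using AbstractAlgebra, LinearAlgebra

A = [6 10 15]                                 # M = 1, N = 3
S, T, V = snf_with_transform(matrix(ZZ, A))   # S == T * A * V
Vinv = Int.(inv(V .// 1))                     # the proof's V^{-1}
W1, W2 = Vinv[1:1, :], Vinv[2:end, :]         # first M rows, last N-M rows
B = [Int.(inv(T .// 1)) * W1; W2]             # == [A; W2], unimodular
Int(det(B .// 1))                             # ±1
```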

Now suppose that $\ A\ $ has an extension to an $\ N\times N\ $ unimodular matrix $\ B\ $. Then $\ A\ $ must have full row rank $\ M\ $, and therefore all its elementary divisors must be non-zero. Let $\ d\ $ be any elementary divisor of $\ A\ $. Then $\ d\ $ must divide the $\ M^\text{th}\ $ elementary divisor of $\ A\ $, which is in turn a divisor of all the $\ M\times M\ $ minors of $\ A\ $. If $\ B_i\ $ is the $\ (M+i)\times N\ $ matrix formed by the first $\ M+i\ $ rows of $\ B\ $, then every $\ (M+1)\times(M+1)\ $ minor of $\ B_1\ $ is an integer linear combination of the $\ M\times M\ $ minors of $\ A\ $ (by Laplace expansion along its last row), and so must be divisible by $\ d\ $. Likewise, every $\ (M+i)\times(M+i)\ $ minor of $\ B_i\ $ is an integer linear combination of the $\ (M+i-1)\times(M+i-1)\ $ minors of $\ B_{i-1}\ $, so, by induction, all these minors must be divisible by $\ d\ $. It follows that $\ d\ $ must divide the determinant of $\ B_{N-M}=B\ $. But since $\ B\ $ is unimodular, its determinant must be $\ {\pm}1\ $, and $\ d\ $ must be $\ 1\ $.
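
To spell out the expansion used in the first step: if an $\ (M+1)\times(M+1)\ $ minor of $\ B_1\ $ uses the columns $\ J=\{j_1<\dots<j_{M+1}\}\ $, and $\ b\ $ denotes row $\ M+1\ $ of $\ B\ $, then Laplace expansion along that last row gives $$ \det\pmatrix{A_J\\ b_J}=\sum_{k=1}^{M+1}(-1)^{M+1+k}\,b_{j_k}\det A_{J\setminus\{j_k\}}\ , $$ where $\ A_J\ $ is the submatrix of $\ A\ $ on the columns in $\ J\ $; the right-hand side is visibly an integer linear combination of $\ M\times M\ $ minors of $\ A\ $.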
