[Math] Row and Column Sum-Preserving Matrix Transformations

linear-algebra, linear-transformations, matrices

I have an $\infty \times n$ matrix with entries $a_{i,j}\ge 0$, with $1\le i<\infty, 1\le j\le n$ and the sum of the entries in the matrix is 1: $\sum_{i,j}a_{i,j}=1$.

What is the set of transformations (not necessarily linear) that I can apply to produce another matrix with nonnegative entries while preserving the row and column sums of the original matrix?

For a particular problem, I'm looking for a transform (if one exists) that maps $a_{i,j}\mapsto 0$ whenever $\frac{j}{\gcd(i,j)}$ is composite while not altering the row and column sums: for each $j$, $\sum_{i\ge 1}a_{i,j} \mapsto
\sum_{i\ge 1}a_{i,j}$, and for each $i$, $\sum_{1\le j\le n}a_{i,j} \mapsto \sum_{1\le j\le n}a_{i,j}$.

Best Answer

Here's a version of an algorithm I've used for a similar problem: finding an $m \times n$ matrix with prescribed row and column sums and $0$'s in some specified locations. It's an iteration that converges to an answer. Since there's no closed form, you'd have to be satisfied with a program (easy to write in any language). That could cause problems since you have infinitely many rows, but the tail must be small because the matrix elements sum to $1$, so it can probably be managed.

Given your starting $a_{i,j}$, record the row and column sums. Then set all the entries that must be $0$ to $0$. Now loop over the rows, rescaling the remaining nonzero values so that each row sum is correct. Then do the same for the columns.

Of course rescaling the columns will destroy the row sums, so rescale the rows again. Continue alternately rescaling rows and columns.

Most of the time this process will converge to a matrix with the right row and column sums.
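In code, the alternating rescaling might look like the following sketch (this is iterative proportional fitting; the function name, the boolean `forbidden` mask, and the tolerance are my own assumptions, not part of the original description):

```python
import numpy as np

def rescale_to_sums(A, row_sums, col_sums, forbidden, tol=1e-10, max_iter=10000):
    """Zero out forbidden entries, then alternately rescale rows and
    columns of A until both sets of sums match the prescribed targets."""
    A = A.copy()
    A[forbidden] = 0.0
    for _ in range(max_iter):
        # Rescale each row with a nonzero sum to its target row sum.
        r = A.sum(axis=1)
        A[r > 0] *= (row_sums[r > 0] / r[r > 0])[:, None]
        # Rescale the columns; this perturbs the row sums again.
        c = A.sum(axis=0)
        A[:, c > 0] *= col_sums[c > 0] / c[c > 0]
        # Stop once both sets of sums are within tolerance.
        if (np.abs(A.sum(axis=1) - row_sums).max() < tol and
                np.abs(A.sum(axis=0) - col_sums).max() < tol):
            break
    return A
```

For an infinite matrix you would truncate to finitely many rows, which the small-tail observation above suggests is reasonable.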

I say "most of the time" because it may fail when the specified $0$ entries disconnect the matrix, in the following sense. Construct the bipartite graph with one vertex set for the rows and one for the columns. Join row vertex $i$ to column vertex $j$ when entry $(i,j)$ is not one of the entries that must be $0$. Hope that graph is connected. If it's not, you may run into problems. I haven't checked connectedness for the particular zero set in your problem.
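Connectedness of that bipartite graph is easy to check with a breadth-first search; here is a minimal sketch, where the function name and the list-of-pairs representation of the allowed entries are my own illustrative choices:

```python
from collections import deque

def bipartite_connected(m, n, allowed):
    """Rows are vertices 0..m-1 and columns are vertices m..m+n-1;
    an edge joins row i and column j whenever (i, j) is an allowed
    (not forced-to-zero) entry. Returns True iff the graph is connected."""
    adj = {v: [] for v in range(m + n)}
    for i, j in allowed:
        adj[i].append(m + j)
        adj[m + j].append(i)
    # Breadth-first search from vertex 0 (row 0).
    seen = {0}
    queue = deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == m + n
```

For example, allowing only the diagonal entries of a $2 \times 2$ matrix disconnects the graph, while adding one off-diagonal entry reconnects it.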


Addition, in answer to comment.

I'm not surprised that any such transformation must be nonlinear. The only algebraic invariant that makes sense is that it's homogeneous: you can scale all the matrix entries and the corresponding sums by the same positive amount and the answer scales the same way.

The algorithm is meant to provide a reasonable answer to a restricted transportation problem. Suppose you have quantities $r_i$ at $m$ sources and want to move a total of $c_j$ to each of $n$ sinks. Of course the $r_i$ and $c_j$ must have the same sum, which you can assume is $1$. Then each possible routing scheme $A$ is an $m \times n$ matrix with the prescribed row and column sums: ship $A_{ij}$ units from source $i$ to sink $j$. The set of possible schemes is a convex polytope in matrix space: a transportation polytope. In operations research there's usually a cost function involved in shipping from source $i$ to sink $j$ and a question of optimization; I wasn't interested in that aspect, so made shipping free. Then in some sense the assignment $A_{ij} = r_i c_j$ is the most natural solution, assuming some kind of independence in routing choices. The most extreme solutions correspond to vertices of the polytope and in turn to spanning trees in the bipartite graph defined above.
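The natural solution $A_{ij} = r_i c_j$ is easy to verify numerically: since the $c_j$ sum to $1$, row $i$ sums to $r_i$, and symmetrically for the columns. The particular supply and demand vectors below are illustrative only:

```python
import numpy as np

r = np.array([0.2, 0.8])        # supplies at the m = 2 sources, summing to 1
c = np.array([0.5, 0.3, 0.2])   # demands at the n = 3 sinks, summing to 1

# The "independent routing" assignment A[i, j] = r[i] * c[j] is the
# outer product of the two vectors; its row sums are r and its column
# sums are c, so it lies in the transportation polytope.
A = np.outer(r, c)
```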

The algorithm finds what I think is the most natural solution when some routes are forbidden: some matrix entries that must be $0$. There are probably theorems that codify the notion of "most natural"; I didn't need them so never looked for them.

Google finds many links to papers on transportation polytopes.

I hope this background helps.
