Nearest signed permutation matrix to a given matrix $A$

discrete-optimization, matrices, optimization, permutation-matrices

Given $A \in \mathbb{R}^{n \times n}$, let $Q \in O(n)$ be the orthogonal matrix nearest to $A$ in the Frobenius norm, i.e.,

$$Q := \arg\min_{M \in O(n)} \| A - M \|_{F}^2$$

It's well known that $Q = U V^{T}$, where $A = U\Sigma V^{T}$ is the SVD of $A$ (see the orthogonal Procrustes problem, a.k.a. the nearest orthogonal matrix).

I'm trying to solve a similar problem:

$$S := \arg\min_{M \in \operatorname{SP}(n)} \| A - M \|_{F}^2$$

where $\operatorname{SP}(n)$ is the group of signed permutation matrices.

I know that in the case of permutation matrices, the problem reduces to linear sum assignment and can be solved using the Hungarian algorithm. I suspect in the signed permutation case it will reduce to some linear program. Is it possible to somehow solve this problem using SVD or the Hungarian algorithm?

I would really like to avoid general LP solvers, if possible.

Best Answer

This reduces in linear time to the assignment problem in much the same way as the unsigned case, at least for the simple reduction I have in mind. Let me thus reiterate the unsigned case as I see it: form a cost matrix $c_{ij}$ as the norm contribution of assigning the $i$th row of $M$ to the $j$th elementary row vector $e_j$, namely

$$c_{ij} = \sum_{k=1}^n (a_{ik} - e_{jk})^2.$$

Then the problem is equivalent to minimizing the cost of assigning all rows of $M$ to distinct elementary vectors.
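As a concrete sketch of the unsigned reduction (not from the original post; SciPy's `linear_sum_assignment` plays the role of the Hungarian algorithm here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Build the cost matrix c_ij = ||a_i - e_j||^2 and solve the
# resulting assignment problem to get the nearest permutation matrix.
A = np.array([[0.9, 0.2],
              [0.1, 0.7]])
n = A.shape[0]
E = np.eye(n)                                            # rows are e_1, ..., e_n
C = ((A[:, None, :] - E[None, :, :]) ** 2).sum(axis=2)   # C[i, j] = ||a_i - e_j||^2
rows, cols = linear_sum_assignment(C)                    # minimum-cost assignment
P = np.zeros_like(A)
P[rows, cols] = 1.0                                      # nearest permutation matrix
```

For this `A`, the optimal assignment matches row 0 to $e_1$ and row 1 to $e_2$, so `P` is the identity.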

The only difference in the signed case is that the cost is now the minimum over two possibilities, $e_j$ or $-e_j$. So take

$$c_{ij} = \min\left\{ \sum_{k=1}^n (a_{ik} - e_{jk})^2 ,\ \sum_{k=1}^n (a_{ik} + e_{jk})^2 \right\}$$

and you're done. There is no need to sum the entire row each time: expanding the squares gives

$$c_{ij} = \|a_i\|^2 + 1 - 2|a_{ij}|,$$

and since $\|a_i\|^2 + 1$ contributes the same amount to every assignment for row $i$, minimizing the total cost is equivalent to maximizing $\sum_i |a_{i\sigma(i)}|$, i.e., running the assignment algorithm on the cost matrix $-|a_{ij}|$.
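Putting the signed reduction together (a minimal sketch; the function name `nearest_signed_permutation` is mine, and SciPy's `linear_sum_assignment` stands in for the Hungarian algorithm):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def nearest_signed_permutation(A):
    """Nearest signed permutation matrix to A in the Frobenius norm.

    Minimizing ||A - S||_F^2 over signed permutations is equivalent to
    maximizing sum_i |a_{i, sigma(i)}|, so run the assignment solver
    on the cost matrix -|A| and read the signs off the matched entries.
    """
    A = np.asarray(A, dtype=float)
    rows, cols = linear_sum_assignment(-np.abs(A))
    S = np.zeros_like(A)
    # Sign of each matched entry; an exact zero gets +1 (either sign is optimal).
    S[rows, cols] = np.where(A[rows, cols] >= 0.0, 1.0, -1.0)
    return S
```

For example, `nearest_signed_permutation([[0.9, 0.1], [0.2, -0.8]])` matches row 0 to $+e_1$ and row 1 to $-e_2$.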
