[Math] Can Shor’s Algorithm be modified to run efficiently on a classical computer

algorithms, computational-complexity, computer-science, nt.number-theory, quantum-computation

Shor's algorithm factors integers in polynomial time on a quantum computer. If one tries to run it on a classical computer, one runs into the problem that the state vector being operated on has exponential size, so the algorithm cannot be simulated efficiently.

However, let us make the following observation with respect to MCMC (Markov Chain Monte Carlo) algorithms; see example 1 in Persi Diaconis' paper "The MCMC revolution" available at http://statweb.stanford.edu/~cgates/PERSI/papers/MCMCRev.pdf.

Notice that the state space in example 1 is of exponential size just like the state space of Shor's algorithm, yet the MCMC converges rapidly to the desired state. Also, the matrix involved in the MCMC is of exponential size just like the unitary matrices in Shor's algorithm. From this observation, I was thinking that we could do something similar with Shor's algorithm to get it to run in polynomial time on a classical computer, via Monte Carlo simulation.

Of course, one obvious problem with such an approach is that the unitary matrices in Shor's algorithm involve complex numbers, so they cannot be stochastic matrices. Also, it is the $\ell_2$ norm, not the $\ell_1$ norm, of each column of these unitary matrices that equals one. (Furthermore, the unitary matrices in Shor's algorithm are all different, so the process is not even close to a Markov chain!)

But why should this stop us? Here is how I was thinking we could potentially fix this problem: First, convert the unitary matrices used in Shor's algorithm to orthogonal matrices so that each entry $a+bi$ gets converted to the matrix

$\left( \begin{array}{cc}
a & b \\
-b & a \end{array} \right)$.
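As a sanity check on this first step, here is a small numerical sketch (my own illustration, with a hypothetical helper name `realify`) of the standard encoding $a+bi \mapsto \left(\begin{smallmatrix} a & b \\ -b & a\end{smallmatrix}\right)$: it doubles both dimensions, preserves matrix multiplication, and sends a unitary matrix to an orthogonal one.

```python
import numpy as np

def realify(U):
    """Replace each complex entry a+bi with the 2x2 real block
    [[a, b], [-b, a]], doubling both dimensions."""
    n, m = U.shape
    R = np.empty((2 * n, 2 * m))
    R[0::2, 0::2] = U.real
    R[0::2, 1::2] = U.imag
    R[1::2, 0::2] = -U.imag
    R[1::2, 1::2] = U.real
    return R

# The encoding is multiplicative, and a unitary matrix (here the
# 3-point discrete Fourier transform) becomes an orthogonal one.
F = np.fft.fft(np.eye(3)) / np.sqrt(3)     # unitary DFT matrix
R = realify(F)
assert np.allclose(realify(F @ F), R @ R)  # multiplication preserved
assert np.allclose(R.T @ R, np.eye(6))     # orthogonality
```

Since the encoding is a ring homomorphism entrywise, it commutes with the whole circuit, so nothing is lost at this stage.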

But these new matrices cannot be stochastic, since they involve negative numbers. To fix this problem, I thought about converting each entry $a$ in the orthogonal matrix to the matrix of only nonnegative entries

$\left( \begin{array}{cc}
a & 0 \\
0 & a \end{array} \right)$, if $a>0$ and

$\left( \begin{array}{cc}
0 & -a \\
-a & 0 \end{array} \right)$, if $a<0$.

(The idea here is that the matrix acts only on vectors of form $(x,0)^t$ when $x>0$ and $(0,-x)^t$ when $x<0$.)
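To check that this sign-splitting step at least preserves the linear algebra, here is a sketch of my own (the helper names `split_signs`, `encode`, `decode` are hypothetical): a real matrix and vector are split into nonnegative parts as described above, and taking $p - q$ on each pair of the result recovers the original matrix-vector product.

```python
import numpy as np

def split_signs(M):
    """Replace each real entry a with the nonnegative 2x2 block
    [[a, 0], [0, a]] if a >= 0, or [[0, -a], [-a, 0]] if a < 0."""
    pos, neg = np.maximum(M, 0), np.maximum(-M, 0)
    n, m = M.shape
    S = np.empty((2 * n, 2 * m))
    S[0::2, 0::2] = pos
    S[0::2, 1::2] = neg
    S[1::2, 0::2] = neg
    S[1::2, 1::2] = pos
    return S

def encode(v):
    """x -> (x, 0)^t if x >= 0, (0, -x)^t if x < 0, interleaved."""
    e = np.empty(2 * len(v))
    e[0::2], e[1::2] = np.maximum(v, 0), np.maximum(-v, 0)
    return e

def decode(e):
    """Collapse each (p, q) pair back to the signed value p - q."""
    return e[0::2] - e[1::2]

rng = np.random.default_rng(0)
M, v = rng.standard_normal((4, 4)), rng.standard_normal(4)
assert np.allclose(decode(split_signs(M) @ encode(v)), M @ v)
```

So before any normalization, the doubled nonnegative matrices faithfully simulate the signed ones.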

So we have quadrupled the row and column dimensions of the unitary matrices in Shor's algorithm. Next, we can normalize the columns of the new matrices so that each column sums to one.

(Added: As one of the commenters, zeb, pointed out after my original post, normalizing the columns of the new matrices could potentially mess up the algorithm; but since the quantum gates making up the Fourier transform in Shor's original paper, http://arxiv.org/abs/quant-ph/9508027, are so incredibly simple, I was thinking this might not be such a big problem.)
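Zeb's concern can be made concrete on the smallest possible example (a self-contained sketch of my own): the $2\times 2$ Hadamard gate $H$ satisfies $H^2 = I$ by destructive interference. After sign-splitting into a nonnegative matrix and normalizing the columns to get a stochastic matrix, two steps of the resulting walk no longer return all the mass to the starting state.

```python
import numpy as np

# Hadamard gate: H @ H = I because the two paths into state 1
# interfere destructively (amplitudes +1/2 and -1/2 cancel).
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
assert np.allclose(H @ H, np.eye(2))

# Sign-split each entry a into [[a,0],[0,a]] (a >= 0) or
# [[0,-a],[-a,0]] (a < 0), then normalize columns to sum to one.
pos, neg = np.maximum(H, 0), np.maximum(-H, 0)
S = np.empty((4, 4))
S[0::2, 0::2] = pos; S[0::2, 1::2] = neg
S[1::2, 0::2] = neg; S[1::2, 1::2] = pos
P = S / S.sum(axis=0)                    # column-stochastic

start = np.array([1.0, 0.0, 0.0, 0.0])   # encodes the state (1, 0)^t
two_steps = P @ (P @ start)
assert np.allclose(two_steps, [0.5, 0.0, 0.25, 0.25])
```

Decoding the pairs gives $(1/2, 0)$ instead of the quantum result $(1, 0)$: half the probability mass ends up on the $\pm$ pair of state 1, where it would cancel under decoding but cannot cancel for a sampler. The cancellation that makes the quantum walk refocus has no stochastic counterpart, which is exactly the normalization problem zeb raised.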

My question is: is there anything fundamentally stopping us from using these new matrices to efficiently factor integers via Monte Carlo simulation in much the same way as is described in Diaconis' paper (except of course, all of the matrices here are different)? Did I miss something?

Best Answer

If this kind of simple trick involving just the Fourier transform, and not taking advantage of special properties of multiplication modulo $n$, worked, then it would provide a fast classical algorithm not just for factoring but for the more general abelian hidden subgroup problem - e.g. identifying the period of an arbitrary periodic sequence.

But there is no fast classical algorithm for the abelian hidden subgroup problem, since if you measure the value of a function $\mathbb Z \to \pm 1$ at $n$ integers, then you have not ruled out the possibility of the function being periodic with period some prime $p$ that does not divide the difference of any two of your numbers. Many such primes of size around $n^2$ necessarily exist, so you have no hope of figuring out the period in time less than the square root of the size of the period.
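This counting argument can be checked numerically (a sketch of my own; the size range around $n^2$ is the one from the argument above): after querying the function at $n$ points, every prime near $n^2$ that divides none of the pairwise differences is still a possible period.

```python
from itertools import combinations
from random import sample, seed

def is_prime(p):
    """Trial division, fine for numbers of size ~n^2."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

seed(1)
n = 10
points = sample(range(10**6), n)                  # n query positions
diffs = {abs(a - b) for a, b in combinations(points, 2)}

# Primes near n^2 dividing no pairwise difference: each is a period
# that the n queries cannot have ruled out.
alive = [p for p in range(n * n, 2 * n * n)
         if is_prime(p) and all(d % p for d in diffs)]
assert len(alive) > 0
print(len(alive), "candidate periods remain indistinguishable")
```

Each difference is at most $10^6$, so it has only a few prime factors of size $\ge n^2$, and the $\binom{n}{2}$ differences can rule out only a small fraction of the primes in the range; many candidate periods always survive.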

So MCMC can't work for this problem.
