Let $a_n$ be the number of $2 \times n$ binary matrices avoiding constant $2\times 2$ submatrices (blocks of adjacent columns, as in the recursion below).
Then
$$a_n = \frac{2^{-n} \left(4 \left(17+4 \sqrt{17}\right)
\left(3+\sqrt{17}\right)^n+\left(\sqrt{17}-17\right)
\left(3-\sqrt{17}\right)^n\right)}{17
\left(3+\sqrt{17}\right)}$$
This should be fairly straightforward to prove:
let $v(n)=(e_{01}(n),e_{10}(n),e_{00}(n),e_{11}(n))$ be the vector counting the $2\times n$ matrices ending with column $01$, $10$, $00$, resp. $11$.
We then have the recursion
$$v(n+1)=\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \\ \end{pmatrix} v(n)$$
Since this matrix is symmetric, we may diagonalize it, and from there it should be straightforward to derive the formula above.
(I cheated a bit in Mathematica).
EDIT: Of course, $e_{01}(n)=e_{10}(n)$ and $e_{00}(n)=e_{11}(n)$ by symmetry,
so one can of course reduce the above to a $2\times 2$ matrix recursion instead, with matrix $\begin{pmatrix} 2 & 2 \\ 2 & 1 \end{pmatrix}$. The eigenvalues of this matrix are $\tfrac12(3 + \sqrt{17})$ and $\tfrac12(3 - \sqrt{17})$,
which explains the strange formula above.
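As a sanity check (my own illustration, not part of the original argument, which used Mathematica), here is a short Python sketch that enumerates the matrices by brute force, taking "constant $2\times 2$ submatrix" to mean two adjacent equal constant columns, as the recursion does, and compares against the closed form with $e^{i\pi n}$ written as $(-1)^n$; the function names are mine:

```python
from itertools import product
from math import sqrt

def brute_force(n):
    """Count 2 x n binary matrices with no two adjacent equal constant
    columns, i.e. no contiguous constant 2 x 2 block."""
    cols = list(product((0, 1), repeat=2))  # the four possible columns
    count = 0
    for mat in product(cols, repeat=n):
        if any(mat[i] == mat[i + 1] and mat[i][0] == mat[i][1]
               for i in range(n - 1)):
            continue  # contains a constant 2 x 2 block
        count += 1
    return count

def closed_form(n):
    """The closed form above, with e^{i pi n} rewritten as (-1)^n."""
    r = sqrt(17)
    return (2.0 ** -n * (4 * (17 + 4 * r) * (3 + r) ** n
                         + (r - 17) * (r - 3) ** n * (-1) ** n)) / (17 * (3 + r))

# The two agree for small n (a_1 = 4, a_2 = 14, a_3 = 50, ...).
for n in range(1, 9):
    assert brute_force(n) == round(closed_form(n))
```
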
The example given by Anthony Quas reveals a phenomenon discussed in Kato's book Perturbation Theory for Linear Operators. The point is the following:
- If the symmetric matrix depends analytically upon one parameter, then you can follow its eigenvalues and its eigenvectors analytically. Notice that this sometimes requires letting the eigenvalues cross. When this happens, the largest eigenvalue, being a maximum of smooth functions, is only Lipschitz.
- On the contrary, if the matrix depends upon two or more parameters, the eigenvalues are in general only Lipschitz where crossings happen, and the eigenvectors cannot be chosen continuously. A typical example is
$$(s,t)\mapsto\begin{pmatrix} s & t \\ t & -s \end{pmatrix},$$
whose eigenvalues are $\pm\sqrt{s^2+t^2}$. Up to a shift by $I_2$, Quas' example is just a piecewise $C^1$ section of this two-parameter example, and it inherits its lack of a continuous selection of eigenvectors.
- Likewise, if analyticity is dropped, a $C^\infty$ example by Rellich shows that eigenvectors need not be continuous functions of a single parameter. Of course, Quas' example can be recast as a $C^\infty$ one by flattening the parametrisation at $t=0$, say by replacing $t$ by $s$ such that $t={\rm sgn}(s)\cdot e^{-1/s^2}$.
Side remark: Kato's result is only local. If the domain is not simply connected, it can happen that a global continuous selection of eigenvectors is impossible. This is classical for the example above if you restrict to the unit circle $s^2+t^2=1$: the eigenvalues $\pm1$ are then globally continuous functions, but an eigenvector experiences a flip $v\mapsto -v$ as one makes a full turn.
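The flip on the unit circle can be observed numerically. The following Python sketch (my own illustration, not from Kato) follows a unit eigenvector of the top eigenvalue of $\begin{pmatrix} s & t \\ t & -s \end{pmatrix}$ continuously around $s=\cos\theta$, $t=\sin\theta$, resolving the sign ambiguity at each step by staying close to the previous vector, and checks that it returns to its negative after one full turn:

```python
from math import cos, sin, pi, hypot

def eigvec_plus(s, t):
    """A unit eigenvector of [[s, t], [t, -s]] for the eigenvalue
    +sqrt(s^2 + t^2); it is only defined up to sign, so we pick
    whichever of the two algebraic forms is numerically nonzero."""
    r = hypot(s, t)
    x, y = s + r, t
    if hypot(x, y) < 1e-9:       # degenerate form near s = -r, t = 0
        x, y = t, r - s
    n = hypot(x, y)
    return (x / n, y / n)

steps = 1000
v0 = eigvec_plus(1.0, 0.0)       # eigenvector at theta = 0
v = v0
for k in range(1, steps + 1):
    theta = 2 * pi * k / steps
    w = eigvec_plus(cos(theta), sin(theta))
    if w[0] * v[0] + w[1] * v[1] < 0:
        w = (-w[0], -w[1])       # flip sign to keep the path continuous
    v = w

# After one full turn the continuously-followed eigenvector is -v0:
assert abs(v[0] + v0[0]) < 1e-6 and abs(v[1] + v0[1]) < 1e-6
```

The continuous choice is in fact $v(\theta)=(\cos(\theta/2),\sin(\theta/2))$, which makes the half-turn after $\theta=2\pi$ visible directly.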
As mentioned by Will Sawin, a necessary condition is that $p$ divides $n$. Thus let us assume that $n=pk$. Denoting by $e_1,\dotsc,e_n$ the canonical basis, knowing the $p\times p$ minors amounts to knowing the $p$-vectors $$(Ae_{i_1})\wedge\cdots\wedge(Ae_{i_p})\in\Lambda^p(K^n),$$ where $K$ is the field of scalars (e.g. $\mathbb C$).
Splitting $$(Ae_1)\wedge\cdots\wedge (Ae_n) = [(Ae_1)\wedge\cdots\wedge (Ae_p)]\wedge\cdots\wedge[(Ae_{n-p+1})\wedge\cdots\wedge (Ae_n)], $$ we see that $(Ae_1)\wedge\cdots\wedge (Ae_n)$ is a polynomial function in the $p\times p$ minors. Since $(Ae_1)\wedge\dotsb\wedge (Ae_n)=(\det A)e_1\wedge\dotsb\wedge e_n$, we deduce the value of $\det A$.
Let me describe how it works when $n=4$ and $p=2$. The minors are denoted $$A\binom{i\alpha}{j\beta}=a_{i\alpha}a_{j\beta}-a_{i\beta}a_{j\alpha}.$$ Then $$(Ae_\alpha)\wedge(Ae_\beta)=\sum_{i<j}A\binom{i\alpha}{j\beta}e_i\wedge e_j.$$ Since $e_{\rho(1)}\wedge e_{\rho(2)}\wedge e_{\rho(3)}\wedge e_{\rho(4)}=\epsilon(\rho)\,e_1\wedge e_2\wedge e_3\wedge e_4$, we thus obtain $$\det A=\sum_{\substack{\rho\in{\frak S}_4 \\ \rho(1)<\rho(2),\,\rho(3)<\rho(4)}}\epsilon(\rho)\,A\binom{\rho(1)1}{\rho(2)2}A\binom{\rho(3)3}{\rho(4)4}.$$
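For $n=4$, $p=2$ the identity is easy to check by machine. Here is a Python sketch (my own illustration; the helper names are mine): it sums the products of complementary $2\times 2$ minors over permutations $\rho$ with $\rho(1)<\rho(2)$ and $\rho(3)<\rho(4)$, weighted by the signature of $\rho$ (the sign produced by reordering $e_{\rho(1)}\wedge e_{\rho(2)}\wedge e_{\rho(3)}\wedge e_{\rho(4)}$), and compares with the usual permutation expansion of the determinant:

```python
from itertools import permutations

def minor(a, i, j, alpha, beta):
    """The 2 x 2 minor on rows i, j and columns alpha, beta."""
    return a[i][alpha] * a[j][beta] - a[i][beta] * a[j][alpha]

def sign(p):
    """Signature of a permutation, counted via inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det4(a):
    """Ordinary permutation expansion of a 4 x 4 determinant."""
    return sum(sign(p) * a[0][p[0]] * a[1][p[1]] * a[2][p[2]] * a[3][p[3]]
               for p in permutations(range(4)))

def det_from_minors(a):
    """det(A) reconstructed from 2 x 2 minors via the wedge identity."""
    total = 0
    for rho in permutations(range(4)):
        if rho[0] < rho[1] and rho[2] < rho[3]:
            total += (sign(rho)
                      * minor(a, rho[0], rho[1], 0, 1)
                      * minor(a, rho[2], rho[3], 2, 3))
    return total

A = [[2, 1, 0, 3],
     [1, 4, 2, 1],
     [0, 2, 5, 2],
     [3, 1, 2, 6]]
assert det_from_minors(A) == det4(A)
```
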