Dimension of the manifold of symmetric rank $r$ $n\times n$ matrices

differential-geometry, differential-topology, matrix-analysis, non-convex-optimization

I'm currently reading through the paper "Low-rank matrix completion by Riemannian
optimization—extended version" by Vandereycken, and in this paper the author states that the set

$\mathcal{M}_k = \{X\in\mathbb{R}^{n\times n}\vert \text{rank}(X) = k\}$

is a smooth manifold of dimension $k(2n-k)$. For the proof, the author cites Example 5.30 on page 117 of "Introduction to Smooth Manifolds" by Lee, which uses a fairly standard submersion proof to show that $\mathcal{M}_k$ is a submanifold with the dimension given above. My question is twofold:

First, is the set $\mathcal{M}_k^{sym} = \{X\in\mathbb{R}^{n\times n}\vert X = X^T, \text{rank}(X) = k\}$ a manifold as well? If one assumes positive semi-definiteness, the answer to this post (Is the set of symmetric positive semi-definite matrices a smooth manifold with boundary) indicates that it is. I'm wondering whether dropping positive semi-definiteness breaks this.

Second, if this is in fact a manifold (which I think it is), what is its dimension as an embedded submanifold? The proof in Lee shows that the codimension of $\mathcal{M}_k$ is $(n-k)^2$, and I thought the proof should generalize to symmetric matrices, but the codimension would have to change: the space of symmetric $n\times n$ matrices has dimension $\frac{n(n+1)}{2}$, which for small $k$ is smaller than $(n-k)^2$.
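For what it's worth, here is a quick numerical sanity check of the $k(2n-k)$ count in the non-symmetric case (my own numpy sketch, not from the paper or from Lee): parametrize rank-$k$ matrices as $X = AB$ with $A\in\mathbb{R}^{n\times k}$ and $B\in\mathbb{R}^{k\times n}$ of full rank. At such a point the image of the differential $d(AB) = (dA)B + A(dB)$ is the tangent space of $\mathcal{M}_k$, so the Jacobian rank should equal $k(2n-k)$.

```python
import numpy as np

n, k = 6, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((n, k))
B = rng.standard_normal((k, n))

# Jacobian of (A, B) -> vec(A @ B), assembled one coordinate direction at a time.
cols = []
for idx in range(n * k):              # perturb an entry of A
    dA = np.zeros((n, k)); dA.flat[idx] = 1.0
    cols.append((dA @ B).ravel())
for idx in range(k * n):              # perturb an entry of B
    dB = np.zeros((k, n)); dB.flat[idx] = 1.0
    cols.append((A @ dB).ravel())
J = np.column_stack(cols)

print(np.linalg.matrix_rank(J), k * (2 * n - k))  # both print 20
```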

I'd appreciate any input on this question/pointers in the right direction.

Best Answer

I still don't have any idea of whether this set forms a manifold, and I admittedly care a bit less about this than I did when I originally asked the question, but I have a sort of dimension-counting argument that at least gives me the dimension of this set. First, I forgot the following condition on my set of interest, and I'm changing the notation in what follows: \begin{equation} \mathbb{S}_r = \{X\in\mathbb{R}^{n\times n}\vert X = X^T, \text{rank}(X) = r,X\cdot\mathbf{1} = \mathbf{0}\} \end{equation} where $\mathbf{1}$ is the length-$n$ vector of ones.
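To make the definition concrete, here is a quick way to build elements of $\mathbb{S}_r$ (a minimal numpy sketch of my own, using the eigendecomposition structure I exploit below): take $U$ with orthonormal columns orthogonal to $\mathbf{1}$, pair it with nonzero eigenvalues $D$, and form $X = UDU^T$.

```python
import numpy as np

n, r = 6, 2
rng = np.random.default_rng(0)
ones = np.ones(n)

# Project r random columns away from span{1}, then orthonormalize them.
G = rng.standard_normal((n, r))
G -= np.outer(ones, ones @ G) / n
U, _ = np.linalg.qr(G)                 # n x r, orthonormal columns, U.T @ 1 = 0

# Nonzero eigenvalues of either sign (no positive semi-definiteness assumed).
D = np.diag(rng.uniform(1.0, 2.0, r) * rng.choice([-1.0, 1.0], r))
X = U @ D @ U.T

print(np.allclose(X, X.T),             # symmetric
      np.linalg.matrix_rank(X) == r,   # rank exactly r
      np.allclose(X @ ones, 0.0))      # X . 1 = 0
```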

Proceeding from here, I can construct an eigenvalue decomposition of this matrix, as I know it is symmetric. Let $X\in\mathbb{S}_r$ and write $X = UDU^T$ for some diagonal $r\times r$ matrix $D$ with nonzero diagonal entries and a matrix $U\in\mathbb{R}^{n\times r}$ whose columns are orthonormal. Now clearly the dimension of this set is at least $r$, as I have $r$ degrees of freedom for my eigenvalues. Ignoring for the moment the condition that $X\in\mathbb{S}_r$ implies $X\cdot\mathbf{1}=\mathbf{0}$, the first column vector of $U$ must live on the sphere $S^{n-1}$ embedded in $\mathbb{R}^{n}$, which gives me $n-1$ degrees of freedom. As the second column vector is orthogonal to the first, it must lie on a sphere $S^{n-2}$ embedded in $\mathbb{R}^n$, namely the set of unit vectors orthogonal to the first chosen vector. This process continues inductively, until I am left with \begin{equation} \sum_{i=1}^r (n-i) = rn - \sum_{i=1}^r i = rn - \frac{r(r+1)}{2} \end{equation} degrees of freedom for my eigenvectors. Combined with the $r$ degrees of freedom from the eigenvalues, this gives my final dimension (again, omitting my new constraint $X\cdot\mathbf{1} = \mathbf{0}$) of $\frac{r(2n-r+1)}{2}$.
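A numerical cross-check of this count (again my own sketch, not a proof): parametrize symmetric rank-$r$ matrices as $X = ASA^T$ with $A\in\mathbb{R}^{n\times r}$ of full rank and $S$ symmetric invertible, and compute the rank of the Jacobian of that map at a random point. It should match $\frac{r(2n-r+1)}{2}$.

```python
import numpy as np

n, r = 6, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n, r))
S = rng.standard_normal((r, r)); S = S + S.T + 3 * np.eye(r)  # symmetric, invertible

cols = []
for idx in range(n * r):                     # perturb an entry of A
    dA = np.zeros((n, r)); dA.flat[idx] = 1.0
    cols.append((dA @ S @ A.T + A @ S @ dA.T).ravel())
for i in range(r):                           # perturb S along symmetric directions
    for j in range(i, r):
        dS = np.zeros((r, r)); dS[i, j] = dS[j, i] = 1.0
        cols.append((A @ dS @ A.T).ravel())
J = np.column_stack(cols)

print(np.linalg.matrix_rank(J), r * (2 * n - r + 1) // 2)  # both print 11
```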

This argument is pretty standard for NLA people, nothing interesting has really been done here, but I'm curious if someone more geometrically inclined has any comments about it. To incorporate my new constraint, which I readily admit is handled rather hand-wavily (I hope someone might be able to codify this a bit more clearly), notice that $X\cdot\mathbf{1} = \mathbf{0}$ must imply $U^T\mathbf{1} = \mathbf{0}$: since $D$ is invertible and $U$ has full column rank, $\mathbf{0} = X\mathbf{1} = UD(U^T\mathbf{1})$ forces $U^T\mathbf{1} = \mathbf{0}$. This condition amounts to requiring that the components of each column of $U$ sum to 0, i.e. \begin{equation} \sum_{i=1}^n u_{ij} = 0 \quad\forall j\in\{1,\dots,r\}. \end{equation}

Now, this constraint defines a hyperplane of dimension $n-1$ that passes through the origin. It's not hard to argue that this hyperplane intersects the sphere $S^{n-1}$ in more than a point; in fact, the intersection is a sphere of dimension $n-2$. Continuing the argument, the next eigenvector must lie in the intersection of my hyperplane and the sphere $S^{n-1}$, and it must also be orthogonal to the first vector chosen, which gives me $n-3$ dimensions to pick from. Inductively it follows that my new dimension for the eigenvectors is $\sum_{i=1}^r (n-i-1) = nr - \frac{r(r+1)}{2}-r$, so the total dimension of $\mathbb{S}_r$ is $\frac{r(2n-r-1)}{2}$. Adding that last constraint therefore changes the dimension by $r$, which makes intuitive sense, as you're effectively adding one constraint to each of the $r$ eigenvectors.
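The same Jacobian-rank check adapts to the constraint (still just a sketch of mine): writing $A = QB$, where the columns of $Q$ form an orthonormal basis of the hyperplane $\{v : v\cdot\mathbf{1} = 0\}$, builds $X\cdot\mathbf{1} = \mathbf{0}$ into the parametrization, and the rank should drop by exactly $r$, to $\frac{r(2n-r-1)}{2}$.

```python
import numpy as np

n, r = 6, 2
rng = np.random.default_rng(2)

# Orthonormal basis Q of {v : v . 1 = 0}: orthogonalize against a first
# column proportional to the ones vector, then discard that column.
Q, _ = np.linalg.qr(np.column_stack([np.ones(n), rng.standard_normal((n, n - 1))]))
Q = Q[:, 1:]                                  # n x (n-1), Q.T @ 1 = 0

B = rng.standard_normal((n - 1, r))
S = rng.standard_normal((r, r)); S = S + S.T + 3 * np.eye(r)
A = Q @ B                                     # columns automatically sum to 0

cols = []
for idx in range((n - 1) * r):                # perturb B; A stays in the hyperplane
    dB = np.zeros((n - 1, r)); dB.flat[idx] = 1.0
    dA = Q @ dB
    cols.append((dA @ S @ A.T + A @ S @ dA.T).ravel())
for i in range(r):                            # perturb S along symmetric directions
    for j in range(i, r):
        dS = np.zeros((r, r)); dS[i, j] = dS[j, i] = 1.0
        cols.append((A @ dS @ A.T).ravel())
J = np.column_stack(cols)

print(np.linalg.matrix_rank(J), r * (2 * n - r - 1) // 2)  # both print 9
```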

If anyone has thoughts or wants to critique the rigor of my argument and help me construct something stronger, let me know!
