Geometric Probability – Projecting a Point onto a Random 2D Plane in Higher Dimensions

Tags: ag.algebraic-geometry, geometric-probability, mg.metric-geometry, pr.probability

We are given $\mathbf{p}\in\mathbb{R}^d$, where $d\gg 1$. Let $\mathbf{v}$ be a point selected uniformly at random from the unit $(d-1)$-sphere $\mathcal{S}^{d-1}$ centered at the origin $\mathbf{0}\in\mathbb{R}^d$, and $H:=\{\mathbf{x}\in\mathbb{R}^d : \langle\mathbf{x},\mathbf{v}\rangle=0\}$ be the random hyperplane orthogonal to $\mathbf{v}$ and passing through the origin.

A commonly used method to project $\mathbf{p}$ onto $H$ is to generate $d$ independent standard Gaussian random variables $z_1, z_2, \ldots, z_d$, set $\mathbf{z}=(z_1,\ldots,z_d)$ and $\mathbf{v}=\frac{\mathbf{z}}{\|\mathbf{z}\|_2}$, and then compute the projection $\mathbf{p}'$ of $\mathbf{p}$ onto $H$ as $\mathbf{p}'=\mathbf{p}-\langle\mathbf{v},\mathbf{p}\rangle\,\mathbf{v}$.
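For concreteness, here is a minimal NumPy sketch of this hyperplane projection; the function name and the use of `numpy.random.default_rng` are my own choices rather than anything stated above.

```python
import numpy as np

def project_onto_random_hyperplane(p, rng=None):
    """Project p onto a random hyperplane H through the origin in R^d.

    The normal v is a normalized standard Gaussian vector, which makes
    v uniformly distributed on the unit sphere S^{d-1}.
    """
    rng = np.random.default_rng(rng)
    z = rng.standard_normal(p.shape[0])   # d independent N(0, 1) samples
    v = z / np.linalg.norm(z)             # uniform on S^{d-1}
    return p - np.dot(v, p) * v           # remove the component along v
```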


Question: What is a natural way to extend the above technique to project $\mathbf{p}$ onto a random $2$-dimensional plane $P$ passing through the origin? That is, how should we define $P$, analogously to $H$ but as a $2$-dimensional subspace of $\mathbb{R}^d$, in such a way that selecting a point uniformly at random from the intersection $\mathcal{S}^{d-1} \cap P$ is equivalent to selecting a point uniformly at random from $\mathcal{S}^{d-1}$?



Proposed solution: We can select uniformly at random two points $\mathbf{v}'$ and $\mathbf{v}''$ from the unit $(d-1)$-sphere $\mathcal{S}^{d-1}$ centered at the origin, and take $P$ to be the plane containing $\mathbf{v}'$, $\mathbf{v}''$ and the origin (this plane is unique because $\mathbf{v}'$ and $\mathbf{v}''$ are almost surely linearly independent). To project $\mathbf{p}$ onto $P$ defined this way, we could then represent any point $\mathbf{x}$ of $P$ as $\mathbf{x}:=a\,\mathbf{v}'+b\,\mathbf{v}''$ with $a,b\in\mathbb{R}$, and find the (unique) pair $(a,b)$ minimizing the distance $\|\mathbf{p}-\mathbf{x}\|_2$.
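For what it is worth, here is a sketch of this proposed construction in NumPy, with the minimization over $a$ and $b$ delegated to `numpy.linalg.lstsq`; the function name is illustrative only.

```python
import numpy as np

def project_onto_random_2plane(p, rng=None):
    """Project p onto the plane spanned by two uniform random points on S^{d-1}."""
    rng = np.random.default_rng(rng)
    d = p.shape[0]
    # Two independent points, each uniform on the unit sphere.
    v1 = rng.standard_normal(d)
    v1 /= np.linalg.norm(v1)
    v2 = rng.standard_normal(d)
    v2 /= np.linalg.norm(v2)
    V = np.column_stack((v1, v2))              # d x 2 matrix with columns v1, v2
    # Least squares: minimize ||p - (a*v1 + b*v2)||_2 over (a, b).
    coeffs, *_ = np.linalg.lstsq(V, p, rcond=None)
    return V @ coeffs                          # the projection a*v1 + b*v2
```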
Questions: How can we prove that this solution is correct? Is there a simpler and faster solution?

Best Answer

Given linearly independent $v_1$ and $v_2$ in $\mathbb{R}^d$, their Gram matrix $G$ is defined to be $$ G = V^T V, $$ where $V = (v_1 \;\; v_2)$ is the $d \times 2$ matrix having $v_1$ as first column and $v_2$ as second column. More explicitly, we have $$ G = \begin{pmatrix} (v_1, v_1) & (v_1, v_2) \\ (v_2, v_1) & (v_2, v_2) \end{pmatrix}, $$ where $(-,-)$ denotes the Euclidean inner product in $\mathbb{R}^d$.

Projecting a vector $p \in \mathbb{R}^d$ onto the linear span of $v_1$ and $v_2$ amounts to solving $$ V x = p $$ in the least-squares sense, i.e. finding $x = (x_1, x_2)^T$ such that $\lVert Vx - p \rVert^2$ is minimized.

Hence you want $Vx - p$ to be orthogonal to $v_1$ and $v_2$ (for details, read about the least-squares method), i.e. you want $$ (Vx - p, Vy) = 0 $$ for any $y \in \mathbb{R}^2$. Equivalently, $$ (V^T(Vx - p), y) = 0 $$ for any $y \in \mathbb{R}^2$. Hence $$ V^TV x = V^T p, $$ or $$ G x = V^T p, $$ so that $$ x = G^{-1} V^T p. $$

More explicitly, if $$ G^{-1} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, $$ then $$ \begin{align} x_1 &= a (p, v_1) + b (p, v_2) \\ x_2 &= c (p, v_1) + d (p, v_2), \end{align} $$ and $x_1 v_1 + x_2 v_2$ is the desired orthogonal projection of $p$ onto the linear span of $v_1$ and $v_2$.
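A small NumPy sketch of this recipe (names are illustrative): it forms $G$ and computes $x$ by solving $G x = V^T p$ directly, which is equivalent to applying $G^{-1}$ for this $2 \times 2$ system.

```python
import numpy as np

def project_via_gram(p, v1, v2):
    """Orthogonal projection of p onto span{v1, v2} via the Gram matrix."""
    V = np.column_stack((v1, v2))      # d x 2 matrix with columns v1, v2
    G = V.T @ V                        # 2 x 2 Gram matrix
    x = np.linalg.solve(G, V.T @ p)    # solve G x = V^T p, i.e. x = G^{-1} V^T p
    return V @ x                       # x_1 v_1 + x_2 v_2
```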

I did not answer your questions about the uniform distribution etc., but this was too long to post as a comment.