The vectors $\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r$ need not be linearly independent: for example, if $\mathbf{a}^{r+1}=-\mathbf{a}^1$, the set contains the zero vector and is therefore automatically dependent.
However, if the vectors $\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r$ are dependent, then the vectors $\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r$ are independent. The proof is below. I tried hiding it with the spoiler tag so that you could try it yourself first, but the tag doesn't seem to work when MathJax is involved.
Suppose that both sets $\{\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r\}$ and $\{\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r\}$ are dependent. Then there are two groups of coefficients, $\alpha_1,\dots,\alpha_r$ and $\beta_1,\dots,\beta_r$ such that in each group not all coefficients are zero and the following holds:
$$\alpha_1(\mathbf{a}^1+\mathbf{a}^{r+1})+\alpha_2\mathbf{a}^2+\cdots+\alpha_r\mathbf{a}^r=0,\\
\beta_1\mathbf{a}^{r+1}+\beta_2\mathbf{a}^2+\cdots+\beta_r\mathbf{a}^r=0.
$$
If $\beta_1=0$, then the second equation becomes a dependence among $\mathbf{a}^2,\dots,\mathbf{a}^r$; since any subset of $\{\mathbf{a}^1,\mathbf{a}^2, \dots,\mathbf{a}^r\}$ is independent, all the other betas would have to be zero as well, contradicting our choice of coefficients. Hence $\beta_1\neq0$, and we can divide the second equation by $\beta_1$. To reduce clutter, I'll write the result as
$$\mathbf{a}^{r+1}=\gamma_2\mathbf{a}^2+\cdots+\gamma_r\mathbf{a}^r.$$
Now plug that into the first equation and regroup to obtain:
$$\alpha_1\mathbf{a}^1+(\alpha_1\gamma_2+\alpha_2)\mathbf{a}^2+\cdots+(\alpha_1\gamma_r+\alpha_r)\mathbf{a}^r=0.$$
The independence of $\{\mathbf{a}^1,\mathbf{a}^2, \dots,\mathbf{a}^r\}$ forces all these coefficients to be zero, so $\alpha_1=0$ and
$$0=\alpha_1\gamma_j+\alpha_j=\alpha_j,\text{ for }j=2,\dots,r.$$
But this contradicts that at least one of the alphas must be nonzero. Therefore, at least one of $\{\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r\}$ and $\{\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r\}$ is independent.
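As a quick numerical sanity check of this lemma (an illustration, not a substitute for the proof), we can draw random independent rows $\mathbf{a}^1,\dots,\mathbf{a}^r$, pick $\mathbf{a}^{r+1}$ both adversarially (as $-\mathbf{a}^1$) and at random, and verify that at least one of the two sets always has rank $r$:

```python
import numpy as np

rng = np.random.default_rng(0)
r, n = 3, 5
for trial in range(100):
    A = rng.standard_normal((r, n))  # rows a^1..a^r, independent almost surely
    # Try an adversarial a^{r+1} = -a^1 as well as a random one.
    for a_next in (-A[0], rng.standard_normal(n)):
        set1 = np.vstack([A[0] + a_next, A[1:]])  # {a^1+a^{r+1}, a^2, ..., a^r}
        set2 = np.vstack([a_next, A[1:]])         # {a^{r+1}, a^2, ..., a^r}
        rank1 = np.linalg.matrix_rank(set1)
        rank2 = np.linalg.matrix_rank(set2)
        # The lemma: the two sets cannot both be dependent.
        assert max(rank1, rank2) == r
print("lemma holds in all trials")
```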
This proves that the row rank of the matrix $B$ is at least $r$. We now prove that it is exactly $r$.
Suppose first that $\{\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{a}^{r+1}\}$ is independent in $B$ but not maximal. Then it is contained in some maximal independent set for $B$, say
$$\{\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{a}^{r+1},\mathbf{b}^1,\dots,\mathbf{b}^s\}.$$
Any subset of a maximal independent set is itself independent, so if any of the rows $\mathbf{b}^1,\dots,\mathbf{b}^s$ were also a row of $A$, then $A$ would contain more than $r$ independent rows, contradicting that $A$ has rank $r$. Therefore the set can contain at most one additional row, namely the only row of $B$ that is not a row of $A$, i.e. $\mathbf{b}^1=\mathbf{a}^1+\mathbf{a}^{r+1}$ and $s=1$.
So suppose that
$$\{\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{a}^{r+1}\}$$
is maximal independent in $B$. Then both sets
$$\{\mathbf{a}^1,\mathbf{a}^2, \dots,\mathbf{a}^r\}\text{ and}\\
\{\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{a}^{r+1}\}
$$
are maximal independent in $A$, therefore from the first set we have
$$\mathbf{a}^{r+1}=\alpha_1\mathbf{a}^{1}+\dots+\alpha_r\mathbf{a}^{r}$$
and from the second set we have
$$\mathbf{a}^{1}=\beta_2\mathbf{a}^{2}+\dots+\beta_{r+1}\mathbf{a}^{r+1}.$$
This means that we can write $\mathbf{a}^{1}+\mathbf{a}^{r+1}$ as
$$\mathbf{a}^{1}+\mathbf{a}^{r+1}=\mathbf{a}^1+\alpha_1\mathbf{a}^1+\cdots+\alpha_r\mathbf{a}^r=\\
(\text{plug in the expression for $\mathbf{a}^1$})=\\ \gamma_2\mathbf{a}^{2}+\cdots+\gamma_{r+1}\mathbf{a}^{r+1}.$$
But if we plug that into $\{\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{a}^{r+1}\}$, we get a dependent set!
If $\{\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r\}$ is independent in $B$ but not maximal, then again it is contained in some maximal independent set for $B$ which, for the same reasons as previously, can contain only one additional row $\mathbf{b}$, which is also a row of $A$ and such that $\{\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{b}\}$ is independent (in $A$). Now the same reasoning applies: if the set
$$\{\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{b}\}$$
is independent in $B$, then both sets
$$\{\mathbf{a}^1,\mathbf{a}^2, \dots,\mathbf{a}^r\}\text{ and}\\
\{\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{b}\}
$$
are maximal independent in $A$, so we can use the first set to express $\mathbf{a}^{r+1}$ and the second set to express $\mathbf{a}^1$, and then combine the two expressions so that we get
$$\mathbf{a}^1+\mathbf{a}^{r+1}=\delta_2\mathbf{a}^2+\cdots+\delta_{r}\mathbf{a}^{r}+\delta\mathbf{b},$$
implying that $\{\mathbf{a}^1+\mathbf{a}^{r+1},\mathbf{a}^2, \dots,\mathbf{a}^r,\mathbf{b}\}$ is dependent.
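The conclusion can also be checked numerically. The sketch below assumes the setting of the argument above: $A$ is a rank-$r$ matrix whose first $r$ rows are independent, and $B$ is obtained from $A$ by replacing the row $\mathbf{a}^1$ with $\mathbf{a}^1+\mathbf{a}^{r+1}$ (the exact construction of $A$ here is my own choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
r, m, n = 3, 5, 6
basis = rng.standard_normal((r, n))       # a^1..a^r, independent almost surely
coeffs = rng.standard_normal((m - r, r))
A = np.vstack([basis, coeffs @ basis])    # rank-r matrix with m rows
B = A.copy()
B[0] = A[0] + A[r]                        # replace a^1 by a^1 + a^{r+1}
assert np.linalg.matrix_rank(A) == r
assert np.linalg.matrix_rank(B) == r      # row rank is unchanged
print("rank(A) == rank(B) ==", r)
```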
Best Answer
The first matrix is $\operatorname{diag}(X X^{\mathrm T}) \cdot \vec1$, where $\operatorname{diag}(X X^{\mathrm T})$ is the column vector of the diagonal entries of $X X^{\mathrm T}$, and $\vec1$ is a row vector of ones (with as many entries as $X$ has rows). The second matrix is then just the transpose of the first.
For instance, if $X$ has three rows:
\begin{align}
X &= \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \\[1ex]
X^{\mathrm T} &= \begin{bmatrix} x_1^{\mathrm T} & x_2^{\mathrm T} & x_3^{\mathrm T} \end{bmatrix} \\[1ex]
X X^{\mathrm T} &= \begin{bmatrix} \|x_1\|^2 & x_1 x_2^{\mathrm T} & x_1 x_3^{\mathrm T} \\ x_2 x_1^{\mathrm T} & \|x_2\|^2 & x_2 x_3^{\mathrm T} \\ x_3 x_1^{\mathrm T} & x_3 x_2^{\mathrm T} & \|x_3\|^2 \end{bmatrix} \\[1ex]
\operatorname{diag} (X X^{\mathrm T}) &= \begin{bmatrix} \|x_1\|^2 \\ \|x_2\|^2 \\ \|x_3\|^2 \end{bmatrix} \\[1ex]
\operatorname{diag} (X X^{\mathrm T}) \cdot \vec1 &= \begin{bmatrix} \|x_1\|^2 \\ \|x_2\|^2 \\ \|x_3\|^2 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} \\
&= \begin{bmatrix} \|x_1\|^2 & \|x_1\|^2 & \|x_1\|^2 \\ \|x_2\|^2 & \|x_2\|^2 & \|x_2\|^2 \\ \|x_3\|^2 & \|x_3\|^2 & \|x_3\|^2 \end{bmatrix}.
\end{align}
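This identity is easy to verify numerically; the sketch below uses a random $3\times4$ matrix $X$ (the shape is my own choice for illustration) and checks that the outer product of the squared-row-norm vector with a row of ones reproduces the matrix above:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4))            # three rows x_1, x_2, x_3
G = X @ X.T                                # Gram matrix X X^T
d = np.diag(G).reshape(-1, 1)              # column of squared norms ||x_i||^2
ones_row = np.ones((1, X.shape[0]))        # row vector of ones
M = d @ ones_row                           # the "first matrix"
# Each row i of M is constant, equal to ||x_i||^2.
assert np.allclose(M, np.outer((X**2).sum(axis=1), np.ones(3)))
assert np.allclose(M.T, ones_row.T @ d.T)  # the second matrix is its transpose
print("formula verified")
```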