Here $A$ is orthogonal and symmetric, so that tells you $A^{-1} = A^T = A$, but that is not a very common situation. Maybe someone else has a better "test-taking strategy" explanation, but personally I would just row reduce, or use whatever method you normally use.
An orthogonal matrix is defined to be a matrix whose transpose is its inverse. However, for our purposes the better (almost-)definition is a matrix whose rows (or columns) are mutually orthogonal, as in, perpendicular. So $(1,1,1,1)$ is orthogonal to $(1,-1,1,-1)$, since their dot product $(1)(1)+(1)(-1)+(1)(1)+(1)(-1) = 1 - 1 + 1 - 1$ is zero. You should also check that each row, viewed as a vector, has length $1$: here $\sqrt{(1/2)^2 + (1/2)^2 + (1/2)^2 + (1/2)^2} = 1$, so that is fine, but even if it were not, that part is easily fixed by rescaling.
Sometimes you can tell just by looking that a matrix is orthogonal.
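If you want to sanity-check this numerically rather than by eye, here is a minimal NumPy sketch (just an illustration, not part of the original argument) applied to the $4\times 4$ matrix with entries $\pm\tfrac12$ discussed above:

```python
import numpy as np

# The matrix discussed above: (1/2) times a +-1 sign pattern.
A = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])

print(np.allclose(A @ A.T, np.eye(4)))   # orthogonal: A A^T = I, so A^{-1} = A^T
print(np.allclose(A, A.T))               # symmetric: A = A^T
print(np.allclose(np.linalg.inv(A), A))  # hence A^{-1} = A^T = A
```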
As far as parsing goes:
Here is the matrix:
$A = \frac12 \begin{pmatrix}
1 & 1 & 1 & 1 \\\\
1 & 1 & -1 & -1 \\\\
1 & -1 & 1 & -1 \\\\
1 & -1 & -1 & 1
\end{pmatrix}$
$A = \frac12 \begin{pmatrix}1&1&1&1\\1&1&-1&-1\\1&-1&1&-1\\1&-1&-1&1\end{pmatrix}$
Here is the same matrix using the array environment:
$A = \frac12 \left(\begin{array}{rrrr}
1 & 1 & 1 & 1 \\\\
1 & 1 & -1 & -1 \\\\
1 & -1 & 1 & -1 \\\\
1 & -1 & -1 & 1
\end{array}\right)$
$A = \frac12 \left(\begin{array}{rrrr}1&1&1&1\\1&1&-1&-1\\1&-1&1&-1\\1&-1&-1&1\end{array}\right)$
The backslashes get eaten by the markdown software, so you just double them.
So, let us suppose that $A$ is a square matrix, and that $B$ is a matrix such that $BA=I$. We want to show that $B$ is the unique left inverse of $A$.
Note that a system $A\mathbf{x}=\mathbf{b}$ has at most one solution, namely $B\mathbf{b}$: if $A\mathbf{x}=\mathbf{b}$, then
$$\mathbf{x} = I\mathbf{x} = BA\mathbf{x} = B\mathbf{b}.$$
If $CA=I$, then again a system $A\mathbf{x}=\mathbf{b}$ has at most one solution, namely $C\mathbf{b}$. Thus, $B\mathbf{b}=C\mathbf{b}$ for any $\mathbf{b}$ for which the system has a solution.
If we can show that $A\mathbf{x}=\mathbf{e}_i$ has a solution for each $i$, where $\mathbf{e}_i$ is the $i$th standard basis vector ($1$ in the $i$th entry, $0$s elsewhere), this will show that $B=C$, since they have the same columns.
The system $A\mathbf{x}=\mathbf{0}$ always has the solution $\mathbf{x}=\mathbf{0}$, and by the above it has at most one solution, so $\mathbf{x}=B\mathbf{0}=\mathbf{0}$ is its only solution. A square matrix whose homogeneous system has only the trivial solution row reduces to the identity, so the reduced row-echelon form of $A$ is $I$. Because the reduced row-echelon form of $A$ is $I$, performing row reduction on the augmented coefficient matrix $[A\mid\mathbf{e}_i]$ yields the matrix $[I\mid\mathbf{y}]$ for some $\mathbf{y}$, and that $\mathbf{y}$ is a solution to $A\mathbf{x}=\mathbf{e}_i$. Since this vector equals both $\mathbf{b}_i=B\mathbf{e}_i$ (the $i$th column of $B$) and $\mathbf{c}_i=C\mathbf{e}_i$ (the $i$th column of $C$), as noted above, the $i$th columns of $B$ and $C$ are equal; thus $B=C$, and the matrix has a unique left inverse.
Now, let us suppose that $A$ is a square matrix and has a right inverse, $AB=I$. We want to show that $B$ is the unique right inverse of $A$. Taking transposes, we get $I = I^T = (AB)^T = B^TA^T$. By what was proven above, $B^T$ is the unique left inverse of $A^T$. If $AC=I$, then $C^TA^T=I^T = I$, so $C^T=B^T$, hence $C=B$. Thus, $B$ is the unique right inverse of $A$.
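The column-by-column construction in the argument above is easy to illustrate numerically. Here is a minimal NumPy sketch (the $3\times 3$ matrix is an arbitrary invertible example, not anything from the question): it solves $A\mathbf{x}=\mathbf{e}_i$ for each $i$, stacks the solutions as the columns of $B$, and checks that $BA=I$ and $AB=I$.

```python
import numpy as np

# An arbitrary invertible 3x3 matrix, chosen only for illustration.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
n = A.shape[0]

# Solve A x = e_i for each standard basis vector e_i; the solutions
# become the columns of B (this mirrors row reducing [A | e_i]).
B = np.column_stack([np.linalg.solve(A, np.eye(n)[:, i]) for i in range(n)])

print(np.allclose(B @ A, np.eye(n)))  # B is a left inverse of A
print(np.allclose(A @ B, np.eye(n)))  # and a right inverse as well
```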
As mentioned in one of the comments, you should consider using the Sherman–Morrison formula (a rank-one special case of the Woodbury identity), which states that for a nonsingular matrix $A$ and column vectors $b, c$ such that $A+bc^\top$ is nonsingular,
$$( {A+bc^\top})^{-1}={A^{-1}}-\frac{1}{1+{c^\top A^{-1}b}} {A^{-1}bc^\top A^{-1}}$$
Therefore, taking $A = nI$ and $b=c=\mathbf{1}$ (so that $\mathbf{1}^\top\mathbf{1}=n$),
\begin{align} (n I+\mathbf{1}\mathbf{1}^\top)^{-1}&=\frac{1}{n} I-\frac{1}{1+\frac{1}{n}\,\mathbf{1}^\top \mathbf{1}}\cdot\frac{1}{n^2}\,\mathbf{1}\mathbf{1}^\top \\ &=\frac{1}{n} I-\frac{1}{n+n}\cdot\frac{1}{n}\,\mathbf{1}\mathbf{1}^\top \\ &=\frac{1}{n}\left( I-\frac{1}{2n}\,\mathbf{1}\mathbf{1}^\top\right) \end{align}
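If you want to double-check this closed form numerically, here is a minimal NumPy sketch (with $n=5$ chosen arbitrarily) comparing it against a direct inverse of $nI+\mathbf{1}\mathbf{1}^\top$:

```python
import numpy as np

n = 5                        # arbitrary size, for illustration only
ones = np.ones((n, 1))       # the all-ones column vector
M = n * np.eye(n) + ones @ ones.T

closed_form = (np.eye(n) - ones @ ones.T / (2 * n)) / n
print(np.allclose(np.linalg.inv(M), closed_form))  # True
```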
To see how the general formula is derived, first note that, by the matrix determinant lemma, $\det(A+bc^\top)=(1+c^\top A^{-1}b)\det A$, so
$$\det (A+bc^\top)\ne 0 \implies 1+{c^\top A^{-1}b}\ne 0.$$
Suppose $A$ is of order $p\times p$, and $b$ and ${c}$ are both $p\times 1$ column vectors.
Let $$d={A+bc^\top}$$
Then,
\begin{align} {dA^{-1}}&={I_p}+{bc^\top A^{-1}} \\ &\implies {dA^{-1}b}=b+{bc^\top A^{-1}b}= b (1+{c^\top A^{-1}b}) \\ &\implies ({dA^{-1}b})(1+{c^\top A^{-1}b})^{-1}=b \\ &\implies ({dA^{-1}b})(1+{c^\top A^{-1}b})^{-1} c^\top={bc^\top} \\ &\implies A+({dA^{-1}b})(1+{c^\top A^{-1}b})^{-1} c^\top= A+{bc^\top}= d \\ &\implies A= d (I_p-{A^{-1}b}(1+{c^\top A^{-1}b})^{-1} c^\top) \\ &\implies {I_p}= d (I_p-{A^{-1}b}(1+{c^\top A^{-1}b})^{-1} c^\top){A^{-1}} \\ &\implies {d^{-1}}=(I_p-{A^{-1}b}(1+{c^\top A^{-1}b})^{-1} c^\top){A^{-1}} \end{align}
That is,
\begin{align} ( {A+bc^\top})^{-1}&={A^{-1}}-{A^{-1}b}(1+{c^\top A^{-1}b})^{-1} c^\top{A^{-1}} \\ &={A^{-1}}-\dfrac{1}{1+{c^\top A^{-1}b}} {A^{-1}bc^\top A^{-1}} \end{align}
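As a sanity check on the general identity, here is a minimal NumPy sketch of the rank-one update (random $A$, $b$, $c$; the function name `rank_one_update_inv` is just a label for this illustration), compared against inverting $A+bc^\top$ directly:

```python
import numpy as np

def rank_one_update_inv(A_inv, b, c):
    """Return (A + b c^T)^{-1} from A^{-1}, using the identity derived above."""
    denom = 1.0 + (c.T @ A_inv @ b).item()       # the scalar 1 + c^T A^{-1} b
    return A_inv - (A_inv @ b) @ (c.T @ A_inv) / denom

rng = np.random.default_rng(0)
p = 4
A = rng.normal(size=(p, p)) + p * np.eye(p)      # comfortably nonsingular
b = rng.normal(size=(p, 1))
c = rng.normal(size=(p, 1))

direct = np.linalg.inv(A + b @ c.T)
updated = rank_one_update_inv(np.linalg.inv(A), b, c)
print(np.allclose(direct, updated))  # True
```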