I guess $A$ is orthogonal and symmetric, so that tells you $A^{-1} = A^T = A$; but that's not a very common situation, to my mind. Maybe someone else has a better "test-taking strategy" explanation, but personally I would just row reduce, or whatever method you use in general.
An orthogonal matrix is defined to be a matrix whose transpose is its inverse. However, for our purposes the better (almost-)definition is a matrix whose rows (or columns) are orthogonal, as in perpendicular. So $(1,1,1,1)$ is orthogonal to $(1,-1,1,-1)$, since their dot product $(1)(1)+(1)(-1)+(1)(1)+(1)(-1) = 1 - 1 + 1 - 1$ is zero. You should also check that each row has length $1$: here $\sqrt{(1/2)^2 + (1/2)^2 + (1/2)^2 + (1/2)^2} = 1$, so good; but even if not, that part is easily fixed by rescaling the rows.
Sometimes you can tell just by looking that a matrix is orthogonal.
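For the matrix discussed here, both claims are easy to double-check numerically; a small sketch with numpy:

```python
import numpy as np

# The matrix from the question: one half times a 4x4 sign pattern.
A = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])

# Rows are pairwise orthogonal with unit length, so A^T A = I.
assert np.allclose(A.T @ A, np.eye(4))

# A is also symmetric, so A^{-1} = A^T = A: it is its own inverse.
assert np.allclose(A @ A, np.eye(4))
```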
As far as parsing goes:
Here is the matrix:
$A = \frac12 \begin{pmatrix}
1 & 1 & 1 & 1 \\\\
1 & 1 & -1 & -1 \\\\
1 & -1 & 1 & -1 \\\\
1 & -1 & -1 & 1
\end{pmatrix}$
$A = \frac12 \begin{pmatrix}1&1&1&1\\1&1&-1&-1\\1&-1&1&-1\\1&-1&-1&1\end{pmatrix}$
Here is the same matrix using the array environment:
$A = \frac12 \left(\begin{array}{rrrr}
1 & 1 & 1 & 1 \\\\
1 & 1 & -1 & -1 \\\\
1 & -1 & 1 & -1 \\\\
1 & -1 & -1 & 1
\end{array}\right)$
$A = \frac12 \left(\begin{array}{rrrr}1&1&1&1\\1&1&-1&-1\\1&-1&1&-1\\1&-1&-1&1\end{array}\right)$
The backslashes get eaten by the markdown software, so you just double them.
For an $n\times n$ matrix $A$, a scalar $\lambda$ is an eigenvalue of $A$ if and only if it is a zero of the characteristic polynomial of $A$.
Why is this? Remember that $\lambda$ is an eigenvalue of $A$ if and only if there is a nonzero vector $\mathbf{v}$ such that $A\mathbf{v}=\lambda\mathbf{v}$; this is equivalent to the existence of a nonzero vector $\mathbf{v}$ such that $(A-\lambda I)\mathbf{v}=\mathbf{0}$. That means that the nullspace of the matrix $A-\lambda I$ (remember, $I$ is the identity matrix) is not just the zero vector, which means, necessarily, that $A-\lambda I$ is not invertible. Since it is not invertible, that means that its determinant is $0$; its determinant happens to equal the characteristic polynomial evaluated at $\lambda$, so this shows that if $\lambda$ is an eigenvalue of $A$, then $\lambda$ is a zero of the characteristic polynomial.
Conversely, if $\lambda$ is a zero of the characteristic polynomial of $A$, then the determinant of $A-\lambda I$ is zero, which means that $A-\lambda I$ is not invertible, which means there is a nonzero vector $\mathbf{w}$ such that $(A-\lambda I)\mathbf{w}=\mathbf{0}$. This shows that $\mathbf{w}$ is an eigenvector of $A$ with eigenvalue $\lambda$, so $\lambda$ is an eigenvalue.
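A quick numerical sanity check of this equivalence, sketched with numpy:

```python
import numpy as np

# The 3x3 matrix treated next in this answer.
A = np.array([[ 3., -1., -1.],
              [-1.,  3., -1.],
              [-1., -1.,  3.]])

# np.poly returns the coefficients of det(tI - A) (monic convention,
# which differs from det(A - tI) by an overall sign when n is odd).
coeffs = np.poly(A)

# Each eigenvalue of A should be a root of the characteristic polynomial.
assert np.allclose(np.polyval(coeffs, np.linalg.eigvals(A)), 0)
```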
For the matrix you have,
$$A = \left(\begin{array}{rrr}
3 & -1 & -1\\
-1 & 3 & -1\\
-1 & -1 & 3
\end{array}\right).$$
The characteristic polynomial $p(t)$ is:
$$\begin{align*}
p(t)=\det(A-tI) &= \left|\begin{array}{ccc}
3-t & -1 & -1\\
-1 & 3-t & -1\\
-1 & -1 & 3-t
\end{array}\right|\\
&= (3-t)\left|\begin{array}{cc}
3-t & -1\\
-1 & 3-t
\end{array}\right| +\left|\begin{array}{cc}
-1 & -1\\
-1 & 3-t
\end{array}\right| - \left|\begin{array}{cc}
-1 & -1\\
3-t & -1
\end{array}\right|\\
&= (3-t)\Bigl((3-t)^2-1\Bigr) + (t-4) - (4-t)\\
&= (3-t)\Bigl(t^2 -6t +8\Bigr) +2(t-4)\\
&= (3-t)(t-4)(t-2) + 2(t-4)\\
&= (t-4)\Bigl(2 - (t-2)(t-3)\Bigr) \\
&= -(t-4)(t^2-5t+6-2)\\
&= -(t-4)(t^2-5t+4)\\
&= -(t-4)^2(t-1).
\end{align*}$$
Since $\lambda$ is an eigenvalue of $A$ if and only if $p(\lambda)=0$, this says that the matrix $A$ has two distinct eigenvalues: $\lambda=4$, with algebraic multiplicity $2$, and $\lambda=1$, with algebraic multiplicity $1$.
What are the corresponding eigenvectors?
For $\lambda=1$, you want vectors $(a,b,c)^t$ such that $A(a,b,c)^t = (a,b,c)^t$ (the ${}^t$ denotes transpose). Equivalently, you want the nullspace of $A-I$, except for $\mathbf{0}$. It is not hard to verify that $(1,1,1)^t$ is an eigenvector corresponding to $\lambda=1$, and that every eigenvector corresponding to $\lambda=1$ is a nonzero scalar multiple of $(1,1,1)^t$ (is this where you got confused? This is a vector, not a list of eigenvalues).
For $\lambda=4$, you want vectors $(a,b,c)^t$ that lie in the nullspace of $A-4I$. Here, you want $a+b+c=0$, so the nullspace is spanned by the vectors $(1,0,-1)^t$ and $(0,1,-1)^t$; you can verify that each of these is an eigenvector corresponding to $\lambda=4$ and they are linearly independent, so the eigenvectors corresponding to $\lambda=4$ are the nonzero linear combinations of these two.
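These claims are easy to double-check numerically; a small numpy sketch:

```python
import numpy as np

A = np.array([[ 3., -1., -1.],
              [-1.,  3., -1.],
              [-1., -1.,  3.]])

# lambda = 1 with eigenvector (1,1,1)^t:
v = np.array([1., 1., 1.])
assert np.allclose(A @ v, 1 * v)

# lambda = 4, eigenspace spanned by (1,0,-1)^t and (0,1,-1)^t:
for w in (np.array([1., 0., -1.]), np.array([0., 1., -1.])):
    assert np.allclose(A @ w, 4 * w)

# eigvalsh (A is symmetric) returns the eigenvalues in ascending order.
print(np.linalg.eigvalsh(A).round(6).tolist())  # -> [1.0, 4.0, 4.0]
```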
One can do an "in-place" inversion of the $4\times 4$ matrix $A$ as a bordering of the $3\times 3$ submatrix in the upper left corner, if that submatrix is itself invertible. The general technique is described by In-place inversion of large matrices.
Suppose we have:
$$ A = \begin{bmatrix} B & u \\ v' & z \end{bmatrix} $$
where $B$ is the $3\times 3$ invertible submatrix, $u$ is a $3\times 1$ "column" block, $v'$ is a $1\times 3$ "row" block, and $z$ is a single entry in the lower right corner.
We have the following steps to compute:
(1) Invert $B$, overwriting it with $B^{-1}$:
$$ \begin{bmatrix} B^{-1} & u \\ v' & z \end{bmatrix} $$
(2) Multiply column $u$ by $B^{-1}$ and replace $u$ with the negation of that matrix-vector product:
$$ \begin{bmatrix} B^{-1} & -B^{-1}u \\ v' & z \end{bmatrix} $$
(3) Next, multiply the row $v'$ times the column result from the previous step, which gives a scalar, and add this to the lower right entry:
$$ \begin{bmatrix} B^{-1} & -B^{-1}u \\ v' & z - v'B^{-1}u \end{bmatrix} $$
(4) To keep the expressions simple, define $s = z - v'B^{-1}u$; this is just the current lower right entry (the Schur complement of $B$ in $A$). Replace it with its reciprocal, which requires $s \neq 0$; given that $B$ is invertible, this holds exactly when $A$ is invertible:
$$ \begin{bmatrix} B^{-1} & -B^{-1}u \\ v' & s^{-1} \end{bmatrix} $$
(5) Multiply the row $v'$ on the right by $B^{-1}$:
$$ \begin{bmatrix} B^{-1} & -B^{-1}u \\ v'B^{-1} & s^{-1} \end{bmatrix} $$
(6) Multiply that row result by the negative scalar reciprocal $-s^{-1}$:
$$ \begin{bmatrix} B^{-1} & -B^{-1}u \\ -s^{-1}v'B^{-1} & s^{-1} \end{bmatrix} $$
(7) Construct a $3\times 3$ matrix (simple tensor) by multiplying the column $-B^{-1}u$ and the row $-s^{-1}v'B^{-1}$, adding this to the upper left corner submatrix:
$$ \begin{bmatrix} B^{-1} + B^{-1}u s^{-1}v'B^{-1} & -B^{-1}u \\ -s^{-1}v'B^{-1} & s^{-1} \end{bmatrix} $$
(8) The last step is an easy one: simply multiply the upper right column $-B^{-1}u$ by $s^{-1}$:
$$ A^{-1} = \begin{bmatrix} B^{-1} + B^{-1}u s^{-1}v'B^{-1} & -s^{-1} B^{-1}u \\ -s^{-1}v'B^{-1} & s^{-1} \end{bmatrix} $$
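Putting the eight steps together, here is a short numpy sketch (the function name `invert_bordered` and the $4\times 4$ test matrix are my own, chosen for illustration; numpy isn't literally working in place here, it just follows the bordering arithmetic step by step):

```python
import numpy as np

def invert_bordered(A):
    """Invert an n x n matrix by bordering its leading (n-1) x (n-1)
    block B, following the eight steps above. Assumes B is itself
    invertible and the Schur complement s is nonzero."""
    A = A.astype(float).copy()
    B = np.linalg.inv(A[:-1, :-1])   # step 1: overwrite B with B^{-1}
    u, v, z = A[:-1, -1], A[-1, :-1], A[-1, -1]
    Bu = -B @ u                      # step 2: u -> -B^{-1} u
    s = z + v @ Bu                   # step 3: s = z - v' B^{-1} u
    s_inv = 1.0 / s                  # step 4: reciprocal of lower right entry
    vB = v @ B                       # step 5: v' -> v' B^{-1}
    vB = -s_inv * vB                 # step 6: scale by -s^{-1}
    B = B + np.outer(Bu, vB)         # step 7: B^{-1} + B^{-1} u s^{-1} v' B^{-1}
    Bu = s_inv * Bu                  # step 8: column -> -s^{-1} B^{-1} u
    A[:-1, :-1], A[:-1, -1], A[-1, :-1], A[-1, -1] = B, Bu, vB, s_inv
    return A

# Border the 3x3 matrix from earlier in the thread with an extra row/column.
A4 = np.array([[ 3., -1., -1., 1.],
               [-1.,  3., -1., 0.],
               [-1., -1.,  3., 0.],
               [ 1.,  0.,  0., 2.]])
assert np.allclose(invert_bordered(A4), np.linalg.inv(A4))
```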