The value of $$
\begin{bmatrix} s+2 & -1 & 0 & 0\\
0 & s+3 & 0 & 0\\
-1 & -2 & s & -1\\
2 & -1 & 1 & 4 \end{bmatrix}^{-1}$$
is
$$\begin{bmatrix}
\frac{1}{s + 2} & \frac{1}{s^2 + 5 s + 6} & 0 & 0 \\
0 & \frac{1}{s + 3} & 0 & 0 \\
\frac{2}{4 s^2 + 9 s + 2} & \frac{9 s + 20}{4 s^3 + 21 s^2 + 29 s + 6} & \frac{4}{4 s + 1} & \frac{1}{4 s + 1} \\
-\frac{2 s + 1}{4 s^2 + 9 s + 2} & \frac{s^2 - 2 s - 5}{4 s^3 + 21 s^2 + 29 s + 6} & -\frac{1}{4 s + 1} & \frac{s}{4 s + 1}
\end{bmatrix}$$
Note that the matrix is block lower-triangular, so its inverse is block lower-triangular as well, which is why the upper-right $2 \times 2$ block of the inverse is zero. Multiplying by the row vector retains the 2nd row and eliminates all the others:
$$\begin{bmatrix}
0 & \frac{1}{s + 3} & 0 & 0\\
\end{bmatrix}$$
Multiplying by the column vector picks up the last three columns and eliminates the others.
Answer = $\dfrac{1}{s+3}$
This is, however, merely a coincidence; it is not true in general.
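If you want to verify the algebra above, here is a small sympy sketch that inverts the matrix symbolically. The selector vectors `r` and `c` below are assumptions (the actual row and column vectors are not shown in this excerpt), chosen only to match the selection described above:

```python
# Sketch: verify the symbolic inverse and the row/column selection.
# The vectors r and c are assumed, not taken from the original question.
import sympy as sp

s = sp.symbols('s')
M = sp.Matrix([[s + 2, -1, 0, 0],
               [0, s + 3, 0, 0],
               [-1, -2, s, -1],
               [2, -1, 1, 4]])
Minv = M.inv()

# The (2, 2) entry of the inverse simplifies to 1/(s + 3).
print(sp.simplify(Minv[1, 1]))          # 1/(s + 3)

r = sp.Matrix([[0, 1, 0, 0]])           # assumed row vector: keeps row 2
c = sp.Matrix([0, 1, 1, 1])             # assumed column vector: keeps cols 2-4
print(sp.simplify((r * Minv * c)[0]))   # 1/(s + 3)
```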
Before talking about the multiplication of two matrices, let's look at another way to interpret a matrix $A$. Say we have the matrix $A$ below:
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix}
$$
we can easily see that the column $\begin{bmatrix} 3 \\ 2 \\ 3 \end{bmatrix}$ is a linear combination of the first two columns:
$$
1\begin{bmatrix} 1 \\ 1 \\ 1\\\end{bmatrix} +
1\begin{bmatrix} 2 \\ 1 \\ 2\\\end{bmatrix} =
\begin{bmatrix} 3 \\ 2 \\ 3 \\\end{bmatrix}
$$
And you can say that $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 2 \\ 1 \\ 2 \end{bmatrix}$ form a basis for the column space of $A$.
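A quick numpy check of this observation (a sketch):

```python
# Sketch: the third column is the sum of the first two, so A has rank 2.
import numpy as np

A = np.array([[1, 2, 3],
              [1, 1, 2],
              [1, 2, 3]])

print(np.allclose(A[:, 2], A[:, 0] + A[:, 1]))  # True
print(np.linalg.matrix_rank(A))                 # 2
```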
Set aside for now the reason why you would want to decompose matrix $A$ like this in the first place,
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix} +
\begin{bmatrix}
0 & 2 & 2 \\
0 & 1 & 1 \\
0 & 2 & 2 \\
\end{bmatrix}
$$
but you can, and in the end, it looks reasonable.
If you view this equation column-wise, each column $j$ of $A$ is the sum of the corresponding columns $j$ of the matrices on the RHS.
What's special about the matrices on the RHS is that each of them is a rank-1 matrix whose column space is the line on which one of the basis vectors of the column space of $A$ lies. E.g.,
$
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix}
$
spans only the line through $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$. This is why people say rank-1 matrices are the building blocks of all matrices.
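A short numpy sketch confirming that each piece has rank 1 and that the pieces sum back to $A$:

```python
# Sketch: the two summands are rank 1 and add back up to A.
import numpy as np

A  = np.array([[1, 2, 3], [1, 1, 2], [1, 2, 3]])
B1 = np.array([[1, 0, 1], [1, 0, 1], [1, 0, 1]])
B2 = np.array([[0, 2, 2], [0, 1, 1], [0, 2, 2]])

print(np.linalg.matrix_rank(B1), np.linalg.matrix_rank(B2))  # 1 1
print(np.array_equal(B1 + B2, A))                            # True
```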
If you now revisit the idea of viewing $A$ column by column, this decomposition actually emphasizes the concept of a linear combination of basis vectors.
If this makes sense, you can extend the RHS further:
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 \\\end{bmatrix} +
\begin{bmatrix} 2 \\ 1 \\ 2 \\\end{bmatrix}
\begin{bmatrix} 0 & 1 & 1 \\\end{bmatrix}
$$
Each term on the RHS says: take this basis vector, and expand it to "look like" a $3 \times 3$ matrix (though each term still has rank 1).
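The same decomposition written as a sum of outer products, as a numpy sketch:

```python
# Sketch: sum of two rank-1 outer products reproduces A.
import numpy as np

A = np.array([[1, 2, 3], [1, 1, 2], [1, 2, 3]])
u1, u2 = np.array([1, 1, 1]), np.array([2, 1, 2])   # basis columns
r1, r2 = np.array([1, 0, 1]), np.array([0, 1, 1])   # coefficient rows

print(np.array_equal(np.outer(u1, r1) + np.outer(u2, r2), A))  # True
```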
We can massage the sum a little bit, namely put the RHS into matrix form, and we get
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 2 \\
1 & 1 \\
1 & 2 \\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1 \\
\end{bmatrix}
$$
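A numpy sketch of this factored form; the names `C` and `R` are just for illustration (a column-basis matrix times a matrix of combination coefficients):

```python
# Sketch: the factored form C @ R reproduces A.
import numpy as np

A = np.array([[1, 2, 3], [1, 1, 2], [1, 2, 3]])
C = np.array([[1, 2], [1, 1], [1, 2]])   # basis columns of A
R = np.array([[1, 0, 1], [0, 1, 1]])     # combination coefficients

print(np.array_equal(C @ R, A))  # True
```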
Now you can forget matrix $A$, and imagine that what you have is just the two matrices on the RHS. When you read this text backward (I mean logically), I hope matrix multiplication in this fashion makes sense to you now. Or, if you prefer, you can start with the two matrices in the question.
Best Answer
It has already been pointed out that you can multiply a row vector and a matrix. In fact, the only difference between the two multiplications below is that the numeric values in the first result are stacked in a column vector while the same numeric values are listed in a row vector in the second result:
$$\pmatrix{6& -7& 10 & 1 \\ 0& 3& -1 & 4 \\ 0& 5& -7 & 5 \\ 4&1&0&-2} \pmatrix{2\\-2\\-1\\1} = \pmatrix{17\\-1\\2\\4}$$
$$ \pmatrix{2 &-2&-1&1} \pmatrix{6& 0&0&4\\-7& 3&5&1\\10 & -1&-7&0\\1 & 4 & 5&-2} = \pmatrix{17&-1&2&4}$$
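A small numpy sketch showing that the two equations above really do produce the same numbers, once as a column and once as a row:

```python
# Sketch: M @ v and v @ M.T contain the same values, just shaped differently.
import numpy as np

M = np.array([[6, -7, 10, 1],
              [0, 3, -1, 4],
              [0, 5, -7, 5],
              [4, 1, 0, -2]])
v = np.array([2, -2, -1, 1])

print(M @ v)     # [17 -1  2  4]  (the column-vector product)
print(v @ M.T)   # [17 -1  2  4]  (the row-vector product v^T M^T)
```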
One simple pragmatic difference between these two equations is that the second one is a lot wider when it is fully written out.
It seems to me the first equation "fits" more neatly on the page because we have already committed to making an equation that is four rows tall (because of the $4\times4$ matrix, this is unavoidable), so there is no "cost" in also making the vectors four rows tall; and in return we get vectors that are only one column wide instead of four columns each. Now imagine the dimensions of the matrix were $6\times6$; the multiplication by a column vector would still fit neatly on this page but we might have some difficulty with the multiplication that uses row vectors; it might not fit within the margins of this column of text.
It's also possible that the convention is influenced by the interpretation of the matrix as a transformation to be applied to the vector, along with a preference for writing the names of transformations on the left of the thing they transform (much as we like to write a function name to the left of the input parameters of a function, that is, $f(x) = x^2$ rather than $(x)f = x^2$). But I'm not sure there is a more compelling reason behind this particular observation other than collective force of habit, and these patterns are not universal; sometimes people write the name of the transformation on the right.