Before talking about the multiplication of two matrices, let's look at another way to interpret a matrix $A$. Say we have the matrix $A$ below,
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix}
$$
we can easily see that the third column $\begin{bmatrix} 3 \\ 2 \\ 3 \\\end{bmatrix}$ is a linear combination of the first two columns:
$$
1\begin{bmatrix} 1 \\ 1 \\ 1\\\end{bmatrix} +
1\begin{bmatrix} 2 \\ 1 \\ 2\\\end{bmatrix} =
\begin{bmatrix} 3 \\ 2 \\ 3 \\\end{bmatrix}
$$
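As a quick numeric check, here is a small NumPy sketch (the array values are just the example matrix above):

```python
import numpy as np

# The example matrix A from above.
A = np.array([[1, 2, 3],
              [1, 1, 2],
              [1, 2, 3]])

# The third column is 1*(first column) + 1*(second column).
combo = 1 * A[:, 0] + 1 * A[:, 1]
print(np.array_equal(combo, A[:, 2]))  # True
```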
And you can say that $\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}$ and $\begin{bmatrix} 2 \\ 1 \\ 2 \\\end{bmatrix}$ form a basis for the column space of $A$.
Set aside for now the reason why you would want to decompose matrix $A$ like this in the first place,
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix} +
\begin{bmatrix}
0 & 2 & 2 \\
0 & 1 & 1 \\
0 & 2 & 2 \\
\end{bmatrix}
$$
but you can, and in the end it will look reasonable.
If you view this equation column-wise, each column $j$ of $A$ is the sum of the corresponding column $j$ of each matrix on the RHS.
What's special about the matrices on the RHS is that each of them is a rank-1 matrix whose column space is the line that a basis vector of the column space of $A$ lies on, e.g.
$
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix}
$
has column space spanned by $\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}$ alone. People say that rank-1 matrices are the building blocks of all matrices.
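You can verify the rank claim numerically; a minimal NumPy sketch, using the first summand from above:

```python
import numpy as np

# First summand of the decomposition: every column is a multiple
# of the single basis vector [1, 1, 1]^T, so the rank is 1.
R1 = np.array([[1, 0, 1],
               [1, 0, 1],
               [1, 0, 1]])
print(np.linalg.matrix_rank(R1))  # 1
```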
If you now revisit the idea of viewing $A$ column by column, this decomposition emphasizes the concept of a linear combination of basis vectors.
If this makes sense, you can take the RHS one step further:
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 \\\end{bmatrix} +
\begin{bmatrix} 2 \\ 1 \\ 2 \\\end{bmatrix}
\begin{bmatrix} 0 & 1 & 1 \\\end{bmatrix}
$$
Each term on the RHS says: take this basis vector and make it "look like" a full $3 \times 3$ matrix (of rank 1) by multiplying it with a row of coefficients.
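In NumPy each term is an outer product (`np.outer` builds exactly this column-times-row matrix); a small sketch to check that the two terms sum to $A$:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 1, 2],
              [1, 2, 3]])

# Two rank-1 outer products: basis column times coefficient row.
S = np.outer([1, 1, 1], [1, 0, 1]) + np.outer([2, 1, 2], [0, 1, 1])
print(np.array_equal(S, A))  # True
```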
We can massage this a little bit, namely, put the RHS into matrix form, and you get
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 2 \\
1 & 1 \\
1 & 2 \\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1 \\
\end{bmatrix}
$$
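This is exactly the column-times-row view of matrix multiplication; a quick NumPy check of the factorization:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 1, 2],
              [1, 2, 3]])
C = np.array([[1, 2],
              [1, 1],
              [1, 2]])    # basis columns of A
R = np.array([[1, 0, 1],
              [0, 1, 1]])  # combination coefficients per column
print(np.array_equal(C @ R, A))  # True
```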
Now forget matrix $A$, and imagine that all you have are the two matrices on the RHS. If you read this text backward (logically, I mean), I hope matrix multiplication in this fashion makes sense to you now. Or, if you prefer, you can start from the two matrices in the question.
The most common case, where your $f(v_i, v_j)$ is a bilinear form, would simply become $$ ACB, $$
where $C$ is a square matrix expressing the bilinear form in the bases that gave $A$ and $B$.
In this case, when $C$ is symmetric and $A = B^T$, we say that $C$ represents the quadratic form $B^T C B$.
Best Answer
You made a mistake when multiplying the column vector by the row vector. Remember that $$ \begin{bmatrix} a \\ b \end{bmatrix} \begin{bmatrix} c & d \end{bmatrix} = \begin{bmatrix} ac & ad \\ bc & bd \end{bmatrix} $$
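A small NumPy sketch of this rule, with arbitrary sample values for $a, b, c, d$:

```python
import numpy as np

a, b, c, d = 2, 3, 5, 7  # arbitrary sample values

col = np.array([[a], [b]])  # 2x1 column vector
row = np.array([[c, d]])    # 1x2 row vector

# Column times row gives a 2x2 rank-1 matrix.
product = col @ row
expected = np.array([[a * c, a * d],
                     [b * c, b * d]])
print(np.array_equal(product, expected))  # True
```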