The physical significance depends on the matrix. The primary point is that multiplication by a matrix (in the usual sense, with the matrix on the left) represents the action of a linear transformation. We'll work with a single basis throughout, which we'll use to represent both our matrices and our vectors. Note that because of linearity, if we have a vector $x = c_1e_1 + \ldots + c_ne_n$ and a linear transformation $L$, then
$$
\begin{aligned}
L(x) &= L(c_1e_1 + \ldots + c_ne_n) \\
&= L(c_1e_1) + \ldots + L(c_ne_n) \\
&= c_1L(e_1) + \ldots + c_nL(e_n).
\end{aligned}
$$
This means that any linear transformation is uniquely determined by its effect on a basis, so to define one, we only need to specify its effect on a basis. Recording the images of the basis vectors as columns gives the matrix
$$
\left(L(e_1) \;\ldots\; L(e_n)\right) =
\left(
\begin{array}{ccc}
a_{11} & \ldots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \ldots & a_{nn}
\end{array}
\right)
$$
where $a_{ij}$ is the $i$'th component of $L(e_j)$.
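For concreteness, here's a minimal numerical sketch of this construction (using NumPy; the choice of $L$ as a rotation by 90° is just an illustrative assumption):

```python
import numpy as np

# A hypothetical linear transformation: rotation by 90 degrees in the plane.
def L(v):
    x, y = v
    return np.array([-y, x])

n = 2
basis = np.eye(n)  # the standard basis e_1, ..., e_n as columns of the identity

# The j'th column of the matrix is L(e_j).
M = np.column_stack([L(basis[:, j]) for j in range(n)])

print(M)
# [[ 0. -1.]
#  [ 1.  0.]]
```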
Let's call this matrix $M_L$. We want to define multiplication of $M_L$ and some vector $x$ so that $M_L \cdot x = L(x)$, and there's only one way to do this: the $j$'th column of $M_L$ is just $L(e_j)$, so in light of our decomposition of the action of $L$ in terms of the $L(e_j)$, we can see that
$$
M_L \cdot x = \left(
\begin{array}{ccc}
a_{11} & \ldots & a_{1n} \\
\vdots & \ddots & \vdots \\
a_{n1} & \ldots & a_{nn}
\end{array}
\right)
\cdot
\left(
\begin{array}{c}
c_1 \\
\vdots \\
c_n
\end{array}
\right)
$$
must equal
$$
c_1\left(
\begin{array}{c}
a_{11} \\
\vdots \\
a_{n1}
\end{array}
\right)
+ \ldots +
c_n\left(
\begin{array}{c}
a_{1n} \\
\vdots \\
a_{nn}
\end{array}
\right)
=
\left(
\begin{array}{c}
c_1a_{11} + \ldots + c_na_{1n} \\
\vdots \\
c_1a_{n1} + \ldots + c_na_{nn}
\end{array}
\right)
$$
which is the standard definition for a vector left-multiplied by a matrix.
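A quick way to see this concretely is to check numerically that the matrix-vector product agrees with the linear combination of columns (a sketch with NumPy; the matrix and coefficients are arbitrary random choices):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))  # stands in for M_L
c = rng.standard_normal(4)       # the coordinates c_1, ..., c_n of x

# The product M @ c versus the column combination c_1*col_1 + ... + c_n*col_n.
as_product = M @ c
as_columns = sum(c[j] * M[:, j] for j in range(4))

print(np.allclose(as_product, as_columns))  # True
```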
EDIT: In response to the question "Is a matrix a scalar thing?": kind of, but no.
If you consider the most basic linear equation in one variable, $y = mx$, where everything in sight is a scalar, then a matrix generalizes the role played by $m$ to higher dimensions, and a vector generalizes the roles played by $y$ and $x$. But matrices don't commute multiplicatively, so that's one big difference. Still, they're strikingly similar in a lot of ways. We can define the matrix function $f(A) = A^2$ and differentiate it with respect to $A$. When we do this in one variable with the map $f(x) = x^2$, we get the linear map $f_x'(h) = 2xh$, but when we do it with matrices, we get the linear map $f_A'(H) = AH + HA$. If matrices commuted, that would just be $2AH$!
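That last claim is easy to sanity-check with a finite-difference approximation (a numerical sketch using NumPy; the matrices and step size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
H = rng.standard_normal((3, 3))
t = 1e-6

# Directional derivative of f(A) = A^2 at A in the direction H.
finite_diff = ((A + t * H) @ (A + t * H) - A @ A) / t
linear_map  = A @ H + H @ A  # the claimed derivative f'_A(H)

print(np.allclose(finite_diff, linear_map, atol=1e-4))  # True
```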
EDIT2:
My "add comment" button isn't working for some reason. The $e_j$'s are a basis, $e_1, \ldots, e_n$. I think the best thing to do would be to wait for your teacher to get around to it. I sometimes forget that people introduce matrices before vector spaces and linear transformations. It will all make much more sense then. The main point of a basis though is that it's a set of vectors so that every vector in the given space can be written as a unique linear combination of them.
Best Answer
The second constraint can be shown easily by separating the real and imaginary parts (as you actually noted). I can prove the first constraint similarly, but only for a symmetric matrix $C$ (this is not clearly stated, but I suppose it is symmetric from your comment).
I start by expanding the left-hand side: $$ \begin{aligned} (\mathbf{h_1} \pm i\mathbf{h_2})^TC(\mathbf{h_1} \pm i\mathbf{h_2}) &= (\mathbf{h_1}^T \pm i\mathbf{h_2}^T)C(\mathbf{h_1} \pm i\mathbf{h_2}) \\ &= \mathbf{h_1}^TC\mathbf{h_1} \pm i\mathbf{h_1}^TC\mathbf{h_2} \pm i\mathbf{h_2}^TC\mathbf{h_1} + i^2\,\mathbf{h_2}^TC\mathbf{h_2} \\ &= \mathbf{h_1}^TC\mathbf{h_1} \pm i\mathbf{h_1}^TC\mathbf{h_2} \pm i\mathbf{h_2}^TC\mathbf{h_1} - \mathbf{h_2}^TC\mathbf{h_2} = 0 \end{aligned} $$
You need to realize that $ 0=0+0i $, so you can balance the real and imaginary parts separately. Balancing the real part directly gives you the second constraint: $$ \mathbf{h_1}^TC\mathbf{h_1} - \mathbf{h_2}^TC\mathbf{h_2} = 0 \implies \mathbf{h_1}^TC\mathbf{h_1} = \mathbf{h_2}^TC\mathbf{h_2} $$
Balancing the imaginary part is a little trickier: $$ \pm \mathbf{h_1}^TC\mathbf{h_2} \pm \mathbf{h_2}^TC\mathbf{h_1} = 0 $$ Using the transpose property $ (AB)^T=B^TA^T $, I get: $$ \pm \mathbf{h_1}^TC\mathbf{h_2} \pm \mathbf{h_2}^TC\mathbf{h_1} = \pm \mathbf{h_1}^TC\mathbf{h_2} \pm (\mathbf{h_1}^TC^T\mathbf{h_2})^T = 0 $$ Both terms are scalars (a quick dimension check confirms this), and the transpose of a scalar is the scalar itself: $$ \pm \mathbf{h_1}^TC\mathbf{h_2} \pm (\mathbf{h_1}^TC^T\mathbf{h_2})^T = \pm \mathbf{h_1}^TC\mathbf{h_2} \pm \mathbf{h_1}^TC^T\mathbf{h_2} = 0 $$ Now you finally get your other constraint when $C$ is symmetric (i.e. $ C=C^T $): $$ \pm \mathbf{h_1}^TC\mathbf{h_2} \pm \mathbf{h_1}^TC^T\mathbf{h_2} = \pm 2 \mathbf{h_1}^TC\mathbf{h_2} = 0 \implies \mathbf{h_1}^TC\mathbf{h_2} = 0 $$
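To double-check the algebra, here's a small numerical sketch (NumPy, with an arbitrary symmetric $C$ and arbitrary real vectors): expanding $(\mathbf{h_1} + i\mathbf{h_2})^TC(\mathbf{h_1} + i\mathbf{h_2})$ directly, the real part should be $\mathbf{h_1}^TC\mathbf{h_1} - \mathbf{h_2}^TC\mathbf{h_2}$ and the imaginary part should be $2\mathbf{h_1}^TC\mathbf{h_2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
C = B + B.T                 # an arbitrary symmetric matrix
h1 = rng.standard_normal(3)
h2 = rng.standard_normal(3)

# Expand (h1 + i*h2)^T C (h1 + i*h2) with complex arithmetic.
h = h1 + 1j * h2
quad = h @ C @ h            # plain transpose, no conjugation

print(np.isclose(quad.real, h1 @ C @ h1 - h2 @ C @ h2))  # True
print(np.isclose(quad.imag, 2 * h1 @ C @ h2))            # True
```

So the quadratic form vanishes exactly when both constraints hold.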