An important class of problems in Riemannian geometry is to understand the interaction between curvature and topology.
An example of such an interaction is given by the Gauss-Bonnet theorem, which relates the geodesic curvature and the Gaussian curvature to the Euler characteristic of a regular surface of class $C^3$.
When studying the geometry of a smooth manifold we need to introduce the commutator of second covariant derivatives of vector fields, which is called the Riemann curvature tensor.
Indeed, while in Euclidean space we can interchange the order of differentiation, on a Riemannian manifold this is no longer possible in general: the Riemann curvature tensor measures this failure, and it is in general nonzero.
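Concretely, for vector fields $X$, $Y$, $Z$, one common convention writes the Riemann curvature tensor as the failure of second covariant derivatives to commute:
$$R(X,Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z.$$
In Euclidean space with the flat connection, the right-hand side vanishes identically, recovering the fact that partial derivatives commute.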
For surfaces, the Riemann curvature tensor is equivalent to the Gaussian curvature $K$, which is a scalar function.
In dimensions greater than two, on the other hand, the Riemann curvature tensor is a genuine tensor field.
There are several notions of curvature associated with the Riemann curvature tensor.
Given a point $p \in M^n$ and a two-dimensional plane $\Pi$ in the tangent space of $M$ at $p$, we can define a surface $S$ in $M$ as the union of all geodesics passing through $p$ and tangent to $\Pi$.
In a neighborhood of $p$, $S$ is a smooth two-dimensional submanifold of $M$. It is then possible to define the sectional curvature $K(\Pi)$ of the plane $\Pi$ as the Gaussian curvature of $S$ at $p$.
Thus the sectional curvature $K$ of a Riemannian manifold associates to each two-dimensional plane in a tangent space a real number.
You can think of the sectional curvature as a generalization of the Gaussian curvature.
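In terms of the curvature tensor, with the convention $R(X,Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z$, the sectional curvature of a plane $\Pi$ spanned by $X$ and $Y$ is given by
$$K(\Pi) = \frac{\langle R(X,Y)Y, X\rangle}{|X|^2\,|Y|^2 - \langle X,Y\rangle^2},$$
and one checks that this value does not depend on the chosen basis $X, Y$ of $\Pi$.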
The answer to your question is no. In general, the set of asymptotic vectors will be a quadratic cone (and the dimension of the linear rulings of this cone will depend on the rank of the second fundamental form). If the rank of the second fundamental form is $2$ (and the form is indefinite), then you will get a union of two hyperplanes, but if the rank is higher, then you will have a "curved cone" ruled by linear spaces of dimension depending on the rank.
By the Spectral Theorem, you can find an orthonormal basis for the tangent space so that, in the new coordinates, we have
$\text{II}(x,x) = \sum_i \lambda_i x_i^2$, and you can deduce everything from knowing how many positive $\lambda_i$, how many negative $\lambda_i$, and how many zero $\lambda_i$ you have.
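As a concrete sketch (using a made-up symmetric matrix standing in for $\text{II}$ in some basis; the matrix and tolerance are illustrative choices), the diagonalization and the sign count can be done numerically:

```python
import numpy as np

# Hypothetical symmetric matrix representing II in some basis
II = np.array([[2.0, 1.0, 0.0],
               [1.0, 2.0, 0.0],
               [0.0, 0.0, -1.0]])

# eigh diagonalizes a symmetric matrix with an orthonormal eigenbasis,
# which is exactly what the Spectral Theorem guarantees
lam, Q = np.linalg.eigh(II)
print(lam)  # the lambda_i, in ascending order

# Count positive, negative, and zero eigenvalues (the signature of the form)
pos = int(np.sum(lam > 1e-12))
neg = int(np.sum(lam < -1e-12))
zero = len(lam) - pos - neg
print(pos, neg, zero)  # -> 2 1 0
```

Here the form has rank $3$ and is indefinite, so the asymptotic vectors form a genuinely curved quadratic cone rather than a union of hyperplanes.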
As far as I understand your question, you are asking if the components $$ II_{i j} = II(X_i,X_j) $$ are computed by evaluating a matrix $II$ on vectors $X_i$ and $X_j$.
Answer. Yes, they are.
Caveat. In the Wikipedia article that you likely have in mind, $(X_1, X_2)$ is assumed to be an orthonormal basis. In this case one can say that the principal curvatures are "eigenvalues of the second fundamental form", but this is only because, in an orthonormal basis, the matrix of the second fundamental form equals that of the shape operator.
More precisely, the principal curvatures are the eigenvalues of the shape operator, which is the differential of the Gauss map.
So, indeed you differentiate some data, form a matrix, and then calculate the components of this matrix in a basis that you like.
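For instance, here is a sketch with sympy, using a hypothetical saddle surface $z = (u^2 - v^2)/2$ as the example: we differentiate the parametrization, form the first and second fundamental forms, and read off the principal curvatures as eigenvalues of the shape operator $S = \mathrm{I}^{-1}\,\mathrm{II}$.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
# Hypothetical example surface: the saddle z = (u^2 - v^2)/2
r = sp.Matrix([u, v, (u**2 - v**2) / 2])

ru = r.diff(u)
rv = r.diff(v)
n = ru.cross(rv)
n = n / n.norm()  # unit normal (the Gauss map)

# First fundamental form I and second fundamental form II as 2x2 matrices
I = sp.Matrix([[ru.dot(ru), ru.dot(rv)],
               [rv.dot(ru), rv.dot(rv)]])
II = sp.Matrix([[r.diff(u, 2).dot(n), r.diff(u, v).dot(n)],
                [r.diff(v, u).dot(n), r.diff(v, 2).dot(n)]])

# Shape operator: S = I^{-1} II; its eigenvalues are the principal curvatures
S = sp.simplify(I.inv() * II)
print(sorted(S.subs({u: 0, v: 0}).eigenvals().keys()))  # -> [-1, 1]
```

At the origin the principal curvatures are $\pm 1$: the second fundamental form is indefinite of rank $2$, so the asymptotic directions there form a pair of lines, as in the cone description above.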
I would recommend the lecture notes by Greg Galloway for those who want to get the right definitions and a number of worked examples.