The statement generalizes to all dimensions.
Given the vector space $\mathbb{R}^n$ with the usual Euclidean metric, we represent a $k$-dimensional subspace spanned by the vectors $v_1,\dots,v_k$ by $v_1 \wedge v_2 \wedge \dots \wedge v_k$, where the wedge $\wedge$ is the antisymmetric multilinear product of the exterior algebra. Moreover, the volume of the $n$-dimensional "parallelepiped" spanned by $v_1,\dots,v_n$ is just $\lvert V \rvert$ in
$$ v_1 \wedge v_2 \wedge \dots \wedge v_n = V e_1 \wedge \dots \wedge e_n $$
where the $e_i$ are the standard basis vectors. Additionally, we have the Hodge star $\star$, which, in particular, sends an $n$-fold wedge $v_1 \wedge \dots \wedge v_n$ to the scalar $V$ above, whose absolute value is the volume $\mathrm{vol}(v_1 \wedge \dots \wedge v_n)$. The $n$-dimensional version of Lami's theorem is now:
Given $n+1$ vectors $v_i$ whose sum is zero, $$ \frac{\lvert v_i \rvert}{\mathrm{vol}(\hat{v}_{j_1} \wedge \dots \wedge \hat{v}_{j_n} )} = \frac{\lvert v_k\rvert}{\mathrm{vol}(\hat{v}_{l_1} \wedge \dots \wedge \hat{v}_{l_n})}$$
where the $j_{\circ}$ are a permutation of the indices except $i$, and likewise the $l_\circ$ a permutation of the indices except $k$.
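As a quick numerical sanity check of this statement (an illustrative NumPy sketch, with the volume computed as the absolute determinant of the matrix whose columns are the given vectors):

```python
import numpy as np

rng = np.random.default_rng(42)
vs = list(rng.standard_normal((3, 3)))  # three random vectors in R^3
vs.append(-(vs[0] + vs[1] + vs[2]))     # fourth vector makes the sum zero

def vol(vectors):
    # Volume of the parallelepiped spanned by n vectors in R^n:
    # |det| of the matrix with the vectors as columns.
    return abs(np.linalg.det(np.column_stack(vectors)))

# For each i: |v_i| divided by the volume spanned by the unit vectors
# of the remaining n vectors. The theorem says all four ratios agree.
ratios = [
    np.linalg.norm(vs[i])
    / vol([v / np.linalg.norm(v) for j, v in enumerate(vs) if j != i])
    for i in range(4)
]
print(ratios)
```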
Proof: Choose any two indices $i,k$; w.l.o.g. let them be $1,2$, and apply $\wedge\, v_3 \wedge v_4 \wedge \dots \wedge v_{n+1}$ to the equation $\sum_i v_i = 0$. Antisymmetry of the wedge implies that a summand vanishes as soon as one of the $v_i$ appears twice. Only the summands coming from $v_1$ and $v_2$ survive this process, and we get
$$ v_1 \wedge v_3 \wedge \dots \wedge v_{n+1} = - v_2 \wedge v_3 \wedge \dots \wedge v_{n+1} $$
Now write $v_i = \lvert v_i \rvert \hat{v}_i$, where $\hat{v}_i$ is a unit vector. Linearity of the wedge implies $v_1 \wedge v_3 \wedge \dots \wedge v_{n+1} = \lvert v_1 \rvert \lvert v_3 \rvert \cdots \lvert v_{n+1} \rvert \, \hat{v}_1 \wedge \hat{v}_3 \wedge \dots \wedge \hat{v}_{n+1}$, and after cancelling the common factor $\lvert v_3 \rvert \cdots \lvert v_{n+1} \rvert$ we arrive at
$$ \lvert v_1 \rvert \, \hat{v}_1 \wedge \hat{v}_3 \wedge \dots \wedge \hat{v}_{n+1} = - \lvert v_2 \rvert \, \hat{v}_2 \wedge \hat{v}_3 \wedge \dots \wedge \hat{v}_{n+1}$$
Applying the Hodge star to both sides and taking absolute values gives $\lvert v_1 \rvert \, \mathrm{vol}(\hat{v}_1 \wedge \hat{v}_3 \wedge \dots \wedge \hat{v}_{n+1}) = \lvert v_2 \rvert \, \mathrm{vol}(\hat{v}_2 \wedge \hat{v}_3 \wedge \dots \wedge \hat{v}_{n+1})$. The left-hand volume omits $\hat{v}_2$ while the right-hand one omits $\hat{v}_1$, so dividing through by both volumes yields the desired result.
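For $n = 2$ this recovers classical Lami's theorem: for three coplanar vectors summing to zero, $\mathrm{vol}(\hat{v}_j \wedge \hat{v}_k)$ is just $\lvert \sin \alpha_i \rvert$, the sine of the angle between the other two vectors, so $\lvert v_i \rvert / \sin \alpha_i$ is the same for all $i$. A small numeric sketch (the three vectors are an arbitrary illustrative choice):

```python
import math

# Three planar vectors summing to zero (the third closes the triangle):
v1, v2 = (2.0, 0.0), (-0.5, 1.5)
v3 = (-(v1[0] + v2[0]), -(v1[1] + v2[1]))

def norm(v):
    return math.hypot(v[0], v[1])

def vol2(a, b):
    # Area spanned by the unit vectors of a and b: |det| of the 2x2
    # matrix, which equals |sin(angle between a and b)|.
    return abs(a[0] * b[1] - a[1] * b[0]) / (norm(a) * norm(b))

vs = [v1, v2, v3]
ratios = [norm(vs[i]) / vol2(vs[(i + 1) % 3], vs[(i + 2) % 3])
          for i in range(3)]
print(ratios)  # the three ratios coincide
```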
Best Answer
Your top-line question can be answered at many levels. Setting aside issues of differential forms and covariant/contravariant components, the answer is:
No matter what basis you compute that in, you have to get the same answer because it's a physical quantity.
The usual "sum of products of orthonormal components" is then a convenient computational approach, but as you've seen, it's not the only way to compute it.
The dot product is linear in each argument, commutative, and distributive over addition. So when you expand the dot product
$$(a_x \hat{x}+a_y \hat{y} + a_z \hat{z}) \cdot (b_x \hat{X}+b_y \hat{Y} + b_z \hat{Z})$$
you get nine terms like $( a_x b_x \hat{x}\cdot\hat{X}) + (a_x b_y \hat{x}\cdot\hat{Y})+$ etc. In the usual orthonormal basis, the same-axis $\hat{x}\cdot\hat{X}$ factors just become 1, while the different-axis $\hat{x}\cdot\hat{Y}$ et al factors are zero. That reduces to the formula you know.
In a non-orthonormal basis, you have to figure out what those basis products are. To do that, you go back to the definition: the product of the magnitudes of the two vectors, times the cosine of the angle between them. Once you have all of those, you're again all set to compute. It just looks a bit more complicated...
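As a sketch of that bookkeeping (the non-orthonormal basis here is an arbitrary illustrative choice): the nine basis products can be collected into a Gram matrix, and the expanded sum of terms becomes a matrix product.

```python
import numpy as np

# Hypothetical non-orthonormal basis of R^2 (columns are basis vectors):
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # e1' = (1, 0), e2' = (1, 1)

# Gram matrix: G[i, j] = e_i' . e_j' = |e_i'| |e_j'| cos(angle between)
G = B.T @ B

# Components of two vectors in this basis:
a = np.array([2.0, 1.0])   # a = 2 e1' + 1 e2'
b = np.array([0.0, 3.0])   # b = 3 e2'

# Dot product via the expanded basis-product terms, a_i b_j (e_i' . e_j'):
dot_via_gram = a @ G @ b

# Same vectors in Cartesian coordinates give the familiar orthonormal formula:
dot_cartesian = (B @ a) @ (B @ b)

print(dot_via_gram, dot_cartesian)  # the two agree
```

The physical answer is the same either way, as it must be; only the coefficients in the expansion change with the basis.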