Here are two ways to derive the formula for the dot product. I assume that $v_1$ and $v_2$ are vectors with spherical coordinates $(r_1, \varphi_1, \theta_1)$ and $(r_2, \varphi_2, \theta_2)$.
First way: Let us convert these spherical coordinates to Cartesian ones. For the first point we get Cartesian coordinates $(x_1, y_1, z_1)$ like this:
$$
\begin{array}{rcl}
x_1 & = & r_1 \sin \varphi_1 \cos \theta_1, \\
y_1 & = & r_1 \sin \varphi_1 \sin \theta_1, \\
z_1 & = & r_1 \cos \varphi_1.
\end{array}
$$
Similar formulas hold for $(x_2, y_2, z_2)$. Now, the dot product is simply equal to
$$
\begin{aligned}
(v_1, v_2) &= x_1 x_2 + y_1 y_2 + z_1 z_2 \\
&= r_1 r_2 \left( \sin \varphi_1 \sin \varphi_2 ( \cos \theta_1 \cos \theta_2 + \sin \theta_1 \sin \theta_2) + \cos \varphi_1 \cos \varphi_2 \right) \\
&= r_1 r_2 \left( \sin \varphi_1 \sin \varphi_2 \cos (\theta_1 - \theta_2) + \cos \varphi_1 \cos \varphi_2 \right).
\end{aligned}
$$
Second way: Actually, we could have done it without coordinate conversions at all. Indeed, we know that $(v_1, v_2) = r_1 r_2 \cos \alpha$, where $\alpha$ is the angle between $v_1$ and $v_2$. But $\cos \alpha$ can be immediately found by the Spherical law of cosines, which yields exactly the same formula that we just proved. Basically, our first way is itself a proof for the spherical law of cosines.
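As a quick numerical sanity check, here is a short sketch (the helper names are my own) that compares the Cartesian dot product against the spherical formula derived above:

```python
import numpy as np

def to_cartesian(r, phi, theta):
    # phi is the polar angle from the z-axis, theta the azimuth, as in the text
    return np.array([r * np.sin(phi) * np.cos(theta),
                     r * np.sin(phi) * np.sin(theta),
                     r * np.cos(phi)])

def dot_spherical(r1, phi1, theta1, r2, phi2, theta2):
    # the formula derived above
    return r1 * r2 * (np.sin(phi1) * np.sin(phi2) * np.cos(theta1 - theta2)
                      + np.cos(phi1) * np.cos(phi2))

r1, phi1, theta1 = 2.0, 0.7, 1.1
r2, phi2, theta2 = 3.0, 1.9, -0.4
lhs = to_cartesian(r1, phi1, theta1) @ to_cartesian(r2, phi2, theta2)
rhs = dot_spherical(r1, phi1, theta1, r2, phi2, theta2)
print(abs(lhs - rhs) < 1e-12)  # True
```

The two values agree to machine precision for any choice of angles, since the second and third lines of the derivation are just trig identities.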
PS: I'm not saying anything about cross products, but my guess is that the correct formula will look terrible. Not only will it contain sines and cosines, it is likely that it will also contain arc functions (they will appear when we try to convert the result back to spherical coordinates). Unless those arc functions magically cancel out with all the sines and cosines. But it is highly unlikely, and I don't feel like going through the trouble of checking.
PPS: One more thing. Cross products are not the only scary thing about spherical coordinates. If you think about it, even addition of two vectors is extremely unpleasant in spherical coordinates. Multiplication by a number is alright though, because it only changes $r$ and doesn't affect $\varphi$ and $\theta$ (at least when we multiply by a positive number).
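To see how unpleasant addition is, note that the only practical route is a round trip through Cartesian coordinates, and the arc functions show up immediately in the inverse conversion. A minimal sketch (helper names are my own):

```python
import numpy as np

def to_cartesian(r, phi, theta):
    return np.array([r * np.sin(phi) * np.cos(theta),
                     r * np.sin(phi) * np.sin(theta),
                     r * np.cos(phi)])

def to_spherical(v):
    # inverse conversion: the arc functions appear here
    x, y, z = v
    r = np.linalg.norm(v)
    return r, np.arccos(z / r), np.arctan2(y, x)

def add_spherical(p1, p2):
    # no direct formula in spherical coordinates: round-trip through Cartesian
    return to_spherical(to_cartesian(*p1) + to_cartesian(*p2))

r, phi, theta = add_spherical((2.0, 0.7, 1.1), (3.0, 1.9, -0.4))
```

Multiplication by a positive scalar, by contrast, is just `(c * r, phi, theta)`, with no conversions at all.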
You can define a tensor that agrees with the Levi-Civita symbol in orthonormal coordinate systems but has the correct components in non-orthonormal systems.
This, and other results, can be derived in the setting of clifford algebra.
Clifford algebra deals with a "quotient" of the tensor algebra--an interesting subset, if you will, of tensors that correspond to vectors, planes, and other objects that can often be interpreted geometrically.
To facilitate this, clifford algebra introduces a "geometric product" of vectors, which has the following laws:
- If two vectors $a, b$ are orthogonal, then $ab = -ba$ under the product.
- The product of a vector with itself is a scalar, i.e. $aa = |a|^2$.
- The product is associative: $(ab)c = a(bc)$ for all vectors $a, b, c$.
- The product is distributive over addition: $a(b+c) = ab + ac$.
From this definition, we can build up various objects that are not vectors but are produced from products of vectors under the geometric product.
With the geometric product in place, consider two vectors $a, b$, and write $b = b_\parallel + b_\perp$, where $b_\parallel$ and $b_\perp$ are the parts of $b$ parallel and perpendicular to $a$, respectively. We can then write the product $ab$ as
$$ab = a b_\parallel + a b_\perp$$
The first term, $a b_\parallel$, is a scalar: $b_\parallel = \alpha a$ for some scalar $\alpha$, so $a b_\parallel = \alpha aa = \alpha |a|^2$ by rule 2.
The second term cannot be reduced, but we know from rule 1 that it anticommutes: $a b_\perp = -b_\perp a$. This is just like the cross product.
Indeed, if you write out this product with components, you get the following:
$$ab = (a^1 b^1 + a^2 b^2 + a^3 b^3) + (a^1 b^2 - a^2 b^1) e_1 e_2 + \ldots = a \cdot b + \frac{1}{2} (a^i b^j - a^j b^i) e_i e_j$$
(summation implied). The latter term is called a bivector and is traditionally denoted $a \wedge b$.
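A quick numerical check of those bivector components: they are exactly the components of the familiar cross product, just reattached to planes ($e_1 e_2$, $e_2 e_3$, $e_3 e_1$) instead of axes ($e_3$, $e_1$, $e_2$). A small sketch with concrete vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 4.0])

# bivector components of a ∧ b in the basis e1 e2, e2 e3, e3 e1
wedge = np.array([a[0] * b[1] - a[1] * b[0],   # e1 e2 component
                  a[1] * b[2] - a[2] * b[1],   # e2 e3 component
                  a[2] * b[0] - a[0] * b[2]])  # e3 e1 component

# (a x b)_3 = (a∧b)_{12}, (a x b)_1 = (a∧b)_{23}, (a x b)_2 = (a∧b)_{31}
c = np.cross(a, b)
print(np.allclose(wedge, [c[2], c[0], c[1]]))  # True
```

This reordering is precisely the duality that multiplication by $\epsilon$ implements below.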
You might have noticed now that we have at least three different kinds of objects: vectors, scalars, and bivectors. In clifford algebra, we number these objects by the number of vectors needed to form them, and we call this number the grade of the object. Scalars are grade-0, vectors grade-1, and bivectors grade-2. In 3d, you can also form a grade-3 object, a trivector. One choice might be $\epsilon = e_1 e_2 e_3$.
Now, what happens when you multiply a bivector with $\epsilon$?
First, the result must be a vector. Each bivector can be written as a linear combination of $e_1 e_2, e_2 e_3, e_3 e_1$, and $\epsilon$ has all of those in it. For example, $e_1 e_2 \epsilon = e_1 e_2 e_1 e_2 e_3 = -e_3$ (swap adjacent factors with rule 1, picking up a sign each time, and annihilate repeated vectors with rule 2). The same holds for all other terms.
By convention, then, we can define a product
$$a \times b = -\epsilon (a \wedge b)$$
This coincides with the usual definition of the cross product. You can verify this term by term if you like; it's not that interesting to do algebraically, but geometrically, one comes to understand that multiplication by $\epsilon$ produces orthogonal complements of subspaces: a vector goes to its complementary plane, a plane to its normal vector, and so on. That is why I called this 3-vector $\epsilon$, as well: its components are those of the correct Levi-Civita tensor (not the symbol), which has different components in non-orthonormal coordinate systems. And this is exactly what is meant in differential forms parlance when one uses the Hodge star operator.
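The whole construction fits in a few lines of code. Here is a minimal sketch of a 3d Clifford algebra over an orthonormal basis (my own representation: a multivector is a dict mapping sorted index tuples to coefficients), used to check that $-\epsilon (a \wedge b)$ really reproduces the usual cross product:

```python
from itertools import product

def blade_mul(x, y):
    # multiply two orthonormal basis blades, e.g. (1, 2) * (1,) -> (sign, blade)
    idx, sign, i = list(x + y), 1, 0
    while i < len(idx) - 1:
        if idx[i] > idx[i + 1]:
            # rule 1: orthogonal basis vectors anticommute
            idx[i], idx[i + 1] = idx[i + 1], idx[i]
            sign, i = -sign, max(i - 1, 0)
        elif idx[i] == idx[i + 1]:
            # rule 2: e_i e_i = 1
            del idx[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(idx)

def gp(u, v):
    # geometric product of multivectors given as {blade: coefficient} dicts
    out = {}
    for (bx, cx), (by, cy) in product(u.items(), v.items()):
        s, b = blade_mul(bx, by)
        out[b] = out.get(b, 0) + s * cx * cy
    return {b: c for b, c in out.items() if c != 0}

def vec(a1, a2, a3):
    return {(1,): a1, (2,): a2, (3,): a3}

a, b = vec(1, 2, 3), vec(-1, 0.5, 4)
ab, ba = gp(a, b), gp(b, a)
# a ∧ b is the antisymmetric (grade-2) part of the geometric product
wedge = {k: (ab.get(k, 0) - ba.get(k, 0)) / 2 for k in set(ab) | set(ba)}
cross = gp({(1, 2, 3): -1}, wedge)   # a x b = -epsilon (a ∧ b)
print(cross)  # {(1,): 6.5, (2,): -7.0, (3,): 2.5}, i.e. the usual cross product
```

The result agrees component by component with `np.cross([1, 2, 3], [-1, 0.5, 4])`, which is `[6.5, -7.0, 2.5]`.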
Outside of 3d, the dual of a bivector is no longer a vector (4d: a bivector has another bivector totally complementary to it), and so the cross product as we typically imagine it no longer makes sense.
Yes, it is correct: adding $\vec b$ to both sides gives $(\vec a-\vec b)+\vec b=\vec 0+\vec b$, therefore
$$\vec a-\vec b=\vec 0 \implies \vec a=\vec b.$$