This thread on physicsforums elaborates a bit on the difference between Levi-Civita symbols and tensors. Based on that, I conclude...
1) Your index notation formula for the magnetic field should use the Levi-Civita tensor, then. The "symbol" is a convenient thing, but this expression must be written with tensors.
2) Carroll likely made a mistake and meant to talk about the Levi-Civita tensor's transformation properties.
3) Any expression where the same index appears on the bottom twice (or the top twice) is just laziness on the part of the author. It's a common laziness, though, especially in contexts where one doesn't distinguish between covariant and contravariant components.
Actually, I'm not sure what your question is here.
4) Again, I would say that this expression should use the tensor, not the symbol.
5) I see no reason why you wouldn't.
Just some general remarks on the Levi-Civita tensor/symbol and what they represent: a flat space has a unique "volume form" or "pseudoscalar". There is a unit volume element that all other volumes are scalar multiples of. This volume has an orientation (think of a volume spanned by vectors according to the right-hand rule vs. a left-hand rule).
The Levi-Civita tensor and symbol are related to this notion. The tensor represents the components of the unit volume element with respect to volume elements built by combinations of basis vectors.
The volume element can be used to perform duality operations. This is the foundation of the Hodge star operator. Using the $N$-dimensional Levi-Civita tensor on a tensor object with $k$ free indices yields a new object with $N-k$ free indices. This can be described geometrically, too. In 3d, scalars would go to volumes, vectors to planes, planes to vectors, and volumes to scalars under this duality operation. It is the explicit mapping of planes to vectors that is so often performed with duality--it allows us to cheat in 3d by dealing with only scalars and vectors. Planes and volumes can then be mapped back to vectors and scalars by duality. This is exactly what is done with the magnetic field and angular momentum. You should see clearly that both of these vectors, if not for the use of the Levi-Civita tensor, would be expressions with 2 free indices, and antisymmetric on those indices. These objects are called bivectors.
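That last mapping is easy to see concretely. Here is a small numpy sketch (my own illustration, not from the original discussion) that builds a bivector from two vectors and contracts it with the Levi-Civita symbol; the dual vector that comes out is just the familiar cross product.

```python
import numpy as np
from itertools import permutations

# Build the 3D Levi-Civita symbol: +1 on even permutations of (0,1,2),
# -1 on odd permutations, 0 otherwise.
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    sign = (-1) ** sum(p[a] > p[b] for a in range(3) for b in range(a + 1, 3))
    eps[p] = sign

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The bivector (plane) spanned by a and b: antisymmetric, 2 free indices.
M = np.outer(a, b) - np.outer(b, a)

# Duality: contract with the Levi-Civita symbol to trade the plane
# for a single vector.
v = 0.5 * np.einsum('ijk,jk->i', eps, M)

# The dual of the plane is exactly the cross product a x b.
assert np.allclose(v, np.cross(a, b))
```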
The Levi-Civita tensor and symbol are often used in physics and math to treat expressions through duality rather than directly--even when, in my opinion, this obscures the real physics of the problem or covers up for a shortcoming of the notation. Just the other day around here we had a question about building up 4-volumes from a single plane. Geometrically, this is obvious--you can't build a 4-volume from a single plane. But in index notation, it was cumbersome at best, involving finding the dual plane through use of the Levi-Civita tensor and taking traces.
Overall, the Levi-Civita tensor and its many indices can be difficult to work with, especially in arbitrary coordinate systems. I once heard a professor bemoan that all the identities another professor had taught students with Levi-Civita had used only the symbol--i.e. the tensor in Cartesian coordinates--and so they weren't valid in arbitrary coordinate systems. The suggested solution was to teach students about tensor densities, which was met with skepticism at best, since there were only three professors in the whole department who, in the other professor's view, either cared about or even knew about tensor densities. I think this attitude is part of why the Levi-Civita symbol is often used instead: it's just easier to prove some things in Cartesian coordinates, even if the resulting expression is not really correct (not really a tensor because the metric determinant has been ignored, etc.).
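To make the "not really a tensor" point concrete, here is a sketch (my addition) that transforms the symbol's components as if it were an ordinary (0,3) tensor under a change of basis $A$. What comes back is $\det(A)$ times the symbol, not the symbol itself--the hallmark of a tensor density, and exactly the factor the metric determinant absorbs in the true Levi-Civita tensor.

```python
import numpy as np
from itertools import permutations

# Build the 3D Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    sign = (-1) ** sum(p[a] > p[b] for a in range(3) for b in range(a + 1, 3))
    eps[p] = sign

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))   # an arbitrary linear change of basis

# Transform with three copies of A, as a genuine (0,3) tensor would:
# eps'_{ijk} = A^l_i A^m_j A^n_k eps_{lmn}
transformed = np.einsum('li,mj,nk,lmn->ijk', A, A, A, eps)

# The result is det(A) * eps, not eps: the symbol is a density, not a tensor.
assert np.allclose(transformed, np.linalg.det(A) * eps)
```

This is just the component form of the definition of the determinant, which is why the symbol's components happen to look invariant only when $\det(A) = 1$.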
Don't use indices. Use a more sophisticated and efficient notation for tensor manipulations: Clifford algebra.
Let $\gamma^0, \gamma^1, \gamma^2, \gamma^3$ be basis vectors. Under the Clifford product operation, they obey the following:
$$\gamma^\mu \gamma^\nu = \begin{cases}\eta^{\mu \nu} & \mu = \nu \\ -\gamma^\nu \gamma^\mu & \mu \neq \nu\end{cases}$$
The product operation is also associative.
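These two cases can be checked concretely: the Dirac gamma matrices (a standard matrix representation, added here purely as an illustration--nothing below depends on it) realize exactly this algebra with $\eta = \mathrm{diag}(+1,-1,-1,-1)$.

```python
import numpy as np

# Dirac representation: g[0] is gamma^0, g[1..3] are the spatial gammas.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g = [np.block([[I2, Z], [Z, -I2]])] + \
    [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        if mu == nu:
            # gamma^mu gamma^mu = eta^{mu mu} (times the identity matrix)
            assert np.allclose(g[mu] @ g[nu], eta[mu, nu] * np.eye(4))
        else:
            # distinct basis vectors anticommute
            assert np.allclose(g[mu] @ g[nu], -g[nu] @ g[mu])
```

Matrix multiplication supplies the associativity for free, which is the property used repeatedly below.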
Write $E$ and $B$ in terms of $\gamma^1, \gamma^2, \gamma^3$ as you would write any vectors: $E = E_x \gamma^1 + \ldots$ for instance. Then $F$ can be written
$$F = \gamma^0 E - \gamma^1 \gamma^2 \gamma^3 B$$
Define $\epsilon = -\gamma^0 \gamma^1 \gamma^2 \gamma^3$. Then the tensor expression $\epsilon^{\alpha \beta \gamma \delta} F_{\alpha \beta} F_{\gamma \delta}$ can be written
$$\epsilon^{\alpha \beta \gamma \delta} F_{\alpha \beta} F_{\gamma \delta} = \langle \epsilon FF\rangle_0$$
The brackets denote that we're looking for the scalar part of this expression. This is equivalent to $\epsilon \langle FF \rangle_4 = \epsilon (F \wedge F)$ in the common shorthand.
Now we can use the associativity of the product to our advantage. For brevity, let $i = \gamma^1 \gamma^2 \gamma^3$, so that $\gamma^0 i = -\epsilon$, and see that
$$\langle \epsilon FF \rangle_0 = \langle \epsilon (\gamma^0 E - iB)(\gamma^0 E - iB) \rangle_0$$
Multiply out the terms:
$$\langle \epsilon FF \rangle_0 = \langle \epsilon (\gamma^0 E \gamma^0 E -\gamma^0 E iB - iB \gamma^0 E + iBiB )\rangle_0$$
Use the properties of the product to deduce that $E\gamma^0 = -\gamma^0 E$, $\gamma^0 B = -B \gamma^0$, $Ei = iE$ and $iB = Bi$, and $ii = 1$. The result reduces to
$$\langle \epsilon FF \rangle_0 = \langle \epsilon (-EE + 2 \epsilon EB + BB) \rangle_0$$
Finally, deduce that $\epsilon \epsilon = -1$, $EE = -|E|^2$ and $BB = -|B|^2$ to get
$$\langle \epsilon FF \rangle_0 = \langle \epsilon (|E|^2 - |B|^2) + 2 E \cdot B \rangle_0$$
The only term that survives the angled brackets is the $2 E \cdot B$ term. Of course, staring us right in the face is the other well-known invariant, $|E|^2 - |B|^2$. That is one advantage of this method: both invariants drop out of the same calculation, at the same time. Moreover, we never have to break into components to evaluate the expression. We merely use the commutation properties of basis elements in the Clifford algebra--an algebra already widely used in quantum mechanics.
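If you'd rather not trust the sign bookkeeping, every commutation rule used above--and the final scalar--can be verified numerically. The sketch below (my addition) models the Clifford algebra with the Dirac matrices; in a matrix representation the scalar part of an element $X$ is $\operatorname{Re}\operatorname{Tr}(X)/4$.

```python
import numpy as np

# Dirac matrices: a concrete model of Cl(1,3), eta = diag(+1,-1,-1,-1).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g0 = np.block([[I2, Z], [Z, -I2]])
g1, g2, g3 = (np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3))

rng = np.random.default_rng(0)
Ev, Bv = rng.normal(size=3), rng.normal(size=3)   # arbitrary field components
E = Ev[0]*g1 + Ev[1]*g2 + Ev[2]*g3
B = Bv[0]*g1 + Bv[1]*g2 + Bv[2]*g3
i = g1 @ g2 @ g3
eps = -g0 @ g1 @ g2 @ g3           # epsilon as defined in the text
F = g0 @ E - i @ B                 # F = gamma^0 E - gamma^1 gamma^2 gamma^3 B

I4 = np.eye(4)
assert np.allclose(E @ g0, -g0 @ E)               # E gamma^0 = -gamma^0 E
assert np.allclose(g0 @ B, -B @ g0)               # gamma^0 B = -B gamma^0
assert np.allclose(E @ i, i @ E)                  # Ei = iE
assert np.allclose(i @ i, I4)                     # ii = 1
assert np.allclose(eps @ eps, -I4)                # epsilon epsilon = -1
assert np.allclose(E @ E, -np.dot(Ev, Ev) * I4)   # EE = -|E|^2
assert np.allclose(B @ B, -np.dot(Bv, Bv) * I4)   # BB = -|B|^2

# Scalar part of eps F F: the surviving term is 2 E.B.
scalar = np.trace(eps @ F @ F).real / 4
assert np.isclose(scalar, 2 * np.dot(Ev, Bv))
```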
Best Answer
One way to see this is to consider the fact that the vector space of type (3,3) completely antisymmetric tensors, $\Lambda_3^3(\mathbb{R}^3)$, has dimension one (a short linear algebra exercise). Then define the tensor
$$ M_{ijk}^{lmn} = \delta_i^{[l} \, \delta_j^m \, \delta_k^{n]} = \frac{1}{3!} \sum_{\sigma \in S_3} \operatorname{sgn}(\sigma) \, \delta_i^{\sigma(l)} \, \delta_j^{\sigma(m)} \, \delta_k^{\sigma(n)} $$
where the sum runs over all permutations $\sigma$ of three elements and $\operatorname{sgn}(\sigma)$ denotes the sign of the permutation. It's worth noting that
$$ M_{ijk}^{lmn}=\frac{1}{3!}\begin{vmatrix} \delta_i^l & \delta_i^m & \delta_i^n\\ \delta_j^l & \delta_j^m & \delta_j^n \\ \delta_k^l & \delta_k^m & \delta_k^n \end{vmatrix} $$
by the Leibniz formula for the determinant (http://en.wikipedia.org/wiki/Leibniz_formula_for_determinants). So $M \in \Lambda_3^3(\mathbb{R}^3)$, and since $M \neq 0$, $M$ is a basis for $\Lambda_3^3(\mathbb{R}^3)$.

Now consider the tensor
$$ \epsilon_{ijk} \, \epsilon^{lmn} = B_{ijk}^{lmn} $$
Since $B \in \Lambda_3^3(\mathbb{R}^3)$ and $M$ is a basis, there exists a constant $c$ such that
$$ B_{ijk}^{lmn} = c \, M_{ijk}^{lmn} \implies \epsilon_{ijk} \, \epsilon^{lmn} = c \, \delta_i^{[l} \, \delta_j^m \, \delta_k^{n]} $$
To determine $c$, contract with $\epsilon_{lmn}$ on both sides and use the fact that $\epsilon_{lmn} \, \epsilon^{lmn} = 3!$ (you are summing $3!$ terms, each equal to one):
$$ \epsilon_{ijk} \, \epsilon^{lmn} \, \epsilon_{lmn} = 3! \, \epsilon_{ijk} = c \, \delta_i^{[l} \, \delta_j^m \, \delta_k^{n]} \epsilon_{lmn} = c \, \delta_i^{l} \, \delta_j^m \, \delta_k^{n} \, \epsilon_{[lmn]} = c \, \delta_i^{l} \, \delta_j^m \, \delta_k^{n} \, \epsilon_{lmn} = c \, \epsilon_{ijk} $$
So $c = 3!$, the factor of $1/3!$ in $M$ cancels, and
$$ \epsilon_{ijk} \, \epsilon^{lmn}=\begin{vmatrix} \delta_i^l & \delta_i^m & \delta_i^n\\ \delta_j^l & \delta_j^m & \delta_j^n \\ \delta_k^l & \delta_k^m & \delta_k^n \end{vmatrix} $$
And you can get the identity you want by contracting.
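The determinant identity, and the contracted form it yields, are also easy to confirm by brute force over all index values. A short check (my addition):

```python
import numpy as np
from itertools import permutations

# Build the 3D Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    sign = (-1) ** sum(p[a] > p[b] for a in range(3) for b in range(a + 1, 3))
    eps[p] = sign

delta = np.eye(3)

# eps_{ijk} eps^{lmn} equals the 3x3 determinant of Kronecker deltas,
# checked over all 3^6 index combinations.
for i, j, k, l, m, n in np.ndindex(3, 3, 3, 3, 3, 3):
    det = np.linalg.det(np.array([
        [delta[i, l], delta[i, m], delta[i, n]],
        [delta[j, l], delta[j, m], delta[j, n]],
        [delta[k, l], delta[k, m], delta[k, n]],
    ]))
    assert np.isclose(eps[i, j, k] * eps[l, m, n], det)

# Contracting one index pair gives the familiar identity
# eps_{ijk} eps^{imn} = delta_j^m delta_k^n - delta_j^n delta_k^m.
lhs = np.einsum('ijk,imn->jkmn', eps, eps)
rhs = (np.einsum('jm,kn->jkmn', delta, delta)
       - np.einsum('jn,km->jkmn', delta, delta))
assert np.allclose(lhs, rhs)
```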
The same argument could be used in any dimension to show that $$ \epsilon_{i_1,\dots,i_n} \, \epsilon^{j_1,\dots,j_n} = \begin{vmatrix} \delta_{i_1}^{j_1} & \dots & \delta_{i_1}^{j_n}\\ \vdots & & \vdots \\ \delta_{i_n}^{j_1} & \dots & \delta_{i_n}^{j_n} \end{vmatrix} $$