First I'd like to share my understanding of abstract index notation. I think this understanding is simpler and more intuitive than Penrose and Rindler's original definition. Your question will be answered later with an example.
Abstract index notation is merely a labelling of the "slots" of the tensor. For example, $T_{ab}{}^c$ is just an abbreviation of
$$T(-_a, -_b, -^c).$$
Each "slot" is a parameter as the tensor is viewed as a multilinear map $T:V\times V \times V^* \to \mathbb R$.
You may already be familiar with the slot-labelling interpretation. But what exactly does "labelling" a slot mean? Here is my understanding: it means we can fill a specific slot with a vector (or dual vector) by specifying the label of that slot. For example, if we fill the slot labelled $a$ with a vector $u$, the slot labelled $b$ with a vector $v$, and the slot labelled $c$ with a dual vector $\omega$, we get $T(u, v, \omega)$, that is
$$
T(-_a, -_b, -^c)(\{a=u, \; b=v, \; c=\omega\}) = T(u, v, \omega).
$$
Note that a set-like notation $\{a=u, \; b=v, \; c=\omega\}$ is used here, meaning that the order is irrelevant: $\{b=v, \; a=u, \; c=\omega\}$ and $\{a=u, \; b=v, \; c=\omega\}$ are the same.
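As a sanity check, here is a toy numerical model of filling labelled slots, written with numpy; the name `fill_slots` and the dict-of-labels interface are purely illustrative, not standard notation.

```python
import numpy as np

# Toy model of "labelled slots" for a (0,2)-tensor on R^2: the tensor is a
# bilinear map T(u, v) = u^T M v, and we fill its slots by label, not position.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

def fill_slots(slots):
    """slots: a dict mapping the slot labels 'a' and 'b' to vectors."""
    return slots['a'] @ M @ slots['b']

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# The order in which the labels are listed is irrelevant:
r1 = fill_slots({'a': u, 'b': v})
r2 = fill_slots({'b': v, 'a': u})
assert r1 == r2 == 2.0   # both give T(u, v) = M[0, 1]
```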
There are two observations from this definition of filling slots.
Observation 1. The position order of the slots is significant:
$$S_{ab} \neq S_{ba}$$
in the sense that
$$
S(-_a, -_b)(\{a=u, \; b=v\}) = S(u, v) \neq S(v, u) = S(-_b, -_a)(\{a=u, \; b=v\}).
$$
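A minimal numerical illustration of this observation, taking components in a fixed basis of $\mathbb R^2$:

```python
import numpy as np

S = np.array([[0.0, 1.0],
              [0.0, 0.0]])       # non-symmetric bilinear form S(u, v) = u^T S v
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

s_ab = u @ S @ v   # S(-_a, -_b) filled with {a=u, b=v}: u goes into the first slot
s_ba = v @ S @ u   # S(-_b, -_a) filled with {a=u, b=v}: u goes into the second slot
assert s_ab == 1.0 and s_ba == 0.0   # so S_{ab} != S_{ba}
```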
Observation 2. An index letter can be substituted with any other Latin letter, since it's just a label for the slot. For example, $T_{ab} = S_{ab}$ implies $T_{cd} = S_{cd}$, because $T_{ab} = S_{ab}$ means that for any vectors $u$ and $v$
$$
T(-_a, -_b)(\{a=u, \; b=v\}) = S(-_a, -_b)(\{a=u, \; b=v\}),
$$
that is
$$
T(u, v) = S(u, v).
$$
And $T_{cd}=S_{cd}$ is equivalent to $T(u, v) = S(u, v)$ too. Note that this index substitution is different from the index reordering in observation 1: substitution must be applied to both sides of an equation. We can't exchange $a$ and $b$ on only one side of the trivial equation $S_{ab}=S_{ab}$ to get $S_{ab}=S_{ba}$.
Now we can use abstract index notation to denote the tensor product and the contraction operation in a coordinate-free way. For example, $U_{abcd} = T_{ab}S_{cd}$ denotes the tensor product
$$
U(-_a, -_b, -_c, -_d) = T(-_a, -_b) \cdot S(-_c, -_d).
$$
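This can be checked numerically with numpy's `einsum`; the component arrays stand for the tensors in an arbitrary fixed basis:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2))              # components of a (0,2)-tensor T_{ab}
S = rng.standard_normal((2, 2))              # components of a (0,2)-tensor S_{cd}
U = np.einsum('ab,cd->abcd', T, S)           # components of U_{abcd} = T_{ab} S_{cd}

u, v, w, x = rng.standard_normal((4, 2))     # four arbitrary vectors
lhs = np.einsum('abcd,a,b,c,d->', U, u, v, w, x)                 # U(u, v, w, x)
rhs = np.einsum('ab,a,b->', T, u, v) * np.einsum('cd,c,d->', S, w, x)
assert np.isclose(lhs, rhs)                  # U(u, v, w, x) = T(u, v) * S(w, x)
```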
And $T_{ae}{}^{ed}$ denotes the contraction of $T_{ab}{}^{cd}$ with respect to slots $b$ and $c$ (the sum below runs over a coordinate basis, and the result is independent of the basis chosen):
$$
T_{ae}{}^{ed} = C_b{}^c(T_{ab}{}^{cd}) =
\sum_{\sigma}T(-_a, \frac{\partial}{\partial x^\sigma}, \mathrm dx^\sigma, -^d).
$$
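A numerical sketch of this contraction, with components taken in the coordinate basis (so that $\partial/\partial x^\sigma$ and $\mathrm dx^\sigma$ become standard basis vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
T = rng.standard_normal((n, n, n, n))   # components T_{ab}^{cd} in a coordinate basis

# Contraction of slot b with slot c: fill slot b with d/dx^s and slot c with
# dx^s (standard basis vectors, in components) and sum over s.
C_explicit = sum(T[:, s, s, :] for s in range(n))
C_einsum = np.einsum('assd->ad', T)     # the same contraction in a single call
assert np.allclose(C_explicit, C_einsum)
```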
Another important operation is (partial) application, but since it's equivalent to a tensor product followed by a contraction, there is no need to introduce a new notation. For example, applying a vector $u$ to the slot $a$ of $T_{ab}$ gives
$$
T(u, -_b) = T_{ab}u^a,
$$
where $T_{ab}u^a$ is a tensor product of $T$ and $u$ followed by a contraction: $C_a{}^c(T(-_a, -_b)\cdot u(-^c))$. The result has one free slot left, so it's a $(0, 1)$ tensor. That is, a $(0, 2)$ tensor partially applied with a vector is a $(0, 1)$ tensor.
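The claim that partial application equals a tensor product followed by a contraction can be verified componentwise with numpy:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))     # components of a (0,2)-tensor T_{ab}
u = rng.standard_normal(3)          # components of a vector u^a

prod = np.einsum('ab,c->abc', T, u)          # tensor product T_{ab} u^c
via_contraction = np.einsum('aba->b', prod)  # contract slot a with slot c
direct = u @ T                               # T(u, -_b) = T_{ab} u^a, computed directly
assert np.allclose(via_contraction, direct)
assert via_contraction.shape == (3,)         # one free slot left: a (0,1)-tensor
```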
Example
Consider the following problem: suppose
$$T_{abc} = \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c,$$
where $\sigma$, $\mu$, and $\nu$ are three concrete indices (fixed numbers), then what is $T_{abc} + T_{bca}$?
This example more or less answers your question: what is the practical difference between abstract index notation and “ordinary” index notation. Abstract index notation is easier to read and understand especially when abstract indices and concrete indices are mixed.
In abstract index notation, there is a convention that abstract indices use Latin letters while concrete indices use Greek letters. So in this example we can easily see that $a$, $b$, and $c$ are abstract indices while $\sigma$, $\mu$, and $\nu$ are concrete indices.
The notation $\mathrm dx^\sigma_a$ is not very common, but it makes sense: the dual vector $\mathrm dx^\sigma$ is naturally a function that can act on a vector. The equation $T_{abc} = \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c$, which is just an abbreviation of
$$
T(-_a, -_b, -_c) = \mathrm dx^\sigma(-_a) \cdot \mathrm dx^\mu(-_b) \cdot \mathrm dx^\nu(-_c)
$$
means that when slot $a$ is filled with a vector $u$, $\mathrm dx^\sigma$ acts on that $u$.
To solve the problem, we use index substitution to get $T_{bca} = \mathrm dx^\sigma_b \mathrm dx^\mu_c \mathrm dx^\nu_a$. So
$$
\begin{aligned}
T_{abc} + T_{bca} &= \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c +
\mathrm dx^\sigma_b \mathrm dx^\mu_c \mathrm dx^\nu_a \\
&= \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c +
\mathrm dx^\nu_a \mathrm dx^\sigma_b \mathrm dx^\mu_c \\
&= (\mathrm dx^\sigma \otimes \mathrm dx^\mu \otimes \mathrm dx^\nu + \mathrm dx^\nu \otimes \mathrm dx^\sigma \otimes \mathrm dx^\mu)(-_a, -_b, -_c).
\end{aligned}
$$
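A componentwise check of this computation, taking $\sigma, \mu, \nu = 0, 1, 2$ in $\mathbb R^3$ (the choice is arbitrary):

```python
import numpy as np

n = 3
e = np.eye(n)               # row e[k] holds the components of dx^k
sig, mu, nu = 0, 1, 2       # an arbitrary choice of the three concrete indices

T = np.einsum('a,b,c->abc', e[sig], e[mu], e[nu])      # T_{abc} = dx^s_a dx^m_b dx^n_c
T_bca = np.einsum('b,c,a->abc', e[sig], e[mu], e[nu])  # T_{bca}: slots relabelled

expected = (np.einsum('a,b,c->abc', e[sig], e[mu], e[nu])
            + np.einsum('a,b,c->abc', e[nu], e[sig], e[mu]))
assert np.allclose(T + T_bca, expected)   # dx^s⊗dx^m⊗dx^n + dx^n⊗dx^s⊗dx^m
```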
This is quite straightforward. On the other hand, if you use concrete index notation to solve this problem, first you need to figure out that the components of $T$ are all zero except
$$T_{\xi\eta\zeta} = 1, \text{when}\; \xi=\sigma, \eta=\mu, \zeta=\nu.$$
In other words, $T_{\sigma\mu\nu}=1$. But what is $T_{bca}$ then? Is it $T_{\mu\nu\sigma}$? No. You need to define another tensor $S_{\xi\eta\zeta}=T_{\eta\zeta\xi}$ and figure out that its components are all zero except
$$
S_{\xi\eta\zeta} = 1, \text{when}\; \xi=\nu, \eta=\sigma, \zeta=\mu.
$$
Then finally find the sum $T_{\xi\eta\zeta} + S_{\xi\eta\zeta}$. This procedure is quite complex and error-prone.
If you'd like to translate the equation
$$
T_{abc} = \mathrm dx^\sigma_a \mathrm dx^\mu_b \mathrm dx^\nu_c
$$
to concrete index notation
$$T_{\xi\eta\zeta} = \mathrm dx^\sigma_\xi \mathrm dx^\mu_\eta \mathrm dx^\nu_\zeta,
$$
it doesn't help much. Now $\mathrm dx^\sigma_\xi$ is a tensor whose components are all zero except
$$
\mathrm dx^\sigma_\xi = 1, \text{when}\; \xi=\sigma.
$$
You still need to worry about components, which is not natural. And six indices are mixed together, three of them fixed numbers, which is very confusing.
Best Answer
You need to differentiate between the components of a vector and the vector itself.
The $n$ coordinate vectors $\partial_j$ constitute a basis of the $n$-dimensional tangent space $T_p(M)$ at each point $p\in M$. These coordinate bases induced by the charts $(U,h)$ are sometimes referred to as the natural or canonical bases.
Relative to a given chart $(U,h)$ with coordinate functions $x^1,\dots, x^n$, any vector $v\in T_p(M)$ admits the representation $$v=v^i\frac{\partial }{\partial x^i}$$
When we say that $v^i$ is a contravariant vector we actually mean that the components $v^i$ transform like a contravariant vector at $p$ (and that $v\in T_p(M)$). The coefficients are uniquely determined by $$v^i=vx^i$$
For two overlapping charts the vector $v$ is represented by $v=v^i\frac{\partial}{\partial x^i}$ and $v=\bar{v}^j\frac{\partial}{\partial {\bar{x}}^j}$ respectively. With $v^i=v x^i$ and $\bar{v}^j=v\bar{x}^j$ we can make two important observations $$\bar{v}^j=v^{i}\frac{\partial \bar{x}^j}{\partial x^i}\tag{1}$$
$$v=\bar{v}^j\frac{\partial }{\partial \bar{x}^j}=v^h\frac{\partial\bar{x}^j}{\partial x^h}\frac{\partial}{\partial \bar{x}^j}=v^h\frac{\partial}{\partial x^h}\tag{2}$$
Now $(1)$ tells us that the components $v^i$ indeed transform like the components of a contravariant vector. Further, since $(2)$ is valid for arbitrary $v^i$, we conclude that
$$\frac{\partial }{\partial x^h}=\frac{\partial \bar{x}^j}{\partial x^h}\frac{\partial }{\partial \bar{x}^j}$$
So the basis elements for the (contravariant) tangent space itself transform like covariant vectors!
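A numerical check of this picture, using a linear change of coordinates $\bar x = Ax$ (so the Jacobian is the constant matrix $A$); the matrix $A$ and the test function $g$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# Linear change of coordinates xbar = A x, so d(xbar^j)/d(x^i) = A[j, i].
A = rng.standard_normal((2, 2)) + 3 * np.eye(2)   # invertible with high probability
v = rng.standard_normal(2)        # components v^i in the x-chart
vbar = A @ v                      # formula (1): vbar^j = v^i d(xbar^j)/d(x^i)

# A vector acts on functions as v f = v^i df/dx^i; the same number must come
# out in either chart.  Take a test function g written in the xbar-chart:
g = lambda w: w[0] ** 2 * w[1]
p = np.array([0.5, -1.2])
pbar = A @ p
grad_g_bar = np.array([2 * pbar[0] * pbar[1], pbar[0] ** 2])  # dg/dxbar^j at pbar
vf_bar = vbar @ grad_g_bar        # vbar^j dg/dxbar^j

h = lambda x: g(A @ x)            # the same function expressed in the x-chart
eps = 1e-6
vf_x = (h(p + eps * v) - h(p - eps * v)) / (2 * eps)   # v^i dh/dx^i numerically
assert np.isclose(vf_x, vf_bar, rtol=1e-5)
```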
I will not go into details on how to construct the dual basis $\{dx^k,\,k=1\dots n\}$ of $T^*_p(M)$ but it follows naturally from the definition of the unique element $df\in T^*_p(M)$ such that $\langle d f,v\rangle=vf$ that any $\omega\in T^*_p(M)$ can be expressed as
$$\omega=\omega_jdx^j$$
Notice again that $\omega_j$ are the components of a covariant tensor, while $dx^j$ are the basis (which transforms contravariantly). The dual tangent space $T^*_p(M)$ is called the cotangent space, and its elements $\omega$ are referred to as covectors or 1-forms.
Specifically, with $df=f_h\,dx^h$ we have $f_j=\langle df,\partial_j\rangle=\partial_j f$, so $df=\partial_j f\,dx^j$, which is consistent with the customary expression for the differential of a function.
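A quick numerical check that $\langle df, v\rangle = vf$, with an arbitrary test function $f(x,y)=x^2y$ and the directional derivative approximated by a central difference:

```python
import numpy as np

f = lambda x, y: x ** 2 * y
p = np.array([1.5, 2.0])                       # the point at which we work
df = np.array([2 * p[0] * p[1], p[0] ** 2])    # components f_j = d_j f, so df = f_j dx^j
v = np.array([0.3, -0.7])                      # components of a tangent vector

eps = 1e-6
vf = (f(*(p + eps * v)) - f(*(p - eps * v))) / (2 * eps)  # v f by central difference
assert np.isclose(df @ v, vf)                  # <df, v> = v f
```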