By convention, vectors are written as column vectors, whereas dual vectors are written as row vectors. This means that, in principle, upper indices should index rows and lower indices should index columns. However, in practice we normally translate rank-2 tensors to matrices by the order of their indices, the first one indexing rows, the second one indexing columns.
The only way I can think of to make this translation from tensors to matrices structurally well-defined (which I've never seen done in the literature) is to force all rank-2 tensors into the form $\cdot\;^\mu{}_\nu$, which can be achieved by contraction with appropriate 'Kronecker tensors', by which I mean rank-2 tensors whose components are 1 if the indices agree and 0 otherwise.
Let's call these tensors $\overline\delta^{\mu\nu}$ and $\underline\delta_{\mu\nu}$.
Then, the matrix product given in your question
$$
x^T\cdot\eta\cdot y
$$
would translate to
$$
\left(x^\mu\underline\delta_{\mu\nu}\right)\cdot\left(\overline\delta^{\nu\alpha}\,\eta_{\alpha\beta}\right)\cdot\left(y^\beta\right)
$$
The first term has a single free lower index (aka a row vector), the second term a free upper and lower index (aka a matrix) and the third one a free upper index (aka a column vector).
As all Kronecker tensors can be removed through index adjustment, this is equivalent to the far simpler expression
$$
x^\mu\,\eta_{\mu\beta}\,y^\beta
$$
As you can see, there is no special symbol for transposition in index notation; it is normally implied by which index is summed over. It could be made explicit by using the 'Kronecker tensors', but all you'd gain is unnecessary complexity.
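If you want to see the bookkeeping in action, here is a minimal NumPy sketch of the computation above; the signature $(+,-,-,-)$ and the sample vectors are arbitrary choices of mine, and the 'Kronecker tensors' are numerically just identity matrices:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
delta_up = np.eye(4)   # overline-delta^{mu nu}
delta_dn = np.eye(4)   # underline-delta_{mu nu}

x = np.array([1.0, 2.0, 3.0, 4.0])    # x^mu
y = np.array([0.5, -1.0, 2.0, 0.0])   # y^beta

# Matrix form: x^T . eta . y
matrix_form = x @ eta @ y

# Index form with the explicit 'Kronecker tensors':
# (x^mu delta_{mu nu}) (delta^{nu alpha} eta_{alpha beta}) (y^beta)
with_deltas = np.einsum('m,mn,na,ab,b->', x, delta_dn, delta_up, eta, y)

# The simpler, equivalent contraction x^mu eta_{mu beta} y^beta
simple = np.einsum('m,mb,b->', x, eta, y)

assert np.isclose(matrix_form, with_deltas) and np.isclose(matrix_form, simple)
```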
Now, after this round of useless musings, let's get back to something that actually is important when reading the literature:
Indices are lowered and raised by contraction with the metric tensor and its inverse. So, for example, given a tensor $A^\mu{}_\nu$, we define
$$
A_\mu{}^\nu \equiv A^\alpha{}_\beta\; \eta_{\alpha\mu}\; (\eta^{-1})^{\beta\nu}
$$
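Numerically, this index gymnastics is a pair of `einsum` contractions. A minimal sketch, assuming the first index labels rows and the second labels columns, signature $(+,-,-,-)$, and a random example matrix for $A$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)

rng = np.random.default_rng(0)
A_ud = rng.normal(size=(4, 4))   # components A^mu_nu, an arbitrary example

# A_mu^nu = A^alpha_beta  eta_{alpha mu}  (eta^{-1})^{beta nu}
A_du = np.einsum('ab,am,bn->mn', A_ud, eta, eta_inv)

# As a matrix identity (eta is symmetric) this is just eta @ A @ eta^{-1}:
assert np.allclose(A_du, eta @ A_ud @ eta_inv)
```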
For the metric tensor itself, we have
$$
(\eta^{-1})^{\mu\nu} = \eta^{\mu\nu}
$$
proven over here, and for Lorentz transformations
$$
(\Lambda^{-1})^\tau{}_\mu = \Lambda_\mu{}^\tau
$$
proven over here.
This is a special property of these specific tensors and does not hold for arbitrary ones.
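Both properties are easy to verify numerically. A sketch, assuming signature $(+,-,-,-)$ and an example boost along $x$ with $\beta = 0.6$ (an arbitrary value):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)

# Raising both indices of eta_{alpha beta} reproduces the inverse metric:
# eta^{mu nu} = (eta^{-1})^{mu alpha} (eta^{-1})^{nu beta} eta_{alpha beta}
eta_up = np.einsum('ma,nb,ab->mn', eta_inv, eta_inv, eta)
assert np.allclose(eta_up, eta_inv)

# Example boost along x with beta = 0.6 (arbitrary choice)
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])   # Lambda^mu_nu

# Lambda_mu^tau = eta_{mu alpha} Lambda^alpha_beta (eta^{-1})^{beta tau}
L_du = np.einsum('ma,ab,bt->mt', eta, L, eta_inv)

# (Lambda^{-1})^tau_mu has row tau and column mu, hence the transpose:
assert np.allclose(np.linalg.inv(L), L_du.T)
```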
In addition to the suggestion of walber97, I'd also propose that the transpose should not lower or raise indices; it just interchanges the order of the indices. So your fourth equation should perhaps be
$$\Lambda^\alpha_{\,\,\mu}=(\Lambda^T)_\mu^{\,\,\alpha} . $$
The matrix multiplication then remains as a contraction between an upper and a lower pair of indices
$$(\Lambda^T)_\mu^{\,\,\alpha}\, \Lambda_\alpha^{\,\,\sigma} = \delta_\mu^{\,\,\sigma}$$
Then the Lorentz transformation properties remain intact.
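A quick numerical check of this contraction; the signature $(+,-,-,-)$ and the example boost are arbitrary choices, and since the transpose only swaps index order, $(\Lambda^T)_\mu^{\,\,\alpha}$ has the same components as $\Lambda^\alpha_{\,\,\mu}$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)

# Example boost along x with beta = 0.6 (arbitrary choice)
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])   # Lambda^alpha_mu

# Lambda_alpha^sigma = eta_{alpha a} Lambda^a_b (eta^{-1})^{b sigma}
L_du = np.einsum('aA,AB,Bs->as', eta, L, eta_inv)

# (Lambda^T)_mu^alpha has the same components as Lambda^alpha_mu,
# so the contraction sums over the first index of both arrays:
contraction = np.einsum('am,as->ms', L, L_du)
assert np.allclose(contraction, np.eye(4))
```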
Best Answer
You may have noticed this is a matrix equation, which might be more succinctly written as $x'=\Lambda x$. However, when you write such equations with explicit indices, and you sum over repeated indices, you need one upstairs and one downstairs, or equivalently you connect them with a metric tensor, viz. $x'^\mu=\Lambda^{\mu\rho}\eta_{\rho\nu}x^\nu=\Lambda^{\mu\rho}x_\rho$.
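As a concrete check of that contraction, here is a minimal NumPy sketch; the signature $(+,-,-,-)$, the boost, and the sample vector are all arbitrary choices:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)

# Example boost along x with beta = 0.6 (arbitrary choice)
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0.0, 0.0],
              [-gamma * beta, gamma, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])   # Lambda^mu_nu

x = np.array([2.0, 1.0, 0.0, 0.0])     # x^nu, an arbitrary sample vector

# Lambda^{mu rho} = Lambda^mu_a (eta^{-1})^{a rho}  (raise the second index)
L_uu = np.einsum('ma,ar->mr', L, eta_inv)

x_lo = np.einsum('rn,n->r', eta, x)     # x_rho = eta_{rho nu} x^nu

# x'^mu = Lambda^{mu rho} x_rho  -- the same thing as the matrix product
x_prime = np.einsum('mr,r->m', L_uu, x_lo)
assert np.allclose(x_prime, L @ x)
```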
Before you study relativity, you become familiar with an analogous calculation for which the metric is Euclidean, so the metric tensor is just the identity matrix. But outside of that context, you need to think very carefully about which indices are upstairs.
You also need to think very carefully about how to denote index-raising/lowering on non-symmetric matrices. Starting from $\Lambda_{\alpha\beta}$, if I raise one index but not the other I can get $\Lambda^\gamma_{\,\,\,\,\beta}=\eta^{\gamma\alpha}\Lambda_{\alpha\beta}$ or $\Lambda_\beta^{\,\,\,\,\gamma}=\eta^{\gamma\alpha}\Lambda_{\beta\alpha}$. Don't confuse the two!
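To make the warning concrete, here is a small NumPy sketch with a deliberately non-symmetric example matrix (matrix and signature are arbitrary choices):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
eta_inv = np.linalg.inv(eta)

rng = np.random.default_rng(1)
L_dd = rng.normal(size=(4, 4))   # Lambda_{alpha beta}, almost surely non-symmetric

# Lambda^gamma_beta = eta^{gamma alpha} Lambda_{alpha beta}  (row gamma, column beta)
first_raised = np.einsum('ga,ab->gb', eta_inv, L_dd)

# Lambda_beta^gamma = eta^{gamma alpha} Lambda_{beta alpha}  (row beta, column gamma)
second_raised = np.einsum('ga,ba->bg', eta_inv, L_dd)

# Even after matching the index order with a transpose, the two differ
# whenever Lambda_{alpha beta} is not symmetric:
assert not np.allclose(first_raised, second_raised.T)
```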