What you are asking for is a special case of the joint distribution of the maximum and sum of a sequence of random variables. The fact that the variables are stable simplifies this and allows us to compute the joint distribution in the limit as n goes to infinity. This limit can be described in terms of the joint distribution of the value of a stable Lévy process at time $t=1$ and its maximum jump size over $t\le1$. Lévy processes are right continuous with left limits (càdlàg) and have stationary, independent increments. Stable Lévy processes are those whose increments have stable distributions.
If $X_i$ each have the $\alpha$-stable distribution denoted by $S(\alpha,\beta,\gamma,\delta)=S(\alpha,\beta,\gamma,\delta;0)$ in the text you link, then the sum $S=\sum_{i=0}^{n-1}X_i$ has the $S(\alpha,\beta,n^{1/\alpha}\gamma,\delta_n)$ distribution where
$$
\delta_n=\begin{cases}
n\delta + \gamma\beta\tan\frac{\pi\alpha}{2}(n^{1/\alpha}-n),&\textrm{if }\alpha\not=1,\\
n\delta +\gamma\beta\frac{2}{\pi}n\log n,&\textrm{if }\alpha=1.
\end{cases}\qquad\qquad{\rm(1)}
$$
This is equation (1.9) on page 20 of the linked notes. It will be helpful to rescale the sum to remove the dependence on n. The random variable $Y=n^{-1/\alpha}(S-\delta_n)$ has the $S(\alpha,\beta,\gamma,0)$ distribution (by Proposition 1.16).
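For what it's worth, here is a small Monte Carlo sketch of this rescaling (not part of the argument itself). To sidestep the shift $\delta_n$ and any parameterization subtleties, it uses the symmetric, centred case $\beta=0$, $\delta=0$, where $Y=n^{-1/\alpha}S$ should simply have the same law as a single $X_i$; `scipy.stats.levy_stable` is assumed for sampling, with its `scale` playing the role of $\gamma$.

```python
import numpy as np
from scipy.stats import levy_stable, ks_2samp

# Hedged numerical sketch: in the symmetric, centred case (beta = 0,
# delta = 0) the shift delta_n in (1) vanishes, so Y = n^{-1/alpha} * S
# should have the same S(alpha, 0, gamma, 0) law as a single X_i.
rng = np.random.default_rng(0)
alpha, gam, n, trials = 1.3, 2.0, 50, 5000

X = levy_stable.rvs(alpha, 0.0, scale=gam, size=(trials, n), random_state=rng)
Y = X.sum(axis=1) * n ** (-1.0 / alpha)                 # rescaled sums
X1 = levy_stable.rvs(alpha, 0.0, scale=gam, size=trials, random_state=rng)

# Two-sample Kolmogorov-Smirnov: a non-tiny p-value is consistent with
# Y and X_1 sharing the same distribution.
print(ks_2samp(Y, X1))
```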
Now, to relate this to Lévy processes. Let $Z_t$ be the Lévy process with $Z_1\sim Y$. We can take $X_i=n^{1/\alpha}(Z_{(i+1)/n}-Z_{i/n})+\delta_n/n$, which will be independent with the required $S(\alpha,\beta,\gamma,\delta)$ distribution. In fact, in this case we have $Z_1=Y$ and $S=n^{1/\alpha}Z_1+\delta_n$.
Writing $M=\max_iX_i$ and $\Delta Z^*_t$ for the maximum of $\Delta Z_s=Z(s)-Z(s-)$ over $s\le t$ gives $n^{-1/\alpha}(M-\delta_n/n)\to\Delta Z^*_1$ as n goes to infinity.
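Again only as a hedged numerical sketch (and again in the symmetric, centred case, so the $\delta_n/n$ term drops out), one can check that the law of $n^{-1/\alpha}M$ settles down as n grows, which is what the convergence above asserts; `scipy.stats.levy_stable` is assumed for sampling.

```python
import numpy as np
from scipy.stats import levy_stable, ks_2samp

# Hedged sketch: with beta = 0, delta = 0 the claim reduces to the law of
# n^{-1/alpha} * max_i X_i settling down as n grows, so samples of it for
# n = 100 and n = 1000 should look roughly alike.
rng = np.random.default_rng(1)
alpha, trials = 1.3, 4000

def scaled_max(n):
    X = levy_stable.rvs(alpha, 0.0, size=(trials, n), random_state=rng)
    return X.max(axis=1) * n ** (-1.0 / alpha)

# A small KS statistic indicates the two scaled-max samples are close.
print(ks_2samp(scaled_max(100), scaled_max(1000)))
```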
We can then compute the probability P that you are asking for to leading order as n becomes large,
$$
\begin{align}
P&\equiv\mathbb{P}\left(M > S-M\right)\\
&=\mathbb{P}\left(2n^{-1/\alpha}(M-\delta_n/n) > n^{-1/\alpha}(S-2\delta_n/n)\right)\\
&\sim\mathbb{P}\left(2\Delta Z^*_1 > Z_1+n^{-1/\alpha}\delta_n(1-2/n)\right)
\end{align}
$$
So,
$$
P\sim\mathbb{P}\left(2\Delta Z^*_1 - Z_1 > n^{-1/\alpha}\delta_n\right)
$$
and the leading term in the asymptotics for P is determined by the distribution of $2\Delta Z^*_1-Z_1$.
For $\alpha < 1$ it can be seen from (1) that $n^{-1/\alpha}\delta_n\to\gamma\beta\tan\frac{\pi\alpha}{2}$ as n goes to infinity, so we get a finite and positive limit
$$
P\to\mathbb{P}\left(2\Delta Z^*_1-Z_1 > \gamma\beta\tan\frac{\pi\alpha}{2}\right)
$$
as $n\to\infty$. The tangent only appears in the expression here because of the particular parameterization that we are using. It is a bit simpler if we rewrite this in terms of $\tilde Z_t=Z_t+\gamma\beta t\tan\frac{\pi\alpha}{2}$: the jumps of $\tilde Z$ and $Z$ coincide, and the added drift absorbs the constant $\gamma\beta\tan\frac{\pi\alpha}{2}$ on the right-hand side into $\tilde Z_1$. Then, for the $\beta=1$ case you are using, $\tilde Z_t$ has support $[0,\infty)$ (Lemma 1.10 in the linked text). This means that $\tilde Z$ is a stable subordinator and the required probability is
$$
P\to\mathbb{P}\left(2\Delta\tilde Z^*_1 > \tilde Z_1\right)=\frac{\sin(\pi\alpha)}{\pi\alpha}.
$$
[I've been a bit sneaky here, and substituted in the expression that Shai Covo gave in his answer for $\mathbb{P}(2\Delta\tilde Z^*_1 > \tilde Z_1)$ and simplified a bit using Euler's reflection formula.]
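This limit is also easy to check numerically in one special case: the totally skewed, strictly stable law with $\alpha=1/2$ is the standard Lévy distribution, available as `scipy.stats.levy`, and the limit should then be $\sin(\pi/2)/(\pi/2)=2/\pi\approx0.6366$. A minimal Monte Carlo sketch (parameter choices are illustrative only):

```python
import numpy as np
from scipy.stats import levy

# Hedged Monte Carlo sketch of the alpha < 1 limit in the one case with a
# standard named law: scipy.stats.levy is the totally skewed, strictly
# stable alpha = 1/2 distribution, so P(M > S - M) should be close to
# sin(pi*alpha)/(pi*alpha) = 2/pi ~ 0.6366 for large n.
rng = np.random.default_rng(2)
n, trials = 1000, 10000

X = levy.rvs(size=(trials, n), random_state=rng)
S = X.sum(axis=1)
M = X.max(axis=1)
print("estimate:", (M > S - M).mean(), " limit 2/pi:", 2.0 / np.pi)
```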
On the other hand, if $\alpha > 1$ then (1) gives $n^{-1/\alpha}\delta_n\sim n^{1-1/\alpha}(\delta-\gamma\beta\tan\frac{\pi\alpha}{2})$ and, if $\alpha=1$, then $n^{-1/\alpha}\delta_n\sim\gamma\beta\frac{2}{\pi}\log n$. In this case, P tends to zero at a rate dependent on the tail of the distribution function of $2\Delta Z^*_1-Z_1$.
It is also possible to use the Lévy-Khintchine representation to compute the tail of the distribution function of $2\Delta Z^*_1-Z_1$. The jumps of a Lévy process are described by a Lévy measure $\nu$ on the real numbers, so that the jumps with size in a set $A$ occur according to a Poisson process with rate $\nu(A)$. An $\alpha$-stable process has Lévy measure $d\nu(x)=c\vert x\vert^{-\alpha-1}\,dx$, where $c$ only depends on the sign of $x$. We can back out the value of $c$ by calculating the characteristic function of $Z_1$ using the Lévy-Khintchine formula (essentially the Fourier transform of $\nu$; see Wikipedia, which uses $W$ for the Lévy measure) and comparing with expression (1.4) from your text. Going through this computation gives
$$
d\nu(x)=\pi^{-1}\alpha\gamma^\alpha\sin\frac{\pi\alpha}{2}\Gamma(\alpha)(1+\beta{\rm sign}(x))\vert x\vert^{-\alpha-1}\,dx.
$$
The number of jumps of size greater than $x$ is Poisson distributed with rate $\nu((x,\infty))$, giving
$$
\begin{align}\mathbb{P}(\Delta Z^*_1> x) &= 1-e^{-\nu((x,\infty))}\\
&\sim\nu((x,\infty))\\
&=\pi^{-1}\gamma^\alpha\sin\frac{\pi\alpha}{2}\Gamma(\alpha)(1+\beta)x^{-\alpha}.
\end{align}\qquad{\rm(2)}
$$
Comparing with your linked text (Theorem 1.12), this is exactly the same as the tail of $Z_1$! That is, $\mathbb{P}(\Delta Z^*_1 > x)\sim\mathbb{P}(Z_1 > x)$. This means that the tail of the distribution is entirely due to the occasional big jump of the Lévy process. This does actually correspond to experience. I have plotted Cauchy process sample paths before, and you do get some paths with a single big jump drowning out all other behaviour. We can also see why this should be the case. The density of the jump distribution is sub-exponential, proportional to $\vert x\vert^{-\alpha-1}$. So one big jump of size $x$ has a much bigger probability than n small jumps of size $x/n$. It also suggests that, in your calculation of $X_i$, the samples contributing to the probability come mainly from the case where all of them are reasonably close to the mean except for one extreme (positive or negative) $X_i$.
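This "one big jump" effect is easy to see numerically. The sketch below uses standard Cauchy samples ($\alpha=1$, $\beta=0$, chosen purely because numpy generates them directly) and compares the empirical tails of the sum and of the maximum against the first-order prediction $n\,\mathbb{P}(X_1>x)\sim n/(\pi x)$; the parameter choices are illustrative only.

```python
import numpy as np

# Hedged sketch of the "single big jump" effect: for heavy-tailed summands
# the tails of the sum and of the maximum agree to leading order.  Standard
# Cauchy samples (alpha = 1, beta = 0) are used purely for convenience, and
# both tails are compared against the first-order prediction n/(pi*x).
rng = np.random.default_rng(3)
n, trials = 100, 100000
X = rng.standard_cauchy(size=(trials, n))
S, M = X.sum(axis=1), X.max(axis=1)

for x in (1e3, 1e4):
    print(f"x={x:g}  P(S>x)={(S > x).mean():.5f}  "
          f"P(M>x)={(M > x).mean():.5f}  n/(pi x)={n / (np.pi * x):.5f}")
```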
So, from the argument above, the leading asymptotic of the probability P comes either from the case where $\Delta Z_1^*$ is very large compared to $Z_1-\Delta Z_1^*$, or from the case where the largest negative jump $\Delta(-Z)^*_1$ is large compared to $Z_1+\Delta(-Z)^*_1$. This gives
$$
\begin{align}
\mathbb{P}(2\Delta Z^*_1-Z_1 > x)
&\sim\mathbb{P}(\Delta Z^*_1 > x)
+\mathbb{P}(\Delta (-Z)^*_1 > x)\\
&\sim 2\pi^{-1}\gamma^{\alpha}\sin\frac{\pi\alpha}{2}\Gamma(\alpha)x^{-\alpha},
\end{align}
$$
where I substituted in (2). Plugging in the asymptotic form for $x=n^{-1/\alpha}\delta_n$ calculated above gives
$$
P\sim2\pi^{-1}\gamma^{\alpha}\sin\frac{\pi\alpha}{2}\Gamma(\alpha)\left(\delta-\gamma\beta\tan\frac{\pi\alpha}{2}\right)^{-\alpha}n^{1-\alpha}
$$
for $\alpha > 1$. This confirms your $O(1/n^{\alpha-1})$ guess from the numerical simulation! Also, using $x=n^{-1/\alpha}\delta_n\sim\gamma\beta\frac{2}{\pi}\log n$ for $\alpha=1$ gives
$$
P\sim\frac{1}{\beta\log n}.
$$
Hopefully this is all correct, but I can't rule out an arithmetic slip above. Edit: The limit above for $\alpha > 1$ does not look very good. It appears to be slightly above the plot in the original post. However, even for n=1000, it should not be very close to the asymptotic limit described. Replacing the asymptotic form for $\delta_n$ by its exact value effectively replaces the $n^{1-\alpha}$ term for P by $(n^{1-1/\alpha}-1)^{-\alpha}$. For n=1000 this approximately doubles the calculated number, so the more accurate value for $\delta_n$ moves you further away from the plot. Something has gone wrong somewhere...
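One way to probe this numerically without relying on the constant is to estimate P for several values of n and fit the slope of $\log P$ against $\log n$, which should be roughly $1-\alpha$. The sketch below assumes `scipy.stats.levy_stable` with illustrative parameters ($\alpha=1.5$, $\beta=1$, `loc=1`, `scale=1`); scipy's parameterization need not match the $S(\alpha,\beta,\gamma,\delta;0)$ convention of the notes, which affects the constant but not the power-law rate (as long as the mean is positive).

```python
import numpy as np
from scipy.stats import levy_stable

# Hedged sketch probing only the predicted O(n^{1-alpha}) rate for
# 1 < alpha < 2: estimate P(M > S - M) for several n and regress log P
# on log n.  Parameter choices (beta = 1, loc = 1, scale = 1) are
# illustrative; the fitted slope, not the constant, is what is checked.
rng = np.random.default_rng(4)
alpha, trials = 1.5, 20000
ns = [50, 100, 200, 400]

probs = []
for n in ns:
    X = levy_stable.rvs(alpha, 1.0, loc=1.0, scale=1.0,
                        size=(trials, n), random_state=rng)
    S, M = X.sum(axis=1), X.max(axis=1)
    probs.append((M > S - M).mean())

slope = np.polyfit(np.log(ns), np.log(probs), 1)[0]
print("probabilities:", probs)
print("fitted slope:", slope, " predicted 1 - alpha:", 1 - alpha)
```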
Best Answer
For $M>1$, $$\int_{M}^{\infty}x^p|\log(x)|^qf(x)\,dx =\int_{M}^{\infty}x^p(\log(x))^q \frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)dx.$$ Now, $$\lim_{x \rightarrow\infty}\frac{x^p(\log(x))^q \frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)} {x^{p+1}\frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)}=0.$$
Therefore, for sufficiently large $M$, we have
$$\int_{M}^{\infty}x^p(\log(x))^q \frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)dx \le \int_{M}^{\infty}x^{p+1}\frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)dx$$ $$\le \int_{0}^{\infty}x^{p+1}\frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)dx=\alpha^{\frac{1+p}{\beta}}\Gamma\left(\frac{\beta+p+1}{\beta}\right).$$
Similarly, for $\epsilon>0$ sufficiently close to 0,
$$\int_{0}^{\epsilon}x^p(-\log(x))^q \frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)dx \le \int_{0}^{\epsilon}\frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)dx$$ $$\le \int_{0}^{\infty}\frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)dx=1$$
because
$$\lim_{x \rightarrow0^+}\frac{x^p(-\log(x))^q \frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)} {\frac{\beta}{\alpha}x^{\beta-1}\exp\left(-\frac{x^{\beta}}{\alpha}\right)}=0.$$
Then, just break up the integral into 3 pieces to show the whole integral is finite.
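As a quick sanity check on the closed form used for the dominating integral above (with $a$, $b$, $p$ standing in for this section's $\alpha$, $\beta$, $p$ and chosen arbitrarily), one can compare numerical quadrature against $a^{(1+p)/b}\,\Gamma\!\left(\frac{b+p+1}{b}\right)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# Hedged spot-check of the closed form for the dominating integral,
#   int_0^inf x^{p+1} (b/a) x^{b-1} exp(-x^b / a) dx
#     = a^{(p+1)/b} * Gamma((b + p + 1)/b),
# for arbitrary illustrative values of a (= alpha), b (= beta) and p.
a, b, p = 2.0, 1.5, 0.7

integrand = lambda x: x ** (p + 1) * (b / a) * x ** (b - 1) * np.exp(-x ** b / a)
numeric, _ = quad(integrand, 0.0, np.inf)
closed_form = a ** ((p + 1) / b) * Gamma((b + p + 1) / b)
print(numeric, closed_form)   # the two values should agree closely
```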