I have a data set that I have to fit with a stable distribution. The problem is that stable distributions are known analytically only in the form of the characteristic function (Fourier transform). How can I do this?

# Solved – Fitting the parameters of a stable distribution

Tags: distributions, fourier-transform, r, stable-distribution

#### Related Solutions

The short answer is that your $\delta$ is fine, but your $\gamma$ is wrong. In order to get the positive stable distribution given by your formula in R, you need to set $$ \gamma = |1 - i \tan \left(\pi \alpha / 2\right)|^{-1/\alpha}. $$

The earliest example I could find of the formula you gave was in (Feller, 1971), but I've only found that book in physical form. However (Hougaard, 1986) gives the same formula, along with the Laplace transform
$$
\mathrm{L}(s) = \mathrm{E}\left[\exp(-sX)\right] = \exp\left(-s^\alpha\right).
$$
From the `stabledist` manual (`stabledist` is used by `fBasics`), the `pm=1` parameterization is from (Samorodnitsky and Taqqu, 1994), another resource whose online reproduction has eluded me. However, (Weron, 2001) gives the characteristic function in Samorodnitsky and Taqqu's parameterization for $\alpha \neq 1$ to be
$$
\varphi(t) = \mathrm{E}\left[\exp(i t X) \right] = \exp\left[i \delta t - \gamma^\alpha |t|^\alpha \left(1 - i \beta \mathrm{sign}(t) \tan{\frac{\pi \alpha}{2}} \right) \right].
$$
I've renamed some parameters from Weron's paper to coincide with the notation we're using: he uses $\mu$ for $\delta$ and $\sigma$ for $\gamma$. In any case, plugging in $\beta=1$ and $\delta=0$, we get
$$
\varphi(t) = \exp\left[-\gamma^\alpha |t|^\alpha \left(1 - i \mathrm{sign}(t) \tan \frac{\pi \alpha}{2} \right) \right].
$$

Note that $(1 - i \tan (\pi \alpha / 2)) / |1 - i \tan(\pi \alpha / 2)| = \exp(-i \pi \alpha / 2)$ for $\alpha \in (0, 1)$ and that $i^\alpha = \exp(i \pi \alpha / 2)$. Formally, $\mathrm{L}(s)=\varphi(is)$, so by setting $\gamma = |1 - i \tan \left(\pi \alpha / 2\right)|^{-1/\alpha}$ in $\varphi(t)$ we get $$ \varphi(is) = \exp\left(-s^\alpha\right) = \mathrm{L}(s). $$ One interesting point to note is that the $\gamma$ that corresponds to $\alpha=1/2$ is also $1/2$, so if you were to try $\gamma=\alpha$ or $\gamma=1-\alpha$, which is actually not a bad approximation, you end up exactly correct for $\alpha=1/2$.
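These two identities are easy to sanity-check numerically in base R with complex arithmetic (a quick sketch, not part of the derivation):

```
# Check (1 - i tan(pi a / 2)) / |1 - i tan(pi a / 2)| = exp(-i pi a / 2) for a in (0, 1)
a <- 0.3
z <- 1 - 1i * tan(pi * a / 2)
Mod(z / Mod(z) - exp(-1i * pi * a / 2))   # ~ 0 up to floating point

# Check that the gamma corresponding to alpha = 1/2 is itself 1/2
g <- function(a) Mod(1 - 1i * tan(pi * a / 2))^(-1 / a)
g(0.5)                                     # 0.5 up to floating point
```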

Here's an example in R to check correctness:

```
library(stabledist)

# Series representation of the positive stable density
PSf <- function(x, alpha, K) {
  k <- 1:K
  -1 / (pi * x) * sum(
    gamma(k * alpha + 1) / factorial(k) *
      (-x^(-alpha))^k * sin(alpha * k * pi)
  )
}

# Derived expression for gamma
g <- function(a) {
  iu <- complex(real = 0, imaginary = 1)
  abs(1 - iu * tan(pi * a / 2))^(-1 / a)
}

x <- (1:100) / 100
plot(0, xlim = c(0, 1), ylim = c(0, 2), pch = '',
     xlab = 'x', ylab = 'f(x)', main = "Density Comparison")
legend('topright', legend = c('Series', 'gamma=g(alpha)'),
       lty = c(1, 2), col = c('gray', 'black'), lwd = c(5, 2))
text(x = c(0.1, 0.25, 0.7), y = c(1.4, 1.1, 0.7),
     labels = c(expression(paste(alpha, " = 0.4")),
                expression(paste(alpha, " = 0.5")),
                expression(paste(alpha, " = 0.6"))))
for (a in seq(0.4, 0.6, by = 0.1)) {
  y <- vapply(x, PSf, FUN.VALUE = 1, alpha = a, K = 100)
  lines(x, y, col = "gray", lwd = 5, lty = 1)
  lines(x, dstable(x, alpha = a, beta = 1, gamma = g(a), delta = 0, pm = 1),
        col = "black", lwd = 2, lty = 2)
}
```

(Figure: "Density Comparison" — the series densities and the `dstable` curves overlap for each $\alpha$.)

- Feller, W. (1971). *An Introduction to Probability Theory and Its Applications*, Vol. 2, 2nd ed. New York: Wiley.
- Hougaard, P. (1986). Survival models for heterogeneous populations derived from stable distributions. *Biometrika*, 73, 387-396.
- Samorodnitsky, G. and Taqqu, M. S. (1994). *Stable Non-Gaussian Random Processes*. New York: Chapman & Hall.
- Weron, R. (2001). Levy-stable distributions revisited: tail index > 2 does not exclude the Levy-stable regime. *International Journal of Modern Physics C*, 12(2), 209-223.

**Linear combinations of Poisson random variables**

As you've calculated, the moment-generating function of the Poisson distribution with rate $\lambda$ is $$ m_X(t) = \mathbb E e^{t X} = e^{\lambda (e^t - 1)} \>. $$

Now, let's focus on a linear combination of independent Poisson random variables $X$ and $Y$. Let $Z = a X + b Y$. Then, $$ m_Z(t) = \mathbb Ee^{tZ} = \mathbb E e^{t (a X + b Y)} = \mathbb E e^{t(aX)} \mathbb E e^{t (bY)} = m_X(at) m_Y(bt) \>. $$

So, if $X$ has rate $\lambda_x$ and $Y$ has rate $\lambda_y$, we get $$ m_Z(t) = \exp({\lambda_x (e^{at} - 1)}) \exp({\lambda_y (e^{bt} - 1)}) = \exp(\lambda_x e^{at} + \lambda_y e^{bt} - (\lambda_x + \lambda_y))\>, $$ and this cannot, in general, be written in the form $\exp(\lambda(e^t - 1))$ for some $\lambda$ unless $a = b = 1$.
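A quick simulation (base R, with made-up rates) illustrates this: $X + Y$ is again Poisson, so its sample mean and variance agree, while $2X + Y$ is overdispersed — its variance exceeds its mean, which no Poisson variable allows.

```
set.seed(42)
lambda_x <- 3; lambda_y <- 5            # hypothetical rates
X <- rpois(1e5, lambda_x)
Y <- rpois(1e5, lambda_y)

# a = b = 1: Z1 ~ Poisson(lambda_x + lambda_y), so mean == variance (~ 8)
Z1 <- X + Y
c(mean(Z1), var(Z1))

# a = 2, b = 1: mean = 2*3 + 5 = 11 but variance = 4*3 + 5 = 17, so not Poisson
Z2 <- 2 * X + Y
c(mean(Z2), var(Z2))
```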

**Inversion of moment-generating functions**

If the moment-generating function exists in a neighborhood of zero, then it also exists as a complex-valued function in an infinite strip around zero, which allows inversion by contour integration in many cases. Indeed, the **Laplace transform** $\mathcal L(s) = \mathbb E e^{-s T}$ of a nonnegative random variable $T$ is a common tool in stochastic-process theory, particularly for analyzing stopping times. Note that $\mathcal L(s) = m_T(-s)$ for real-valued $s$. You should prove as an exercise that the Laplace transform *always* exists for $s \geq 0$ for nonnegative random variables.
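For instance, with $T \sim \mathrm{Exp}(1)$ the Laplace transform is $\mathcal L(s) = 1/(1+s)$, and a Monte Carlo estimate of $\mathbb E e^{-sT}$ reproduces it (a base-R sketch, not part of the original argument):

```
set.seed(1)
T_samples <- rexp(1e5, rate = 1)        # T ~ Exp(1), so L(s) = 1 / (1 + s)
L_hat <- function(s) mean(exp(-s * T_samples))
s <- c(0.5, 1, 2)
cbind(estimate = sapply(s, L_hat), exact = 1 / (1 + s))
```

Because $e^{-sT} \in [0, 1]$ for $s \geq 0$, the mean always exists — the same boundedness argument that proves the exercise above.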

Inversion can then be accomplished either via the Bromwich integral or the Post inversion formula. A probabilistic interpretation of the latter can be found as an exercise in several classical probability texts.
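As a sketch of the Post inversion formula $f(t) = \lim_{n\to\infty} \frac{(-1)^n}{n!}\left(\frac{n}{t}\right)^{n+1} \mathcal L^{(n)}(n/t)$, take $\mathcal L(s) = 1/(1+s)$ (the Exp(1) transform), whose derivatives are available in closed form: $\mathcal L^{(n)}(s) = (-1)^n\, n!\, (1+s)^{-(n+1)}$. The factorials and signs cancel, leaving a simple expression to evaluate:

```
# Post's formula for L(s) = 1/(1 + s): the (-1)^n / n! factor cancels against
# the closed-form derivative, leaving f_n(t) = (n / (t + n))^(n + 1) -> exp(-t)
post_exp <- function(t, n) (n / (t + n))^(n + 1)
post_exp(1, n = c(10, 100, 10000))   # converges to exp(-1) ~ 0.3679
```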

Though not directly related, you may be interested in the following note as well.

J. H. Curtiss (1942), A note on the theory of moment generating functions, *Ann. Math. Stat.*, vol. 13, no. 4, pp. 430–433.

The associated theory is more commonly developed for characteristic functions since these are fully general: They exist for *all* distributions without support or moment restrictions.

## Best Answer

As suggested in the comments, you can use `fitdistr` (from the `MASS` package) with the stable density function from `fBasics`.
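A minimal sketch of that approach, using `stabledist::dstable` (the density that `fBasics` builds on) and simulated data with made-up parameter values and starting points — `dstable` is evaluated by numerical integration, so fitting real data can be slow:

```
library(MASS)         # fitdistr
library(stabledist)   # dstable / rstable

set.seed(1)
# Simulated data with known parameters, standing in for the real data set
x <- rstable(200, alpha = 1.5, beta = 0, gamma = 1, delta = 0)

fit <- fitdistr(x,
                densfun = function(x, alpha, beta, gamma, delta)
                  dstable(x, alpha, beta, gamma, delta),
                start = list(alpha = 1.7, beta = 0.1, gamma = 0.9, delta = 0),
                lower = c(0.1, -1, 1e-3, -Inf),   # box constraints (L-BFGS-B)
                upper = c(2, 1, Inf, Inf))
fit$estimate   # should land near (1.5, 0, 1, 0)
```

Maximum likelihood via `fitdistr` works here because the density, although not available in closed form, is computable pointwise from the characteristic function — which is exactly what `dstable` does internally.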