Law of Large Numbers – How Much More Powerful is Weak Law vs Strong Law

Tags: convergence-divergence, law-of-large-numbers, probability

In mathematics, we often learn the following statements:

  • Convergence in Probability implies Convergence in Distribution
  • Strong Law of Large Numbers implies Weak Law of Large Numbers
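
As I understand them, for an i.i.d. sequence $X_1, X_2, \dots$ with mean $\mu$ and sample mean $\bar{X}_n$, the two laws state:

  • Weak Law: $\bar{X}_n \to \mu$ in probability, i.e. $P(|\bar{X}_n - \mu| > \epsilon) \to 0$ as $n \to \infty$ for every $\epsilon > 0$.
  • Strong Law: $\bar{X}_n \to \mu$ almost surely, i.e. $P(\lim_{n \to \infty} \bar{X}_n = \mu) = 1$.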

I am trying to construct mathematical examples that illustrate the relative strength (i.e. the direction of implication) in each of these two statements.

  • For example – perhaps we could simulate some random data and show that, for the same data, the Strong Law of Large Numbers requires fewer samples to achieve the same result as the Weak Law of Large Numbers? (I am not sure if this is possible, or whether it would even serve to demonstrate and compare the strength of the Weak Law vs the Strong Law.)

  • Maybe we could simulate some random data and show that, for the same data, Convergence in Probability requires fewer samples to achieve the same result as Convergence in Distribution? (I am not sure if this is possible, or whether it would even serve to demonstrate and compare the strength of Convergence in Probability vs Convergence in Distribution.)

For example – here is an R simulation that I think might illustrate the Law of Large Numbers (I am not sure whether the Strong Law or the Weak Law): the sample average becomes closer and closer to the population average as the sample size increases:

set.seed(123)
n <- 1000
sample_means <- numeric(n)
for (i in 1:n) {
  x <- rnorm(i, mean = 0, sd = 1)   # draw a fresh sample of size i from N(0, 1)
  sample_means[i] <- mean(x)        # record its sample mean
}
plot(sample_means, type = "l",
     main = "Law of Large Numbers: Convergence of Sample Mean to Population Mean",
     xlab = "Sample Size", ylab = "Sample Mean")
abline(h = 0, col = "red")          # population mean

[Plot: sample means approaching the population mean 0 as the sample size grows]

Is it somehow possible to "overlay" a second example on this plot and compare an example for the other Law of Large Numbers – and thus compare the strengths of both laws? Can something similar be done for Convergence in Probability vs Convergence in Distribution?
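
In case it is useful, here is a variant of my simulation (just a sketch) that follows a single sample path by computing the running mean of one sequence of draws – my understanding is that this "one path over growing $n$" picture is closer to what the convergence statements describe:

set.seed(123)
n <- 1000
x <- rnorm(n, mean = 0, sd = 1)            # one realized sample path
running_means <- cumsum(x) / seq_len(n)    # mean of the first i observations
plot(running_means, type = "l",
     main = "Running Mean of a Single Sample Path",
     xlab = "Sample Size", ylab = "Running Mean")
abline(h = 0, col = "red")                 # population mean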

Thanks!


Best Answer

It's not about fewer or more samples: both the SLLN and the WLLN describe the limiting behavior of the sample mean as the sample size grows without bound, and both say that larger samples get you closer to the true mean.

Where the two differ is that, in the strong case, almost every sample path (a set of realizations with probability 1) must have some $N$ such that $|\bar{X}_n - \mu| < \epsilon$ for all $n > N$.

In the weak case there is no such guarantee: the WLLN only says that, for each fixed $n$, the probability that $|\bar{X}_n - \mu| \geq \epsilon$ is small (and tends to zero), so an individual sample path may wander outside any given bound for arbitrarily large $n$.
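
As a rough illustration of this picture (a sketch, not a proof), one can plot many independent running-mean paths from a $N(0,1)$ population together with an $\epsilon$-band around the true mean $0$; the number of paths, the path length and the value of $\epsilon$ below are arbitrary choices:

set.seed(456)
n_path <- 50                     # number of independent sample paths (arbitrary)
n_obs  <- 2000                   # length of each path (arbitrary)
eps    <- 0.1                    # half-width of the band around the true mean
x <- matrix(rnorm(n_path * n_obs), nrow = n_obs, ncol = n_path)
running_means <- apply(x, 2, function(col) cumsum(col) / seq_along(col))
matplot(running_means, type = "l", lty = 1, col = rgb(0, 0, 0, 0.2),
        main = "Running Means of 50 Sample Paths",
        xlab = "Sample Size", ylab = "Running Mean")
abline(h = 0, col = "red")                         # true mean
abline(h = c(-eps, eps), col = "blue", lty = 2)    # epsilon-band

Since a $N(0,1)$ population satisfies both laws, every plotted path eventually enters and stays inside the band. The SLLN is the statement that this happens with probability 1; the WLLN only says that, at each fixed sample size, the probability of lying outside the band is small.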
