Probability Theory – How to Understand Joint Distribution of a Random Vector

measure-theory, probability-theory

  1. Given a random vector, what are the domain, range, and sigma algebras on them for each of its components to be a random variable, i.e. a measurable mapping? Specifically:

    • Is the domain of each component random variable the same as the domain of the random vector, and are the sigma algebras on their domains also the same?
    • Is the range of each component random variable the component space in the Cartesian product forming the range of the random vector? What is the sigma algebra on the range of each component random variable, and how is it induced from the sigma algebra on the range of the random vector?
  2. Please correct me if I am wrong. If I understand correctly, given a random vector, the probability measure induced on the range (which is a Cartesian product space) by the random vector is called the joint probability measure of the random vector. The probability measure induced on a component space of the range by the corresponding component of the random vector is called the marginal probability measure of that component random variable.
  3. Consider the concept of the component random variables of a random vector being independent. I read on a webpage that they are called independent when the joint probability measure is the product measure of the individual marginal probability measures. I was wondering whether the sigma algebra for the joint probability measure must be the same as the product sigma algebra for the individual marginal measures, or whether the former may merely contain the latter?

Thanks and regards!

Best Answer

I'll take a stab at answering these:

  1. (Part a) Yes, the domain probability space of a random vector is the same as the domain probability space of its components. Think of a random vector as a vector-valued random variable.

I'm having trouble parsing these questions, because I can't tell whether you're using the word "range" to mean "image" or "codomain". I'll assume you mean "codomain".

  1. (Part b) Given a probability space $\Omega$ and measurable spaces $E_1,\ldots,E_n$, a random vector is a random variable $\Omega \to E_1\times\cdots\times E_n$. Typically, each $E_i$ comes with a $\sigma$-algebra, and the $\sigma$-algebra on $E = E_1\times\cdots\times E_n$ is then defined as the product $\sigma$-algebra. (That is, the $\sigma$-algebra on $E$ is generated by all products of the form $A_1\times\cdots\times A_n$, where $A_i$ is measurable in $E_i$ for each $i$.) Sometimes it is helpful to enlarge the $\sigma$-algebra on the product slightly, e.g. if you want some probability measure on the product to be complete.
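For finite factors this generation can be carried out literally. The sketch below is my own illustration (the names `sigma_algebra` and `subsets` are invented for it, and it assumes finite sets, where generating a σ-algebra reduces to closing under complements and pairwise unions): it builds the σ-algebra generated by the measurable rectangles $A_1\times A_2$.

```python
from itertools import combinations, product

def sigma_algebra(universe, generators):
    """Sigma-algebra on a *finite* universe generated by a family of
    sets: close under complement and pairwise union until stable."""
    universe = frozenset(universe)
    sets = {frozenset(), universe} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        for s in list(sets):
            if universe - s not in sets:
                sets.add(universe - s)
                changed = True
        for a, b in combinations(list(sets), 2):
            if a | b not in sets:
                sets.add(a | b)
                changed = True
    return sets

def subsets(xs):
    """All subsets of a finite set, i.e. its power-set sigma-algebra."""
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

# E = E1 x E2 with E1 = E2 = {0, 1}; the measurable rectangles
# A1 x A2 generate the product sigma-algebra on E.
E1, E2 = [0, 1], [0, 1]
rectangles = [set(product(a, b)) for a in subsets(E1) for b in subsets(E2)]
prod_sigma = sigma_algebra(product(E1, E2), rectangles)
len(prod_sigma)  # 16 = the full power set of the 4-point product
```

Here the rectangles already include every singleton, so the product σ-algebra is the whole power set; on infinite spaces the product σ-algebra is typically much smaller than the power set.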

  2. That seems right. "Marginal probability measure" would also refer to the measure obtained on a product like $\prod_{i\in S} E_i$, where $S$ is some subset of $\{1,\ldots,n\}$.
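In the finite case the marginal construction is just a sum. The following toy sketch (the numbers and the helper `marginal` are my own, not from the question) shows that marginalizing a joint pmf is exactly pushing it forward along the projection onto one coordinate:

```python
# Toy joint pmf of a random vector (X, Y) on E1 x E2 with
# E1 = E2 = {0, 1}.
joint = {
    (0, 0): 0.25,  (0, 1): 0.25,
    (1, 0): 0.125, (1, 1): 0.375,
}

def marginal(joint_pmf, coord):
    """Sum the joint pmf over all coordinates except `coord`,
    i.e. push it forward along the projection onto that coordinate."""
    out = {}
    for point, p in joint_pmf.items():
        out[point[coord]] = out.get(point[coord], 0.0) + p
    return out

marginal_x = marginal(joint, 0)  # {0: 0.5, 1: 0.5}
marginal_y = marginal(joint, 1)  # {0: 0.375, 1: 0.625}
```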

  3. I suppose it's fine for the $\sigma$-algebra on the product to be slightly larger than the product $\sigma$-algebra, e.g. if we want the measure on the product to be complete. However, the measure on the product should have the property that it is the unique extension of the product measure to the $\sigma$-algebra on the product. That is, the measure on the product should either be the product measure, or the completion of the product measure, or something in between.
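For finite spaces the "joint = product of marginals" criterion can be checked pointwise. The sketch below is my own illustration (the helper `is_product_measure` and the example numbers are invented for it): it recovers both marginals from a joint pmf and tests whether they multiply back to the joint.

```python
from itertools import product

def is_product_measure(joint_pmf, tol=1e-12):
    """Do the coordinate marginals multiply back to the joint pmf?
    For a finite joint pmf, this is exactly independence of the
    component random variables."""
    mx, my = {}, {}
    for (x, y), p in joint_pmf.items():
        mx[x] = mx.get(x, 0.0) + p
        my[y] = my.get(y, 0.0) + p
    return all(abs(joint_pmf.get((x, y), 0.0) - mx[x] * my[y]) < tol
               for x, y in product(mx, my))

# X uniform on {0, 1} and Y with P(Y=0) = 1/4, glued independently:
independent = {(0, 0): 0.125, (0, 1): 0.375,
               (1, 0): 0.125, (1, 1): 0.375}
# X = Y with P(X=0) = 1/2: both marginals are uniform, but the joint
# is concentrated on the diagonal, not the product of the marginals.
dependent = {(0, 0): 0.5, (1, 1): 0.5}

is_product_measure(independent)  # True
is_product_measure(dependent)    # False
```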
