In probability theory, conditional probability is a measure of the
probability of an event occurring given that another event has (by
assumption, presumption, assertion or evidence) occurred.
"Probability of an event occurring given that another event has occurred." – Is this the same as saying these two events are dependent? Doesn't this mean that conditional probability is a measure for dependent events?
Two events are independent, statistically independent, or
stochastically independent if the occurrence of one does not affect
the probability of occurrence of the other (equivalently, does not
affect the odds).
Are conditional probability and Bayes' theorem applicable to independent events? What is their role in solving problems involving independent events, if any?
I'm starting out with probability and a lot of these concepts aren't clear to me yet. What is the intuitive explanation behind the formulae?
Best Answer
No.
If $A$ denotes some fixed event with $P(A)>0$, then it induces a new probability measure on the same collection of events. At first glance, for any event $B$ we are interested in the map $B\mapsto P(A\cap B)$. However, this map is not in general a probability measure, because it sends the outcome set $\Omega$ to $P(A)$, and it is not excluded that $P(A)\neq1$. To repair this we divide by $P(A)$: the function $P_A$ on events prescribed by $B\mapsto P(A\cap B)/P(A)$ is a probability measure. Instead of $P_A(B)$ we use a different notation: $P(B\mid A)$. This is how the conditional probability measure with respect to the event $A$ is "born". It answers the question: what is the probability that $B$ occurs, given that $A$ has occurred?
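The construction above can be checked on a toy example. The events below (a single fair die, with $A$ = "roll is even" and $B$ = "roll is at least 4") are my own choice for illustration, not from the question; the point is only that $P(B\mid A)=P(A\cap B)/P(A)$ is a concrete computation on sets:

```python
from fractions import Fraction

# A single fair die roll: Omega = {1, ..., 6}, all outcomes equally likely.
omega = set(range(1, 7))
A = {2, 4, 6}   # "roll is even" (illustrative event)
B = {4, 5, 6}   # "roll is at least 4" (illustrative event)

def P(E):
    # Probability of an event = (favourable outcomes) / (all outcomes).
    return Fraction(len(E), len(omega))

# The conditional probability measure induced by A:
# B |-> P(A & B) / P(A), which is a genuine probability measure.
P_B_given_A = P(A & B) / P(A)   # (2/6) / (3/6) = 2/3
```

Here $P(B\mid A)=2/3$ while $P(B)=1/2$, so conditioning on $A$ really does change the probability of $B$: these two events are dependent.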
Independence comes in when we observe that $P(B\mid A)$ and $P(B)$ are the same. That happens if and only if $P(A\cap B)=P(A)P(B)$, which is exactly the statement that $A$ and $B$ are independent events.
What is asked above can in that case be answered indirectly: conditioning on $A$ tells us nothing new about $B$, so $P(B\mid A)=P(B)$. Conditional probability and Bayes' theorem still apply to independent events, but they collapse to trivial statements.
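A minimal sketch of this collapse, using two independent fair dice (the events $A$ = "first die is even" and $B$ = "second die is 5 or 6" are illustrative choices, not from the question). Independence makes the product rule hold, and Bayes' theorem then returns $P(A)$ unchanged:

```python
from fractions import Fraction
from itertools import product

# Two fair dice rolled independently: Omega = all 36 ordered pairs.
omega = set(product(range(1, 7), repeat=2))
A = {w for w in omega if w[0] % 2 == 0}   # "first die is even"
B = {w for w in omega if w[1] >= 5}       # "second die is 5 or 6"

def P(E):
    return Fraction(len(E), len(omega))

# Independence: P(A & B) = P(A) * P(B)  (1/6 = 1/2 * 1/3).
product_rule_holds = P(A & B) == P(A) * P(B)

# Conditioning on A changes nothing: P(B | A) = P(B).
P_B_given_A = P(A & B) / P(A)

# Bayes' theorem still applies, but collapses to a trivial statement:
# P(A | B) = P(B | A) * P(A) / P(B) = P(A).
P_A_given_B = P_B_given_A * P(A) / P(B)
```

So Bayes' theorem is perfectly valid for independent events; it just yields $P(A\mid B)=P(A)$, confirming that observing $B$ carries no information about $A$.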