Solved – Overlapping probability distributions

distributions, probability

I am working with a set of distributions. So far I have analyzed several probability distributions with respect to each other; that is, I have run a t-test to see how probable it is that two events happen at the same time according to those distributions.
That means I have computed:

let x1 and x2 be probability distributions
prob(x1 = z, x2 = z), z being any value

That's what the t-test gave me, though I actually used a similar test that allows the distributions to have different variances. In effect, I have now computed the overlap of the two distributions with respect to each other.

What I want to do now is look at the whole thing backwards: I want to pick a certain range and be able to give the probability that one or both of those occurrences fall into it.
Perhaps another way to look at it is with dice. I have two dice with differing probability distributions (they will probably be Gaussian in my case, but let's assume one throws 6 more often and the other throws 3 more often). The first way of looking at it would be to tell whether both dice throw the same number. What I want to do now is to tell whether both dice throw a number in a certain range, for example 5 and 6.

let x1 and x2 be probability distributions
prob(x1 in R, x2 in R), R being a certain range of values
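
To make the dice example concrete, here is a rough sketch in Python of the quantity I am after (the joint table and its numbers are placeholders, purely for illustration; how to obtain or even name that table is exactly what I don't know):

```python
# A rough sketch of the quantity I mean, using the dice example above.
# joint[(i, j)] is meant to be the probability that die 1 shows i and die 2
# shows j at the same time. The values here are placeholders (a fair,
# unrelated pair of dice), purely for illustration.
import itertools

faces = range(1, 7)
joint = {(i, j): 1 / 36 for i, j in itertools.product(faces, faces)}

target_range = {5, 6}

# "Both dice show the same number" -- roughly what my t-test-style comparison looked at:
p_same = sum(joint[(i, i)] for i in faces)

# What I actually want now: "both dice land somewhere in the range 5..6":
p_both_in_range = sum(joint[(i, j)] for i in target_range for j in target_range)

print(p_same, p_both_in_range)
```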

I think it would help if you could tell me the name of the thing I am trying to do, so I can read up on it. But I would also be happy with a solution that works out of the box.

Any pointers on how I can compute this?

Cheers, and thanks for any advice.

Edit: provided an example.

Best Answer

I can't make sense of any of the statements in the question, but you're asking for terminology and that request at least I can understand and appreciate, so here's some to get you started.

I will italicize technical terms and key concepts to draw them to your attention.

The upward face of a die after it is thrown or the side of a coin that shows after it is flipped are *outcomes*. The assignment of numbers to outcomes, such as labeling the faces of a die with 1, 2, ..., 6 or the faces of a coin 0 and 1, is called a *random variable*. A range (or, generally, a set) of values for which we would like to know a probability is called an *event*.

With two dice you have two random quantities and are using two random variables to analyze them. This is a *multivariate* setting. Now the outcomes consist of ordered pairs: one outcome for the first variable, one outcome for the second. These outcomes have *joint probabilities*. The most fundamental issue concerns whether the outcomes of one are somehow connected with the outcomes of the other. When they are not, the two variables are *independent*. Throws of two dice are usually physically unrelated, so we expect them to be independent. If you are using dice as a model for some other phenomenon, though, watch out! You need to determine whether it is appropriate to treat your variables as independent.

With two independent variables $X$ and $Y$, the probabilities multiply. Specifically, let $E$ be an event for $X$ (that is, a set of numbers that $X$ might attain) and let $F$ be an event for $Y$. Then the joint probability of $E$ and $F$, meaning the probability that $X$ has a value in $E$ (written $\Pr_X[E]$) and $Y$ has a value in $F$, equals the product of the probabilities, $\Pr_X[E] \cdot \Pr_Y[F]$. When the variables are not independent, the joint probability of $E$ and $F$ may differ (considerably) from this product.
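
For concreteness, here is a minimal sketch of that product rule in Python (using scipy; the means, standard deviations, and range are made up, and the Gaussian shape is only borrowed from the question's own guess):

```python
# Product rule for independent variables: Pr[X in E and Y in F] = Pr_X[E] * Pr_Y[F].
# All parameters below are illustrative placeholders.
from scipy.stats import norm

X = norm(loc=5.0, scale=1.0)  # hypothetical distribution of the first variable
Y = norm(loc=3.0, scale=2.0)  # hypothetical distribution of the second variable

a, b = 4.5, 6.5               # the range ("event") of interest for both variables

p_X = X.cdf(b) - X.cdf(a)     # Pr_X[E]: probability that X lands in [a, b]
p_Y = Y.cdf(b) - Y.cdf(a)     # Pr_Y[F]: probability that Y lands in [a, b]

p_joint = p_X * p_Y           # valid only because X and Y are assumed independent
print(p_X, p_Y, p_joint)
```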

It sounds to me like you are asking about either the theoretical computation of joint probabilities or about how to estimate joint probabilities from observations.
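
If it is the latter, one simple sketch (with simulated paired data, purely for illustration) is to estimate the joint probability by the fraction of observed pairs in which both values land in the range; this does not rely on independence, as long as the observations really are paired:

```python
# Empirical estimate of a joint probability from paired observations:
# the fraction of pairs in which both values fall in the range of interest.
# The data below is simulated only for illustration.
import numpy as np

rng = np.random.default_rng(0)
x_obs = rng.normal(5.0, 1.0, size=10_000)  # hypothetical observations of the first variable
y_obs = rng.normal(3.0, 2.0, size=10_000)  # hypothetical observations of the second variable

a, b = 4.5, 6.5
both_in_range = (x_obs >= a) & (x_obs <= b) & (y_obs >= a) & (y_obs <= b)
print(both_in_range.mean())                # estimate of Pr[X in [a,b] and Y in [a,b]]
```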

An excellent place to get up to speed quickly on all this stuff, as well as sort out what t-tests really do, what probability distributions are, and what "Gaussian" really means, is to read through Gonick & Smith's *The Cartoon Guide to Statistics* (seriously!).
