I can't make sense of any of the statements in the question, but you're asking for terminology, and that request, at least, I can understand and appreciate, so here's some to get you started.
I will italicize technical terms and key concepts to draw them to your attention.
The upward face of a die after it is thrown, or the side of a coin that shows after it is flipped, is an *outcome*. The assignment of numbers to outcomes, such as labeling the faces of a die with 1, 2, ..., 6 or the faces of a coin with 0 and 1, is called a *random variable*. A range (or, generally, a set) of values for which we would like to know a probability is called an *event*.
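To make those three terms concrete, here is a tiny Python sketch; the pip labels and the event "at least five" are illustrative choices of mine, not anything from your question:

```python
from fractions import Fraction

# Sample space: the six outcomes of throwing a fair die.
outcomes = ["one pip", "two pips", "three pips",
            "four pips", "five pips", "six pips"]

# A random variable assigns a number to each outcome.
X = {outcome: value for value, outcome in enumerate(outcomes, start=1)}

# An event is a set of values we would like a probability for,
# here "X is at least 5".
event = {5, 6}

# With equally likely outcomes, the probability of the event is the
# fraction of outcomes whose assigned value falls in the event.
prob = Fraction(sum(1 for o in outcomes if X[o] in event), len(outcomes))
print(prob)  # 1/3
```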
With two dice you have two random quantities and are using two random variables to analyze them. This is a *multivariate* setting. Now the outcomes consist of ordered pairs: one outcome for the first variable, one outcome for the second. These outcomes have *joint probabilities*. The most fundamental issue concerns whether the outcomes of one are somehow connected with the outcomes of the other. When they are not, the two variables are *independent*. Throws of two dice are usually physically unrelated, so we expect them to be independent. If you are using dice as a model for some other phenomenon, though, watch out! You need to determine whether it is appropriate to treat your variables as independent.
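If you do need to check, one standard tool is a chi-squared test of independence on a contingency table of observed pairs. Here is a sketch using simulated throws and scipy's `chi2_contingency`; the sample size and seed are arbitrary:

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Simulate 10,000 throws of two physically unrelated dice.
first = rng.integers(1, 7, size=10_000)
second = rng.integers(1, 7, size=10_000)

# Cross-tabulate the pairs into a 6x6 contingency table of counts.
table = np.zeros((6, 6), dtype=int)
for i, j in zip(first, second):
    table[i - 1, j - 1] += 1

# A large p-value is consistent with the dice being independent.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p-value = {p_value:.3f}")
```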
With two independent variables $X$ and $Y$, the probabilities multiply. Specifically, let $E$ be an event for $X$ (that is, a set of numbers that $X$ might attain) and let $F$ be an event for $Y$. Then the joint probability of $E$ and $F$, meaning the probability that $X$ has a value in $E$ (written $\Pr_X[E]$) and $Y$ has a value in $F$, equals the product of the probabilities, $\Pr_X[E] \cdot \Pr_Y[F]$. When the variables are not independent, the joint probability of $E$ and $F$ may differ (considerably) from this product.
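Here is that multiplication rule in a short Python sketch; the events E = "even" and F = "at least five" are made-up examples:

```python
from fractions import Fraction

# Marginal distributions of two fair dice, X and Y.
pX = {v: Fraction(1, 6) for v in range(1, 7)}
pY = {v: Fraction(1, 6) for v in range(1, 7)}

# Example events: E = "X is even", F = "Y is at least 5".
E = {2, 4, 6}
F = {5, 6}

pr_E = sum(pX[v] for v in E)   # Pr_X[E] = 1/2
pr_F = sum(pY[v] for v in F)   # Pr_Y[F] = 1/3

# Under independence, the joint probability is the product.
print(pr_E * pr_F)             # 1/6
```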
It sounds to me like you are asking about either the theoretical computation of joint probabilities or about how to estimate joint probabilities from observations.
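If it is the latter, the empirical version of the multiplication rule looks like this; again simulated data and the same made-up events, just to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulated observations of two independent dice.
x = rng.integers(1, 7, size=n)
y = rng.integers(1, 7, size=n)

# The same example events: E = "X is even", F = "Y is at least 5".
in_E = np.isin(x, [2, 4, 6])
in_F = y >= 5

# Estimate the joint probability directly, and compare it with the
# product of the estimated marginals.
print(np.mean(in_E & in_F))            # near 1/6
print(np.mean(in_E) * np.mean(in_F))   # also near 1/6
```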
An excellent place to get up to speed quickly on all this stuff, as well as to sort out what t-tests really do, what probability distributions are, and what "Gaussian" really means, is to read through Gonick & Smith's *The Cartoon Guide to Statistics* (seriously!).
Statistics is concerned with phenomena that can be considered random. Even if you are studying a deterministic process, measurement noise can make the observations random. We can simplify many problems by using simple models that treat all the unobserved factors as “random noise”. For example, the linear regression model
$$ \mathsf{height}_i = \alpha + \beta \,\mathsf{age}_i + \varepsilon_i $$
does say that we model height as a function of age and treat whatever else could influence it as “random noise”. It doesn't say that we consider height completely “random”, meaning “chaotic”, “unpredictable”, etc. For another example, if you toss a coin, the outcome is deterministic and depends only on the laws of physics, but it is influenced by so many factors contributing to its chaotic behavior that we may as well treat it as a random process.
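To see the model in action, here is a minimal simulation; every number in it (the intercept 75, slope 6, noise scale 4, and the age range 2 to 10) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up “truth”: alpha = 75 cm, beta = 6 cm/year, noise sd = 4 cm.
alpha, beta, sigma = 75.0, 6.0, 4.0

age = rng.uniform(2, 10, size=200)
height = alpha + beta * age + rng.normal(0.0, sigma, size=200)  # epsilon_i

# Ordinary least squares recovers alpha and beta despite the noise term.
design = np.column_stack([np.ones_like(age), age])
coef, *_ = np.linalg.lstsq(design, height, rcond=None)
print(coef)  # approximately [75, 6]
```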
If you had a deterministic process and noiseless measurements of all the relevant data, you wouldn't need statistics for it. You would need other mathematics, calculus for example, but not statistics. If you need to account for the noise and assume randomness, you do so. Nothing “arises” from probability distributions; they are only mathematical tools we use to model real-world phenomena.