Suppose I have two events, each of which has a fixed duration and occurs once in a given fixed time window.

E1 has a duration of 10us, and occurs at a random position in a 10 minute window.

E2 has a duration of 1ms, and occurs at a random position in an 8 hour window.

How often will the two occur simultaneously?

My reasoning: to rephrase, how often will E1 occur while E2 is occurring?

At any instant, the probability of E1 being active is

`(10 x 10^-6)/600 = 16.67 x 10^-9`

E2 (1ms) has 100 times the duration of E1 (10us). The probability that E1 is active in any given 1ms period is therefore 100 times greater:

`1666.67 x 10^-9`

As E2 occurs 3 times a day, the probability of a coincidence on any given day is 3 times that:

`5 x 10^-6`

In other words, the two events will occur simultaneously once every 200 thousand days.
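For what it's worth, the chain of arithmetic above can be checked mechanically (this only verifies the numbers as stated, not whether the underlying reasoning is sound):

```python
d1 = 10e-6    # E1 duration: 10 us
w1 = 600.0    # E1 window: 10 minutes, in seconds
d2 = 1e-3     # E2 duration: 1 ms

p_instant = d1 / w1               # P(E1 active at a random instant)
p_per_e2 = p_instant * (d2 / d1)  # scaled up because E2 is 100x longer
p_per_day = 3 * p_per_e2          # E2 occurs 3 times per day
days_between = 1 / p_per_day

print(p_instant)      # ~1.667e-08
print(p_per_day)      # ~5e-06
print(days_between)   # ~200000 days
```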

Is my reasoning correct? Thanks in advance for your reply.

**EDIT**

I know (for reasons below) that my reasoning is **NOT** correct. I have tried reframing the problem as a "discrete time" problem, but the argument gets into a mess.

We can say that at any moment there is some probability of either E1 or E2 being "on". We should then be able to say that the probability of BOTH being on is just the product of the two probabilities. Hmmm. Maybe. But when we try to say in concrete terms (i.e. with time units) how often this compound event occurs, we have a problem. The division cancelled out our time units.

A colleague and friend suggests that convolution is the solution – maybe, but I haven't figured that out yet.

So what IS the correct solution? How often does the compound event occur?

**EDIT 2**

The question has been closed – I have been asked to provide additional context, or explain why the question is relevant. OK then.

The question is relevant to analysis of reliability in computing systems, particularly embedded platforms. It is possible for fault modes to exist which require two infrequent events to occur simultaneously. I had direct experience of this years ago, when a system, marketed as high reliability, turned out to have a bug which was triggered by this kind of situation. The way the system failed was so catastrophic that it actually looked very much like a hardware failure – which it was not. (If you are curious, look up what the unix command "kill -1" does.)

The events triggering the bug were probably not truly random, but they were certainly not periodic or easy to predict. The system had been on the market for some years, and it is notable that the fault was only ever reported and (eventually) diagnosed because of the persistence of one particular technician.

My theory is that because of the very infrequent nature of the coincidence of the two events, this fault would never have been caught in the normal development cycle of prototyping and validation. (Obviously it wasn't, but I mean that the normal process of "requirements testing" was not badly done.) However, once a greater number of the devices were in the market, it was likely that the fault would happen to somebody somewhere, once in a while. It may also be the case that this fault would be most unlikely to ever happen to the same user twice – and thus, that we were lucky it was ever reported (many admins might scratch their head, shrug, reboot and, after a week or two, forget about it).

This type of scenario has implications for test and validation methodology in systems where the highest levels of reliability are required.

This is the context for the question. Thank you.

## Best Answer

If an event of duration $d_1$ starts at time $t_1$ and an event of duration $d_2$ starts at time $t_2$, then they overlap if $t_2\in[t_1-d_2,t_1+d_1]$. This interval has length $d_1+d_2$, so if the events of type 2 occur at rate $\lambda_2$ (with onsets following a Poisson process), then the expected number of them that overlap with the event of type 1 is $\lambda_2(d_1+d_2)$. If the events of type 1 are happening at rate $\lambda_1$, then overlapping pairs occur at rate $\lambda_1\lambda_2(d_1+d_2)$.
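A quick Monte Carlo sanity check of this formula (a sketch with toy parameters chosen so that overlaps are frequent enough to count in a short run; the rates and durations here are illustrative, not the question's values):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1e6                  # total simulated time
lam1, lam2 = 0.01, 0.02  # onset rates of the two Poisson processes
d1, d2 = 0.5, 1.0        # event durations

# Poisson-process onset times: exponential inter-arrival gaps, summed.
# (Oversample, then truncate to [0, T).)
t1 = np.cumsum(rng.exponential(1 / lam1, size=int(lam1 * T * 1.5)))
t1 = t1[t1 < T]
t2 = np.cumsum(rng.exponential(1 / lam2, size=int(lam2 * T * 1.5)))
t2 = t2[t2 < T]

# A type-2 event overlaps a type-1 event iff t2 lies in [t1 - d2, t1 + d1].
# Count overlapping pairs via binary search on the sorted t2 array.
lo = np.searchsorted(t2, t1 - d2)
hi = np.searchsorted(t2, t1 + d1)
pairs = int((hi - lo).sum())

expected = lam1 * lam2 * (d1 + d2) * T  # = 300 for these parameters
print(pairs, expected)  # pairs should land within a few sqrt(300) of 300
```

Plugging in the question's numbers (treating the once-per-window events as Poisson with the corresponding rates): $\lambda_1 = 1/600$ per second, $\lambda_2 = 1/28800$ per second, $d_1 + d_2 = 1.01$ ms, giving about $5.0\times10^{-6}$ coincidences per day, i.e. one roughly every 200,000 days – so the question's back-of-envelope figure is very close.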