According to the problem you described, you want to set the death rate to 20% for the reference (control) group and effect.size = -3 (this sets the death rate in the treated group to 80%) in the LRPower() function:
LRPower(100, reference.group.incidence=0.2, effect.size = -3, simulation.n = 5000)
[1] 0.9948
LRPower(40, reference.group.incidence=0.2, effect.size = -3, simulation.n = 5000)
[1] 0.797
Thus, you need 20 control and 20 treated animals to distinguish an 80% death rate in the treated group from a 20% death rate in the control group with 79.7% power while holding the significance level at 0.05. In the LRPower() function, the default type I error is 0.05 and the default group.sample.size.ratio = 1.
For why you need to set effect.size = -3, please see the methods file here: https://github.com/RongUTSW/Methods/blob/master/LRPowerSimulation.pdf
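If you want to sanity-check numbers like these with your own simulation, below is a minimal log-rank power sketch in base R. This is not the LRPower() implementation, and the result depends heavily on design assumptions (follow-up length, censoring pattern) that I have filled in arbitrarily here, so do not expect it to reproduce the figures above exactly; it only illustrates the general mechanics of a simulation-based power calculation.
library(survival)
set.seed(1)
# assumed design: exponential event times, administrative censoring at the end
# of follow-up, hazards chosen so that 20% of control and 80% of treated
# animals die before the censoring time
simulate_one <- function(n_per_group = 20, p_control = 0.2, p_treated = 0.8,
                         followup = 1) {
  lam_c <- -log(1 - p_control) / followup
  lam_t <- -log(1 - p_treated) / followup
  time  <- c(rexp(n_per_group, lam_c), rexp(n_per_group, lam_t))
  group <- rep(c(0, 1), each = n_per_group)
  event <- as.numeric(time <= followup)
  time  <- pmin(time, followup)
  fit   <- survdiff(Surv(time, event) ~ group)
  1 - pchisq(fit$chisq, df = 1)   # log-rank p-value
}
pvals <- replicate(5000, simulate_one())
mean(pvals < 0.05)   # empirical power under these (arbitrary) assumptions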
My line of thought goes like this. In the Andersen-Gill (AG) model, each individual is represented by a counting process $N_i(t)$ with intensity $\lambda_i(t)$, which can be written as $\lambda_i(t) = \lambda_0(t) \exp(\beta' x_i(t))$. This implicitly assumes that only the "current" value of $x_i$ matters for the intensity. The counting process grows when an individual has an event. I also assume independent censoring (actually, stopping of the process). Note that these assumptions are implicit in the way that you fitted the model.
The probability of no events in $(s,t)$ (improperly, a "survival") is then
$$
S_i(t|s) = \exp \left( -\int_s^t \lambda_0(u) \exp(\beta'x_i(u)) du \right)
$$
Note that the probability of no events does not directly relate to the probability of one event, as the complementary event to "no events in a period" is "at least one event during a period".
Now say you are interested in $S(t|s_0)$. The idea would be to fix $x_i = x_i(s_0)$ (i.e. assume the covariates do not change after $s_0$), and we would get
$$
S_i(t|s_0) = S_i(t) / S_i(s_0)
$$
The part before $s_0$ cancels out, which, together with the assumption that only the current value of $x_i$ matters for the intensity, means that this is equal to
$$
S_i(t|s_0) = \exp \left( -\exp(\beta'x_i(s_0)) \int_{s_0}^t \lambda_0(u)du \right)
$$
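To make the cancellation explicit, write both $S_i(t)$ and $S_i(s_0)$ as integrals starting from $0$:
$$
\frac{S_i(t)}{S_i(s_0)} = \exp \left( -\int_0^t \lambda_0(u) \exp(\beta'x_i(u)) du + \int_0^{s_0} \lambda_0(u) \exp(\beta'x_i(u)) du \right) = \exp \left( -\int_{s_0}^t \lambda_0(u) \exp(\beta'x_i(u)) du \right),
$$
and since $x_i(u) = x_i(s_0)$ for $u \geq s_0$ by assumption, the factor $\exp(\beta'x_i(s_0))$ can be pulled out of the remaining integral.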
In R you can then do it like this (I give an example with a data set):
library(frailtypack)   # provides the readmission data; loads survival for coxph()
data(readmission)
# AG model on (t.start, t.stop] counting-process intervals, robust variance by id
mod1 <- coxph(Surv(t.start, t.stop, event) ~ sex + charlson + cluster(id),
              data = readmission)
# set the covariate values at s_0
mycov <- data.frame(sex = "Female", charlson = "1-2")
sf <- survfit(mod1, newdata = mycov)
# conditional curves S(t | s_0) for different choices of s_0
par(mfrow = c(2, 2))
s0_values <- c(300, 500, 700, 1000)
for (s0 in s0_values) {
  pos <- which.max(sf$time[sf$time <= s0])   # last estimated time point <= s_0
  S_s0 <- sf$surv[pos]
  with(sf, plot(time[pos:length(time)], surv[pos:length(surv)] / S_s0,
                type = "l", xlab = "t", ylab = "S(t | s0)"))
}
Here you get the plots of the survival curves, which correspond to the values
$(t, S(t|s_0))$ for $t\geq s_0$.
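If you want numbers rather than plots, you can read the same quantities off the survfit object directly; for example (times chosen here purely for illustration), the estimated probability of no events in $(500, 800]$ for the covariate values in mycov:
s0 <- 500; t1 <- 800
S_s0 <- summary(sf, times = s0)$surv
S_t1 <- summary(sf, times = t1)$surv
S_t1 / S_s0   # estimate of S(t1 | s0)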
The other comments that I would give on this are the following: it is difficult to talk about the distribution of the next event, because in the AG formulation the time since the previous event does not play any role. If you would like to take that into account, more complicated stochastic models should be used, where you could for example include the previous number of events as a time-dependent covariate (a rough sketch of the mechanics is below). This complicates things a lot, and the interpretation of the estimated quantities is most likely very difficult.
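Purely to show the mechanics with the readmission data (prev_events is a name I made up, and this assumes rows within each id are ordered by t.start; I am not recommending this model):
# count the events that occurred before each (t.start, t.stop] interval
readmission$prev_events <- ave(readmission$event, readmission$id,
                               FUN = function(e) cumsum(c(0, head(e, -1))))
mod2 <- coxph(Surv(t.start, t.stop, event) ~ sex + charlson + prev_events +
                cluster(id), data = readmission)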
The second comment I have is about the nature of the time-dependent covariates. Mostly, the AG model works nicely with "external" covariates, such as air pollution, or something that is not measured directly on the subject ("external" to the recurrent event process). This is because the first expression I wrote here, the probability of no events during $(s,t)$, relies on the assumption that the number of events in any given interval is Poisson distributed, which holds if the covariates are external. A discussion of this can be found in several textbooks, for example Cook & Lawless, Section 2.5. If your time-dependent covariates do depend on the recurrent event process, then they should be modeled jointly with the recurrent event process.
Best Answer
Yes, your power will change based on the ratio of exposed to unexposed. For example, in a recent study I did the power calculations for, at the same total sample size an Exposed:Unexposed ratio of 1:2 achieved power = 0.80 at an HR of ~1.3, whereas a ratio of 1:10 did not reach that power until an HR of ~1.6 or so.
In your case, since the sample size can vary but your HR cannot, the smaller (more unbalanced) the Exposed:Unexposed ratio, the larger your total sample size will need to be.
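As a quick illustration of that effect, here is a minimal simulation sketch (my assumptions, not your study: exponential event times, administrative censoring at t = 1, baseline hazard 1, HR = 1.5, total n = 300) comparing log-rank power for a 1:1 versus a 1:9 Exposed:Unexposed split:
library(survival)
set.seed(1)
power_at_fraction <- function(frac_exposed, n_total = 300, hr = 1.5,
                              lam0 = 1, followup = 1, nsim = 2000) {
  n1 <- round(n_total * frac_exposed)   # exposed
  n0 <- n_total - n1                    # unexposed
  mean(replicate(nsim, {
    time  <- c(rexp(n0, lam0), rexp(n1, lam0 * hr))
    group <- rep(c(0, 1), c(n0, n1))
    event <- as.numeric(time <= followup)
    time  <- pmin(time, followup)
    fit   <- survdiff(Surv(time, event) ~ group)
    (1 - pchisq(fit$chisq, df = 1)) < 0.05
  }))
}
sapply(c(0.5, 0.1), power_at_fraction)   # power drops as the split becomes more unbalanced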