I have a question regarding survival analysis. Imagine I have a large cohort in which most participants have been given one treatment (treatment A), and a much smaller subpopulation has been given another treatment (say, treatment B).

We follow both groups for 5 years. In the group receiving treatment B, no one dies; in group A, 10% die.

Now, there are obvious potential issues here around bias and sampling. But let's say both groups are well matched, apart from, say, one or two covariates. What method would be best for getting some insight into whether treatment B may actually be superior, or whether this is just a sampling effect or something else going on, particularly given the small sample size of group B? Thank you!

# Survival Analysis – Addressing No Failures in One Group

Tags: regression, survival

## Best Answer

If you don't have to correct for covariates, then you can evaluate the difference with a standard log-rank test.
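For intuition, the log-rank computation is simple enough to sketch directly. Below is a dependency-free Python version; the data are made up to mirror the question (deaths in group A, none in group B), and in practice you would just call `survdiff` from R's survival package:

```python
import math

def logrank(times, events, groups):
    """Two-sample log-rank test. groups is coded 0/1; events is 1 = death,
    0 = censored. Returns (chi-square statistic, p-value) with 1 df."""
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    o_minus_e = var = 0.0
    for t in event_times:
        n  = sum(1 for ti in times if ti >= t)                 # total at risk
        n1 = sum(1 for ti, g in zip(times, groups) if ti >= t and g == 1)
        d  = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        d1 = sum(1 for ti, e, g in zip(times, events, groups)
                 if ti == t and e == 1 and g == 1)
        o_minus_e += d1 - d * n1 / n                           # observed - expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    chi2 = o_minus_e ** 2 / var
    return chi2, math.erfc(math.sqrt(chi2 / 2))                # upper tail, 1 df

# made-up data: 10 on A (deaths at years 1 and 2), 5 on B (no deaths)
times  = [1, 2] + [5] * 8 + [5] * 5
events = [1, 1] + [0] * 8 + [0] * 5
groups = [0] * 10 + [1] * 5
chi2, p = logrank(times, events, groups)
```

Note that even with zero deaths in group B the log-rank test is perfectly well defined, because the expected counts come from the pooled risk sets; the zero-event problem only bites when you try to *estimate* a hazard ratio.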

If you are doing Cox modeling to control for covariates, the Wald test typically reported for the treatment coefficient will be useless. Therneau and Grambsch discuss this situation in Section 3.5, "Infinite Coefficients," of Modeling Survival Data: Extending the Cox Model, using a contrived example that is sure to lead to an infinite coefficient.
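Their dataset isn't reproduced here, but any data in which one treatment arm has no events show the same pathology: the partial likelihood keeps increasing as the treatment coefficient walks off to (minus) infinity, so there is no finite maximum. A small made-up illustration in Python, mimicking the Breslow partial likelihood for a single binary covariate (all numbers invented):

```python
import math

def cox_loglik(beta, times, events, x):
    """Cox (Breslow) partial log-likelihood for one binary covariate x."""
    ll = 0.0
    for ti, ei, xi in zip(times, events, x):
        if ei == 1:
            # risk set: everyone still under observation at time ti
            denom = sum(math.exp(beta * xj)
                        for tj, xj in zip(times, x) if tj >= ti)
            ll += beta * xi - math.log(denom)
    return ll

# made-up data: all deaths in the control arm (x = 0), none in the treated arm
times  = [1, 2, 3, 5, 5, 5, 5]
events = [1, 1, 1, 0, 0, 0, 0]
x      = [0, 0, 0, 1, 1, 1, 1]

# the log-likelihood increases monotonically as beta decreases
lls = [cox_loglik(b, times, events, x) for b in (-4, -2, 0, 2)]
```

The likelihood is bounded above (it approaches the value obtained by deleting the treated arm from every risk set), but the maximum is attained only in the limit. That is why the Wald test, which relies on a finite point estimate and its standard error, breaks down here while the likelihood-ratio test does not.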

I think that the likelihood-ratio test for individual coefficients is implemented directly in SAS. With R, you can examine the log-likelihood as a function of the treatment-coefficient value `beta` (the profile likelihood), as Therneau and Grambsch outline in Section 3.4.1, where they show how to do this for their contrived example.
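The idea translates directly into code: evaluate the partial log-likelihood over fixed values of `beta` (in R, e.g., via `coxph`'s `init` argument with `iter.max = 0`), and take as the confidence limit the value of `beta` at which it has dropped 3.84/2 ≈ 1.92 below its supremum. Here is a self-contained Python sketch, using made-up data in which the treated arm has no events:

```python
import math

def cox_loglik(beta, times, events, x):
    """Cox (Breslow) partial log-likelihood for one binary covariate x."""
    ll = 0.0
    for ti, ei, xi in zip(times, events, x):
        if ei == 1:
            denom = sum(math.exp(beta * xj)
                        for tj, xj in zip(times, x) if tj >= ti)
            ll += beta * xi - math.log(denom)
    return ll

# made-up data: all deaths in the control arm (x = 0), none in the treated arm
times  = [1, 2, 3, 5, 5, 5, 5]
events = [1, 1, 1, 0, 0, 0, 0]
x      = [0, 0, 0, 1, 1, 1, 1]

# supremum of the log-likelihood, approached as beta -> -infinity
ll_sup = cox_loglik(-30.0, times, events, x)

# profile-likelihood 95% limit: where the log-likelihood has dropped by
# half the 95th percentile of chi-square with 1 df, 3.8415 / 2
cutoff = ll_sup - 3.8415 / 2

lo, hi = -2.0, 0.0           # bracket found by inspecting the profile
for _ in range(60):          # bisection for the intersection point
    mid = (lo + hi) / 2
    if cox_loglik(mid, times, events, x) > cutoff:
        lo = mid             # still inside the interval
    else:
        hi = mid
upper = (lo + hi) / 2        # finite end; the interval is (-inf, upper]
```

With this coding of the indicator the estimate runs off to minus infinity, so the interval is one-sided, (-inf, upper]; with the opposite coding you get the mirror image, a finite lower limit and an infinite upper one, as in Therneau and Grambsch's example.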

The finite confidence limit is the intersection point: the value of the coefficient at which the profile log-likelihood has dropped 3.84/2 ≈ 1.92 units (half the 95th percentile of a chi-square distribution with 1 degree of freedom) below its maximum.

So in their example the 95% CI for the Cox regression coefficient would run from 2.75 to infinity.

Whether you are using a log-rank test without covariate adjustment or a Cox model with it, repeat the modeling and analysis on multiple bootstrapped samples of your data to evaluate the robustness of the result.
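A minimal sketch of that bootstrap idea, again in dependency-free Python with made-up numbers mirroring the question (roughly 10% deaths on A, none on B; the group sizes and seed are arbitrary):

```python
import math
import random

def logrank(times, events, groups):
    """Two-sample log-rank test; returns (chi2, p) with 1 df, or None
    when the variance is zero (e.g. a resample with no events)."""
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    o_minus_e = var = 0.0
    for t in event_times:
        n  = sum(1 for ti in times if ti >= t)
        n1 = sum(1 for ti, g in zip(times, groups) if ti >= t and g == 1)
        d  = sum(1 for ti, e in zip(times, events) if ti == t and e == 1)
        d1 = sum(1 for ti, e, g in zip(times, events, groups)
                 if ti == t and e == 1 and g == 1)
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    if var == 0:
        return None
    chi2 = o_minus_e ** 2 / var
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# made-up cohort: 50 on A with 5 deaths, 8 on B with none
subjects  = [(t, 1, 0) for t in (1, 2, 3, 4, 4)]       # deaths in A
subjects += [(5, 0, 0)] * 45 + [(5, 0, 1)] * 8         # censored A and B

random.seed(1)
pvals = []
for _ in range(500):
    # resample subjects with replacement and redo the whole analysis
    boot = [random.choice(subjects) for _ in subjects]
    res = logrank(*zip(*boot))
    if res is not None:
        pvals.append(res[1])

frac_significant = sum(p < 0.05 for p in pvals) / len(pvals)
```

If the apparent advantage of treatment B evaporates in a large fraction of resamples, that is evidence the original result is fragile rather than real; a result that survives most resamples is more trustworthy, though with only a handful of subjects on B the bootstrap distribution will itself be coarse.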