Yes, these are survival analysis/event history analysis data.
The time origin in survival analysis is rarely calendar time; it is usually the first day the individual was observed in the study. This affects your interpretation: intervention/treatment effects are understood to act on person-time, i.e. on the hazard function indexed by "days since start of observation", "days since diagnosis", or "days since treatment" (depending on the nature of your study design), rather than on the hazard function in calendar time (i.e. you are not trying to estimate the change in hazard due to treatment on June 3rd, 2014).
If you only followed people for 6 months, that's 180 days; unless everyone experienced readmission by 180 days, there should be some right censoring, and the survival curve should not plummet to 0 at 180 days.
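A minimal sketch of this point, with made-up data: a bare-bones Kaplan-Meier estimator (sequential event-by-event products, which agree with the usual tied-event formula) shows that subjects censored at day 180 keep the curve strictly above 0 at the end of follow-up.

```python
# Minimal Kaplan-Meier estimator (hypothetical data, illustration only).
def kaplan_meier(times, events):
    """times: follow-up time per subject; events: 1 = readmission, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    for i in order:
        if events[i]:                 # event: multiply in the survival factor
            surv *= 1 - 1 / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                  # censored subjects simply leave the risk set
    return curve

# 6 subjects followed for at most 180 days; two are censored at day 180.
times  = [30, 60, 90, 150, 180, 180]
events = [1,  1,  1,  1,   0,   0]
print(kaplan_meier(times, events)[-1])  # survival is still about 1/3 at the end, not 0
```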
We could rephrase your question as asking whether methods based on full data (i.e. noncensored data) are necessarily more efficient than methods based on observed data (i.e. censored data). This question can be answered in general by semiparametric efficiency theory.
Let $Z$ denote the full data (such as covariates and failure time). Suppose we have a data set of i.i.d. draws $Z_1, \dots, Z_n$. A full data estimator $\hat\beta$ for an estimand $\beta^*$ is asymptotically linear with influence function $\varphi^F$ if $$\sqrt{n} ( \hat\beta - \beta^*) = \frac{1}{\sqrt{n}} \sum_{i=1}^n \varphi^F(Z_i) + o_P(1).$$ Such an estimator has asymptotic variance $\mathrm{var}\left\{ \varphi^F(Z) \right\}$. Likewise, let $\mathcal{O}$ be the observed data, which denotes the full data $Z$ subject to coarsening or missingness. We can similarly define the influence function $\varphi$ for an observed data estimator.
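As a concrete (standard) instance of this definition, take the estimand $\beta^* = \mathbb{E}[Z]$ and the sample mean $\hat\beta = \bar Z_n$. Then
$$\sqrt{n}\,(\bar Z_n - \beta^*) = \frac{1}{\sqrt{n}} \sum_{i=1}^n (Z_i - \beta^*),$$
with the remainder term identically zero, so the sample mean is asymptotically linear with full data influence function $\varphi^F(Z) = Z - \beta^*$ and asymptotic variance $\mathrm{var}(Z)$.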
This suggests that we can compare the efficiency of observed data estimators and full data estimators through comparisons of their influence functions. Rather than studying the influence function of a given estimator, we can study the class of influence functions of all regular estimators of the estimand $\beta^*$.
Lemma 7.4 in Tsiatis (2006) establishes the relationship between the class of influence functions of observed data estimators and the corresponding class for full data estimators. He shows that the class of observed data influence functions equals
\begin{equation*}
\frac{I(\mathcal{C}=\infty)}{\varpi(\infty, Z)} \varphi^F(Z) + L_2(\mathcal{O}),
\end{equation*}
where $\mathcal{C}=\infty$ denotes that the full data are observed (i.e. $T \leq C$ in survival analysis), $\varpi(\infty, Z) = \mathbb{P}[\mathcal{C}=\infty \mid Z]$ is the conditional probability of observing the full data, $L_2$ is an arbitrary function satisfying $\mathbb{E}[L_2(\mathcal{O})\mid Z] = 0$, and $\varphi^F$ is an arbitrary full data influence function.
Based on this identity, we can derive the asymptotic variance of an observed data asymptotically linear estimator with influence function $\varphi$ as
\begin{align*}
& \mathrm{var} \left\{ \varphi(\mathcal{O}) \right\} \\
=\, & \mathrm{var} \left[ \mathbb{E} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] + \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
=\, & \mathrm{var} \left[ \mathbb{E} \left\{ \frac{I(\mathcal{C}=\infty)}{\varpi(\infty, Z)} \varphi^F(Z) + L_2(\mathcal{O}) \mid Z \right\} \right] + \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
=\, & \mathrm{var} \left[ \mathbb{E} \left\{ \frac{I(\mathcal{C}=\infty)}{\varpi(\infty, Z)} \varphi^F(Z) \mid Z \right\} \right] + \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
=\, & \mathrm{var} \left[ \varphi^F(Z) \right]
+ \mathbb{E} \left[ \mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\} \right] \\
\succcurlyeq\, & \mathrm{var} \left[ \varphi^F(Z) \right].
\end{align*}
This shows that any observed data estimator has asymptotic variance at least as large as that of its corresponding full data estimator. The inequality is tight exactly when the second summand is zero, i.e. when the conditional variance $\mathrm{var} \left\{ \varphi(\mathcal{O}) \mid Z \right\}$ vanishes, which happens when the observed data equal the full data. In a survival analysis setting, this shows that whenever censoring is present, observed data estimators are less efficient than full data estimators.
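The inequality can be illustrated numerically in a toy setup (not from the text above): estimating $\beta^* = \mathbb{E}[Z]$ either from full data or from coarsened data via inverse probability weighting with a known observation probability $\varpi$. The observed-data (IPW) estimator is visibly noisier across Monte Carlo replications.

```python
import random

# Toy illustration of the efficiency inequality (assumed setup, not the book's example).
random.seed(0)
pi = 0.6               # P(full data observed | Z), constant for simplicity
n, reps = 500, 2000
full_ests, ipw_ests = [], []
for _ in range(reps):
    z = [random.gauss(0, 1) for _ in range(n)]
    obs = [random.random() < pi for _ in range(n)]
    full_ests.append(sum(z) / n)                                      # full-data mean
    ipw_ests.append(sum(zi / pi for zi, o in zip(z, obs) if o) / n)   # IPW estimator

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The observed-data estimator has larger Monte Carlo variance.
print(var(full_ests) < var(ipw_ests))
```

Theory matches: here the IPW estimator's asymptotic variance is $\mathrm{var}(Z)/\varpi \approx 1.67/n$ versus $\mathrm{var}(Z) = 1/n$ for the full-data mean.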
Survival bias occurs in retrospective studies where inclusion is in some sense outcome-dependent (through outcomes or their moderators) but the sample is treated as representative of a population at risk at baseline. Your description does not give any details suggesting survival bias is an issue here.
Censoring leads to a different type of bias, censoring bias, when not properly accounted for. Your analytic plan of using a Cox model does properly account for censoring, thus eliminating censoring bias. Despite that, censoring does reduce the power of an analysis. Suppose you monitor 5,000 people but only 10 experience an outcome (death or otherwise): the Cox model does not afford much more power than a survival analysis of only 10 people.
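To make the power point concrete, a standard back-of-the-envelope rule (Schoenfeld's approximation for a two-arm Cox model with 1:1 allocation, stated here as a rough guide) says that power is driven by the number of events $d$, not the number of subjects:
$$d \approx \frac{4\,(z_{1-\alpha/2} + z_{1-\beta})^2}{(\log \mathrm{HR})^2},$$
so a cohort of 5,000 with only 10 events carries roughly the information of a very small study.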
Your description of the exposure is not exactly clear to me. It sounds like participants are eligible to participate in the study only if they have not yet begun a certain therapy. After a period of self-determined time, they begin a therapy. You then follow participants for an outcome (at the time of which they may be either on or off such a therapy).
This is an analysis that should be done using time-varying covariates, with some caveats. When I enter the study, irrespective of calendar time or age, my survival "clock" starts at time 0. If I initiate therapy at day 10 and then die at day 20, I contribute two correlated observations to the sample: in the first, I survive days 0-10 with no therapy and am censored at time 10; in the second, the clock resets at therapy initiation, I survive another 10 days on therapy, and I die at time 10 of the reset clock. Frailties are the Cox model's equivalent of random effects and allow you to account for clustered observations in such a format. If age and/or calendar year are significant predictors of survival in such a study, you should consider adding them as covariates in the model.
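The clock-reset layout described above can be sketched as a small data-construction helper (the field names here are illustrative, not from any particular package):

```python
# Sketch: split one subject into the two clock-reset (gap-time) records
# described above. Hypothetical helper; field names are illustrative.
def split_subject(therapy_start, event_time, event=True):
    """Return gap-time records: pre-therapy spell, then post-therapy spell."""
    return [
        # Spell 1: from study entry to therapy initiation, censored at initiation.
        {"start": 0, "stop": therapy_start, "on_therapy": 0, "event": 0},
        # Spell 2: clock resets at initiation; follow until the event or censoring.
        {"start": 0, "stop": event_time - therapy_start, "on_therapy": 1,
         "event": int(event)},
    ]

# Subject who starts therapy at day 10 and dies at day 20 (two rows, shared frailty).
for row in split_subject(therapy_start=10, event_time=20):
    print(row)
```

Each subject's rows share an identifier in practice, which is what the frailty (or cluster) term acts on.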
The caveat to time-varying covariates is as follows: initiation of therapy almost always depends on latent disease state. Patients whose initial hospitalization is high-acuity will initiate therapy more quickly, and will likely die more quickly even if the therapy is beneficial. This leads to use bias. If you measure indicators of disease state longitudinally (such as blood pressure, physical functioning, or others), latent variable models or marginal structural models may be used to reduce such bias.