The i.i.d. assumption about the pairs $(\mathbf{X}_i, y_i)$, $i = 1, \ldots, N$, is often made in statistics and in machine learning. Sometimes for a good reason, sometimes out of convenience, and sometimes just because we usually make this assumption. To satisfactorily answer whether the assumption is really necessary, and what the consequences are of not making it, I would easily end up writing a book (if you ever easily end up doing something like that). Here I will try to give a brief overview of what I find to be the most important aspects.
A fundamental assumption
Let's assume that we want to learn a probability model of $y$ given $\mathbf{X}$, which we call $p(y \mid \mathbf{X})$. We do not make any assumptions about this model a priori, but we make the minimal assumption that such a model exists and that
- the conditional distribution of $y_i$ given $\mathbf{X}_i$ is $p(y_i \mid \mathbf{X}_i)$.
What is worth noting about this assumption is that the conditional distribution of $y_i$ depends on $i$ only through $\mathbf{X}_i$. This is what makes the model useful, e.g. for prediction. The assumption holds as a consequence of the identically distributed part under the i.i.d. assumption, but it is weaker because we don't make any assumptions about the $\mathbf{X}_i$'s.
In the following the focus will mostly be on the role of independence.
Modelling
There are two major approaches to learning a model of $y$ given $\mathbf{X}$. One approach is known as discriminative modelling and the other as generative modelling.
- Discriminative modelling: We model $p(y \mid \mathbf{X})$ directly, e.g. a logistic regression model, a neural network, a tree or a random forest. The working modelling assumption will typically be that the $y_i$'s are conditionally independent given the $\mathbf{X}_i$'s, though estimation techniques relying on subsampling or bootstrapping make most sense under the i.i.d. or the weaker exchangeability assumption (see below). But generally, for discriminative modelling we don't need to make distributional assumptions about the $\mathbf{X}_i$'s.
- Generative modelling: We model the joint distribution, $p(\mathbf{X}, y)$, of $(\mathbf{X}, y)$ typically by modelling the conditional distribution $p(\mathbf{X} \mid y)$ and the marginal distribution $p(y)$. Then we use Bayes's formula for computing $p(y \mid \mathbf{X})$. Linear discriminant analysis and naive Bayes methods are examples. The working modelling assumption will typically be the i.i.d. assumption.
For both modelling approaches the working modelling assumption is used to derive or propose learning methods (or estimators). That could be by maximising the (penalised) log-likelihood, minimising the empirical risk or by using Bayesian methods. Even if the working modelling assumption is wrong, the resulting method can still provide a sensible fit of $p(y \mid \mathbf{X})$.
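To make the distinction concrete, here is a minimal sketch contrasting the two approaches on simulated data (the use of scikit-learn, and the specific choices of logistic regression and Gaussian naive Bayes, are illustrative assumptions on my part, not the only options): the discriminative model estimates $p(y \mid \mathbf{X})$ directly, while the generative model estimates $p(\mathbf{X} \mid y)$ and $p(y)$ and inverts with Bayes's formula.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Simulated data: the X_i are drawn arbitrarily; y_i | X_i ~ Bernoulli(p(X_i)).
N = 1000
X = rng.normal(size=(N, 2))
p = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))
y = rng.binomial(1, p)

# Discriminative: model p(y | X) directly.
disc = LogisticRegression().fit(X, y)

# Generative: model p(X | y) and p(y), then obtain p(y | X) via Bayes's formula.
gen = GaussianNB().fit(X, y)

x_new = np.array([[0.5, -0.2]])
print(disc.predict_proba(x_new))  # discriminative estimate of p(y | x_new)
print(gen.predict_proba(x_new))   # generative estimate of p(y | x_new)
```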
Some techniques used together with discriminative modelling, such as bagging (bootstrap aggregation), work by fitting many models to data sampled randomly from the dataset. Without the i.i.d. assumption (or exchangeability) the resampled datasets will not have a joint distribution similar to that of the original dataset, and any dependence structure gets "messed up" by the resampling. I have not thought deeply about this, but I don't see why that should necessarily break the method as a method for learning $p(y \mid \mathbf{X})$, at least not for methods based on the working independence assumptions. I am happy to be proved wrong here.
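For reference, here is a minimal sketch of bagging as described above: fit one model per bootstrap resample of the $(\mathbf{X}_i, y_i)$ pairs and average the predictions. The choice of decision trees as the base learner is an illustrative assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

def bagged_fit(X, y, n_models=50):
    """Fit one tree per bootstrap resample of the rows of the dataset."""
    N = len(y)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, N, size=N)  # resample pairs (X_i, y_i) with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagged_predict(models, X_new):
    """Average the individual 0/1 predictions: the fraction of trees voting 1."""
    return np.mean([m.predict(X_new) for m in models], axis=0)
```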
Consistency and error bounds
A central question for all learning methods is whether they result in models close to $p(y \mid \mathbf{X})$. There is a vast theoretical literature in statistics and machine learning dealing with consistency and error bounds. A main goal of this literature is to prove that the learned model is close to $p(y \mid \mathbf{X})$ when $N$ is large. Consistency is a qualitative assurance, while error bounds provide (semi-) explicit quantitative control of the closeness and give rates of convergence.
The theoretical results all rely on assumptions about the joint distribution of the observations in the dataset. Often the working modelling assumptions mentioned above are made (that is, conditional independence for discriminative modelling and i.i.d. for generative modelling). For discriminative modelling, consistency and error bounds will require that the $\mathbf{X}_i$'s fulfil certain conditions. In classical regression one such condition is that $\frac{1}{N} \mathbb{X}^T \mathbb{X} \to \Sigma$ for $N \to \infty$, where $\mathbb{X}$ denotes the design matrix with rows $\mathbf{X}_i^T$. Weaker conditions may be enough for consistency. In sparse learning another such condition is the restricted eigenvalue condition, see e.g. On the conditions used to prove oracle results for the Lasso. The i.i.d. assumption together with some technical distributional assumptions implies that some such sufficient conditions are fulfilled with large probability, and thus the i.i.d. assumption may prove to be a sufficient but not a necessary assumption to get consistency and error bounds for discriminative modelling.
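A quick numerical illustration of the design-matrix condition above, assuming i.i.d. rows with a zero mean and an arbitrary covariance matrix chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# With i.i.d. zero-mean rows X_i, the law of large numbers gives
# (1/N) X^T X -> Sigma = E[X_i X_i^T] as N -> infinity.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
for N in (100, 10_000, 1_000_000):
    X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=N)
    print(N, np.round(X.T @ X / N, 3))  # approaches Sigma as N grows
```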
The working modelling assumption of independence may be wrong for either of the modelling approaches. As a rough rule-of-thumb one can still expect consistency if the data comes from an ergodic process, and one can still expect some error bounds if the process is sufficiently fast mixing. A precise mathematical definition of these concepts would take us too far away from the main question. It is enough to note that there exist dependence structures besides the i.i.d. assumption for which the learning methods can be proved to work as $N$ tends to infinity.
If we have more detailed knowledge about the dependence structure, we may choose to replace the working independence assumption used for modelling with a model that captures the dependence structure as well. This is often done for time series. A better working model may result in a more efficient method.
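As a small sketch of what "capturing the dependence structure" could look like, suppose the errors in a linear regression are believed to follow an AR(1) process. One can then replace the working independence assumption by a working AR(1) covariance and use generalised least squares. This is only one of many possibilities, and in practice the autocorrelation parameter `rho` would be estimated rather than assumed known as it is here.

```python
import numpy as np

def gls_ar1(X, y, rho, sigma2=1.0):
    """Generalised least squares with a working AR(1) error covariance.

    Sigma[i, j] = sigma2 * rho**|i - j|;  beta_hat = (X' S^-1 X)^-1 X' S^-1 y.
    """
    idx = np.arange(len(y))
    Sigma = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])
    S_inv = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ S_inv @ X, X.T @ S_inv @ y)
```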
Model assessment
Rather than proving that the learning method gives a model close to $p(y \mid \mathbf{X})$ it is of great practical value to obtain a (relative) assessment of "how good a learned model is". Such assessment scores are comparable for two or more learned models, but they will not provide an absolute assessment of how close a learned model is to $p(y \mid \mathbf{X})$. Estimates of assessment scores are typically computed empirically based on splitting the dataset into a training and a test dataset or by using cross-validation.
As with bagging, a random splitting of the dataset will "mess up" any dependence structure. However, for methods based on the working independence assumptions, ergodicity assumptions weaker than i.i.d. should be sufficient for the assessment estimates to be reasonable, though standard errors on these estimates will be very difficult to come up with.
[Edit: Dependence among the variables will result in a distribution of the learned model that differs from the distribution under the i.i.d. assumption. The estimate produced by cross-validation is not obviously related to the generalization error. If the dependence is strong, it will most likely be a poor estimate.]
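One practical remedy, when the dependence has a known ordering such as time, is to use a splitting scheme that respects it. The following is a minimal sketch on simulated data with AR(1) noise; the simulation and the use of scikit-learn's `KFold` and `TimeSeriesSplit` are illustrative choices of mine, not something claimed above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, TimeSeriesSplit

rng = np.random.default_rng(3)

# Linear signal with strongly autocorrelated (AR(1)) noise.
N = 500
x = rng.normal(size=N)
eps = np.zeros(N)
for t in range(1, N):
    eps[t] = 0.9 * eps[t - 1] + rng.normal()
y = 2.0 * x + eps
X = x.reshape(-1, 1)

def cv_mse(splitter):
    """Average test mean squared error over the folds produced by `splitter`."""
    errs = []
    for train, test in splitter.split(X):
        fit = LinearRegression().fit(X[train], y[train])
        errs.append(np.mean((y[test] - fit.predict(X[test])) ** 2))
    return np.mean(errs)

print("random K-fold:", cv_mse(KFold(n_splits=5, shuffle=True, random_state=0)))
print("time-ordered :", cv_mse(TimeSeriesSplit(n_splits=5)))
```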
Summary (tl;dr)
All the above is under the assumption that there is a fixed conditional probability model, $p(y \mid \mathbf{X})$. Thus there cannot be trends or sudden changes in the conditional distribution not captured by $\mathbf{X}$.
When learning a model of $y$ given $\mathbf{X}$, independence plays a role as
- a useful working modelling assumption that allows us to derive learning methods
- a sufficient but not necessary assumption for proving consistency and providing error bounds
- a sufficient but not necessary assumption for using random data splitting techniques such as bagging for learning and cross-validation for assessment.
Understanding precisely which alternatives to i.i.d. are also sufficient is non-trivial and to some extent a research subject.
From your previous question you learned that a GLM is specified in terms of a probability distribution, a linear predictor $\eta$ and a link function $g$, and can be written as
$$
\begin{align}
\eta &= X\beta \\
E(Y|X) &= \mu = g^{-1}(\eta)
\end{align}
$$
where, in the case of logistic regression, $g$ is the logit link function and $Y$ is assumed to follow a Bernoulli distribution
$$ Y_i \sim \mathcal{B}(\mu_i) $$
Each $Y_i$ follows a Bernoulli distribution with its own mean $\mu_i$, which is conditional on $X$. We are not assuming that each $Y_i$ comes from the same distribution with the same mean (that would be the intercept-only model, where $\mu_i = g^{-1}(\beta_0)$ for all $i$), but that they all have different means. We assume that the $Y_i$'s are independent, i.e. we do not have to worry about things such as autocorrelation between subsequent $Y_i$ values, etc.
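A tiny simulation makes this concrete (the coefficients and the covariate values are arbitrary choices for illustration): each $Y_i$ is Bernoulli with its own mean $\mu_i = g^{-1}(\beta_0 + \beta_1 x_i)$, so the $Y_i$'s are independent but not identically distributed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each Y_i is Bernoulli with its own mean mu_i = g^{-1}(beta0 + beta1 * x_i),
# where g is the logit link: independent, but not identically distributed.
N = 10
x = rng.normal(size=N)
beta0, beta1 = -0.5, 2.0                     # arbitrary illustrative coefficients
mu = 1 / (1 + np.exp(-(beta0 + beta1 * x)))  # inverse logit
Y = rng.binomial(1, mu)

print(np.round(mu, 2))  # a different mean for every observation
print(Y)
```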
The i.i.d. assumption is related to errors in linear regression (i.e. Gaussian GLM), where the model is
$$
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i = \mu_i + \varepsilon_i
$$
where $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$, so we have i.i.d. noise around $\mu_i$. This is why we are interested in residual diagnostics and pay attention to the residuals vs. fitted plot. In the case of GLMs such as logistic regression it is not that simple, since there is no additive noise term as in the Gaussian model. We still want the residuals to be "random" around zero and we do not want to see any trends in them, because trends would suggest that there are effects not accounted for in the model, but we do not assume that they are normal and/or i.i.d. See also the On the importance of the i.i.d. assumption in statistical learning thread.
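As a sketch of such a residual check for a logistic model, one option is to look at Pearson residuals against fitted values. The use of statsmodels and the simulated data below are illustrative assumptions, not part of the original answer.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Simulated logistic-regression data.
N = 500
x = rng.normal(size=N)
mu = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))
y = rng.binomial(1, mu)

# Fit the GLM and look at Pearson residuals against fitted values;
# we want them scattered around zero with no visible trend.
fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()
print(np.round(np.corrcoef(fit.fittedvalues, fit.resid_pearson)[0, 1], 3))  # ideally near 0
```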
As a sidenote, notice that we can even drop the assumption that each $Y_i$ comes from the same kind of distribution. There are (non-GLM) models that assume that different $Y_i$'s can have different distributions with different parameters, i.e. that your data come from a mixture of different distributions. In such a case we would still assume that the $Y_i$ values are independent, since dependent values coming from different distributions with different parameters (i.e. typical real-world data) would in most cases be too complicated to model, and often impossible.
Best Answer
Remember that the error terms in the regression measure the deviations of the response variable from its conditional mean (given knowledge of the explanatory variables). Indeed, under the stipulated model form for regression, this is essentially the definition of what the error terms are. So you have:
$$\varepsilon_i \equiv Y_i - \mathbb{E}(Y_i|X_i).$$
Observe that each error term is a function of both the response variable and explanatory variable for that data point. Now, if these values are IID then this means that the deviations-from-the-conditional-mean are independent and identically distributed. This does not lead to an IID response variable, except in the trivial case where the explanatory variable has zero variance (i.e., it has a point-mass distribution).
With regard to your specific questions, the answers are as follows:
1. No. As you correctly point out, under the regression assumptions there is no common distribution for the response variable (and no common mean either). Other than in the trivial case where the explanatory variable has zero variance (i.e., it has a point-mass distribution) the response variable has a conditional mean that depends on the explanatory variable.
2. The latter. The response variables are conditionally independent conditional on the explanatory variable. As a shorthand we may say that the values $Y_i|X_i$ are independent (though not identically distributed).
3. This is a much stronger assumption than in regression analysis. It is equivalent to making the standard regression assumptions, but also assuming that the underlying explanatory variables are IID. Once you assume that $X_1,...,X_n \sim \text{IID}$, the regression assumptions imply that the response variable is also marginally IID, which gives the joint IID result.
In regression analysis, all your distributional assumptions are conditional on the explanatory variables, so the actual assumption is that:
$$\varepsilon_1,...,\varepsilon_n | \mathbf{X} \sim \text{IID N}(0,\sigma^2).$$
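A small simulation illustrates this: conditional on fixed $x_i$ values, the errors are IID normal, but the responses have different conditional means, so they are (conditionally) independent without being identically distributed. The coefficients and $x$ values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Conditional on the fixed x_i, the errors are IID N(0, sigma^2), but the
# responses Y_i have different conditional means beta0 + beta1 * x_i:
# they are (conditionally) independent, not identically distributed.
beta0, beta1, sigma = 1.0, 2.0, 0.5             # illustrative values
x = np.array([0.0, 1.0, 5.0])                   # fixed explanatory values
eps = rng.normal(0.0, sigma, size=(10_000, 3))  # IID error draws
Y = beta0 + beta1 * x + eps                     # repeated draws of (Y_1, Y_2, Y_3)

print(Y.mean(axis=0))  # approx. [1, 3, 11]: a different mean for each i
print(Y.std(axis=0))   # approx. [0.5, 0.5, 0.5]: the same spread for each i
```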