This is an open area of research. It helps to establish that the multinomial models are just sequences of logistic regression models that have a pooled calculation of deviance. With logistic regression, there are reasons to evaluate the sample size: 1) to achieve adequate power, and 2) to avert small sample bias.
With all models, if the sample size is small, the power will be low. However, unlike linear regression, point estimates in logistic regression models are also biased in underpowered analyses. Small sample bias describes the tendency for estimated log odds ratios in logistic regression to be biased away from zero; in the extreme case of matched-pair data analyzed unconditionally, the log odds ratio is inflated by a factor of 2. This is discussed in Breslow and Day's IARC publication Statistical Methods in Cancer Research as a justification for conditional analyses.
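You can see the bias directly by simulation. Here is a minimal sketch in plain Python (the sample size, event probabilities, and number of replicates are illustrative assumptions, not values from any paper): for a single binary exposure, the logistic regression MLE of the log OR is just log(ad/bc) from the 2x2 table, so no model fitting is needed.

```python
import math
import random

random.seed(1)

# Two arms of n subjects each; event probabilities are illustrative choices.
n, p0, p1 = 15, 0.3, 0.6
true_log_or = math.log(p1 / (1 - p1)) - math.log(p0 / (1 - p0))  # about 1.25

estimates = []
for _ in range(40000):
    a = sum(random.random() < p1 for _ in range(n))  # events in exposed arm
    c = sum(random.random() < p0 for _ in range(n))  # events in unexposed arm
    b, d = n - a, n - c                              # non-events
    if min(a, b, c, d) == 0:
        continue  # log OR is infinite when a cell is empty; drop the table
    estimates.append(math.log((a * d) / (b * c)))

mean_est = sum(estimates) / len(estimates)
print(f"true log OR = {true_log_or:.3f}, mean estimate = {mean_est:.3f}")
```

With only 15 subjects per arm, the average estimate noticeably overshoots the true value, illustrating bias away from zero in the small-sample setting.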
Small sample bias is a serious issue. It should upturn the laissez-faire research approach of "run the underpowered analysis and let the confidence intervals summarize the uncertainty; we can just pool the analyses in a meta-analysis later on". Alas, we rarely see criticisms of obviously biased ORs in papers and publications. I don't know whether current meta-analysis methods are set up to handle this type of issue.
One of the earliest and most cited articles addressing small sample bias is Peduzzi et al. (1996), which proposes the number of events per variable (EPV) as a metric for evaluating logistic regression analyses. It turns out the Hosmer-Lemeshow criterion of inspecting cross-tabulated outcome frequencies is too stringent. Peduzzi suggested an EPV of 15 as a threshold he believed would mitigate small sample bias. This means that with 30 events (and presumably a larger number of non-events), you could confidently model two predictors, even if they are highly collinear. It turns out the EPV criterion is neither sufficient nor necessary, and it has been refuted several times in the literature. One nice article by Ewout Steyerberg and Peter Austin states this with some authority, and they provide good alternative approaches. I think these are relevant to your question.
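For concreteness, here is how the EPV metric is computed (a toy helper of my own, not code from any of the cited papers; the convention of counting the rarer outcome class is the usual one):

```python
def events_per_variable(n_events: int, n_nonevents: int, n_predictors: int) -> float:
    """EPV: the count of the rarer outcome class divided by the number of
    candidate predictor parameters in the logistic model."""
    return min(n_events, n_nonevents) / n_predictors

# The example from the text: 30 events, two predictors -> EPV of 15.
print(events_per_variable(30, 170, 2))  # -> 15.0
```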
Polytomous (multinomial) logistic regression is just a sequence of logistic models. If any one of the categorical comparisons suffers from small-sample issues, the estimates for that factor level will be biased. Conveniently, the approaches outlined in the Steyerberg and Austin article can be applied to assess bias and power here as well.
Their recommendation: bootstrap the data and inspect the sampling distribution of the ORs. If it shows heavy skewness and heaping due to discretization, the analysis is underpowered and the estimates are likely substantially biased. In general, you cannot simply look at a cross-tabulated frequency and immediately assess the precision or bias of the resulting estimates; this is especially problematic with two or more predictors. The advantage of a linear model is that one is able to borrow information across groups, so that if two or more predictors are strongly collinear, accurate predictions and inference can still be obtained.
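A minimal sketch of that bootstrap check, in plain Python with made-up counts (the 2x2 cell values are illustrative assumptions): resample the data with replacement, recompute the log OR each time, and look at the spread and the heaping of the resulting distribution.

```python
import math
import random

random.seed(2)

# A small illustrative dataset of (exposure, outcome) pairs, summarized by
# a 2x2 table with cells a, b, c, d. The counts are made up for the sketch.
a, b, c, d = 6, 14, 3, 17
data = [(1, 1)] * a + [(1, 0)] * b + [(0, 1)] * c + [(0, 0)] * d

B = 5000
boot = []
for _ in range(B):
    sample = [random.choice(data) for _ in data]   # resample with replacement
    ra = sum(1 for x, y in sample if (x, y) == (1, 1))
    rb = sum(1 for x, y in sample if (x, y) == (1, 0))
    rc = sum(1 for x, y in sample if (x, y) == (0, 1))
    rd = sum(1 for x, y in sample if (x, y) == (0, 0))
    if min(ra, rb, rc, rd) == 0:
        continue  # degenerate resample: log OR is undefined
    boot.append(math.log((ra * rd) / (rb * rc)))

boot.sort()
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"{len(boot)} resamples, {len(set(boot))} distinct log OR values")
print(f"central 95% of bootstrap log ORs: ({lo:.2f}, {hi:.2f})")
```

With counts this small, the distinct-value count is far below the number of resamples (the heaping due to discretization mentioned above), and the central interval is very wide, both signs that the analysis is underpowered.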
Good question. (I wouldn't endorse the links you cited in your question or in your first comment.)
There is good information at https://stats.stackexchange.com/a/86722/162986, as @user162986 pointed out. If you are looking for a more intuitive, introductory account, read on.
Either logistic regression or log-linear analysis might be used when all variables are categorical. But it's not true that the latter method is globally preferred.
The two methods answer different types of questions. With logistic regression, one identifies a single dependent variable (response variable, outcome variable, 'Y') and analyzes its relationships with one or more predictors. One might develop a predictive equation for Y based on those predictors, and in connection with this classify each observation as taking one value or another with respect to Y. Alternatively, one might attempt to assess the strength of each predictor in causing Y, as risky as that is.
With log-linear analysis, one treats no single variable as dependent but instead assesses what, if any, patterns of relationships emerge tying together three or more variables. In this sense, log-linear analysis superficially shares something with principal components analysis and exploratory factor analysis. However, a key difference from these data reduction techniques is that log-linear analysis incorporates a statistical significance test of the relationship linking the three or more variables.
It's fair to say that log-linear analysis is "an extension of the chi-square test." Let's ask not so much "why" this would be a good thing, but "when": when we want to know the same thing about three or more variables that we would be seeking via a chi-square test of independence of two -- namely, to what degree they are (in)dependent.
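To make the "extension of the chi-square test" concrete, here is a sketch in plain Python with made-up counts (the table values and the choice of model are illustrative assumptions): it fits the "no three-way interaction" log-linear model to a 2x2x2 table by iterative proportional fitting and computes the likelihood-ratio statistic G^2, the three-variable analogue of a two-way chi-square test of independence.

```python
import math
from itertools import product

# A made-up 2x2x2 table of counts n[i][j][k] for variables X, Y, Z.
n = [[[20, 10], [15, 30]],
     [[12, 22], [18, 9]]]

# Fit the "no three-way interaction" model [XY][XZ][YZ] by iterative
# proportional fitting: repeatedly scale the fitted counts so that each
# observed two-way margin is matched.
m = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
for _ in range(200):
    for i, j in product(range(2), repeat=2):          # match the XY margin
        obs, fit = sum(n[i][j]), sum(m[i][j])
        for k in range(2):
            m[i][j][k] *= obs / fit
    for i, k in product(range(2), repeat=2):          # match the XZ margin
        obs = n[i][0][k] + n[i][1][k]
        fit = m[i][0][k] + m[i][1][k]
        for j in range(2):
            m[i][j][k] *= obs / fit
    for j, k in product(range(2), repeat=2):          # match the YZ margin
        obs = n[0][j][k] + n[1][j][k]
        fit = m[0][j][k] + m[1][j][k]
        for i in range(2):
            m[i][j][k] *= obs / fit

# Likelihood-ratio statistic G^2; for a 2x2x2 table this model leaves 1 df,
# so it is referred to a chi-square critical value of 3.84 at the 5% level.
g2 = 2 * sum(n[i][j][k] * math.log(n[i][j][k] / m[i][j][k])
             for i, j, k in product(range(2), repeat=3))
print(f"G^2 = {g2:.3f} on 1 df")
```

A large G^2 here says the association between two of the variables differs across levels of the third, exactly the kind of multi-variable (in)dependence question a two-way chi-square test cannot ask.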
The answer is 'no'. The loglinear model is more general than the logistic regression model. See Fienberg (1980), The Analysis of Cross-Classified Categorical Data, section 6.2, on how to specify a loglinear model so that it corresponds to logistic regression.
Actually the reverse is true: If all variables are categorical, then every logistic regression model corresponds to some loglinear model.
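The simplest case of this correspondence can be checked by hand. For a 2x2 table (a toy sketch with made-up counts), the MLE slope in the logistic regression of Y on a binary X and the X-by-Y interaction parameter of the saturated loglinear model with dummy coding are the same number, the log cross-product ratio log(ad/bc):

```python
import math

# A made-up 2x2 table: rows are X (exposure), columns are Y (outcome).
a, b = 30, 70   # X=1: events, non-events
c, d = 10, 90   # X=0: events, non-events

# Logistic regression of Y on binary X is saturated for a 2x2 table, so
# its MLE slope is the difference of the observed logits.
logit = lambda p: math.log(p / (1 - p))
beta = logit(a / (a + b)) - logit(c / (c + d))

# In the saturated loglinear model with dummy coding, the X-by-Y
# interaction parameter is the log cross-product ratio of the counts.
lam_xy = math.log(a) - math.log(b) - math.log(c) + math.log(d)

print(beta, lam_xy)  # both equal log(ad/bc), about 1.350 here
```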