The adjustments apply only to the standard errors of the regression coefficients, not to the point estimates of the coefficients themselves. So you can gather the requested statistics from the traditional OLS output in SPSS. The Hayes and Cai (2007) paper elaborates on this as well.
As a side note, perhaps it is a difference between fields, but I almost always see these types of standard errors referred to by the names of their originators (Huber, White, and Eicker). There are other types of "robust" estimates and standard errors, though, e.g. those estimated by the jackknife or the bootstrap. Sometimes these other estimators do yield different point estimates for the coefficients as well as different standard errors (not always, though).
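A minimal sketch in R (with simulated data; vcovHC() from the sandwich package and coeftest() from lmtest) makes the first point concrete: the robust covariance changes the standard errors but leaves the coefficient estimates untouched.
library("sandwich")
library("lmtest")
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 1 + 2 * d$x + rnorm(100, sd = abs(d$x))  ## heteroskedastic errors
fit <- lm(y ~ x, data = d)
coeftest(fit)                        ## classical standard errors
coeftest(fit, vcov. = vcovHC(fit))   ## robust standard errors, same point estimates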
The definition is completely analogous if you use the so-called working residuals and if regressors and residuals are weighted appropriately. The working weights are part of the usual iteratively weighted least squares (IWLS) algorithm employed for generalized linear models (including logistic regression).
The main idea is to use the derivative of the log-likelihood with respect to the mean or to the corresponding linear predictor as a measure of deviation. Alternatively, you can use the so-called scores or estimating functions as a measure of deviation, i.e., the derivatives of the log-likelihood with respect to the coefficients of the linear predictor. All of these ideas are employed in our sandwich package for R to enable object-oriented implementations of the different sandwich estimators. See Zeileis (2006), "Object-Oriented Computation of Sandwich Estimators", doi:10.18637/jss.v016.i09, for the details.
A worked illustration in R using a logistic regression for labor market participation is:
data("SwissLabor", package = "AER")
m <- glm(participation ~ ., data = SwissLabor, family = binomial)
The classical Eicker/Huber/White sandwich covariance matrix can be computed using our sandwich
package:
library("sandwich")
vcov_sandwich <- sandwich(m)
And by hand you can easily extract the working residuals and regressors, appropriately weighted with the working weights:
r <- residuals(m, "working") * sqrt(weights(m, "working"))
X <- model.matrix(m) * sqrt(weights(m, "working"))
Then the formula for the covariance matrix is just the same as in the linear regression model:
vcov_byhand <- solve(t(X) %*% X) %*% t(X) %*% diag(r^2) %*% X %*% solve(t(X) %*% X)
all.equal(vcov_sandwich, vcov_byhand)
## [1] TRUE
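The same matrix can also be assembled from the scores directly, using estfun() and bread() from the sandwich package, which mirrors the estimating-function view described above:
n <- nobs(m)
scores <- estfun(m)  ## derivatives of the log-likelihood w.r.t. the coefficients
vcov_scores <- bread(m) %*% crossprod(scores) %*% bread(m) / n^2
all.equal(vcov_sandwich, vcov_scores)  ## should again be TRUE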
Two caveats: (1) Of course, one shouldn't compute the sandwich like this; more efficient and numerically more stable matrix computations can be employed, as in the sketch below. (2) For data with independent binary responses, these sandwich covariances are not robust against anything. They are consistent, which is fine, but there is no way to misspecify the likelihood without also misspecifying the model equation.
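For illustration, a somewhat leaner version of the by-hand computation (still just a sketch, not what sandwich() does internally) avoids building the n x n diagonal matrix:
xtxi <- solve(crossprod(X))  ## (X'X)^{-1}, with X already weighted as above
meat <- crossprod(r * X)     ## equals t(X) %*% diag(r^2) %*% X
vcov_byhand2 <- xtxi %*% meat %*% xtxi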
Best Answer
When a model is nonlinear in its parameters (like your multinomial logit), the MLE of the parameter vector is both biased and inconsistent if the errors are heteroskedastic, unless you modify the likelihood function to take the precise form of the heteroskedasticity into account correctly. This is very different from the linear case. Similarly, the MLE of the asymptotic covariance matrix of the parameters will be inconsistent. "Correctly" and "precise" are the key words here, since it is hard to know exactly what sort of heteroskedasticity is at work. To make things worse, the bias is also in an unknown direction.
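A small simulation sketch in R (hypothetical setup, using a binary probit instead of a multinomial logit for simplicity) illustrates the inconsistency: the latent error scale depends on the regressor, and the standard MLE does not recover the true coefficients even in a large sample.
set.seed(1)
n <- 100000
x <- rnorm(n)
e <- rnorm(n, sd = exp(0.5 * x))  ## latent error scale depends on x
y <- as.numeric(1 + x + e > 0)    ## true coefficients: intercept 1, slope 1
coef(glm(y ~ x, family = binomial(link = "probit")))
## the estimates settle at pseudo-true values, not at (1, 1)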
I don't know about Biogeme, but other stats packages have commands that allow you to parameterize the heteroskedasticity.
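In R, for example, hetglm() from the glmx package fits heteroskedastic binary-response models in which the latent scale gets its own regressors (a sketch reusing the simulated data above; Biogeme's syntax, if it supports this, will differ):
library("glmx")
hfit <- hetglm(y ~ x | x, family = binomial(link = "probit"))
summary(hfit)  ## separate coefficients for the mean and the latent scale
With the log link for the scale, this specification matches the simulated data above, so it should approximately recover the intercept 1, slope 1, and scale coefficient 0.5.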