$\newcommand{\variable}{\rm variable}$The link function links a parameter of the distribution (in this example the $p$ of the Bernoulli distribution) to the linear score $\eta$ (in this example $b_0+b_1\times\variable$):
$\log(p_i/(1-p_i))=b_0+b_1\times\variable$
Such a $p$ then generates the outcomes $0$ and $1$ through the Bernoulli probability function $p_i^{y_i}(1-p_i)^{1-y_i}$.
The link function is not the link from or to the response directly.
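As a quick illustration of that point (plain base R, not yet the model below), `qlogis()` is the logit link and `plogis()` its inverse:

```r
# The logit link maps a probability p in (0, 1) to a score eta on the
# whole real line; plogis() (the logistic CDF) is the inverse link.
p   <- 0.75
eta <- qlogis(p)                      # log(p / (1 - p)) = log(3)
stopifnot(all.equal(eta, log(3)))
stopifnot(all.equal(plogis(eta), p))  # the inverse link recovers p
```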
Given the binary response $y_i$ and the covariate $x_i$, $i=1,2,\dots,n$, the likelihood for your model is
$$
L(\beta_0,\beta_1,p_\text{min},p_\text{max})=\prod_{i=1}^n p_i^{y_i}(1-p_i)^{1-y_i}
$$
where each
$$
p_i=p_\text{min} + (p_\text{max} - p_\text{min})\frac{1}{1+\exp(-(\beta_0 + \beta_1 x_i))}.
$$
Just write a function computing the log of this likelihood and apply a general-purpose optimization algorithm to maximise it numerically with respect to the four parameters. For example, in R:
# the log-likelihood as a function of par = (beta0, beta1, pmin, pmax)
loglik <- function(par, y, x) {
  beta0 <- par[1]
  beta1 <- par[2]
  pmin  <- par[3]
  pmax  <- par[4]
  # success probability with lower and upper plateaus
  p <- pmin + (pmax - pmin) * plogis(beta0 + beta1 * x)
  sum(dbinom(y, size = 1, prob = p, log = TRUE))
}
# simulated data
x <- seq(-10,10,len=1000)
y <- rbinom(n=length(x),size=1,prob=.2 + .6*plogis(.5*x))
# fit the model
optim(c(0, 0.5, 0.1, 0.9), loglik, method = "L-BFGS-B", control = list(fnscale = -1),
      y = y, x = x, lower = c(-Inf, -Inf, 0, 0), upper = c(Inf, Inf, 1, 1))
Note that when testing for evidence of a lower plateau at $p_\text{min}$ in your data, the null hypothesis $H_0:p_\text{min}=0$ lies on the boundary of the parameter space, so the approximate/asymptotic distribution of $2(\log L(\hat\theta_1)-\log L(\hat\theta_0))$ is a mixture of chi-square distributions with 1 and 0 degrees of freedom; see Self, S. G. & Liang, K.-Y. (1987), "Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under nonstandard conditions", J. Amer. Statist. Assoc., 82, 605-610.
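A minimal sketch of that boundary-corrected p-value (the function name is mine; it assumes you have already maximised the log-likelihood under both the full model and the $p_\text{min}=0$ null):

```r
# Likelihood-ratio test of H0: pmin = 0 when the null lies on the boundary.
# Per Self & Liang (1987), the LRT statistic is asymptotically an equal
# mixture of chi-square(0) (a point mass at zero) and chi-square(1), so
# the p-value is half the usual chi-square(1) tail probability.
lrt_boundary_pvalue <- function(loglik_full, loglik_null) {
  stat <- 2 * (loglik_full - loglik_null)
  0.5 * pchisq(stat, df = 1, lower.tail = FALSE)
}
lrt_boundary_pvalue(-520.3, -522.8)  # stat = 5, p ~ 0.0127
```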
In the simpler case where there is only one plateau (so $p_\text{max}=1$ or $p_\text{min}=0$), the model is equivalent to a zero-inflated binary regression model that can be fitted with, e.g., the glmmTMB R package.
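A hedged sketch of that one-plateau fit (it assumes the glmmTMB package is installed; the simulated data here has true $p_\text{max}=0.8$ and $p_\text{min}=0$):

```r
# Zero-inflated binary regression: P(y = 1) = (1 - pi) * plogis(b0 + b1*x),
# i.e. the zero-inflation probability pi plays the role of 1 - pmax.
library(glmmTMB)
set.seed(42)
d <- data.frame(x = seq(-10, 10, len = 1000))
d$y <- rbinom(nrow(d), size = 1, prob = 0.8 * plogis(0.5 * d$x))
fit <- glmmTMB(y ~ x, ziformula = ~ 1, family = binomial, data = d)
summary(fit)           # the zi intercept (logit scale) estimates pi
plogis(fixef(fit)$zi)  # back-transformed zero-inflation probability
```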
Best Answer
There are some statistical misunderstandings here.
1. The mean squared error (MSE) is primarily associated with linear (OLS) models; it isn't really used with logistic regression. For example, calculating the MSE for a model and then multiplying it by the variance-covariance matrix is something that is done in linear regression, but not logistic regression. You should not be trying to get the MSE from a glm model fitted with family=binomial.
2. The linear predictor (which I believe is what you mean by "link function" here) is not bounded by 0 and infinity; it ranges from $-\infty$ to $+\infty$.
3. The se.fit value is on the scale of the linear predictor (i.e., the log odds of $Y=1$ at $X=x_0$). It applies to both the "fitted line at point $x_0$" and the "predicted link function value of $y$ at point $x_0$", as they are the same thing.
4. In general, the standard error of a predicted point on the scale of the linear predictor needs to take into account the uncertainty of the estimated slope and intercept, and also how far the x-value of the predicted point is from the mean of $x$.