What is the appropriate way to test for significant differences between the same parameter estimate from 2 nonlinear models? An example using R – here are 2 datasets:
library(tidyverse)
library(broom)  # for tidy(); broom is not attached by library(tidyverse)
#example from ?nls
DNase1 <- subset(DNase, Run == 1)
DNase2 <- subset(DNase, Run == 2)
Both datasets can be fit with a nonlinear function using the nls() function and coefficients extracted:
## fit models and extract coefficients
m1 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), DNase1)
m1_coef <- tidy(m1) %>%
  mutate(Run = 1)
m2 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), DNase2)
m2_coef <- tidy(m2) %>%
  mutate(Run = 2)
pars <- rbind(m1_coef, m2_coef) %>%
  dplyr::filter(term == "Asym")
print(pars)
Filtering the combined results gives 2 estimates of the 'Asym' parameter, one for each condition (Run 1 & 2), one from each of the 2 models:
  term estimate std.error statistic      p.value Run
1 Asym 2.345182 0.0781541  30.00715 2.165539e-13   1
2 Asym 2.595948 0.0646589  40.14835 5.109901e-15   2
Is there a way to test whether the estimate for 'Asym' from Run 2 (2.596) is significantly different from the estimate from Run 1 (2.345)?
Best Answer
Create a model m12 with a separate set of parameters for each run and a model m0 where the parameters are the same for both runs, then compare those two nested models using an F test, e.g. with anova(m0, m12).
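A sketch of that comparison in R (the original answer's code was lost; this is one way to set it up, writing the logistic out by hand so parameters can be indexed by Run, with rough start values taken from the single-run fits):

```r
## combine the two runs and make Run a 2-level factor so it can index parameters
DNase12 <- subset(DNase, Run %in% c(1, 2))
DNase12$Run <- factor(DNase12$Run)

## m12: a separate Asym, xmid and scal for each run
m12 <- nls(density ~ Asym[Run] / (1 + exp((xmid[Run] - log(conc)) / scal[Run])),
           data = DNase12,
           start = list(Asym = c(2.3, 2.6), xmid = c(1.5, 1.5), scal = c(1, 1)))

## m0: one common set of parameters for both runs
m0 <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), data = DNase12)

## F test comparing the nested models
anova(m0, m12)
```

A small p-value here says that at least one of the three parameters differs between the runs, not specifically Asym.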
If we want to test only whether Asym differs, but not whether xmid and scal do, then create a model a0 where Asym is shared but the other parameters can differ by run, and compare it to m12 with the same F test.
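A sketch of that reduced comparison, repeating the combined data and full model so the snippet is self-contained (start values are again rough guesses):

```r
## combined data with Run as a 2-level factor
DNase12 <- subset(DNase, Run %in% c(1, 2))
DNase12$Run <- factor(DNase12$Run)

## full model: every parameter differs by run
m12 <- nls(density ~ Asym[Run] / (1 + exp((xmid[Run] - log(conc)) / scal[Run])),
           data = DNase12,
           start = list(Asym = c(2.3, 2.6), xmid = c(1.5, 1.5), scal = c(1, 1)))

## a0: a single shared Asym; xmid and scal still differ by run
a0 <- nls(density ~ Asym / (1 + exp((xmid[Run] - log(conc)) / scal[Run])),
          data = DNase12,
          start = list(Asym = 2.5, xmid = c(1.5, 1.5), scal = c(1, 1)))

## F test: does letting Asym differ by run improve the fit?
anova(a0, m12)
```

The single degree of freedom in this comparison corresponds exactly to the restriction "the two runs share one Asym", so its p-value answers the question asked.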