Yes, you can still use the delta method with correlated variables.
Let us label your function $f(\theta)$, where $\theta = (\beta, \omega)^T$ and $f(\theta) = \beta / (1-\omega)$. The delta method is based upon the Taylor expansion:
$f(\hat{\theta}) \approx f(\theta) + (\hat{\theta} - \theta)^Tf'(\theta)$
Rearranging terms and squaring both sides results in:
$(f(\hat{\theta}) - f(\theta))^2 \approx (\hat{\theta} - \theta)^Tf'(\theta)f'(\theta)^T(\hat{\theta} - \theta)$
Taking expectations (and noting that, to first order, $\mathbb{E}f(\hat{\theta}) \approx f(\theta)$, so the left-hand side is the variance):
$\text{Var}\, f(\hat{\theta}) \approx \mathbb{E}(\hat{\theta} - \theta)^Tf'(\theta)f'(\theta)^T(\hat{\theta} - \theta)$
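To make the last step explicit: the quantity inside the expectation is a scalar, so it equals its own trace, and cycling factors inside the trace gives the usual sandwich form, where $\Sigma = \text{Var}(\hat{\theta})$ is the covariance matrix of the estimates:
$\mathbb{E}(\hat{\theta} - \theta)^Tf'(\theta)f'(\theta)^T(\hat{\theta} - \theta) = \text{tr}\!\left(f'(\theta)f'(\theta)^T\,\Sigma\right) = f'(\theta)^T\Sigma\, f'(\theta)$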
Taking derivatives of $f$ gives $f'(\theta) = (\partial f/\partial\beta,\ \partial f/\partial\omega)^T = \left(1/(1-\omega),\ \beta/(1-\omega)^2\right)^T$; evaluating at $\hat{\theta}$ and forming the outer product gives:
$f'(\hat{\theta})f'(\hat{\theta})^T = \frac{1}{(1-\hat{\omega})^2}
\begin{bmatrix}
1 & \hat{\beta} / (1 - \hat{\omega}) \\
\hat{\beta} / (1 - \hat{\omega}) & \hat{\beta}^2 / (1 - \hat{\omega})^2
\end{bmatrix}
$
Writing out the full expression for $\text{Var}f(\hat{\theta})$ and substituting estimates:
$\widehat{\text{Var}} f(\hat{\theta}) = \frac{1}{(1-\hat{\omega})^2}(\hat{\sigma}^2_{\beta} + 2\hat{\sigma}_{\beta \omega} \hat{\beta} / (1-\hat{\omega}) + \hat{\sigma}^2_{\omega}\hat{\beta}^2 / (1 - \hat{\omega})^2)$
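As a quick sanity check of the algebra, here is a minimal numerical sketch that computes the variance both ways, via the general quadratic form $f'(\hat{\theta})^T\hat{\Sigma}f'(\hat{\theta})$ and via the closed-form expression above. The coefficient estimates and covariance entries below are made-up placeholders, not values from your model:

```python
import numpy as np

# Hypothetical estimates -- replace with your own regression output.
beta_hat, omega_hat = 0.8, 0.5
Sigma_hat = np.array([[0.04, 0.01],   # Var(beta_hat),        Cov(beta_hat, omega_hat)
                      [0.01, 0.02]])  # Cov(omega_hat, beta_hat), Var(omega_hat)

# Gradient of f(beta, omega) = beta / (1 - omega), evaluated at the estimates.
grad = np.array([1.0 / (1.0 - omega_hat),
                 beta_hat / (1.0 - omega_hat) ** 2])

# General delta-method variance: f'(theta)^T Sigma f'(theta).
var_quadratic = grad @ Sigma_hat @ grad

# Closed-form expression from the text.
var_closed = (Sigma_hat[0, 0]
              + 2 * Sigma_hat[0, 1] * beta_hat / (1 - omega_hat)
              + Sigma_hat[1, 1] * beta_hat ** 2 / (1 - omega_hat) ** 2
              ) / (1 - omega_hat) ** 2

print(var_quadratic, var_closed)            # identical up to floating-point error
print("delta-method SE:", np.sqrt(var_quadratic))
```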
You can see that positive correlation between $\hat{\beta}$ and $\hat{\omega}$ increases the variance of the estimated long-run effect. Intuitively, it corresponds to negative correlation between the estimates of $\beta$ and $1 - \omega$, the numerator and denominator of the long-run effect, so the estimated numerator and denominator tend to move in opposite directions, which naturally increases variability relative to the uncorrelated case.
Note that the delta method can fail miserably, so you might want to check its performance via simulation: specify all the parameters, create many data sets with different errors, estimate the long-run effect for each data set, calculate the standard deviation of those estimates, and compare it to the delta-method standard error estimates across the data sets.
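A minimal sketch of such a check, assuming a simple dynamic regression $y_t = \beta x_t + \omega y_{t-1} + \varepsilon_t$ as the data-generating process (this model and all the parameter values below are illustrative assumptions, not taken from your setup):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, omega, sigma = 0.8, 0.5, 1.0   # assumed "true" parameters
n, n_sims = 200, 1000

estimates, delta_ses = [], []
for _ in range(n_sims):
    # Simulate one data set from y_t = beta * x_t + omega * y_{t-1} + eps_t.
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = beta * x[t] + omega * y[t - 1] + sigma * rng.normal()

    # OLS of y_t on (x_t, y_{t-1}).
    X = np.column_stack([x[1:], y[:-1]])
    XtX_inv = np.linalg.inv(X.T @ X)
    theta_hat = XtX_inv @ X.T @ y[1:]
    resid = y[1:] - X @ theta_hat
    Sigma_hat = XtX_inv * (resid @ resid) / (len(resid) - X.shape[1])

    # Long-run effect estimate and its delta-method standard error.
    b, w = theta_hat
    grad = np.array([1 / (1 - w), b / (1 - w) ** 2])
    estimates.append(b / (1 - w))
    delta_ses.append(np.sqrt(grad @ Sigma_hat @ grad))

print("Monte Carlo SD of long-run estimates:", np.std(estimates))
print("mean delta-method SE:                ", np.mean(delta_ses))
```

If the Monte Carlo standard deviation and the average delta-method standard error are close, the approximation is doing its job for your sample size and parameter values.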
1. In the long run the first differences are taken to be zero and the model reduces to $\gamma_1 y + \gamma_2 x = 0$, which is the long-run relationship between the variables. The $\gamma$'s define this long-run relationship; the $\beta$'s determine the short-run adjustment to this equilibrium.
2. See 1. The $\beta$'s are a measure of the persistence of the variable; I don't think you need to pay particular attention to individual values. In the context of the model, the long-run relationship can be interpreted as your panel equation. There is no set rule determining the short and long run. One can estimate the half-life of a disturbance to equilibrium from the estimated coefficients (see the sketch after this list); this will be different for every model.
3. I am not sure that I understand this question. Are you differentiating between a model where the constant is constrained to the ECM and one where the constant is unrestricted?
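On the half-life point in 2.: if deviations from the long-run equilibrium decay geometrically at the rate implied by an error-correction coefficient $\alpha$ (so a deviation shrinks by a factor $1 + \alpha$ each period, with $-1 < \alpha < 0$; this simple adjustment dynamic is an assumption for illustration), the half-life is $\ln(0.5)/\ln(1+\alpha)$:

```python
import math

def ecm_half_life(alpha):
    """Half-life of a deviation from equilibrium, assuming the deviation
    decays geometrically by a factor (1 + alpha) per period, -1 < alpha < 0."""
    if not -1 < alpha < 0:
        raise ValueError("alpha must lie in (-1, 0) for stable adjustment")
    return math.log(0.5) / math.log(1 + alpha)

print(ecm_half_life(-0.2))  # ~3.1 periods for a hypothetical alpha = -0.2
```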
Best Answer
Below are my responses to your two questions.
> Don't these models suffer from omitted variable bias?
>
> What allows time series studies to use only one independent variable, as compared to cross-sectional and panel studies that rarely ever use fewer than two independent variables? Do time series models have a property that allows researchers to use just one independent variable?
With regard to your specific question on the number of independent variables, according to this wonderful article:
The same article also provides a real-world example of omitted variable bias. Bottom line: use domain knowledge, the available literature, experimental evidence, and expert input to select the number of variables.
In addition, I would use a transfer function within the ARIMA framework, which is a general form of ARIMA and incorporates AR/ARMA, ARDL, and other time series regressions.
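Transfer function models in full generality include distributed lags of the input; the sketch below shows only the simplest case, a static regression with ARMA(1, 1) errors, using statsmodels' SARIMAX. The series `y` and `x` are simulated placeholders (an assumption for illustration, not data from the original question):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Placeholder data -- substitute your own series.
n = 200
x = rng.normal(size=n)
noise = np.convolve(rng.normal(size=n), [1, 0.5])[:n]  # MA(1)-style errors
y = 2.0 * x + noise

# Regression of y on x with ARMA(1, 1) errors: a simple transfer-function form.
model = sm.tsa.SARIMAX(y, exog=x, order=(1, 0, 1))
result = model.fit(disp=False)
print(result.summary())
```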