Question 1
If your outcome variable is integrated, you might consider using a single-equation generalized error correction model (GECM), as per Banerjee et al. (1993) and De Boef (2001), since this model is agnostic to the stationarity of the predictors.
You might evaluate the stationarity of your outcome using:
$\log{(GDP/Labor)_{ti}} \sim \rho_{i}\log{(GDP/Labor)_{t-1i}} + \zeta_{ti} + \mu_{\rho_{i}}$,
where:
$\zeta_{ti}$ measures all disturbances to $\log{(GDP/Labor)_{ti}}$ in each time $t$ (assumed distributed normal), and
$\mu_{\rho_{i}}$ measures state-level variation in $\log{(GDP/Labor)_{ti}}$ (assumed distributed normal).
If $|\rho_{i}| \approx 1$, then you've got nearly integrated data and the GECM is appropriate; it also has the attractive property of disentangling long-run effects from both instantaneous short-run effects and lagged short-run effects.
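Concretely, that stationarity check can be sketched in a few lines of Python (simulated data standing in for $\log{(GDP/Labor)}$; in practice you would fit one $\rho_{i}$ per state):

```python
import numpy as np

def estimate_rho(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + intercept + error."""
    X = np.column_stack([y[:-1], np.ones(len(y) - 1)])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef[0]

# Simulate one unit's near-integrated series (true rho = 0.98) as a
# stand-in for log(GDP/Labor) for a single state i.
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, len(y)):
    y[t] = 0.98 * y[t - 1] + rng.normal(scale=0.1)

rho_hat = estimate_rho(y)  # an estimate close to 1 suggests (near-)integration
```

Note that OLS estimates of $\rho$ are biased toward zero in small samples, so a point estimate slightly below 1 is still consistent with near-integration.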
The general form of the single equation GECM is:
$\Delta y_{t} = \beta_{0} + \beta_{c}\left[y_{t-1}-\mathbf{X}_{t-1}\right] + \mathbf{B}_{\Delta\mathbf{X}}\Delta\mathbf{X}_{t} + \mathbf{B}_{\mathbf{X}}\mathbf{X}_{t-1} + \varepsilon$,
where:
$\Delta$ is the first difference operator (e.g. $\Delta y_{t} = y_{t} - y_{t-1}$), and $\varepsilon$ may be decomposed into mixed effects (e.g. by including $\beta_{0i}$, for country-level random intercepts).
instantaneous short run effects are given by $\beta_{\Delta\mathbf{X}}$,
lagged short run effects are given by $\beta_{\mathbf{X}} - \beta_{c} - \beta_{\Delta\mathbf{X}}$, and
long run effects are given by $\left(\beta_{c}-\beta_{\mathbf{X}}\right)/\beta_{c}$.
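As a sketch of how this is estimated (simulated single-unit data; the variable names are mine, not from the question): regress $\Delta y_{t}$ on $y_{t-1}$, $\Delta x_{t}$, and $x_{t-1}$ by OLS. Expanding the bracket, the coefficient on $x_{t-1}$ in that regression is $\beta_{\mathbf{X}}-\beta_{c}$, so the long-run effect $\left(\beta_{c}-\beta_{\mathbf{X}}\right)/\beta_{c}$ is minus that coefficient divided by the coefficient on $y_{t-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
x = np.cumsum(rng.normal(size=T))   # an I(1) predictor (random walk)
y = np.zeros(T)
for t in range(1, T):
    # generate y from a known ECM:
    # dy_t = -0.3*(y_{t-1} - x_{t-1}) + 0.5*dx_t + noise
    y[t] = y[t - 1] - 0.3 * (y[t - 1] - x[t - 1]) \
           + 0.5 * (x[t] - x[t - 1]) + rng.normal(scale=0.1)

dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([y[:-1], dx, x[:-1], np.ones(T - 1)])
b, *_ = np.linalg.lstsq(Z, dy, rcond=None)
b_c, b_dx, b_lag = b[0], b[1], b[2]   # b_lag estimates beta_X - beta_c

instantaneous = b_dx                  # short-run effect of a change in x
long_run = -b_lag / b_c               # (beta_c - beta_X)/beta_c; near 1 here
```

Here the error-correction coefficient `b_c` comes out near $-0.3$ and the long-run multiplier near 1, matching the data-generating process.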
This specification assumes a homogeneity of error correction processes. I haven't yet tried to derive a heterogeneous error correction specification...
In Stata you can perform Hadri's panel-data unit-root test on the residuals of such a model to check them for stationarity.
Question 2
I do not know that I can say much useful here.
Question 3
The time dummies can be included in the GECM, and presumably in other dynamic time-series models; often they are used as indicators of, for example, policies going into effect. I have done something similar, but used (time-varying) proportions rather than 0/1 indicator variables to represent the portion of the time period during which a policy was in effect (e.g., some policies go into effect January 1, some July 1, some December 21, etc.). On the other hand, you don't have tons of data, so I suppose it depends on how many new variables you are adding.
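That proportion idea can be sketched with the standard library (the helper name and dates are hypothetical, for a yearly panel and a policy that stays in force once enacted):

```python
from datetime import date

def policy_fraction(year, effective):
    """Fraction of calendar `year` during which a policy that takes
    effect on `effective` (and remains in force) was active."""
    start, end = date(year, 1, 1), date(year + 1, 1, 1)
    covered = (end - max(start, effective)).days
    return min(max(covered / (end - start).days, 0.0), 1.0)

# A policy effective July 1, 2010:
policy_fraction(2009, date(2010, 7, 1))  # 0.0
policy_fraction(2010, date(2010, 7, 1))  # 184/365, about 0.50
policy_fraction(2011, date(2010, 7, 1))  # 1.0
```

The resulting variable replaces the 0/1 dummy in the regression, so a policy enacted mid-period contributes proportionally rather than being rounded to a full period.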
References:
Banerjee, A., Dolado, J. J., Galbraith, J. W., and Hendry, D. F. (1993). Co-integration, error correction, and the econometric analysis of non-stationary data. Oxford University Press, USA.
De Boef, S. (2001). Modeling equilibrium relationships: Error correction models with strongly autoregressive data. Political Analysis, 9(1):78–94.
There are three main regression approaches for panel data: pooled OLS, fixed effects, and random effects.
To decide whether pooled OLS is adequate, you need to test for individual effects in the error term. If the individual component of the error has nonzero mean and variance, then heterogeneous individual effects are present and the pooled OLS estimators will be biased. You can check this with the Breusch–Pagan LM test; if it detects individual effects, you need a further method of regression.
If that happens, you'll need to choose between fixed and random effects on top of your OLS estimation, and you do that with the Hausman test.
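The pooled-OLS bias can be illustrated with a minimal numpy sketch (simulated data, not part of the answer's workflow): when the individual effects are correlated with a regressor, pooled OLS is biased, while the within (fixed effects) transform recovers the true slope:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, beta = 100, 10, 1.5
alpha = rng.normal(size=(n, 1))             # individual effects
x = alpha + rng.normal(size=(n, t))         # regressor correlated with alpha
y = beta * x + alpha + rng.normal(size=(n, t))

def slope(x, y):                            # bivariate OLS slope
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (xc ** 2).sum()

b_pooled = slope(x.ravel(), y.ravel())      # biased upward (plim = 2.0 here)
xw = x - x.mean(axis=1, keepdims=True)      # within (demeaning) transform
yw = y - y.mean(axis=1, keepdims=True)
b_within = slope(xw.ravel(), yw.ravel())    # close to the true beta = 1.5
```

In this design $\mathrm{Var}(\alpha)=\mathrm{Var}(v)=1$, so the pooled slope converges to $\beta + \mathrm{Cov}(x,\alpha)/\mathrm{Var}(x) = 1.5 + 0.5 = 2.0$, while demeaning removes $\alpha$ entirely.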
Under these conditions, there are different views in panel data modeling about how to treat unit roots and cointegration. I like to follow what Park (2011) and similar sources say: no unit-root or cointegration tests are needed to model under fixed/random effects.
However, under other estimation methods I believe it's better to test the variables for unit roots/stationarity, especially when $T$ is of the same order as $N$, since spurious regression may otherwise result. If a formal unit-root test rejects, your data are stationary; if the null is not rejected, cointegration should be tested. The resulting regressions will then be somewhat more accurate.
The idea behind testing for unit roots and cointegration derives from the assumptions of the regression model. Random effects, for example, has three core assumptions according to Wooldridge, and none of them relates to time-series unit roots, except the one requiring $u_{i}$ not to be correlated with $u_{j}$ for any time period, including $i = j$. Cross-sectional dependence may lead to biased estimators, in which case the common correlated effects (CCE) estimator would be better, and there unit roots and cointegration should be tested.
As you can see, panel data analysis is well developed even without this battery of time-series tests, so papers often don't bother testing for stationarity or unit roots under the random/fixed effects methods, or even pooled OLS.
I'd like to stay with the basics from what Park says about modelling.
Park, Hun Myoung (2011). Practical Guides to Panel Data Modeling: A Step-by-Step Analysis Using Stata. International University of Japan.
Best Answer
At the current moment (version 1.2-10, 2012-05-05) it seems that the unbalanced case is not supported. Edit: The issue of unbalanced panel data is solved in version 2.2-2 of plm on CRAN (2020-02-21).
Rest of the answer is assuming version 1.2-10:
I've looked at the code, and the final data-preparation line (no matter what your initial argument is) is the following:
If you pass an unbalanced panel, this line will make it balanced by repeating the same values. If your unbalanced panel has time series with lengths that divide each other, then not even an error message is produced. Here is the example from the purtest help page:
This panel is balanced:
Unbalance it:
Two different time-series lengths in the panel:
No error message:
Another unbalanced panel:
And the error message: