I believe these are largely historical reasons. In the 1940s, one had to conduct analysis of variance with paper and pencil, so balanced designs led to simple sums for both means and variances. Any imbalance would require inverting matrices of size 4×4 or larger (I've done it a couple of times on regression exams, and nearly always screwed up). In the 1960s, when panel/longitudinal data first came to researchers' attention (probably with the PSID), one could likely already run a regression with no structure on the errors reasonably easily, but running GLS required heroic efforts, let alone unbalanced GLS. These days, as Dimitriy said, there aren't any issues: all estimators are computed in their general form, with fully general matrix inversion operations running in the background anyway.
Also, with balanced data sets, you can easily run models with panel autoregressions. With unbalanced panels, these will likely get trickier. I don't think that these models are actually that popular.
It's not very clear what it is you want to implement. But I think you want a dynamic panel model that includes an exogenous variable $x \in (0,1)$. I think your $x$ is a continuous variable (as opposed to a binary variable), is that correct?
If that's correct, then you're contemplating a fixed effects model like,
$$
y_{it} = \alpha y_{i(t-1)} + \beta x_{it} + \eta_i + \epsilon_{it}
$$
where $\eta_i$ is the time-invariant fixed effect and $\epsilon_{it}$ is the i.i.d. error.
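For concreteness, this data-generating process can be simulated in a few lines. Here is a minimal sketch (in Python rather than Stata; the panel dimensions and the parameter values $\alpha = 0.5$, $\beta = 1$ are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions of a hypothetical panel: N firms observed for T periods.
N, T = 200, 10
alpha, beta = 0.5, 1.0                    # illustrative coefficient values

eta = rng.normal(size=N)                  # time-invariant fixed effects eta_i
x = rng.uniform(size=(N, T))              # exogenous regressor bounded in (0, 1)
eps = rng.normal(scale=0.5, size=(N, T))  # i.i.d. errors eps_it

# Build y recursively from the dynamic equation above.
y = np.zeros((N, T))
y[:, 0] = eta + beta * x[:, 0] + eps[:, 0]   # initial period (no lag available)
for t in range(1, T):
    y[:, t] = alpha * y[:, t - 1] + beta * x[:, t] + eta + eps[:, t]
```

With $|\alpha| < 1$ the process is stable, so the simulated series does not explode over the panel's time dimension.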
If I've understood you correctly then, in answer to your specific questions:
1) Yes, dynamic panels can be easily implemented in Stata with Roodman's xtabond2. The Stata Journal paper explaining it is here.
2) A continuous variable bounded between 0 and 1 is no problem at all; in fact, it's quite common (e.g., employment rate data).
3) Of course you can drop the $y_{i(t-1)}$ term and just fit a non-dynamic panel model. But whether you should depends on a number of things, including what correlational relationship you are interested in analyzing, what (if any) theoretical model you are operationalizing, and whether your data have a dynamic/autoregressive structure (which requires pretesting and diagnostic testing that you should always be doing anyway).
A short and sweet overview is here.
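To illustrate the non-dynamic alternative in item 3: the fixed effect $\eta_i$ can be swept out with the within transformation, after which plain OLS on the demeaned data recovers $\beta$. A minimal sketch (in Python rather than Stata; the panel dimensions and the value $\beta = 2$ are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 300, 6
beta = 2.0                                 # illustrative true coefficient

eta = rng.normal(size=(N, 1))              # fixed effects eta_i
x = rng.uniform(size=(N, T))               # regressor bounded in (0, 1)
y = beta * x + eta + rng.normal(scale=0.3, size=(N, T))

# Within (fixed effects) transformation: subtract each unit's time mean,
# which removes the time-invariant eta_i from both sides.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)

# Pooled OLS on the demeaned data estimates beta.
beta_hat = (x_dm.ravel() @ y_dm.ravel()) / (x_dm.ravel() @ x_dm.ravel())
```

In Stata the same estimate would come from `xtreg y x, fe`; the sketch just makes explicit what that command does internally.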
The differences are largely historical and related to the matrix algebra involved. However, this was only a concern when econometrics had to be done with pen and paper back in the day; today these technicalities barely matter. A discussion of this is provided in an earlier answer by StasK, which you can find here.
The main concern with unbalanced panel data is why the data are unbalanced. If observations are missing at random, then this is not a problem - for a good explanation of what "missing at random" means, have a look at this answer by Peter Flom. If the attrition of firms in your data over time is not random, i.e. if it is related to the idiosyncratic errors $u_{it}$, then this sample selection may bias your estimates. For an example of such a case, see here (the introductory textbook by Wooldridge; the example is also about panel data for firms, as in your case).
A simple test for such sample selection was proposed by Nijman and Verbeek (1992) for fixed and random effects models. Generate a selection indicator $s_{it}$ which equals one if a firm is observed in a given year and zero otherwise. Add the lagged selection indicator $s_{i,t-1}$ to your model and estimate it via fixed effects on the whole data set. Then test whether $s_{i,t-1}$ is significant. Under the null hypothesis, the error $u_{it}$ is uncorrelated with the lagged selection indicator, so $s_{i,t-1}$ should be insignificant in order to conclude that attrition is random.
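The mechanics of this test can be sketched as follows (in Python rather than Stata; the data are simulated, missingness is random by construction so the test should not reject, and all names and parameter values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 500, 8
beta = 1.5                                 # illustrative true coefficient

eta = rng.normal(size=(N, 1))              # fixed effects eta_i
x = rng.uniform(size=(N, T))               # regressor bounded in (0, 1)
u = rng.normal(scale=0.5, size=(N, T))     # idiosyncratic errors u_it
y = beta * x + eta + u

# Selection indicator s_it: 1 if firm i is observed in year t.  Here the
# missingness is random and unrelated to u_it, so the test should NOT reject.
s = rng.binomial(1, 0.8, size=(N, T)).astype(float)

rows, n_firms = [], 0
for i in range(N):
    ts = [t for t in range(1, T) if s[i, t] == 1]  # observed periods, t >= 1
    if len(ts) < 2:
        continue
    n_firms += 1
    yi, xi = y[i, ts], x[i, ts]
    si = s[i, [t - 1 for t in ts]]                 # lagged indicator s_{i,t-1}
    # Within transformation over firm i's observed periods.
    rows.append(np.column_stack([yi - yi.mean(),
                                 xi - xi.mean(),
                                 si - si.mean()]))

Z = np.vstack(rows)
yd, Xd = Z[:, 0], Z[:, 1:]

# Pooled OLS on the demeaned data; t-statistic on the s_{i,t-1} coefficient.
b, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
resid = yd - Xd @ b
dof = len(yd) - Xd.shape[1] - n_firms   # demeaning uses one df per firm
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
t_slag = b[1] / np.sqrt(cov[1, 1])      # should be insignificant here
```

If `t_slag` were instead large in absolute value (beyond conventional critical values), that would be evidence that selection is related to the errors and attrition is not random.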
If you want to learn more about this topic, Wooldridge (2010) "Econometric Analysis of Cross-Section and Panel Data" devotes an entire chapter (ch. 19) to sample selection and attrition.