Poisson regression is not equivalent to a Weibull survival model. Instead, it assumes an exponential distribution, in which the baseline hazard is not only proportional but constant.
The Weibull model relaxes this assumption somewhat. The two models will give you the same answer if the underlying survival distribution really is exponential, since the exponential distribution is a special case of the Weibull distribution, but they need not agree under other circumstances.
It appears one such circumstance is your data.
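You can check the special-case relationship numerically. A small sketch with scipy (the scale value is arbitrary, just for illustration): with shape parameter $k=1$ the Weibull density coincides with the exponential density, and with $k\neq 1$ it does not (the hazard is no longer constant).

```python
import numpy as np
from scipy import stats

x = np.linspace(0.1, 5.0, 50)
scale = 2.0  # arbitrary scale, same for both distributions

# Weibull with shape parameter k = 1 is exactly the exponential distribution.
weib_pdf = stats.weibull_min.pdf(x, c=1.0, scale=scale)
expo_pdf = stats.expon.pdf(x, scale=scale)
print(np.allclose(weib_pdf, expo_pdf))   # True

# With shape k != 1, the densities (and hazards) diverge.
weib2_pdf = stats.weibull_min.pdf(x, c=2.0, scale=scale)
print(np.allclose(weib2_pdf, expo_pdf))  # False
```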
Not to be critical, but this is kind of a strange example. It's not clear that you're really doing time series analysis, nor what the NASDAQ would have to do with the number of games won by some team. If you're interested in saying something about the number of games a team won, I think it would be best to use binary logistic regression, given that you presumably know how many games were played. Poisson regression is most appropriate for counts when the total possible is not well constrained, or at least not known.
How you would interpret your betas depends, in part, on the link used--it is possible to use the identity link, even though the log link is more common (and typically more appropriate). If you are using the log link, you probably wouldn't take the log of your response variable--the link in essence is doing that for you. Let's take an abstract case: you have a Poisson model using the log link, as follows:
$$
\hat{y}=\exp(\hat{\beta}_0)\times\exp(\hat{\beta}_1)^x
$$
alternatively,
$$
\hat{y}=\exp(\hat{\beta}_0+\hat{\beta}_1x)
$$
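These two forms are algebraically identical, since $e^a(e^b)^x=e^{a+bx}$. A quick numeric check with made-up coefficients:

```python
import numpy as np

# Hypothetical coefficients and covariate value, just for illustration.
b0, b1 = 0.5, 2.0
x = 3.0

form1 = np.exp(b0) * np.exp(b1) ** x  # exp(b0) * exp(b1)^x
form2 = np.exp(b0 + b1 * x)           # exp(b0 + b1*x)
print(np.isclose(form1, form2))       # True
```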
(EDIT: I'm removing the "hats" from the betas in what follows, because they're ugly, but they should still be understood.)
With normal OLS regression, you are predicting the mean of a Gaussian distribution of the response variable conditional on the values of the covariates. In this case, you are predicting the mean of a Poisson distribution of the response variable conditional on the values of the covariates. For OLS, if a given case were 1 unit higher on your covariate, you expect, all things being equal, the mean of that conditional distribution to be $\beta_1$ units higher. Here, if a given case were 1 unit higher, ceteris paribus, you expect the conditional mean to be $e^{\beta_1}$ times higher. For instance, say $\beta_1=2$, then in normal regression it is 2 units higher (i.e., $+2$), and here it is 7.4 times higher (i.e., $\times\,7.4$). In both cases, $\beta_0$ is your intercept; in our equation above, consider the situation when $x=0$: then $\exp(\beta_1)^x=1$, and the right hand side reduces to $\exp(\beta_0)$, which gives you the mean of $y$ when all covariates equal 0.
There are a couple of things that can be confusing about this. First, predicting the mean of a Poisson distribution isn't the same as predicting the mean of a Gaussian. With a normal distribution, the mean is the single most likely value. But with the Poisson, the mean is often an impossible value (e.g., if your predicted mean is 2.7, that's not a count that could exist). In addition, normally the mean is unrelated to the level of dispersion (i.e., the SD), but with the Poisson distribution, the variance necessarily equals the mean (although it often doesn't in practice, leading to additional complexities). Finally, those exponentiations make it more complicated; if, instead of a relative change, you wanted to know the exact value, you would have to start at the value for $x=0$ (i.e., $e^{\beta_0}$) and multiply by $e^{\beta_1}$ repeatedly, $x$ times. For predicting a specific value, it's easier to solve the expression inside the parentheses in the bottom equation and then exponentiate; this makes the meaning of the beta less clear, but the math easier and reduces the possibility of error.
Best Answer
The transform on $X$ is not a key difference between the two methods because, as you have noticed, you can also do it in Poisson regression without a problem. The essential difference is in how $Y$ is handled: transforming the response is not the same as using a link function in a GLM. You see the difference clearly when you write the formulas as conditional means.
The linear model with a log-transformed response is:
$$E(\log(Y)|X)=\beta_0+\beta_1X$$
The GLM with a log link (as in Poisson regression) is:
$$\log(E(Y|X))=\beta_0+\beta_1X$$
Even though they look similar, they are not the same at all (because $\log$ is not linear).
If you are interested in estimating $E(Y|X)$ without bias, then the GLM is the model to choose. With a transformed linear model, there is a (usually strong) bias. The subtlety lies in how the noise ($\epsilon$) passes through the nonlinear transformation: $\epsilon$ has mean 0 on the log scale, but after transforming back, $E(e^\epsilon)\neq 1$ (by Jensen's inequality), which shifts the mean of the prediction.
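A small simulation makes the bias visible (the parameter values here are made up). With multiplicative log-normal noise, exponentiating the fitted mean of $\log(Y)$ recovers $\exp(E(\log Y|X))$, which is systematically below the true $E(Y|X)$:

```python
import numpy as np

rng = np.random.default_rng(1)

b0, b1, sigma = 1.0, 0.5, 0.8  # made-up parameters
x = 1.0
n = 200_000

# Multiplicative noise: log(Y) = b0 + b1*x + eps, with eps ~ N(0, sigma^2)
eps = rng.normal(0.0, sigma, size=n)
y = np.exp(b0 + b1 * x + eps)

# The transformed linear model targets E(log Y | x) = b0 + b1*x = 1.5 ...
print(np.log(y).mean())          # close to 1.5

# ... but exponentiating it back does NOT recover E(Y | x):
print(np.exp(np.log(y).mean()))  # close to exp(1.5), about 4.48
print(y.mean())                  # close to exp(1.5 + sigma^2/2), about 6.17
```

The gap between the last two numbers is exactly the retransformation bias: a factor of $\exp(\sigma^2/2)$ in the log-normal case.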