To expand: Econometrics includes multivariate analysis as one of its (mathematical) tools. At the same time it includes many other things, such as economic "fundamental" models.
Econometrics is also a certain spin on (applied) statistics, just as biostatistics (one could say biometrics), medical statistics, information theory, or whatever other field you can imagine puts its own spin on it. The unique problems faced in each field shape the tools needed for statistical analysis.
For example, Econometrics is a very frequentist, non-Bayesian field (at least as it is taught), or at least one that does not teach the two approaches as distinct alternatives. The reason is that the data for most problems* lend themselves to classical, frequentist analysis (lots of data points are available, and the mechanisms under study are assumed to stay constant). It is also a pretty standard progression in econometrics to learn OLS and then the Generalized Method of Moments (GMM).
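As a standard textbook illustration of that progression (not something from the original answer), OLS itself can be read as the simplest, exactly identified GMM estimator: it solves the sample analogue of the moment condition that the regressors are uncorrelated with the error,

$$
\mathbb{E}\!\left[x_i\,(y_i - x_i'\beta)\right] = 0
\quad\Longrightarrow\quad
\hat\beta_{\text{OLS}} = \left(\frac{1}{n}\sum_i x_i x_i'\right)^{-1} \frac{1}{n}\sum_i x_i y_i ,
$$

and GMM then generalizes this to settings with more moment conditions than parameters (instrumental variables being the canonical case), combined through a weighting matrix.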
Panel data models (fixed/random effects) are, until needed, more of a fringe topic that comes up as a side note in most classes.
The data are generally assumed to be continuous. Logit, probit, and other models that target nominal or ordinal variables are likewise not at the heart of what you learn in most Econometrics courses. Of course they are taught eventually and are important for economic subfields that stray more into experimental or social territory, but usually the dependent variables are measured in money units.
Many medical researchers will learn things like ANOVA very thoroughly. Econometricians consider it a spin on linear regression and may never learn it as a distinct thing.
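A minimal sketch of that equivalence (the groups and data below are simulated purely for illustration; nothing here comes from the original answer): a one-way ANOVA is just a regression on group dummies, and statsmodels will produce the ANOVA table directly from the regression fit.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# three made-up groups with different means
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["a", "b", "c"], 30),
    "y": np.concatenate([rng.normal(m, 1.0, 30) for m in (0.0, 0.5, 1.0)]),
})

fit = ols("y ~ C(group)", data=df).fit()   # linear regression on group dummies
print(sm.stats.anova_lm(fit, typ=2))       # the familiar one-way ANOVA table
print(fit.params)                          # intercept = mean of group "a", other coefficients = differences from it
```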
Before those things come up, students will study time series analysis very thoroughly, with Hamilton (1994) being the premier book in that area.
Even if you study econometrics to a graduate level, many methods on this site will seem foreign to you at first. It is an applied take on statistics that goes deep where it needs to and omits things that are not important in the field of economics.
In your case I would say that learning Econometrics is the right thing to do. It includes the stuff you need to analyze problems such as the one you posted.
My recommendation would include three books on three different levels.
The first one is Introduction to Econometrics by Stock and Watson. It is written for economics undergrads without further interest in statistics: just the basic material, without going deep into the math. On the other hand, it brings you, method-wise, up to topics like cointegration analysis.
Next up is Econometrics by Hayashi or Econometric Analysis by Greene. These are the standard graduate econometrics textbooks. Where things are not proven or analyzed completely, they at least refer to the mathematical texts that do. They give you a pretty good understanding of the usual econometric problems.
Last up is Time Series Analysis by Hamilton. It covers pretty much everything that was known in the area at the time it was published (1994), though it sits at a late-graduate or PhD level, at least for (non-quant) economists.
*One could very well argue about this, though.
I doubt there are strict, formal definitions that a wide range of data analysts agree on.
In general however, time series connotes a single study unit observed at regular intervals over a very long period of time. A prototypical example would be the annual GDP growth of a country over decades or even more than a hundred years. For an analyst working for a private company, it might be monthly sales revenues over the life of the company. Because there are so many observations, the data are analyzed in great detail, looking for things like seasonality over different periods (e.g., monthly: more sales at the beginning of a month just after people have been paid; yearly: more sales in November and December, when people are shopping for the Christmas season), and possibly regime shifts. Forecasting is often very important, as @StephanKolassa notes.
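For a flavour of the kind of seasonality check described above (a hedged sketch with an invented monthly-sales series, not data from the answer):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(2)
idx = pd.date_range("2000-01-01", periods=240, freq="MS")   # 20 years of monthly observations
trend = np.linspace(100.0, 180.0, idx.size)                 # slow growth in baseline sales
seasonal = 15.0 * (idx.month >= 11)                         # crude Christmas-season bump in Nov/Dec
sales = pd.Series(trend + seasonal + rng.normal(0.0, 5.0, idx.size), index=idx)

result = seasonal_decompose(sales, model="additive", period=12)
print(result.seasonal.head(12))   # estimated seasonal pattern, one value per calendar month
```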
Longitudinal typically refers to fewer measurements over a larger number of study units. A prototypical example might be a drug trial, where there are hundreds of patients measured at baseline (before treatment), and monthly for the next 3 months. With just 4 observations of each unit in this example, it is not possible to try to detect the kinds of features time series analysts are interested in. On the other hand, with patients presumably randomized into treatment and control arms, causality can be inferred once the non-independence has been addressed. As that suggests, often the non-independence is considered almost a nuisance, rather than the primary feature of interest.
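As a rough sketch of what "addressing the non-independence" often looks like in practice (the setup and variable names are invented for illustration, not taken from the answer), one common choice is a mixed model with a random intercept per patient:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_visits = 200, 4
patient = np.repeat(np.arange(n_patients), n_visits)
month = np.tile(np.arange(n_visits), n_patients)
treated = np.repeat(rng.integers(0, 2, n_patients), n_visits)

# made-up data-generating process: patient-specific intercept plus a treatment effect that grows over time
u = np.repeat(rng.normal(0.0, 1.0, n_patients), n_visits)
y = 10 + u + 0.3 * month - 0.5 * treated * month + rng.normal(0.0, 0.5, patient.size)

df = pd.DataFrame({"y": y, "month": month, "treated": treated, "patient": patient})

# the per-patient random intercept absorbs the within-patient correlation,
# so the month:treated fixed effect can be read as the treatment effect over time
m = smf.mixedlm("y ~ month * treated", df, groups=df["patient"]).fit()
print(m.summary())
```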
There are terminological differences where the same thing is called different names in different disciplines, and others where the same term is used to mean different things in different disciplines.
I view the unique contributions of econometrics to be its preoccupation with identification and its robustness paradigm, on which more below.
Overall, economists tend to look for strong interpretation of the coefficients in their models. Statisticians would take a logistic model as a way to get to the probability of the positive outcome, often as a simple predictive device, and may also note the GLM interpretation with the nice exponential family properties it possesses, as well as connections with discriminant analysis. Economists would think about the utility interpretation of the logit model, and be concerned that only $\beta/\sigma$ is identified in this model, and that heteroskedasticity can throw it off. (Statisticians will be wondering what $\sigma$ the economists are talking about, of course.) Of course, a utility that is linear in its inputs is a very funny thing from the perspective of Microeconomics 101, although some generalizations to semi-concave functions are probably done in Mas-Colell.
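To spell out the $\beta/\sigma$ point (standard latent-variable algebra, not part of the original answer): write the logit as

$$
y_i^{\ast} = x_i'\beta + \varepsilon_i,\qquad
\varepsilon_i \sim \text{Logistic}(0,\sigma),\qquad
y_i = \mathbf{1}\{y_i^{\ast} > 0\},
\qquad\text{so}\qquad
\Pr(y_i = 1 \mid x_i) = \Lambda\!\left(\frac{x_i'\beta}{\sigma}\right),\quad
\Lambda(z) = \frac{1}{1+e^{-z}}.
$$

The pairs $(\beta,\sigma)$ and $(c\beta, c\sigma)$ generate identical choice probabilities for every $c>0$, so only the ratio $\beta/\sigma$ can be recovered from the data; and if $\sigma$ actually varies with $x_i$ (heteroskedasticity), the estimated "coefficients" shift even when $\beta$ does not.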
What economists generally tend to miss, but, IMHO, would benefit from, are aspects of multivariate analysis (including latent variable models as a way to deal with measurement errors and multiple proxies... statisticians are largely oblivious to these models, too), regression diagnostics (all these Cook's distances, Mallows' $C_p$, DFBETA, etc.), analysis of missing data (Manski's partial identification is surely fancy, but the mainstream MCAR/MAR/NMAR breakdown and multiple imputation are more useful), and survey statistics. A lot of other contributions from mainstream statistics have been entertained by econometrics and either adopted as standard methodology or passed by as a short-term fashion: ARMA models of the 1960s are probably better known in econometrics than in statistics, as some graduate programs in statistics may fail to offer a time series course these days; shrinkage estimators/ridge regression of the 1970s have come and gone; the bootstrap of the 1980s is a knee-jerk reaction for any complicated situation, although economists need to be better aware of the limitations of the bootstrap; the empirical likelihood of the 1990s has seen more methodology development from theoretical econometricians than from theoretical statisticians; computational Bayesian methods of the 2000s are being entertained in econometrics, but my feeling is that they are just too parametric, too heavily model-based, to be compatible with the robustness paradigm I mentioned earlier. (EDIT: that was the view on the scene in 2012; by 2020, Bayesian models have become standard in empirical macro, where people probably care a little less about robustness, and are making their presence heard in empirical micro as well. They are just too easy to run these days to pass by.) Whether economists will find any use for the statistical learning/bioinformatics or spatio-temporal stuff that is extremely hot in modern statistics is an open call.