Matrix dimensions in Linear Algebra vs Time series Analysis

linear algebra · machine learning · mathematical statistics · matrix decomposition · time series

I am confused about, or may be misunderstanding, the dimensions of a matrix, based on what I was reading about time series analysis.

From what I understand in linear algebra, if we have a matrix $A \in \mathbf{R}^{m\times n}$, then $m$ refers to the number of rows and $n$ to the number of columns. But when I was reading about multivariate time series, the notation seemed to be the other way around: $m$ refers to the number of variables (features) and $n$ to the length of the series.

From my general understanding, if we stack a number of instances into $\mathbf{X} \in \mathbf{R}^{m\times n}$ under this convention, then $m$ would be the dimension of each instance (the number of variables) and $n$ the length (the number of time steps). To me this looks like the transpose of the usual data matrix.
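A minimal NumPy sketch (my own illustration, not from either text) of the two conventions, showing that one is just the transpose of the other:

```python
import numpy as np

# Hypothetical multivariate series: m = 3 variables observed at n = 5 time points.
m, n = 3, 5

# Time-series convention: one row per variable, one column per time step.
X_ts = np.arange(m * n).reshape(m, n)   # shape (3, 5)

# Linear-algebra / design-matrix convention: one row per observation (time step).
X_design = X_ts.T                       # shape (5, 3)

print(X_ts.shape)      # (3, 5)
print(X_design.shape)  # (5, 3)
```

Both arrays hold exactly the same numbers; only the roles assigned to rows and columns differ.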

The inconsistency in notation makes me think I may be misunderstanding something. Does this convention help in time series analysis, or what is the intuition behind it?

Best Answer

It's just a matter of convention whether your instructor or textbook picks the columns or the rows of a matrix as the variables; I've seen it done both ways. In econometrics it's more common to see variables in columns, i.e. $n$ variables in an $\mathbf{R}^{m\times n}$ matrix. In ML papers it's often the rows that hold the variables, i.e. $m$ variables with $n$ observations. Pay attention to the introduction of the paper to see which convention it uses; that's all you can do.
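The same convention issue shows up in software APIs, so it pays to check documentation as well as paper introductions. For example, NumPy's `np.cov` assumes by default that each *row* is a variable (`rowvar=True`); if your data has observations in rows, you must say so. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # 100 observations (rows) of 3 variables (columns)

# np.cov defaults to rowvar=True, treating each ROW as a variable.
# With observations in rows, pass rowvar=False (or transpose the matrix).
C = np.cov(X, rowvar=False)
print(C.shape)  # (3, 3): one entry per pair of variables

# Transposing first is equivalent:
assert np.allclose(C, np.cov(X.T))
```

Forgetting the `rowvar` argument here would silently produce a 100×100 matrix instead of the 3×3 covariance you wanted, which is exactly the kind of bug this notational ambiguity causes.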
