I have a list in which each element is a matrix. How can I unfold this list into a single matrix?
Example:
List[[1]]=matrix A
List[[2]]=matrix B
List[[3]]=matrix C
and I want to directly get the block-diagonal matrix
A,0,0
0,B,0
0,0,C
Note that I would like to do this for a large number of matrices.
Solved – R unfold a list into a matrix
Tags: covariance-matrix, matrix, r
Related Solutions
In chapter 2 of the Matrix Cookbook there is a nice review of matrix calculus that collects many useful identities for problems one would encounter doing probability and statistics, including rules that help differentiate the multivariate Gaussian likelihood.
If you have a random vector ${\boldsymbol y}$ that is multivariate normal with mean vector ${\boldsymbol \mu}$ and covariance matrix ${\boldsymbol \Sigma}$, then use equation (86) in the matrix cookbook to find that the gradient of the log likelihood ${\bf L}$ with respect to ${\boldsymbol \mu}$ is
$$\begin{align} \frac{ \partial {\bf L} }{ \partial {\boldsymbol \mu}} &= -\frac{1}{2} \left( \frac{\partial \left( {\boldsymbol y} - {\boldsymbol \mu} \right)' {\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y} - {\boldsymbol \mu}\right) }{\partial {\boldsymbol \mu}} \right) \nonumber \\ &= -\frac{1}{2} \left( -2 {\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y} - {\boldsymbol \mu}\right) \right) \nonumber \\ &= {\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y} - {\boldsymbol \mu} \right) \end{align}$$
I'll leave it to you to differentiate this again and find the answer to be $-{\boldsymbol \Sigma}^{-1}$.
As "extra credit", use equations (57) and (61) to find that the gradient with respect to ${\boldsymbol \Sigma}$ is
$$ \begin{align} \frac{ \partial {\bf L} }{ \partial {\boldsymbol \Sigma}} &= -\frac{1}{2} \left( \frac{ \partial \log(|{\boldsymbol \Sigma}|)}{\partial{\boldsymbol \Sigma}} + \frac{\partial \left( {\boldsymbol y} - {\boldsymbol \mu}\right)' {\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y}- {\boldsymbol \mu}\right) }{\partial {\boldsymbol \Sigma}} \right)\\ &= -\frac{1}{2} \left( {\boldsymbol \Sigma}^{-1} - {\boldsymbol \Sigma}^{-1} \left( {\boldsymbol y} - {\boldsymbol \mu} \right) \left( {\boldsymbol y} - {\boldsymbol \mu} \right)' {\boldsymbol \Sigma}^{-1} \right) \end{align} $$
I've left out a lot of the steps, but I made this derivation using only the identities found in the matrix cookbook, so I'll leave it to you to fill in the gaps.
I've used these score equations for maximum likelihood estimation, so I know they are correct :)
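These score equations are also easy to sanity-check numerically. A minimal sketch (the particular values of y, mu, and Sigma below are made up purely for illustration) comparing the analytic gradient $\boldsymbol{\Sigma}^{-1}(\boldsymbol{y} - \boldsymbol{\mu})$ against a central finite difference of the log-likelihood:

```r
set.seed(42)
# Arbitrary example values for a bivariate normal
Sigma <- matrix(c(2, 0.5, 0.5, 1), 2, 2)
mu    <- c(1, -1)
y     <- c(0.3, 0.7)

# Log-likelihood in mu (dropping the constant term)
loglik <- function(m) {
  d <- y - m
  -0.5 * (log(det(Sigma)) + drop(t(d) %*% solve(Sigma) %*% d))
}

# Analytic score from equation (86): Sigma^{-1} (y - mu)
an_grad <- drop(solve(Sigma) %*% (y - mu))

# Central finite difference, one coordinate at a time
eps <- 1e-6
num_grad <- sapply(1:2, function(i) {
  e <- replace(numeric(2), i, eps)
  (loglik(mu + e) - loglik(mu - e)) / (2 * eps)
})

# The two gradients should agree up to floating-point error
max(abs(an_grad - num_grad))
```

Because the log-likelihood is quadratic in mu, the central difference is essentially exact here; for the gradient in Sigma the same idea works perturbing one matrix entry at a time.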
Yes, the scales of your variables affect the condition number. This is a real phenomenon with practical consequences; for example, I am using linear least-squares to solve a fitting problem, and if I just drop in the appropriate columns my condition number is of order 10^18 (presumably worse, as this is the limit of my numerical precision). If on the other hand I rescale my variables so each column of the fit matrix has the same sum-of-squares amplitude, the condition number of the fit matrix drops to less than a hundred. If I use the ill-conditioned matrix to compute fit values, they and the residuals are terrible; if I use the rescaled matrix and then rescale the variables, I get good stable fits.
What this means in terms of correlation and covariance matrices is that if you want to work with differently-scaled variables, you should keep the individual variable scales separate from the correlation matrix. If you do this, then a bad condition number of the correlation matrix corresponds to real, strong correlations between your variables. If you construct a covariance matrix by multiplying the scales in, then indeed, you can get a bad condition number just because your variables have different scales.
You don't say exactly what you want to do with your generated covariance matrices. If you're trying to evaluate the performance of an algorithm, then you have revealed a shortcoming in that algorithm: it works better if you rescale all your variables first. If you're doing something else, well, the fact is that if your variables have different scales, the covariance matrices really will have horrible condition numbers.
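The scaling effect is easy to reproduce in R. A small sketch (the 1e8 factor is an arbitrary stand-in for "variables on very different scales"):

```r
set.seed(1)
x1 <- rnorm(200)
x2 <- rnorm(200) * 1e8        # same kind of information, wildly different units
X  <- cbind(x1, x2)
kappa(X, exact = TRUE)         # enormous condition number, driven by scale alone

# Rescale each column to unit sum-of-squares
Xs <- sweep(X, 2, sqrt(colSums(X^2)), "/")
kappa(Xs, exact = TRUE)        # close to 1 for nearly uncorrelated columns
```

After rescaling, whatever conditioning remains reflects genuine correlation between the columns rather than their units.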
Best Answer
Try this:
output <- matrix(unlist(List), ncol = ncol(List[[1]]), byrow = TRUE)
Note that the ncol argument must be named and given a value; a bare ncol in that position is the base function itself and the call errors. Also be aware that unlist() flattens each matrix column by column, so this reshapes the pooled elements into one matrix rather than producing the zero blocks shown in the question.