Here are some of the approaches that come to mind.
You can use the nearPD function from the Matrix package to convert the (possibly indefinite) matrix returned by cov/cor into a positive definite matrix (see the sketch after this list).
You can use an imputation package such as VIM to fill in the missing data and then call cov on the completed data (also sketched below).
You can code up your own routine to compute the covariance estimate using Expectation Maximization under normality. For details, you can refer to Rubin. (A sketch of this approach appears under the best answer below.)
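A minimal sketch of the first two approaches, assuming the Matrix and VIM packages are installed; `X` is a hypothetical numeric matrix with NAs:

```r
library(Matrix)
library(VIM)

set.seed(1)
X <- matrix(rnorm(200), nrow = 20)   # 20 observations of 10 variables
X[sample(length(X), 30)] <- NA       # knock out some entries at random

## Approach 1: pairwise-complete covariance, repaired with nearPD().
S <- cov(X, use = "pairwise.complete.obs")  # may fail to be positive definite
S_pd <- as.matrix(nearPD(S)$mat)            # nearest positive definite matrix

## Approach 2: impute first (here, k-nearest-neighbour imputation from VIM),
## then take the covariance of the completed data.
X_imp <- kNN(as.data.frame(X), imp_var = FALSE)
S_imp <- cov(X_imp)
```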
I will answer most of the bottom-line questions, but not all of your individual questions.
A covariance matrix with no structure means there is no a priori information or assumption about it, other than that it is positive semi-definite (psd), which all covariance matrices must be. Structure means some kind of a priori information or assumption about one or more entries of the covariance matrix: for example, that it is known to be diagonal, that all diagonal elements are equal, that all off-diagonal elements are equal, or perhaps that there is a bound on the absolute value of the correlation coefficients.
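For concreteness, here is what two of those structures look like in R (a hypothetical 4-variable case; the numbers are arbitrary):

```r
p <- 4

## Diagonal structure: variables assumed uncorrelated
diag_cov <- diag(c(1.2, 0.8, 2.0, 1.5))

## Compound symmetry: unit variances, all off-diagonal entries equal
rho <- 0.3
cs_cov <- matrix(rho, p, p)
diag(cs_cov) <- 1
```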
Eigenvalues being spread out, given that the subject of discussion is covariance matrices (i.e., psd matrices), means that the 2-norm condition number, which is the ratio of the largest eigenvalue to the smallest eigenvalue, is "large". The eigenvalues are as spread out as possible when the 2-norm condition number is infinite, which happens exactly when one or more eigenvalues equal 0 (presuming there is at least one positive eigenvalue), i.e., when the covariance matrix is singular, i.e., when it is psd but not positive definite. If there is not enough (independent) data relative to the covariance matrix dimension, and in particular if the number of observations k is less than the dimension p, the sample covariance matrix will be singular, i.e., have at least one eigenvalue equal to 0, even if the actual covariance matrix is not singular. Therefore, the sample covariance matrix will have eigenvalues which are too spread out, as measured by the 2-norm condition number. The authors also claim that, at least in some circumstances, the largest eigenvalue of the sample covariance matrix is too large; I leave that for you to verify.
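A quick numerical illustration, with hypothetical sizes k = 10 observations and p = 20 variables:

```r
set.seed(42)
k <- 10; p <- 20
X <- matrix(rnorm(k * p), nrow = k)  # true covariance is the identity: perfectly conditioned
S <- cov(X)                          # k < p, so the sample covariance is singular
ev <- eigen(S, symmetric = TRUE, only.values = TRUE)$values
head(ev, 3)          # largest eigenvalues are well above the true value of 1
tail(ev, p - k + 1)  # eigenvalues beyond rank k - 1 = 9 are zero up to round-off
```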
All of these schemes to adjust the eigenvalues of the sample covariance matrix are attempts to reduce the condition number of the unbiased sample covariance estimator. Several different ways have been suggested in various papers over the years, beginning with Stein in 1956. They may accomplish this by introducing a bias in exchange for reduced variance, thereby perhaps improving the mean squared error.
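As a minimal sketch of one such scheme, here is linear shrinkage of the sample covariance toward a scaled identity (in the spirit of Ledoit and Wolf; the fixed weight lambda below is a hypothetical choice, whereas the published estimators pick it from the data):

```r
shrink_cov <- function(X, lambda = 0.2) {
  S <- cov(X)
  target <- mean(diag(S)) * diag(ncol(X))  # scaled identity target
  (1 - lambda) * S + lambda * target       # pulls all eigenvalues toward their mean
}

## Continuing the k < p example: the shrunk estimate is positive definite
set.seed(42)
X <- matrix(rnorm(10 * 20), nrow = 10)
kappa(cov(X), exact = TRUE)          # effectively infinite (singular matrix)
kappa(shrink_cov(X), exact = TRUE)   # finite condition number
```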
Best Answer
Another approach is to compute the maximum likelihood estimates of the mean and covariance matrix, given all observed data. This requires an iterative algorithm, such as the expectation maximization (EM) algorithm; accelerated variants and other types of optimization algorithms exist too. Compared to imputation, this approach can produce estimators that are more efficient, and unbiased under a wider variety of settings. It does require that the missingness mechanism not depend on the unobserved values themselves (i.e., the data are 'missing at random', of which 'missing completely at random' is a special case).
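A minimal sketch of that EM iteration for the multivariate normal, assuming rows are i.i.d. and the data are missing at random. `em_mvnorm` is a hypothetical helper written for illustration (and not robust to a singular working covariance); packages such as norm or mvnmle provide tested implementations.

```r
em_mvnorm <- function(X, max_iter = 100, tol = 1e-6) {
  n <- nrow(X); p <- ncol(X)
  ## Start from available-case estimates
  mu <- colMeans(X, na.rm = TRUE)
  sigma <- cov(X, use = "pairwise.complete.obs")
  sigma[is.na(sigma)] <- 0
  diag(sigma)[diag(sigma) <= 0] <- 1
  for (iter in seq_len(max_iter)) {
    Xhat <- X
    extra <- matrix(0, p, p)  # accumulated conditional covariance of missing blocks
    for (i in seq_len(n)) {
      m <- which(is.na(X[i, ]))
      if (length(m) == 0L) next
      o <- setdiff(seq_len(p), m)
      B <- sigma[m, o, drop = FALSE] %*% solve(sigma[o, o, drop = FALSE])
      ## E-step: expected values of the missing entries given the observed ones...
      Xhat[i, m] <- mu[m] + B %*% (X[i, o] - mu[o])
      ## ...plus their conditional covariance
      extra[m, m] <- extra[m, m] +
        sigma[m, m, drop = FALSE] - B %*% sigma[o, m, drop = FALSE]
    }
    ## M-step: update the mean and covariance from the completed data
    mu_new <- colMeans(Xhat)
    centered <- sweep(Xhat, 2, mu_new)
    sigma_new <- (crossprod(centered) + extra) / n
    done <- max(abs(mu_new - mu), abs(sigma_new - sigma)) < tol
    mu <- mu_new; sigma <- sigma_new
    if (done) break
  }
  list(mean = mu, cov = sigma)
}

## Example on a matrix with missing entries
set.seed(1)
X <- matrix(rnorm(200), nrow = 20)
X[sample(length(X), 30)] <- NA
fit <- em_mvnorm(X)
fit$mean
fit$cov
```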