Other answers came up with nice tricks to solve my problem in various ways. However, I found a principled approach that I think has the big advantage of being conceptually very clear and easy to adjust.
In the thread How to efficiently generate random positive-semidefinite correlation matrices? I described and provided code for two efficient algorithms for generating random correlation matrices. Both come from the paper by Lewandowski, Kurowicka, and Joe (2009) that @ssdecontrol referred to in the comments above (thanks a lot!).
Please see my answer there for figures, explanations, and MATLAB code. The so-called "vine" method allows one to generate random correlation matrices with any distribution of partial correlations and can be used to produce correlation matrices with large off-diagonal values. Here is the example figure from that thread:

The only thing that changes between subplots is one parameter that controls how strongly the distribution of partial correlations is concentrated around $\pm 1$.
I copy my code to generate these matrices here as well, to show that it is not longer than the other methods suggested here; please see my linked answer for explanations. The values of betaparam for the figure above were $\{50, 20, 10, 5, 2, 1\}$ (and the dimensionality d was $100$).
function S = vineBeta(d, betaparam)
    P = zeros(d); %// storing partial correlations
    S = eye(d);

    for k = 1:d-1
        for i = k+1:d
            P(k,i) = betarnd(betaparam, betaparam); %// sampling from beta
            P(k,i) = (P(k,i)-0.5)*2;                %// linearly shifting to [-1, 1]
            p = P(k,i);
            for l = (k-1):-1:1 %// converting partial correlation to raw correlation
                p = p * sqrt((1-P(l,i)^2)*(1-P(l,k)^2)) + P(l,i)*P(l,k);
            end
            S(k,i) = p;
            S(i,k) = p;
        end
    end

    %// permuting the variables to make the distribution permutation-invariant
    permutation = randperm(d);
    S = S(permutation, permutation);
end
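As a quick sanity check (assuming the function above is on the MATLAB path; betarnd requires the Statistics Toolbox), one can verify that the output is a valid correlation matrix:

S = vineBeta(100, 5);   %// one random 100 x 100 correlation matrix
min(eig(S))             %// positive: the matrix is positive definite
max(abs(diag(S) - 1))   %// zero: unit diagonal, as a correlation matrix requires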
Update: eigenvalues
@psarka asks about the eigenvalues of these matrices. In the figure below I plot the eigenvalue spectra of the same six correlation matrices as above. Notice that they decrease gradually; in contrast, the method suggested by @psarka generally results in a correlation matrix with one large eigenvalue, the rest being fairly uniform.
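A minimal sketch of how such spectra can be plotted, using the same parameter values as in the figure (the plotting style of the original figure may differ):

figure; hold on
for betaparam = [50 20 10 5 2 1]
    S = vineBeta(100, betaparam);
    plot(sort(eig(S), 'descend')) %// eigenvalue spectrum, largest first
end
hold off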

Update: really simple method (several factors)
Similar to what @ttnphns wrote in the comments above and @GottfriedHelms in his answer, one very simple way to achieve my goal is to randomly generate several ($k<n$) factor loadings $\mathbf W$ (a random matrix of size $n \times k$), form the covariance matrix $\mathbf W \mathbf W^\top$ (which of course will not be full rank), and add to it a random diagonal matrix $\mathbf D$ with positive elements to make $\mathbf B = \mathbf W \mathbf W^\top + \mathbf D$ full rank. The resulting covariance matrix can be normalized to become a correlation matrix (as described in my question). This is very simple and does the trick. Here are some example correlation matrices for $k=\{100, 50, 20, 10, 5, 1\}$:

The only downside is that the resulting matrix will have $k$ large eigenvalues followed by a sudden drop, as opposed to the gradual decay of the vine method shown above. Here are the corresponding spectra:

Here is the code:
d = 100; %// number of dimensions
k = 5; %// number of factors
W = randn(d,k);             %// random factor loadings
S = W*W' + diag(rand(1,d)); %// low-rank covariance plus positive diagonal
S = diag(1./sqrt(diag(S))) * S * diag(1./sqrt(diag(S))); %// normalize to unit diagonal
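To see the sudden drop directly, one can inspect the sorted eigenvalues of the matrix generated above:

e = sort(eig(S), 'descend');
plot(e) %// the first k eigenvalues are large; the rest are small and flat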
Correlation matrices need not be positive definite.
Consider a scalar random variable $X$ with non-zero variance. The correlation matrix of the vector $(X, X)$, i.e., of $X$ with itself, is the $2 \times 2$ matrix of all ones, which is positive semi-definite but not positive definite.
As for sample correlation, consider sample data for the above, with a first observation of $(1, 1)$ and a second observation of $(2, 2)$. The sample correlation matrix is then the matrix of all ones, hence not positive definite.
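This is easy to verify numerically (a sketch in MATLAB; corrcoef treats rows as observations and columns as variables):

X = [1 1; 2 2]; %// two observations of two identical variables
R = corrcoef(X) %// the all-ones matrix
eig(R)          %// eigenvalues 0 and 2: semi-definite, not definite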
A sample correlation matrix, if computed in exact arithmetic (i.e., with no roundoff error), cannot have negative eigenvalues.
Best Answer
A correlation matrix is really the covariance matrix of a bunch of variables which have been rescaled to have variance one.
But every population covariance matrix is positive semi-definite, and, if we rule out weird cases (such as missing data, or "numerical fuzz" turning a small positive eigenvalue negative), so is every sample covariance matrix.
So if a matrix is supposed to be a correlation matrix, it should be positive semi-definite.
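The underlying reason is a one-line argument: for any fixed vector $\mathbf a$, the quadratic form in the covariance matrix $\boldsymbol \Sigma$ of a random vector $\mathbf X$ is itself a variance,
$$\mathbf a^\top \boldsymbol \Sigma \mathbf a = \operatorname{Var}(\mathbf a^\top \mathbf X) \ge 0,$$
and non-negativity for all $\mathbf a$ is exactly the definition of positive semi-definiteness; rescaling to a correlation matrix does not change this.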
Note that the "semi" in semi-definite is important here. In the bivariate case, take your two variables to be perfectly positively correlated; then the correlation matrix is $\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$, which has eigenvalues of $2$ and $0$: the zero eigenvalue means it is not positive definite.
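Indeed, these eigenvalues follow directly from the characteristic polynomial:
$$\det \begin{pmatrix} 1-\lambda & 1 \\ 1 & 1-\lambda \end{pmatrix} = (1-\lambda)^2 - 1 = \lambda(\lambda - 2),$$
with roots $\lambda = 0$ and $\lambda = 2$.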