The two predictors are correlated: housing starts predict consumer spending well enough that, once housing starts are in the model, consumer spending adds little to the prediction of sales. For simplicity, you can probably get by with the simpler one-variable model using housing starts.
And don't forget to actually look at the results. In my experience, some people get too dazzled by asterisks and p-values and don't pay enough attention to the values of their estimates. In this example, the one-variable model is also much easier to interpret: for instance, you can draw a scatterplot of the data and superimpose the fitted line.
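As a sketch, assuming a data frame named `mydata` with columns `sales` and `starts` (hypothetical names; substitute your own), the scatterplot with the fitted line takes only a few lines of base R:

```r
# Fit the one-variable model and overlay the least-squares line
# on a scatterplot of sales against housing starts.
fit <- lm(sales ~ starts, data = mydata)
plot(sales ~ starts, data = mydata)
abline(fit)
```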
Even for the two-predictor model, you can make a little table showing what you predict for some typical values of consumer spending and housing starts, and give a prediction interval so people know how widely they can expect future results to vary from your predictions. In R, that's done with the predict function, supplying a hypothetical dataset in the newdata argument and specifying interval = "prediction".
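A minimal sketch of that table, again assuming hypothetical variable names (`sales`, `spending`, `starts` in a data frame `mydata`):

```r
# Two-predictor model
fit2 <- lm(sales ~ spending + starts, data = mydata)

# A few typical predictor values (choose values that make sense for your data)
new <- data.frame(spending = c(100, 120, 140),
                  starts   = c(50, 60, 70))

# Point predictions with 95% prediction intervals (columns fit, lwr, upr)
predict(fit2, newdata = new, interval = "prediction")
```

The prediction interval is wider than a confidence interval for the mean because it accounts for the residual variability of individual future observations, not just uncertainty in the fitted line.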
In terms of the p-value, the answer can be found in an earlier post. Basically, use a permutation test for $n<20$. A general normalizing transformation, such as the rankit, will work for larger $n$ and will be more powerful (Bishara & Hittner, 2012). Of course, if you transform, you're no longer looking at the linear relationship on the original scale.
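Both pieces are short to code. Here is a sketch in R: a simple Monte Carlo permutation test for a correlation (shuffle one variable, recompute, compare), and the rankit transformation, i.e. converting ranks to normal quantiles via $\Phi^{-1}\big((\text{rank}-0.5)/n\big)$. Function names are my own, not from any package:

```r
# Approximate two-sided permutation p-value for cor(x, y).
perm_cor_test <- function(x, y, nperm = 10000) {
  obs  <- cor(x, y)
  perm <- replicate(nperm, cor(x, sample(y)))  # correlations under shuffling
  mean(abs(perm) >= abs(obs))
}

# Rankit (normal-scores) transformation of a vector.
rankit <- function(x) qnorm((rank(x) - 0.5) / length(x))

# Usage sketch: test the correlation of the rankit-transformed variables
# cor.test(rankit(x), rankit(y))
```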
In terms of the confidence interval, the answer is less clear. There aren't many published large-scale Monte Carlo comparisons. Puth et al. (2014) present evidence that the Fisher z interval can be inadequate under large violations of normality, and they found no general solution: even bootstrapping with BCa did not fix it. You might consider either:
a) Spearman CIs with the Fisher z. Instead of using $SE_z = 1/\sqrt{n-3}$, use the Fieller et al. (1957) estimate of the standard error for the Fisher z:

$SE_z = 1.03/\sqrt{n-3}$
b) Transforming via the rankit, and then using the Fisher z for the CI as usual.
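Option (a) is a few lines of R. The sketch below computes Spearman's rho, applies the Fisher z transform (`atanh`), builds a normal-theory interval with the Fieller et al. (1957) standard error $1.03/\sqrt{n-3}$, and back-transforms with `tanh`; the function name is my own:

```r
# Confidence interval for Spearman's rho via Fisher z
# with the Fieller et al. (1957) standard error.
spearman_ci <- function(x, y, conf = 0.95) {
  rs   <- cor(x, y, method = "spearman")
  n    <- length(x)
  z    <- atanh(rs)                 # Fisher z transform of rho
  se   <- 1.03 / sqrt(n - 3)        # Fieller et al. estimate
  crit <- qnorm(1 - (1 - conf) / 2)
  tanh(c(lower = z - crit * se, upper = z + crit * se))
}
```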
References:
Bishara, A. J., & Hittner, J. B. (2012). Testing the significance of a correlation with non-normal data: Comparison of Pearson, Spearman, transformation, and resampling approaches. Psychological Methods, 17, 399-417. doi:10.1037/a0028087
Fieller, E. C., Hartley, H. O., & Pearson, E. S. (1957). Tests for rank correlation coefficients. I. Biometrika, 44, 470-481.
Puth, M., Neuhäuser, M., & Ruxton, G. D. (2014). Effective use of Pearson’s product-moment correlation coefficient. Animal Behaviour, 93, 183-189.
Best Answer
You have interpreted these results correctly according to the conventional textbook scheme.
Personally, I am often not a fan of the standard way of thinking about p-values. (Mounting soapbox...)

First, it's worth recognizing that there are several valid ways to look at p-values. Fisher thought of them as a continuous measure of evidence against the null hypothesis, whereas Neyman & Pearson used them as the hub around which the decision-making process turned. The most common way p-values seem to be used is not valid under either approach.

The Neyman-Pearson framework has much to recommend it, in my opinion, but it is primarily applicable in situations where a theory clearly posits two possible values: a null value (which could be $r_{null}=0$, but could be another number) and an alternative value ($r_{alt}$). In such a case, you could design your whole investigation around differentiating between those two values. That would entail specifying, among other things, $\alpha$ (the long-run type I error rate you're willing to live with), $\beta$ (the long-run type II error rate you're willing to live with), and $N$ (the sample size). In that context, it makes sense to me to say that something is 'significant' or 'non-significant'. However, I believe those situations are the minority of cases.

For example, for your second sample, I would say that you cannot conclude with more than 70% confidence that the correlation is positive. You will also want to examine your data and think about possible non-linearities and range restriction. (Stepping down from soapbox...)