This is mostly conjecture, based on physics and common sense.
We know that photons couple to other (charged) particles via the electromagnetic force. Whilst neutrons themselves have zero net charge, they are composed of bound (u, d, d) quarks, which are charged and with which photons can interact.
The density of pure neutron matter would be extremely high, so even a small amount of it would contain a great many neutrons, and thus many opportunities for photons to interact with quarks. Photons would either scatter directly off the neutrons or briefly induce an excited state, which would decay on a timescale of the order of $10^{-24}\,\mathrm{s}$, emitting a photon of equivalent energy.
As a simple model, this is not too conceptually different from why clouds appear the way they do (white when thin, black when dense - if the light source is behind the cloud). Therefore, I would conjecture that a lump of neutronic matter would appear black if in front of a light source and white if behind it.
Let's try to validate and quantify the conjecture first raised by Carl Witthoft in a comment to the question, which is basically that the sky only appears less blue in the second picture because a lot more light is scattering off of the windows towards your camera.
If this is true, we ought to be able to see it. The first thing to do is convert the pictures from the relatively useless RGB colorspace to the much more useful XYZ colorspace, which is built on a model of the actual receptors in the human eye. The $Y$ coordinate corresponds to the perceived luminance of the image (i.e. the average human response across the visible spectrum) and the $Z$ coordinate corresponds to our blue receptor response. The $X$ coordinate is defined to pick up the slack and doesn't necessarily have a clear physical interpretation. The response curves across the visible spectrum are shown here (from Wikipedia):
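As a sketch of what that conversion involves, here is the standard sRGB-to-XYZ transform for a D65 white point; it is not necessarily the exact code in my notebook, but the matrix and gamma curve are the conventional ones:

```python
import numpy as np

# Standard sRGB -> XYZ (D65) matrix; rows map linear RGB to X, Y, Z.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """rgb: float array in [0, 1] with shape (..., 3). Returns XYZ."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to recover linear light intensities.
    linear = np.where(rgb <= 0.04045,
                      rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    return linear @ M.T

xyz = srgb_to_xyz([0.0, 0.0, 1.0])  # a pure blue pixel
# For pure blue, Z dominates while Y (perceived luminance) is tiny.
```

Note how small the $Y$ row's blue coefficient (0.0722) is; that asymmetry is the whole story of this answer.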
So, that is the first thing I did. I obtained:
Above you will see the two original pictures, as well as their $Y$ and $Z$ values. Here we can clearly see that the total illumination ($Y$) in the Gray picture has gone up, and the blue content of the image ($Z$) has gone up as well.
Let's try to take a closer look. To do that I will next look at a histogram of the $Y$ and $Z$ values in the images:
Looking at this histogram of values, we can clearly see that at the middle levels (near 0.5) both of the images have a blue hump. Let's assume that is the sky (we'll check in a second). But notice that, if anything, that blue hump has shifted up a bit in activation. Nearby the blue hump is a hump in the luminance ($Y$), which moves a lot. But there is a lot going on in the image, and if the conjecture is right and there is more light coming in through the windows, we would expect everything in the picture to be brighter, including the columns and walls. So we need to isolate the sky; let's make a cut on the image given by those humps in the blue. I've shown my choices for the cuts as the vertical dashed lines in the image. Applying that cut to the original image we obtain:
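The cut itself is nothing fancy; in Python it might look like the sketch below, where the thresholds are placeholders standing in for the dashed lines in the plot (the actual values live in the notebook):

```python
import numpy as np

# Keep pixels whose Z (blue-receptor response) falls inside a band
# around the "blue hump". The 0.4 / 0.6 thresholds are illustrative
# placeholders, not the exact cuts used for the figures.
def sky_mask(Z, lo=0.4, hi=0.6):
    """Z: 2-D array of blue-channel responses in [0, 1]."""
    return (Z >= lo) & (Z <= hi)

Z = np.array([[0.10, 0.50],
              [0.55, 0.90]])
mask = sky_mask(Z)  # True only for the two mid-range "sky" pixels
```

The boolean mask can then be used to index the original image arrays, so every later histogram is computed over `image[mask]` only.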
Absolutely wonderful! We've just developed a nearly perfect sky filter. Now that we know which pixels correspond to the sky, we can look again at our histograms, but this time only for "sky" pixels.
And now it would appear there is no denying Carl Witthoft's explanation: the sky appears less blue in the "Gray Sky" picture not because any of the blue has gone away (if anything there is more blue content) but because there is so much more light of other colors coming from those points that it no longer looks blue. For completeness, let's look at the histograms in the RGB channels of just the sky pixels:
Here we can clearly see that it is not that the blue went away, we just have a heck of a lot more red and green coming from the windows now.
But why does it look so much less blue, when the values of the red and green channels are still smaller than the blue?
That is entirely an effect of human perception. We are a lot less sensitive to blue light than we are to green. If you take a look at the plot at the top of this answer again, remember that the $Y$ curve was chosen to be the perceptual sensitivity of human subjects across the visible spectrum. Notice how little it overlaps with blue.
In fact, a common formula people use to convert images to grey scale (less accurate than the full XYZ transformation, but easy to compute) is:
$$ L = 0.21 R + 0.72 G + 0.07 B $$
Those three coefficients tell the story. Roughly 72% of what we perceive as brightness comes from the green channel, 21% from the red, and only 7% from the blue. This is why, when the sun shines on those windows in your building, even though there is more blue light coming in, and the blue component still dominates the other colors, the sky suddenly looks very drab indeed.
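You can check the effect of those weights directly; the helper below just applies the grey-scale formula above to a couple of pure colors:

```python
# The perceptual luma weights quoted in the text (0.21, 0.72, 0.07);
# these are the common Rec. 709-style coefficients.
def luminance(r, g, b):
    """Perceived brightness of an (r, g, b) pixel, each in [0, 1]."""
    return 0.21 * r + 0.72 * g + 0.07 * b

blue = luminance(0, 0, 1)   # 0.07 -- a fully saturated blue looks dim
green = luminance(0, 1, 0)  # 0.72 -- pure green looks roughly 10x brighter
```

So even a sky pixel whose blue channel outvalues red and green can be perceptually swamped once red and green rise at all.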
All of the code used to make these figures is available as an ipython notebook here.
Many stars are very far from white! The reason one may get the impression they are all white is just that our color vision is pretty bad when looking at dim objects. However, even with the naked eye, you can clearly see a whole variety of colors on a dark night. For instance, Betelgeuse can be found on the shoulder of Orion, and it is clearly red. The brightest star in the sky, Sirius, is notably blue. My favorite example of diversity is the binary system of Albireo. You need binoculars or a telescope to separate the two stars, but then you clearly see one as bluish and the other as orange. A southern sky equivalent would be the Jewel Box, though I have not seen that one personally.
In fact, stars have a color index associated with them. Basically, you measure a star's brightness in two different, standard, narrow-ish bandpasses - such as B (blue) and V (visible, i.e. green) - and form the difference of the corresponding magnitudes, which is proportional to the logarithm of the ratio of fluxes. In the B-V case, a larger number means a redder star. Sirius is about 0 on this scale; Betelgeuse is almost 2.
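As a sketch with made-up fluxes (by convention a magnitude difference is $-2.5\log_{10}$ of the flux ratio, which gives the right sign: less blue flux means a larger, redder index):

```python
import math

# B-V style color index from fluxes in two bandpasses. The sample
# flux values below are invented for illustration, not measurements.
def color_index(flux_b, flux_v):
    """Magnitude difference m_B - m_V = -2.5 log10(F_B / F_V)."""
    return -2.5 * math.log10(flux_b / flux_v)

# A red star emits far less in B than in V, giving a large index.
red_star = color_index(flux_b=1.0, flux_v=5.0)   # ~1.75, Betelgeuse-like
blue_star = color_index(flux_b=1.0, flux_v=1.0)  # 0.0, Sirius-like
```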
Note that because stars are roughly blackbody emitters, you won't see every color. Blackbody spectra form a one-parameter family, usually parameterized by the effective temperature. Take the emitted spectrum at a given temperature, convolve it with a response function for human vision (something people still debate the details of), and you get the perceived overall color. This image shows roughly what colors you might see a blackbody as. This is the same phenomenon behind glowing hot metal, so the colors are much the same. Note that the curve traced out by various temperatures happens to miss the green region - you will never see a green star. Once stars are hot enough to be emitting more green than red, they are emitting enough blue to appear white-ish.
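A quick sketch with Planck's law makes the "no green stars" point concrete: choose the temperature whose Wien peak sits at 550 nm (green), and the spectral radiance at 450 nm and 650 nm still comes out within roughly 10% of the peak, so the eye receives a near-white mix rather than green. The constants and wavelengths below are standard; the perceptual conclusion is the qualitative one from the text.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance B_lambda (per unit wavelength)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

# Wien's displacement law: pick T so the peak of B_lambda is at 550 nm.
T = 2.898e-3 / 550e-9  # ~5270 K

b_blue = planck(450e-9, T)
b_green = planck(550e-9, T)
b_red = planck(650e-9, T)
# Green is the peak, but blue and red are each about 90% of it, so the
# combined light is perceived as whitish rather than green.
```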