This has as much to do with biology as with physics. The long answer on the biology is here. The biology in summary: the human eye has only three different "color-sensitive" elements, and it assigns a "color" to an image by combining the responses it gets from each of these three.
Because there are only three sensitivities in the human eye, a variety of techniques can represent colors to humans using only three basic colors (in some schemes called primary colors).
The actual frequency of the light emitted from the part of the rainbow we call green may have the same effect on the human eye as something we get by mixing our blue crayon with our yellow crayon on a piece of paper. But it is easy to build a detector which will trivially differentiate between monochromatic green from one slice of a rainbow, and a mixture of the light reflected from blue and yellow crayon pigments.
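This distinction between what the eye reports and what a detector records can be made concrete with a toy calculation. Below is a sketch, not real colorimetry: the cone sensitivity curves are stand-in Gaussians (the peak wavelengths are roughly right, the widths are made up), and the spectra are idealized narrow bands.

```python
import numpy as np

# Toy Gaussian stand-ins for the three human cone sensitivities.
# Peak wavelengths (nm) are roughly right; the widths are made up.
def cone_responses(wavelengths_nm, power):
    peaks = np.array([560.0, 530.0, 420.0])     # L, M, S cones
    sens = np.exp(-((wavelengths_nm[None, :] - peaks[:, None]) / 50.0) ** 2)
    return sens @ power                          # one number per cone type

wl = np.arange(380.0, 701.0)

# Spectrum 1: monochromatic "green" light at 545 nm (a single nonzero bin).
mono = (wl == 545.0).astype(float)

# Spectrum 2: two narrow bands, standing in for blue + yellow pigment light.
mix = 0.5 * (wl == 480.0) + 0.5 * (wl == 580.0)

# The eye compresses each spectrum to just three numbers, and the weights in
# `mix` can be tuned until those three numbers nearly match `mono`'s.  A
# spectrometer sees the full arrays, which are trivially different.
print(cone_responses(wl, mono))
print(cone_responses(wl, mix))
```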
In principle, humans might have evolved a different eye with four or five different "color" detectors, in which case the schemes needed to make color images would probably need to have four or five basic colors, and the images we see from our current three color representations would seem to be washed out, missing something important. But the eye didn't develop that way.
Part of why you don't see colors in astronomical objects through a telescope is that your eye isn't sensitive to colors when what you are looking at is faint. Your eyes have two types of photoreceptors: rods and cones. Cones detect color, but rods are more sensitive. So, when seeing something faint, you mostly use your rods, and you don't get much color. Try looking at a color photograph in a dimly lit room.
As Geoff Gaherty points out, if the objects were much brighter, you would indeed see them in color.
However, they still wouldn't necessarily be the same colors you see in the images, because most images are indeed false color. What the false color means depends on the data in question. The wavelengths an image represents depend on what filter (if any) was being used when the image was taken, and on the sensitivity of the detector (e.g. a CCD). So, different images of the same object may look very different. For example, compare this image of the Lagoon Nebula (M8) to this one.
Few astronomers use filter sets designed to match the human eye. It is more common for filter sets to be selected based on scientific considerations. General-purpose filter sets in common use do not match the human eye: compare the transmission curves of the Johnson-Cousins UBVRI filters and the SDSS filters with the sensitivity curves of human cone cells. So, a set of images of an object from a given astronomical telescope may cover several wavelength bands, but these will probably not correspond exactly to what red, green, and blue mean to the human eye. Still, the easiest way for humans to visualise this data is to map these images to the red, green, and blue channels of a color image, basically pretending that they do.
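The mapping itself is mechanically simple. Here is a minimal sketch of the naive version (the function name, per-channel normalization, and synthetic data are mine, not a standard pipeline):

```python
import numpy as np

def filters_to_rgb(img_long, img_mid, img_short):
    """Map three filter images onto the R, G, B channels of a display image.

    The filter-to-channel assignment is a choice, not physics: we simply
    pretend the longest-wavelength image is "red", and so on.  Each channel
    is normalized independently to [0, 1], which already distorts the
    relative brightness between bands.
    """
    channels = []
    for img in (img_long, img_mid, img_short):
        img = img.astype(float)
        lo, hi = img.min(), img.max()
        channels.append((img - lo) / (hi - lo) if hi > lo else np.zeros_like(img))
    return np.dstack(channels)   # shape (ny, nx, 3), ready for display

# Synthetic arrays standing in for, say, i-, r-, and g-band images:
rng = np.random.default_rng(0)
i_band, r_band, g_band = (rng.random((64, 64)) for _ in range(3))
rgb = filters_to_rgb(i_band, r_band, g_band)
```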
In addition to simply mapping images through different filters to the RGB channels of an image, more complex approaches are sometimes used. See, for example, this paper (2004PASP..116..133L).
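One idea from that paper is to apply an arcsinh stretch to the total intensity and scale all three channels by the same factor, so bright regions are compressed without washing out their color. The sketch below is in that spirit only; the parameter names and defaults are mine, not the paper's.

```python
import numpy as np

def asinh_rgb(r, g, b, stretch=0.5, Q=8.0):
    """Arcsinh color mapping in the spirit of Lupton et al. (2004).

    The stretch is computed from the mean intensity I = (r + g + b) / 3 and
    the SAME factor multiplies all three channels, so the color (channel
    ratios) of a bright region survives the compression, unlike with
    independent per-channel scaling.
    """
    I = (r + g + b) / 3.0
    I = np.where(I > 0, I, 1e-12)                 # avoid division by zero
    factor = np.arcsinh(Q * I / stretch) / (Q * I)
    return np.clip(np.dstack([r * factor, g * factor, b * factor]), 0.0, 1.0)

# Synthetic uniform images standing in for three filter bands:
demo = asinh_rgb(np.full((4, 4), 0.2),
                 np.full((4, 4), 0.1),
                 np.full((4, 4), 0.05))
```

Because every channel is multiplied by the same per-pixel factor, the ratio between channels (and hence the hue) is unchanged wherever no clipping occurs.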
So, ultimately, what the colors you see in a false color image actually mean depends both on what data happened to be used to make the image and on the mapping method preferred by whoever constructed it.
Best Answer
All colors are only in the mind. Light has a mix of wavelengths, but it doesn't have color until someone sees it.
When light enters the eye, it hits rods and cones in the retina. Cones are color receptors. There are three kinds. Each kind is sensitive to a range of wavelengths. Color is the result of stimulation of the cones, and additional processing in the brain.
The image is from The Color-Sensitive Cones at HyperPhysics. Copyright by C. R. Nave, Georgia State University. A good starting link is the Light and Vision page.
Loosely, the three cone types are sensitive to long, medium, and short wavelengths. The ranges overlap, so most light, even single-wavelength laser light, stimulates more than one type. The graph shows which types are stimulated by single-wavelength light at each wavelength. The colors we see are determined by the mix of stimulations. The bottom of the graph gives the names of colors for single-wavelength light.
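The overlap is easy to see numerically. This sketch uses made-up Gaussian curves in place of the real cone sensitivities (the peak wavelengths are approximately right, the widths are illustrative):

```python
import numpy as np

# Toy Gaussian stand-ins for the L, M, S cone sensitivity curves.
PEAKS = {"L": 560.0, "M": 530.0, "S": 420.0}   # peak wavelengths in nm
WIDTH = 45.0                                    # illustrative width

def stimulation(wavelength_nm):
    """Relative stimulation of each cone type by single-wavelength light."""
    return {name: float(np.exp(-((wavelength_nm - peak) / WIDTH) ** 2))
            for name, peak in PEAKS.items()}

print(stimulation(545))   # "green" light: L and M both respond strongly
print(stimulation(650))   # "red" light: relatively most L, less M, almost no S
```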
Grey is not on the list. Grey requires a mix of wavelengths that stimulates the three types more or less equally. So do black (very little overall stimulation) and white (a great deal).
There is more to it than that. The perception of color is affected by colors around it. There are photographs where two different patches reflect the same light. But the colors we perceive are different because of the surroundings. For example, see the Checker Shadow illusion.
By Original by Edward H. Adelson, this file by Gustavb [Copyrighted free use], via Wikimedia Commons
Also no wavelength will stimulate only the "Green" cones. They are always stimulated in combination with other cones. I once read it is possible to stimulate them with a probe. The person saw a color he had never seen before. I wish I could find a link. Quora might be a good place to start.