Most of the astronomy images we find online have some color modification. How close to those false-color images would the real views be?
This is a common misconception, that the pictures you see of galaxies and nebulae are necessarily "false color", "modified color", or "photoshopped". Some of them, yes. But a lot of them are quite simply true color, but taken with a sensor (CMOS, CCD) that doesn't suffer from the limitations of the human eye.
E.g. look at this image of the Horsehead Nebula:
All that color is real. It's there, in the photons reaching you. But your eye cannot see it. A CMOS, however, can. This is not "false color", although saturation was likely increased in post-processing, beyond what the sensor alone records. But the hues are probably real (e.g., the red or the blue you see in the image was present in the photons hitting the sensor, albeit at a lower saturation level).
(An astute observer may object that the eye and the CMOS don't see the exact same hues, but let's not go down that rathole now.)
"False color" means when the image shows green where the CMOS (or the human eye, if luminosity was higher) would see red, or something like that. This is not always the case with images of nebulae and galaxies; in fact, if the image was taken with visible light, chances are the hues are preserved.
Proper false color images are those taken in UV or IR, and then artificially converted to visible light. This is an example of it, the Sun in ultraviolet:
Now, to answer your question:
Unfortunately, even from up close, most of these objects wouldn't look much better. They are, after all, faint, rarefied clouds of dust and gas. They are simply not bright enough for the human eye to see color.
There are a few exceptions. A notable one would be close binary systems where the components are stars of very different temperatures. Kind of like Albireo, but much closer. From a starship, looking at the two stars orbiting each other, you'd see a striking color difference very clearly - perhaps a large, somewhat dim, deep red star, and a blinding, crisp dot of bluish-white light, the smaller and more active companion.
The views from the center of a globular cluster undergoing a compression phase should be pretty spectacular, too. Night would never be dark on a planet in the middle of the cluster.
Sort of.
As Space.com writes,
The raw Hubble images, as beamed down from the telescope itself, are black and white. But each image is captured using three different filters: red, green and blue. The Hubble imaging team combines those three images into one, in a Technicolor process pioneered in the 1930s. (The same process occurs in digital SLRs, except that in your camera, it's automatic.)
Why are the original images in black and white? Because if Hubble's eye saw in color, the light detector would have to have red, green and blue elements crammed into the same area, taking away crucial resolving capability. Without those different elements, Hubble can capture images with much more detail.
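The combination step the quote describes - three monochrome exposures through red, green, and blue filters merged into one color image - can be sketched in a few lines of NumPy. This is a minimal illustration, not Hubble's actual pipeline; the arrays here are toy data standing in for filtered exposures:

```python
import numpy as np

def combine_filters(red, green, blue):
    """Stack three monochrome filter exposures into one RGB image.

    Each input is a 2-D array of pixel intensities from a single
    filter; the output is an (H, W, 3) array rescaled to [0, 1].
    """
    rgb = np.stack([red, green, blue], axis=-1).astype(float)
    rgb -= rgb.min()          # shift so the darkest pixel is 0
    peak = rgb.max()
    if peak > 0:
        rgb /= peak           # scale so the brightest pixel is 1
    return rgb

# Toy 2x2 frames standing in for three filtered exposures
r = np.array([[0.0, 100.0], [50.0, 200.0]])
g = np.array([[10.0, 90.0], [60.0, 180.0]])
b = np.array([[5.0, 80.0], [70.0, 150.0]])

img = combine_filters(r, g, b)
print(img.shape)  # (2, 2, 3)
```

A digital camera does the same merge automatically behind its Bayer filter; here the three exposures arrive as separate frames, so the stacking is explicit.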
As an interesting aside, the Wide Field Camera 3 sees in wavelengths other than visible light, as do the Cosmic Origins Spectrograph and the Space Telescope Imaging Spectrograph.
NASA goes into a little detail about the process here, as well as some of the rationale behind choosing certain colors. Some of the reasons for using artificial colors include showcasing elements whose emission lines are outside the visible spectrum, and showing features that are too dim at visible wavelengths. Remember, CCD detectors usually don't see the same things that humans do, and Hubble can see outside the visible spectrum.
Part of why you don't see colors in astronomical objects through a telescope is that your eye isn't sensitive to colors when what you are looking at is faint. Your eyes have two types of photoreceptors: rods and cones. Cones detect color, but rods are more sensitive. So, when seeing something faint, you mostly use your rods, and you don't get much color. Try looking at a color photograph in a dimly lit room.
As Geoff Gaherty points out, if the objects were much brighter, you would indeed see them in color.
However, they still wouldn't necessarily be the same colors you see in the images, because most images are indeed false color. What the false color means really depends on the data in question. What wavelengths an image represents depends on what filter was being used (if any) when the image was taken, and on the sensitivity of the detector (e.g., a CCD) being used. So, different images of the same object may look very different. For example, compare this image of the Lagoon Nebula (M8) to this one.
Few astronomers use filter sets designed to match the human eye. It is more common for filter sets to be selected based on scientific considerations. General-purpose sets of filters in common use do not match the human eye: compare the transmission curves for the Johnson-Cousins UBVRI filters and the SDSS filters with the sensitivity of human cone cells. So, a set of images of an object from a given astronomical telescope may have images at several wavelengths, but these will probably not be exactly those that correspond to red, green, and blue for the human eye. Still, the easiest way for humans to visualise this data is to map these images to the red, green, and blue channels of an image, basically pretending that they are.
In addition to simply mapping images through different filters to the RGB channels of an image, more complex approaches are sometimes used. See, for example, this paper (2004PASP..116..133L).
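As a rough sketch of what such a mapping can look like - simplified, and not the exact algorithm of the paper cited above - three filter-band images can be assigned to R, G, and B and passed through a shared arcsinh stretch, which compresses bright pixels while leaving faint ones roughly linear, so hue ratios survive in bright regions. The parameter names here are illustrative, not from any library:

```python
import numpy as np

def asinh_rgb(bands, softening=0.05, stretch=5.0):
    """Map three filter-band images to RGB via a shared asinh stretch.

    `bands` is a sequence of three 2-D arrays (any wavelengths, merely
    *assigned* to the R, G, B channels). A common scaling factor derived
    from the summed intensity keeps the color ratios consistent across
    brightness levels. `softening` and `stretch` are ad hoc tuning knobs.
    """
    r, g, b = (np.asarray(x, dtype=float) for x in bands)
    total = r + g + b
    # One stretch factor per pixel, shared by all three channels
    factor = np.arcsinh(stretch * total / softening) / np.maximum(total, 1e-12)
    rgb = np.stack([r, g, b], axis=-1) * factor[..., None]
    return np.clip(rgb / rgb.max(), 0.0, 1.0)
```

Because every channel is divided by the same per-pixel factor, a pixel that is twice as bright in one band as another stays twice as bright after the stretch - which is the point of using a shared nonlinearity rather than stretching each channel independently.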
So, ultimately, what the colors you see in a false color image actually mean depends both on what data happened to be used to make the image and on the mapping method preferred by whoever constructed it.