[Physics] Computer-Generated Holograms: I’m completely lost. How are they physically implemented?

electromagnetic-radiation, hologram, optics

I have been reading about holography, and I think I understand the general concept, but one thing that has me completely lost is how computer-generated holography works in practice.

I think I get the basic idea behind how CGHs work. If we were to take a 3D object, like a Utah teapot, we could emulate the behaviour of an actual laser beam bouncing off the teapot and interfering with a reference beam, thus computing the hologram numerically (I've sketched the sort of computation I mean at the end of this question).

Now, here's where I'm confused: I've read about printing holograms (as in, with a regular printer), recording actual holograms on CCDs, patterning a holographic plate with the fringes using an LCD, and even holographic displays. What I don't get at all is how this is even vaguely possible. Aren't the interference fringes that make up the hologram much smaller than the wavelength of light? Even if we had LCDs with massive resolution, wouldn't the diffraction limit prevent using them to pattern the plate, in the same way that visible-light photolithography is nearing its physical limits in microfabrication?

Basically, I've never seen a straightforward explanation of how computer-generated holograms are actually transferred to the physical recording medium. As far as I know, it is possible, because there are companies currently doing it (such as Zebra Imaging). However, reading over patents and other papers in the literature yielded no clear understanding of how this really works; most authors seem to gloss over the implementation, and often seemingly contradict themselves. It was my understanding that one needed an electron microscope to actually make out the fringes because they are so small. If this is the case, why does one not need an electron microscope to etch the fringes?
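To make concrete what I mean by "emulating the laser", here is the sort of toy computation I have in mind, with a handful of made-up point scatterers standing in for the teapot and completely arbitrary parameters:

```python
import numpy as np

# Made-up parameters, purely for illustration
wavelength = 0.633e-6            # red HeNe laser, in metres
k = 2 * np.pi / wavelength       # wavenumber
pitch = 1.0e-6                   # spacing of the hologram samples, in metres
n = 512                          # samples per side of the hologram

# Sample points of the hologram plane at z = 0
xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# A few point scatterers standing in for the teapot: (x, y, z) in metres
points = np.array([
    [0.0,     0.0,    0.050],
    [0.0005,  0.0002, 0.060],
    [-0.0004, 0.0003, 0.055],
])

# Object wave: sum of spherical waves from the scatterers
obj_wave = np.zeros((n, n), dtype=complex)
for x0, y0, z0 in points:
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
    obj_wave += np.exp(1j * k * r) / r

# Plane reference wave arriving at a small off-axis angle
theta = np.deg2rad(2.0)
ref_wave = np.abs(obj_wave).mean() * np.exp(1j * k * np.sin(theta) * X)

# What the recording medium would see: the intensity of the superposition
hologram = np.abs(obj_wave + ref_wave) ** 2
print(hologram.shape, hologram.min(), hologram.max())
```

The resulting hologram array is the fringe pattern I would then somehow have to transfer onto a physical plate, and it's that transfer step I can't find explained anywhere.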

Best Answer

The spacing between typical adjacent fringes in a hologram is comparable to, or longer than, the wavelength of the light we use. After all, the fringes arise from interference, and interference depends on the relative phase of the interfering waves.

Take two nearby points H1, H2 on the plate and two generic source points A, B (say, an object point and the reference source). The path-length difference |H1-A| - |H1-B| differs from |H2-A| - |H2-B| by at most about twice the separation between H1 and H2, and usually by much less. A fringe is crossed each time this quantity changes by one wavelength, so the fringes can never be finer than roughly half a wavelength; that is how the wave is imprinted in the hologram.
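A quick numerical check of this geometric statement, with made-up coordinates: take two plate points one micron apart and two source points, first close to the plate at large angles, then far away along the normal, and see how much the path-length difference changes between the two plate points.

```python
import numpy as np

def path_difference(h, a, b):
    """|h - a| - |h - b| for points given as (x, y, z) in metres."""
    h, a, b = (np.asarray(p, dtype=float) for p in (h, a, b))
    return np.linalg.norm(h - a) - np.linalg.norm(h - b)

d = 1e-6                              # two plate points one micron apart
H1 = (0.0, 0.0, 0.0)
H2 = (d,   0.0, 0.0)

# Source points close to the plate, at large angles on either side
A, B = (0.01, 0.0, 0.01), (-0.01, 0.0, 0.01)
near = path_difference(H1, A, B) - path_difference(H2, A, B)

# The same two source points pushed far away along the normal
A, B = (0.01, 0.0, 1.0), (-0.01, 0.0, 1.0)
far = path_difference(H1, A, B) - path_difference(H2, A, B)

print(f"near, wide-angle sources: change = {near * 1e9:8.1f} nm per micron of plate")
print(f"distant, on-axis sources: change = {far * 1e9:8.1f} nm per micron of plate")
```

In the first case the path difference changes by about 1.4 microns over the one-micron step, a few wavelengths, so the fringes are nearly as fine as they can get; in the second it changes by only about 20 nanometres, so the fringes end up tens of microns apart.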

However, when the object we are visualizing is far from the plate and close to the normal direction, the phase changes much more slowly across the plate, which means that the fringes on the photographic plate are spaced much further apart than the wavelength. This is the same behaviour you see in double-slit experiments and diffraction gratings: small angles mean coarse fringes.

At most, you need the resolution of the hologram to exceed one pixel per wavelength of the light. That's comparable to 0.5 microns. Invert it and you get roughly 50,000 wave maxima per inch. That's beyond the dots-per-inch resolution of even the best printers, but it is within reach of the laser and electron-beam writers used in lithography.
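As a sanity check on these numbers, here is the textbook two-beam fringe spacing (reference along the plate normal, object light arriving at angle theta, so the spacing is wavelength / sin theta) evaluated at a few arbitrary angles, together with the corresponding number of maxima per inch:

```python
import numpy as np

wavelength = 0.5e-6      # metres, as in the estimate above
inch = 25.4e-3           # metres

# Reference along the plate normal, object light arriving at angle theta:
# fringe spacing = wavelength / sin(theta)
for theta_deg in (1, 5, 30, 90):
    spacing = wavelength / np.sin(np.deg2rad(theta_deg))
    print(f"theta = {theta_deg:2d} deg: "
          f"fringe spacing = {spacing * 1e6:5.2f} um, "
          f"{inch / spacing:7,.0f} maxima per inch")
```

The last row reproduces the roughly 50,000 maxima per inch quoted above, and the small-angle rows show why a hologram of a distant, nearly on-axis object gets away with fringes coarse enough for far less exotic equipment.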

However, the condition above is for a really fine hologram. In reality, you can make a hologram even when its resolution is worse than that. Note that when we look at the hologram, in each direction we see the result of the interference of essentially all the points on the plate; it's some kind of Fourier transform. Because so many points interfere, they can effectively reconstruct the sub-pixel structure of the image.

It's also a well-known fact that you may break a hologram into pieces and you may still see the whole object in each piece.
