[Physics] How do HUD optics create readable text so close to the eye?

Tags: optics, vision

The human eye has a minimum focal length of about 50-80 mm. Most heads-up displays (HUDs) are much closer than that, so how is the text they display visible without being horribly out of focus?

For example, this goggle HUD says:

RideOn’s display, although positioned right in front of the user’s eye, projects virtual layers and data focused to a distance of >15ft, thus creating an AR experience where graphics appear as if seamlessly integrated onto the real world.

This sounds a little pseudoscientific, but the fact remains that the HUD has to deal with the eye's long minimum focal length somehow, so how do they do it?

Clarification (from comment below): I understand how optics can be used to make close things appear far away. I guess the real question is how do you do that AND see the real world behind that image? Is it just because the lensed image is only displayed on a relatively sparse grid of pixels, and you can see between them, or do the HUD optics actually incorporate the background image?

Best Answer

They make a virtual image on a plane behind the HUD. A good model system for understanding this kind of thing is the aplanatic sphere: Born and Wolf's "Principles of Optics" gives a good discussion of this if you can get hold of it.

If not, here's a simple explanation: imagine a lens system that collimates light from a point source, i.e. the point source lies on the lens's focal plane. Now imagine your HUD image on this plane, and shift the plane ever so slightly nearer to the lens than the focal plane. The lens will not quite collimate the light from the point sources: instead the light is still slightly divergent, and the light from each point source seems to diverge from a point a long way further from the lens than the focal plane.

The thin lens equation, with proper heed taken of the meaning of the sign on the results, can be used to explore this concept quantitatively thus:

$$\frac{1}{d_i}+\frac{1}{d_o} = \frac{1}{f}$$

Put $d_o = f-\epsilon$, where $d_o$ is the object distance and $\epsilon>0$ is small. You'll find $d_i\approx -f^2/\epsilon$, which is large and negative, meaning a virtual image a long way behind the focal plane.
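
To put numbers on this, here is a minimal sketch (my own illustrative values, not figures from any particular HUD): an assumed 25 mm eyepiece with the micro-display sitting 0.1 mm inside the focal plane already throws the virtual image roughly 6 m away, comfortably past the quoted 15 ft.

```python
# Minimal sketch: thin-lens numbers for a HUD-style eyepiece.
# The focal length and offset below are assumed, illustrative values.
def image_distance(f_mm, d_o_mm):
    """Thin lens equation 1/d_i + 1/d_o = 1/f, solved for d_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / d_o_mm)

f = 25.0       # assumed eyepiece focal length, mm
eps = 0.1      # display sits this far inside the focal plane, mm
d_o = f - eps  # object (micro-display) distance from the lens

d_i = image_distance(f, d_o)
print(f"d_i = {d_i:.0f} mm")               # ~ -6200 mm: virtual image ~6 m away
print(f"-f^2/eps = {-f**2 / eps:.0f} mm")  # the approximation quoted above
```

Pushing the display even closer to the focal plane sends the virtual image further out; in the limit $\epsilon\to 0$ it goes to infinity, which is exactly the collimated case.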


Edit after Question from OP

I think I'm probably going to need a diagram for this. I understand how optics can be used to make close things appear far away. I guess the real question is how do you do that AND see the real world behind that image? Is it just because the lensed image is only displayed on a relatively sparse grid of pixels, and you can see between them, or do the HUD optics actually incorporate the background image?

There are several ways this can be done. In older systems, a partially silvered mirror is used to add the light field from the imaging array to the incoming view, so that the user sees both light fields at once, as in my drawing below.

[Drawing: a partially silvered mirror combines the imaging array's light field with the incoming view]
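
As a toy model of that combiner (my own sketch, not part of the original answer): treating the two arms as incoherent intensities, each point of what the eye receives is a weighted sum of the scene and the overlay, with weights set by the mirror's transmittance and reflectance.

```python
import numpy as np

# Illustrative sketch with assumed numbers: a partially silvered mirror simply
# adds a fraction of each light field.  R is the reflectance seen by the
# display arm, T the transmittance of the scene arm (R + T <= 1 for a
# lossless-or-worse combiner).
R, T = 0.3, 0.7                     # assumed combiner split

scene   = np.random.rand(480, 640)  # stand-in for the outside world
overlay = np.zeros((480, 640))      # stand-in for the HUD symbology
overlay[200:220, 100:300] = 1.0     # a bright bar of "text"

combined = T * scene + R * overlay  # intensity pattern reaching the eye
```

This is why the symbology appears superimposed on the scene rather than replacing it: nothing blocks the background, the two light fields simply add.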

One can also use transparent screens, based on LCD or similar technologies, and place them just in front of the focal plane of the converging lens in a Galilean telescope. A second Galilean telescope compensates for the magnification of the one holding the imaging array, so that the image through the viewfinder is unmagnified, as in my drawing below.

As an aside, a Galilean telescope is made of a converging lens and a diverging lens arranged so that the focal planes of the two coincide. Rays from infinity then follow the paths shown in red.

[Drawing: transparent screen inside a pair of compensating Galilean telescopes; rays from infinity shown in red]
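
To see the compensation quantitatively, here is a small ray-transfer (ABCD) matrix check, my own sketch with assumed focal lengths rather than values from the answer.

```python
import numpy as np

# ABCD (ray-transfer matrix) check of the compensated telescope pair.
# Column-vector convention (height y, angle theta); a thin lens is
# [[1, 0], [-1/f, 1]], free space of length d is [[1, d], [0, 1]].
def lens(f):  return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
def space(d): return np.array([[1.0, d], [0.0, 1.0]])

f1, f2 = 50.0, -25.0  # assumed converging / diverging focal lengths, mm
d = f1 + f2           # separation that makes the two focal planes coincide

tele = lens(f2) @ space(d) @ lens(f1)  # Galilean telescope holding the screen
comp = lens(f1) @ space(d) @ lens(f2)  # the same telescope traversed backwards

print(tele)         # C term = 0: afocal, angular magnification D = -f1/f2 = 2
print(comp @ tele)  # A = D = 1, C = 0: the pair leaves angles unmagnified
```

The product of the two matrices is a pure free-space propagation (unit diagonal, zero lower-left term), so the see-through path carries no net angular magnification, while the screen sitting just inside the first telescope's focal plane is still thrown out to a distant virtual image, exactly as in the thin-lens argument above.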
