[Physics] Effect of a wavefront deformation on the far-field diffraction pattern of a TEM00

Tags: diffraction, fourier-transform, optics

By performing MATLAB simulations on a TEM00 mode (approximated by a Gaussian intensity profile with a flat wavefront), I got the impression that applying wavefront deformations (such as a single Zernike term) to the input field has almost no effect on the far-field diffraction pattern (computed as the Fourier transform of the input field), even for deformations of up to several tens of $\lambda$.
The only exceptions seem to be the first and second Zernikes (tip/tilt, i.e. a linear phase gradient), which shift the beam as they should, and the fourth Zernike (defocus, a spherically curved wavefront), which refocuses the beam as expected. No significant effect was observed for the astigmatism and coma terms, for instance.
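A minimal sketch of this pipeline, with illustrative choices for the grid size, the beam radius, the normalization of $\rho$, and the aberration strength (none of these are from my actual code):

```matlab
% Minimal sketch: Gaussian TEM00 amplitude, one Zernike phase term,
% far field via a centred 2-D FFT.
N = 512;                                   % grid size in pixels
w = N/10;                                  % 1/e^2 intensity radius
[x, y]       = meshgrid(-N/2 : N/2-1);     % pixel coordinates
[theta, rho] = cart2pol(x, y);
rho = rho / (N/2);                         % rho normalized to the grid

A = exp(-(x.^2 + y.^2) / w^2);             % Gaussian field amplitude
W = sqrt(8) * (3*rho.^3 - 2*rho) .* sin(theta);  % coma, in waves
E = A .* exp(1i * 2*pi * 10 * W);          % apply 10 waves of coma

FF = fftshift(fft2(fftshift(E)));          % far field of the input field
imagesc(abs(FF).^2); axis image;           % near-perfect, slightly shifted spot
```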

On the contrary, even small deformations (such as astigmatism or coma) applied to the same Gaussian profile with an added phase helix (phase equal to the azimuthal angle $\theta$ in the wave plane, i.e. one full $2\pi$ ramp per revolution) have dramatic effects on the shape of the far-field pattern, which is otherwise a perfect donut (approximating an LG01 mode, the first non-trivial Laguerre-Gaussian mode).
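A sketch of this second case (the quarter-wave astigmatism, scaled here to the beam radius so it overlaps the light, is an illustrative choice):

```matlab
% Same Gaussian with a charge-1 phase helix: the far field is an
% LG01-like donut. A fraction of a wave of astigmatism over the beam
% visibly distorts it.
N = 512;  w = N/10;
[x, y]       = meshgrid(-N/2 : N/2-1);
[theta, rho] = cart2pol(x, y);

A  = exp(-(x.^2 + y.^2) / w^2);            % Gaussian amplitude
W  = (rho/w).^2 .* cos(2*theta);           % astigmatism scaled to the beam
E  = A .* exp(1i*theta) .* exp(1i * 2*pi * 0.25 * W);  % helix + 1/4 wave
FF = fftshift(fft2(fftshift(E)));
imagesc(abs(FF).^2); axis image;           % distorted / broken-up donut
```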

What puzzles me is that, in a simple experimental realization of the first case (laser beam coming out of a single-mode fiber, wavefront deformation applied by an LC-SLM, beam focused by a microscope objective, far-field diffraction pattern observed with a camera looking at the reflection of the beam off a microscope slide), even slight deformations have a very visible effect on the "TEM00" (admittedly, how close to a TEM00 the beam remains after passing through all the optical elements is hard to say).

Am I missing something?
(Illustrations can be provided upon request.)

Best Answer

You are probably filling your whole computation grid with your Zernike polynomial. Remember that Zernike polynomials are only orthogonal (and useful as optical aberrations) over the unit circle, so you have to choose an appropriately sized patch of the computation grid over which to define your beam's intensity and phase. For example, I generated some plots on a 512x512-pixel grid:

Here is the phase delay map for coma, computed as $\sqrt{8}\,(3\rho^3 - 2\rho)\sin\theta$ as if the unit circle extended to the edges of the grid. It has a P-V of 1 wave:

[figure: coma phase delay map]
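A sketch of how such a map can be computed (the rescaling to 1 wave P-V over the displayed grid is my reading of the plot):

```matlab
% Coma evaluated as if the unit circle reached the grid edges, then
% rescaled to a peak-to-valley of 1 wave.
N = 512;
[x, y]       = meshgrid(-N/2 : N/2-1);
[theta, rho] = cart2pol(x, y);
rho = rho / (N/2);                         % unit circle = whole grid

W = sqrt(8) * (3*rho.^3 - 2*rho) .* sin(theta);
W = W / (max(W(:)) - min(W(:)));           % rescale to 1 wave P-V
imagesc(W); axis image; colorbar;          % phase delay map, in waves
```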

Now we put a Gaussian intensity profile on top, but one that does not fill the whole grid (as I'm sure yours does not, because that error is much harder to miss), with the $e^{-2}$ intensity contour at a radius of $\frac{1}{10}$ of the grid. The near-field intensity looks like this:

[figure: near-field intensity $I_{NF}$]
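In code (taking the $e^{-2}$ radius as $N/10$ pixels):

```matlab
% Gaussian near-field intensity, e^{-2} contour at radius N/10.
N = 512;  w = N/10;
[x, y] = meshgrid(-N/2 : N/2-1);
I_NF = exp(-2 * (x.^2 + y.^2) / w^2);      % intensity (not amplitude)
imagesc(I_NF); axis image;
```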

If you use the above phase and intensity, you get a far field like this (shown as $\sqrt{\mathrm{Intensity}}$):

[figure: the (wrong) far-field pattern $I_{FF}$]
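A sketch combining the two (same illustrative parameters as above; the field amplitude is the square root of the intensity):

```matlab
% Full-grid coma phase + small Gaussian beam, propagated to the far
% field; sqrt(intensity) = abs(field) is displayed.
N = 512;  w = N/10;
[x, y]       = meshgrid(-N/2 : N/2-1);
[theta, rho] = cart2pol(x, y);
rhog = rho / (N/2);                        % rho scaled to the whole grid

W = sqrt(8) * (3*rhog.^3 - 2*rhog) .* sin(theta);
W = W / (max(W(:)) - min(W(:)));           % 1 wave P-V over the grid
E = exp(-(x.^2 + y.^2) / w^2) .* exp(1i * 2*pi * W);

FF = fftshift(fft2(fftshift(E)));
imagesc(abs(FF)); axis image;              % sqrt of the far-field intensity
```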

This basically looks like a perfect diffraction-limited spot! Why? Well, look at the phase over the region actually occupied by the beam:

[figure: phase over the beam footprint]

It's just tilt! And its P-V is way below 1 wave, so it won't really do much of anything to the far field.
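A quick numerical check of that claim (taking the beam region as the $e^{-2}$ disk):

```matlab
% P-V of the full-grid coma over the region that actually carries light.
N = 512;  w = N/10;
[x, y]       = meshgrid(-N/2 : N/2-1);
[theta, rho] = cart2pol(x, y);
rhog = rho / (N/2);

W = sqrt(8) * (3*rhog.^3 - 2*rhog) .* sin(theta);
W = W / (max(W(:)) - min(W(:)));           % 1 wave P-V over the full grid

mask = rho <= w;                           % e^{-2} disk of the beam
pv = max(W(mask)) - min(W(mask));          % P-V actually seen by the beam
fprintf('P-V over the beam: %.3f waves\n', pv);  % a few hundredths of a wave
```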

Now, let's try again, but properly scale the $\rho$ axis when we compute the coma:

[figures: correct near-field phase and intensity, and the correct far field]
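A sketch of the corrected computation (the aperture radius over which the Zernike is defined, here twice the beam radius, is my choice):

```matlab
% The fix: normalize rho to an aperture matched to the beam, not to the
% grid, so the 1 wave of coma lands on the light itself.
N = 512;  w = N/10;
r_ap = 2*w;                                % Zernike aperture radius (assumed)
[x, y]       = meshgrid(-N/2 : N/2-1);
[theta, rho] = cart2pol(x, y);
rhon = rho / r_ap;                         % rho normalized to the aperture

W = sqrt(8) * (3*rhon.^3 - 2*rhon) .* sin(theta);
mask = rho <= r_ap;                        % Zernikes live on the unit disk
W = W / (max(W(mask)) - min(W(mask)));     % 1 wave P-V over the aperture
E = exp(-(x.^2 + y.^2) / w^2) .* exp(1i * 2*pi * W) .* mask;

FF = fftshift(fft2(fftshift(E)));
imagesc(abs(FF)); axis image;              % clearly comatic far field
```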

Just what you expect.
