If you increase the number of photons leaving a light source you increase the brightness/intensity, so it could be said that intensity is directly proportional to the number of photons. However, intensity is also directly proportional to amplitude squared, which leads me to wonder: if you increase the amplitude of a light wave, are you just creating more photons rather than affecting each individual photon in some way?
[Physics] How is the amplitude of light related to number of photons
electromagnetism, photons, visible-light
Related Solutions
Intensity is an objectively measurable attribute of light: it is the rate at which energy is delivered to a surface, i.e. energy delivered per unit time per unit area. It is closely related to the photon irradiance (photon flux), the number of photons delivered per square meter per second: the intensity equals the photon flux multiplied by the energy per photon. You can measure intensity with a photoelement, such as a solar cell or a photomultiplier, which converts light to electric current; because the electric current varies strictly with the intensity of the light, an objective measurement is possible. The more energetic the photon (the shorter its wavelength), the fewer photons are required for a given intensity. See this link: http://www.pveducation.org/pvcdrom/properties-of-sunlight/photon-flux.
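The relationship between intensity and photon flux can be sketched numerically (a rough Python illustration; the light source values below are assumptions picked for the example, not measurements):

```python
# Relating intensity (W/m^2) to photon flux (photons/m^2/s).
h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light in a vacuum, m/s

def photon_flux(intensity_w_per_m2, wavelength_m):
    """Number of photons per square meter per second for a given intensity."""
    photon_energy = h * c / wavelength_m   # energy per photon, joules
    return intensity_w_per_m2 / photon_energy

# Example: 1000 W/m^2 of 500 nm light (roughly full sunlight, green)
flux = photon_flux(1000.0, 500e-9)
print(f"{flux:.3e} photons per m^2 per s")
```

The same intensity carried by shorter-wavelength photons needs fewer of them, since each photon carries more energy.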
Brightness is a "subjective" quality of light. It depends on the perception of whoever is viewing the light. It can't be objectively measured, but it can be scaled, so that the same viewer (or viewers with similar perceptions) can agree that certain light is more or less bright. For instance a less bright surface may be deemed 50% of the brightness of a brighter surface. In astronomy, for example, stars may be graded according to their apparent magnitude, which is their brightness in comparison to a very bright benchmark star. Brightness may also be called luminous flux. Here is a list of units in which various luminous standards of brightness are scaled: https://en.wikipedia.org/wiki/Luminous_flux
Photon is a term used to describe the particle attribute of light. A photon may be considered the smallest packet of energy into which light can be separated.
You could deliver greater intensity by emitting light of a shorter wavelength, or by increasing the surface area emitting the light (greater surface area means more electrons emitting more photons).
The energy of each photon is measured in joules and depends on the wavelength of the light according to the formula $Q = hc/\lambda$, where $Q$ is energy in joules, $h$ is Planck's constant, $c$ is the speed of light in a vacuum, and $\lambda$ is the wavelength of the light in meters. Photons are considered "massless particles": they have zero rest mass, yet they carry energy and momentum. This apparent conundrum is sometimes resolved by saying that photons have relativistic mass while traveling but zero rest mass because they are never at rest. Here is a better explanation: http://math.ucr.edu/home/baez/physics/ParticleAndNuclear/photon_mass.html.
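As a quick sanity check of the formula (illustrative values only):

```python
# Energy and momentum of a single photon from its wavelength.
h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light in a vacuum, m/s

def photon_energy(wavelength_m):
    return h * c / wavelength_m   # joules, Q = h*c/lambda

def photon_momentum(wavelength_m):
    return h / wavelength_m       # kg*m/s, equivalently Q/c

lam = 500e-9                      # green light
E = photon_energy(lam)            # ~4e-19 J per photon
p = photon_momentum(lam)
# consistent with Q = p*c for a massless particle
```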
The wavelength of a photon determines its energy, and vice versa. The shorter the wavelength, the shorter the period of the wave, the greater its frequency, and the more cycles can be crammed into a unit time; short wavelengths are therefore more energetic than long ones. As light possesses both particle and wave natures, a photon in flight has velocity $c$, momentum $p = h/\lambda$ (equivalently $Q/c$), and wavelength, but when a photon is emitted or absorbed (at the beginning and the end of its journey), it behaves as a particle.
The power of a light source can be measured in watts. A watt is the rate of energy of one joule per second.
SUMMARY:
This is a very good question. In a lossless medium, fundamentally the answer to your question is "no, an individual ray does not lose energy in propagating" because it represents a plane wave (in photon language, a momentum eigenstate), whose intensity does not vary as it propagates. Intensity information is encoded in the flux density of rays through the target surface in a raytracing simulation. You can't see intensity information in a lone ray, because this information is encoded in the relationship between a ray and its neighbors, i.e. by a notion of how much a tube of rays swells and shrinks laterally as it propagates.
With these two statements, you should be able to see the difference between the laser case and the diverging wave case.
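The flux-density idea can be illustrated with a toy calculation (purely a sketch; the ray count is arbitrary):

```python
# Intensity from ray flux density: for a point source, the same bundle of
# rays crosses spheres of increasing radius, so the flux density (rays per
# unit area) falls as 1/r^2, reproducing the inverse-square law. A laser's
# parallel rays keep a constant density, hence constant intensity.
import math

n_rays = 100_000                      # rays from an isotropic point source

def flux_density(radius):
    area = 4.0 * math.pi * radius**2  # sphere centered on the source
    return n_rays / area              # rays per unit area ~ intensity

d1 = flux_density(1.0)
d2 = flux_density(2.0)
print(d1 / d2)   # doubling the distance quarters the intensity
```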
But this statement must be qualified in practice according to exactly how you interpret the notion of "ray". In particular, let's look at the various conceptions of rays in a software implementation.
LOCALIZED RAYS
A localized ray is an approximate abstraction representing light when the eikonal equation holds (the slowly varying envelope approximation), and we must make our abstraction yield the right answers in calculations and to physical questions. The answer depends, therefore, on the application.
Mostly a ray is a unit normal to a phasefront, and tracing rays simply lets us visualize phase fronts; we see where they converge to near focusses and so forth. No amplitude information is needed here.
Now we get to more sophisticated calculations, where we try to answer questions about the intensity and phase of the local light field from traced rays. How you encode amplitude data in rays depends on how you combine your rays to get this intensity and phase information. Note that we can only ask for intensity/phase information from individual localized rays in regions where the slowly varying envelope approximation holds. This approximation therefore rules out the naive use of rays to find phase and amplitude information about fields near focusses, for example, where the amplitude varies swiftly over a few wavelengths; the contribution to the field there comes from many rays at once. There is a way around this difficulty in software, so read on to find out how this comes about through the right notion of addition of ray contributions.
TRUE RAYS
Most fundamentally, a conception of a ray that has no approximation is as the definition of a plane wave: the ray does this by being a unit normal to a plane wavefront. So, suppose we assign a complex amplitude to our ray to represent intensity and phase: the magnitude of this quantity does not change with propagation, only the phase does. We can even assign two complex amplitudes to account for polarization. The entity propagates by multiplying the complex quantities by $\exp(i\,\vec{k}\cdot\vec{r})$. Here $\vec{k}$ is the wavevector, and, strictly speaking, it is the classic example of a one form (covector, or covariant vector) rather than a vector: a linear map $\mathbb{R}^3\to\mathbb{R}$ that takes as input the displacement $\vec{r}$ and returns how many phasefronts a displacement in this direction and of this magnitude pierces. It is really helpful to keep this fundamental geometry in mind when thinking of what a ray really stands for: displacements standing for how we move about in space, and parallel stacks of phasefronts pierced by the former as we do so (see reference [[1]]).
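A minimal sketch of such a "true ray" as data, assuming a single scalar complex amplitude and a fixed wavevector (the names here are illustrative, not from any particular raytracer):

```python
# A "true ray" as a plane wave: a complex amplitude whose magnitude is
# fixed and whose phase advances as exp(i k.r) under propagation.
import numpy as np

wavelength = 500e-9
k_mag = 2 * np.pi / wavelength
k = k_mag * np.array([0.0, 0.0, 1.0])   # wavevector along z

def propagate(amplitude, displacement):
    """Advance the ray's complex amplitude by a displacement r."""
    return amplitude * np.exp(1j * np.dot(k, displacement))

a0 = 1.0 + 0.0j
a1 = propagate(a0, np.array([0.0, 0.0, 250e-9]))  # half a wavelength
# |a1| is unchanged; the phase has advanced by pi, so a1 is about -1
```

Note how $\vec{k}\cdot\vec{r}$ counts phasefronts pierced by the displacement, exactly the one-form picture described above.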
I'll call this entity a "true ray", and it behaves a little differently from rays in most raytracing software. In particular, since it stands for a plane wave, it can be slid anywhere on the planar phasefront and encode exactly the same plane wave. So suppose we have a bunch of these rays converging in a raytracing simulation to an imperfect focus, and we wish to know the field phases and amplitudes at a point $P$ somewhere near the focus:
Since any ray can slide anywhere orthogonal to itself along its tail, we slide all the rays as shown:
then propagate them to the point $P$ and tally up all the field vector components implied by the propagated polarization complex amplitudes. Note that, in theory, this works for any point $P$: if the rays truly represented plane waves, and if you did this rigorously, calculating the plane wave decomposition of any source, this ray combination technique would be equivalent to solving the Helmholtz equation by Fourier analysis, so it is time-consuming. In practice, furthermore, rays in simulations are localized rays: they stand for fields that are well approximated by plane waves only in a small neighborhood. So in most simulations, you can only safely slide rays in this way by ten microns or so (tens of wavelengths, say). This is well enough if you propagate all your rays to a spherical surface centered near a focus: a sideways slide of ten microns of all the rays lets you compute the field vector amplitudes accurately enough to get a good picture of most point spread functions.
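A toy version of this tally, assuming scalar (unpolarized) amplitudes and purely illustrative ray data:

```python
# Summing "true ray" contributions at a point P: each ray is slid
# orthogonally so it passes through P, then its complex amplitude,
# advanced by the along-ray phase, is added to the total.
import numpy as np

wavelength = 500e-9
k_mag = 2 * np.pi / wavelength

def field_at(P, rays):
    """rays: list of (origin, unit_direction, complex_amplitude)."""
    total = 0.0j
    for origin, d, a in rays:
        # sliding orthogonally leaves the plane wave unchanged, so the
        # phase at P depends only on the along-ray distance d.(P - origin)
        phase = k_mag * np.dot(d, P - origin)
        total += a * np.exp(1j * phase)
    return total

# two rays converging symmetrically onto the origin from equal distances
d1 = np.array([np.sin(0.1), 0.0, np.cos(0.1)])
d2 = np.array([-np.sin(0.1), 0.0, np.cos(0.1)])
rays = [(-d1 * 1e-3, d1, 1.0 + 0j), (-d2 * 1e-3, d2, 1.0 + 0j)]
# equal path lengths -> equal phases -> constructive interference, |E| = 2
print(abs(field_at(np.array([0.0, 0.0, 0.0]), rays)))
```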
THE INTERMEDIATE CASE
By now it should be fairly clear what is going on: rays encode locally plane waves; they can propagate phase information, but intensity information is encoded by the flux density of rays. Near focusses, we need full Fourier analysis to extract the implied amplitude and phase distributions, as above. But away from focusses there is a good intermediate notion that lets you calculate intensities, relieves you of the need to propagate millions of rays to work out flux densities accurately, and also yields amplitude and phase distributions on spherical surfaces centered on focusses, so you can make a good approximation to the Fourier analysis above. This is an object I call a "ray tubelet", and it comprises a triplet of localized rays. The triplet begins from a divergence point (point source), and each ray keeps track of its phase delays as it propagates, whilst the divergence between the triplet of rays can be used to extract the intensity information. Suppose we wish to calculate the light intensity at a point within the tubelet. This intensity varies inversely with the area of the triangle defined by the three intersections of the tubelet rays with the least squares best fit to a surface that is orthogonal to all three and passes through the point in question (to be truly orthogonal to all three is impossible unless they are parallel, which is why we use the least squares best fit). We define the tubelet's position, after applying the same propagation operations to all three members, as the mean of the ray head positions. The area in question is then half the magnitude of the cross product of any pair of differences between the three head positions.
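The area bookkeeping at the heart of the tubelet can be sketched like this (illustrative only; `relative_intensity` and the head positions are made-up names and values):

```python
# Ray-tubelet intensity sketch: intensity varies inversely with the area
# of the triangle spanned by the three ray head positions. The triangle
# area is half the magnitude of the cross product of two edge vectors.
import numpy as np

def triangle_area(p0, p1, p2):
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def relative_intensity(heads, heads_ref):
    """Intensity at the tubelet cross-section relative to a reference one."""
    return triangle_area(*heads_ref) / triangle_area(*heads)

# a diverging tubelet: heads twice as far apart -> a quarter the intensity
ref = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0])]
far = [2 * p for p in ref]
print(relative_intensity(far, ref))
```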
Another way to tackle this problem is to propagate wavefront curvature information as well as amplitude with each ray. In effect, you are decomposing a wavefront into a great number of Gaussian beams, propagating them through a system and then summing their contributions at the end of the simulation.
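For free space, this Gaussian-beam bookkeeping reduces to propagating the complex beam parameter $q$, where $1/q = 1/R - i\lambda/(\pi w^2)$ and free-space propagation is simply $q \to q + d$. A minimal sketch with an assumed waist (the numbers are illustrative):

```python
# Propagating one Gaussian beam via its complex beam parameter q.
# At the waist q = i*z_R with Rayleigh range z_R = pi*w0^2/lambda.
import math

wavelength = 500e-9
w0 = 1e-3                              # assumed 1 mm waist
z_R = math.pi * w0**2 / wavelength     # Rayleigh range

q = 1j * z_R                           # q at the waist
q = q + 2 * z_R                        # free-space propagation, two Rayleigh ranges

def spot_size(q):
    """Recover the beam radius w from Im(1/q) = -lambda/(pi*w^2)."""
    return math.sqrt(-wavelength / (math.pi * (1 / q).imag))

# analytically, w(z) = w0*sqrt(1 + (z/z_R)^2), so here w = w0*sqrt(5)
print(spot_size(q))
```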
[1]: Two of the best descriptions of one forms for physicists are in chapter 1 of Misner, Thorne and Wheeler, "Gravitation" and Bernard Schutz, "A First Course in General Relativity".
Best Answer
Yeah, pretty much. When you increase the amplitude of a light wave, you are essentially just sending more photons of the same kind. The energy of each photon is $hf$ (though with some allowance for wavepackets, where each photon comes with a probability distribution over a range of frequencies), so if you increase the energy flux you need to increase the photon flux.
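A back-of-the-envelope version of this scaling (illustrative numbers):

```python
# Doubling the field amplitude quadruples the intensity, and since each
# photon carries the same energy h*f, it quadruples the photon rate.
h = 6.626e-34
f = 6.0e14                               # ~500 nm light

def photon_rate(amplitude, power_scale=1.0):
    power = power_scale * amplitude**2   # intensity ~ amplitude squared
    return power / (h * f)               # photons per second

r1 = photon_rate(1.0)
r2 = photon_rate(2.0)
print(r2 / r1)   # 4x the photons for 2x the amplitude
```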
When you actually dig into it, things are not quite as simple, of course, because the exact value of the electric field is a quantum mechanical variable analogous to the position variable of a quantum harmonic oscillator, and you can't really describe it as a variable with a well-defined oscillation. (In particular, for example, if you know the intensity of the light precisely, then you lose all information about the phase of the oscillations, and vice versa.) However, the electric field is essentially confined to a region that grows as the square root of the number of photons in the mode.