[Physics] Is a perpetual motion machine of the second kind possible in nanotechnology?

entropy, nanoscience, perpetual-motion, thermodynamics

First of all, sorry for my English; it is not my native language.

During my engineering studies at university, the thermodynamics professor told us that the "second law of thermodynamics is not true for a low number of molecules".

At that time, scientists were already talking about microtechnologies in which single molecules do some work. (Today such technologies already exist.)

Now I'm wondering whether a perpetual motion machine of the second kind (a machine that gains energy by cooling down its environment) would be possible in nanotechnology. Such a microchip would lower the entropy of the universe.

I was thinking about the following thought experiment:

Brownian motion was discovered by observing small objects suspended in a fluid; the same jiggling can be seen in objects trapped in fluid inclusions inside amber. Because of Brownian motion these objects keep moving, and the energy comes from the heat of the environment. If such an object were a small, very strong magnet, and metallic parts were placed near the amber, then the moving magnet should induce eddy currents in the metal. That would mean there is a flow of heat energy from the amber to the metallic parts, even if the metallic parts are warmer than the amber.

Now the question: Would such a device be possible in nanosystem technology or even microsystem technology?

(I already asked a physics professor who only said: "maybe".)

Best Answer

The second law holds on average for systems of any size, large or small. If you have an isolated contraption containing just a few atoms, and you run it through some procedure (maybe as simple as waiting 5 seconds, or maybe something more complicated), there is some probability that the atoms will wind up in a lower-entropy configuration at the end of the procedure than at the start. That's what your professor was referring to.

However, the probability of random entropy reduction is not high, and certainly not 100%. The key point is that the average change in entropy, upon many repetitions of the procedure, cannot be negative for any procedure.
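To get a feel for the numbers, here is a quick Monte Carlo sketch in Python (the toy setup is mine, not anything from a textbook): each of $n$ molecules sits independently in the left or right half of a box, and we estimate how often the maximally "ordered" all-on-the-left fluctuation shows up. The answer is about $2^{-n}$: routine for a handful of molecules, hopeless for anything macroscopic.

```python
import random

def fraction_all_on_left(n_molecules, n_trials=100_000):
    """Estimate the probability that every molecule happens to be in
    the left half of the box at a given instant, assuming each one is
    independently left/right with probability 1/2."""
    hits = 0
    for _ in range(n_trials):
        if all(random.random() < 0.5 for _ in range(n_molecules)):
            hits += 1
    return hits / n_trials

for n in (2, 5, 10, 20):
    # Expected value ~ 2**-n: frequent for a few molecules,
    # effectively never for macroscopic numbers.
    print(n, fraction_all_on_left(n))
```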

When people hear this, they get an idea: "I'll run the procedure 100 times, check each time whether the entropy got randomly lowered, and somehow use only the 1 successful run to power the perpetual motion machine while throwing away the 99 unsuccessful runs." Unfortunately, it doesn't work that way! This whole process (involving 100 runs of the original procedure) is really just one run of a different, more complicated procedure, one that now also involves extra effort and energy to check whether the entropy was lowered on each run. (Otherwise you wouldn't know which of the 100 runs to use.) That checking process creates enough entropy to undo the benefit of the repeated runs.
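Landauer's principle makes the cost of that checking concrete: erasing one bit of measurement record dissipates at least $k_B T \ln 2$ of heat. A back-of-the-envelope sketch (the 100-run scenario is just illustrative):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's principle: erasing one bit of recorded information
# dissipates at least k_B * T * ln(2) of heat into the environment.
landauer_bit = k_B * T * math.log(2)
print(f"minimum cost per recorded bit: {landauer_bit:.3e} J")   # ~2.87e-21 J

# Checking 100 runs means recording (and eventually erasing) at least
# 100 bits, so the demon's bookkeeping alone dissipates at least:
print(f"minimum cost of 100 checks:   {100 * landauer_bit:.3e} J")
```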

This kind of stuff is commonly discussed under the heading of Maxwell's Demon.

If you have a small magnet diffusing around by Brownian motion, it will indeed transfer energy to a stationary piece of metal nearby via eddy currents. Unfortunately, the reverse process happens at exactly the same rate: the electrons in that metal randomly jiggle, creating fluctuating currents whose magnetic fields push on the diffusing magnet, thus transferring energy from the metal to the magnet. The energy transfer rate is equal in both directions; more specifically, as long as both parts start at the same temperature, they stay at the same temperature.
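Here is a toy illustration of that balance, a 1-D Langevin sketch in Python (the model and every parameter value are my own simplification, not a real eddy-current calculation). The eddy-current drag and the random kicks from the metal's thermal currents are tied together by the fluctuation-dissipation theorem, so the magnet's average kinetic energy relaxes to $\frac{1}{2} k_B T_{\text{metal}}$ and stays there, with no net one-way flow:

```python
import math, random

# Toy 1-D Langevin model for the magnet's velocity. Eddy currents act as
# a drag force -gamma*v; the thermal jiggling of the same electrons acts
# back as a random force. The fluctuation-dissipation theorem fixes the
# noise strength at sqrt(2*gamma*k_B*T_metal*dt) per step, so the magnet
# thermalizes to the metal's temperature. All values are illustrative.
k_B, m, gamma, T_metal, dt = 1.0, 1.0, 0.5, 1.0, 1e-3
noise = math.sqrt(2.0 * gamma * k_B * T_metal * dt)

v, ke_sum, steps = 0.0, 0.0, 200_000
for _ in range(steps):
    v += (-gamma * v * dt + noise * random.gauss(0.0, 1.0)) / m
    ke_sum += 0.5 * m * v * v

# Equipartition: <(1/2) m v^2> should approach (1/2) k_B T_metal,
# i.e. no net heat flow once the temperatures match.
print(ke_sum / steps, "vs", 0.5 * k_B * T_metal)
```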

UPDATE WITH MORE DETAILS

A better way to define entropy is to say: "We don't know exactly what the microstate (microscopic configuration) of a system is; instead, our best information is a probability distribution over the possible microstates." Then the entropy is $S = -k_B \sum_n p_n \log p_n$, where $p_n$ is the probability of microstate $n$.
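As a concrete sketch of that formula (Python, with $k_B$ set to 1 for readability):

```python
import math

def gibbs_entropy(probs, k_B=1.0):
    """S = -k_B * sum_n p_n log(p_n) over the microstate distribution,
    with the 0 * log(0) terms taken as 0. A single certain microstate
    ([1.0]) gives S = 0; units of k_B are set to 1 here."""
    return k_B * sum(-p * math.log(p) for p in probs if p > 0)

print(gibbs_entropy([0.5, 0.5]))   # ln 2 : one unknown bit of microstate
print(gibbs_entropy([0.25] * 4))   # ln 4 : two unknown bits
```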

Note that entropy is observer-dependent, in the sense that one observer may have more information about the probability distribution than another. In more concrete terms, a system might be disordered to one observer, while a different observer knows a "magic recipe" for undoing that disorder. For example, I could create a seemingly unpolarized beam of light by switching the polarization of a laser every nanosecond according to a pseudo-random sequence. I know the sequence, and therefore I can use a waveplate to get back a perfectly polarized beam with no intensity losses. But somebody who doesn't know my pseudo-random sequence really does need to treat the beam as unpolarized, and they cannot polarize it without losses, according to the second law.
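The same point can be made with bits instead of photons. In this sketch (my own analogy in Python, not the optics itself), a pseudo-random bit stream looks maximally disordered to anyone who lacks the seed, but an observer who knows the seed can regenerate the stream and XOR it away, recovering a perfectly ordered record, just as the waveplate recovers the polarized beam:

```python
import math, random

def entropy_per_bit(bits):
    """Empirical entropy (bits/symbol) of a 0/1 sequence, modeled as
    i.i.d. coin flips, which is the best an observer without the seed
    can do."""
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

rng = random.Random(42)                                  # the "magic recipe"
scrambled = [rng.randint(0, 1) for _ in range(10_000)]   # looks fully random

# Observer WITHOUT the seed: about 1 bit of entropy per symbol.
print(entropy_per_bit(scrambled))

# Observer WITH the seed regenerates the sequence and XORs it away,
# leaving an all-zero (perfectly ordered, zero-entropy) record:
# the analogue of the waveplate restoring the polarized beam.
rng2 = random.Random(42)
recovered = [b ^ rng2.randint(0, 1) for b in scrambled]
print(entropy_per_bit(recovered))                        # 0.0
```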

A configuration of a small number of molecules might randomly become "more ordered" in some sense, but that doesn't mean it has a lower entropy. Until you measure it, you don't know that it became more ordered, and therefore you cannot make use of that order. All you have is a probability distribution for what the microstate is. As time passes the probability distribution changes, and the entropy you calculate from it either stays the same or goes up. (Strictly speaking, it stays exactly constant, by Liouville's theorem in classical mechanics or unitarity in quantum mechanics; but you often wind up with useless information about the microstate, i.e. information that cannot be translated into a "magic recipe" for undoing the apparent disorder, as in the polarization example above. In that case, you might as well forget that information and accept a higher entropy.) See this question.
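Here is a minimal discrete analogue (Python; the 8-microstate toy model is mine). Reversible dynamics, i.e. a permutation of microstates, never changes the Gibbs entropy; but once the fine-grained information stops being useful and you coarse-grain it away, the entropy you must assign goes up:

```python
import math, random

def entropy(p):
    """Gibbs entropy with k_B = 1 (natural log)."""
    return sum(-x * math.log(x) for x in p if x > 0)

# Start with a sharply known distribution over 8 microstates.
p = [1.0, 0, 0, 0, 0, 0, 0, 0]

# Exact reversible dynamics = a permutation of the microstates.
# It moves the probability around but never changes the entropy
# (the discrete analogue of Liouville's theorem / unitarity).
perm = list(range(8))
random.Random(0).shuffle(perm)
for _ in range(5):
    p = [p[perm[i]] for i in range(8)]
print(entropy(p))    # 0.0: the reversible dynamics created no entropy

# Now suppose the fine-grained information is useless and we coarse-grain,
# forgetting which member of each pair of microstates we are in:
pairs = [p[2 * i] + p[2 * i + 1] for i in range(4)]
coarse = [q / 2 for q in pairs for _ in range(2)]   # spread evenly in each pair
print(entropy(coarse))    # ln 2: one bit of microstate information given up
```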

When you make a measurement, there is a probability distribution for the possible measurement results and for the possible microstates following the measurement. Some of the measurement results may leave you with a low-entropy configuration (you pretty much know what the microstate is from the measurement). But if you appropriately average over all possible measurement results, the overall entropy increases on average as a result of the full measurement process.
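As a bookkeeping sketch (Python; the joint distribution is made up purely for illustration): a noisy readout lowers the system's average entropy, which is the demon's apparent gain, but the system plus the measurement record together end up carrying more entropy than the system had before:

```python
import math

def H(probs):
    """Shannon entropy in bits."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

# Illustrative joint distribution of microstate X and noisy readout Y.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

p_x = [sum(v for (x, _), v in joint.items() if x == xv) for xv in (0, 1)]
p_y = [sum(v for (_, y), v in joint.items() if y == yv) for yv in (0, 1)]

# Average entropy of the system AFTER measuring, weighted over outcomes.
post = 0.0
for yv, py in zip((0, 1), p_y):
    conditional = [joint[(xv, yv)] / py for xv in (0, 1)]
    post += py * H(conditional)

print(H(p_x))          # 1.000 bit of system entropy before measuring
print(post)            # ~0.722 bit: each outcome leaves you knowing more...
print(post + H(p_y))   # ~1.722 bits: ...but system + record together went up
```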
