Wednesday, June 06, 2007

Entropy Production and the Best of All Worlds

[No, I don't really expect any comments. It's a pared-down version of my blog post, maybe easier to read. Less text means less text that's wrong.]

Chemical engineering has historically been a phenomenological science, meaning that it's more based on observed phenomena than it is built bottom-up from the sub-microscopic nuts and bolts of the universe. If you were to, oh I don't know, go to graduate school in the field, you'd find a good third of the curriculum devoted to Transport Phenomena, which describe the rate at which thermodynamic variables--momentum, thermal energy, chemical potential (and charge too, though it's not usually lumped in there)--dissipate along gradients.
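Since "dissipate along gradients" is the whole game, here's a minimal sketch of what transport phenomena look like in practice: explicit 1D diffusion, where the flux is proportional to the local gradient (Fourier's law for heat, Fick's for concentration). The profile and parameters are made up for illustration.

```python
# Gradient-driven transport in one dimension: each step moves "stuff"
# down the local gradient, so the profile relaxes toward uniformity.
def diffuse(profile, d, dx, dt, steps):
    """Explicit finite-difference diffusion of a 1D profile.
    d = diffusivity, dx = grid spacing, dt = time step.
    Boundary values are held fixed."""
    u = list(profile)
    for _ in range(steps):
        u = [u[0]] + [
            u[i] + d * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, len(u) - 1)
        ] + [u[-1]]
    return u

# A sharp spike smooths itself out -- the gradient dissipates.
u = diffuse([0, 0, 10, 0, 0], d=1.0, dx=1.0, dt=0.2, steps=50)
```

(The stability condition `d*dt/dx**2 <= 0.5` is respected here; violate it and the scheme blows up, which is its own little lesson about phenomenological models.)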

The second law of thermodynamics states that the entropy of an isolated system increases toward a maximum. When you start talking about the rate of entropy production--how fast it increases--you've entered the field of non-equilibrium thermodynamics. That's a way of describing transport phenomena too.
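For the simplest possible example of an entropy production rate (my toy numbers, not anything from the articles below): steady heat conduction between two reservoirs produces entropy at a rate of the heat flow times the difference in inverse temperatures, and the second law says that rate is never negative.

```python
# Entropy production rate (W/K) for steady heat flow q_dot (W)
# conducted from a hot reservoir at t_hot (K) to a cold one at t_cold (K):
# the cold reservoir gains entropy faster than the hot one loses it.
def entropy_production_rate(q_dot, t_hot, t_cold):
    return q_dot * (1.0 / t_cold - 1.0 / t_hot)

# 100 W flowing down a 400 K -> 300 K gradient:
sigma = entropy_production_rate(100.0, 400.0, 300.0)
# sigma is positive, as the second law demands
```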

Entropy is also interesting in that it has a statistical (informational) basis, which is generally wiped out by the abovementioned phenomenological approach with its tendency to lump all that microscopic mumbo-jumbo together. It turns out, however, that just as phenomenology has a habit of washing out microscopic events, enormous systems-of-systems wash out the details of the usual phenomenology. Interestingly enough, non-equilibrium and statistical approaches both appear to become very useful at this distant level. I've read some (public access) articles recently on the hypothesis of maximum entropy production for complex systems (such as weather, ecosystems, or economies), which states that a really complicated system will organize itself to dissipate gradients in state variables in the most efficient way possible. The hypothesis appears to be completely baseless, but it has nonetheless been (evidently) successful in describing some complex phenomena, and it suggests that certain structures recur in large systems because they dissipate the great big energy gradients most efficiently. I hate the examples these authors have used--phenomenological models describe waves and vortices and so on well enough--but to suggest that they can be interpreted as optimum dissipative structures is, well, interesting.
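That statistical basis, for the curious, is just the Gibbs/Shannon formula: entropy (in units of Boltzmann's constant) is the negative sum of p*ln(p) over the probabilities of the microstates, and it's maximized when the probabilities are uniform. A two-line illustration with made-up distributions:

```python
import math

# Gibbs/Shannon entropy of a discrete distribution, in units of k_B.
def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

peaked = entropy([0.97, 0.01, 0.01, 0.01])   # nearly certain: low entropy
uniform = entropy([0.25] * 4)                # equals ln(4), the max for 4 states
```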

The question arises: is human evolution (genetic or social, take your pick) merely the most effective path that has so far developed to dissipate that huge solar energy gradient? You can see the danger of drawing out the cranks here, but just because speculation is silly and intellectually dangerous doesn't mean that it can't be fun.

People aren't at equilibrium (certainly not when they're alive), and neither is society. With the constant application of solar energy, we maintain a roughly steady-state non-equilibrium condition. You can irresponsibly ask thermodynamics the same questions that philosophers have struggled with for millennia. Is human nature a development that, when averaged out, facilitates entropy production, and does human organization facilitate it further? Is the nature of our ensemble developing over time, and do we waste heat better if we are brutal or if we are constructive? If we're happy or sad? Can it get better than this? Has it ever been, really?


Some public access reading, if you just can't get enough:

  • Maximum entropy production vs. Darwinism (science reporting, easy to read)
  • Entropy production and life as we know it (accessible, but I found it tedious)
  • Hey man, quantum mechanics isn't dissipative. Where's the entropy? (big words and a little math; I read a third of it and it didn't seem bad, though.)
  • Fluctuation theory for small systems (from Physics Today)