How to Evolve Specified Complexity by Natural Means
Many years ago, I read this advice to a young physicist desperate to get his or her work cited as frequently as possible: Publish a paper that makes a subtle misuse of the second law of thermodynamics. Then everyone will rush to correct you and in the process cite your paper. The mathematician William Dembski has taken this advice to heart and, apparently, made a career of it.
Specifically, Dembski tries to use information theory to prove that biological systems must have had an intelligent designer. [Dembski, 1999; Behe et al., 2000] A major problem with Dembski’s argument is that its fundamental premise – that natural processes cannot increase information – is flatly wrong and precisely equivalent to a not-so-subtle misuse of the second law.
Let us accept for argument’s sake Dembski’s proposition that we routinely identify design by looking for specified complexity. I do not agree with his critics that he errs by deciding after the fact what is specified. That is precisely what we do when we look for extraterrestrial intelligence (or will do if we think we’ve found it). Indeed, detecting specification after the fact is little more than looking for nonrandom complexity. Nonrandom complexity is a weaker criterion than specified complexity, but it is adequate for detecting design and not very different from Dembski’s criterion. [For other criticisms of Dembski’s work, see especially Fitelson et al., 1999; Korthof, undated; Perakh, undated. For the Argument from Design in general, see Young, 2001a.]
Specified or nonrandom complexity is, however, a reliable indicator of design only when we have no reason to suspect that the nonrandom complexity has appeared naturally, that is, only if we think that natural processes cannot bring about such complexity. More to the point, if natural processes can increase information, then specified or nonrandom complexity is not an indicator of design.
Let us, therefore, ask whether Dembski’s “law” of conservation of information is correct or, more specifically, whether information can be increased by natural processes.
Entropy. We begin by considering a machine that tosses coins, say five at a time. We assume that the coins are fair and that the machine is not so precise that its tosses are predictable.
Each coin may turn up heads or tails with 50% probability. There are in all 32 possible combinations:
H H H H H
H H H H T
H H H T T
and so on. Because the coins are independent of each other, we find that the total number of permutations is 2 × 2 × 2 × 2 × 2 = 2^5 = 32.
The exponent, 5, is known as the entropy of the system of five coins. The entropy is the number of bits of data we need to describe the arrangement of the coins after each toss. That is, we need one bit that describes the first coin (H or T), one that describes the second (H or T), and so on till the fifth coin. Here and in what follows, I am for simplicity neglecting noise.
In mathematical terms, the average entropy of the system is the sum over all possible states of the quantity -p_i log(p_i), where p_i is the probability of finding the system in a given state i and the logarithm is to the base 2. In our example, there are 2^N permutations, where N is the number of coins, in our case, 5. All permutations are equally likely and have probability 1/2^N. Thus, we may replace p_i with the constant value p = 1/2^N. The sum over all i of -p_i log(p_i) becomes 2^N × [-p log(p)], which is just equal to -log(p) = N. That is, the entropy of N coins tossed randomly (that is, unspecified) is -log(p), or 5 bits.
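The five-coin arithmetic is easy to check numerically. This sketch (plain Python, nothing beyond the standard library) enumerates all 2^N permutations and sums -p log(p) over them:

```python
# Entropy of N fair coins: enumerate all 2**N equally likely permutations
# and sum -p * log2(p); the shortcut -log2(p) gives the same answer.
from itertools import product
from math import log2

N = 5
states = list(product("HT", repeat=N))   # all 2**N permutations
p = 1 / len(states)                      # each permutation is equally likely

entropy = sum(-p * log2(p) for _ in states)
print(len(states))   # 32
print(entropy)       # 5.0 bits, equal to -log2(1/32)
```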
Information. At this point, we need to discuss an unfortunate terminological problem. The entropy of a data set is the quantity of data – the number of bits – necessary to transmit that data set through a communication channel. In general, a random data set requires more data to be transmitted than does a nonrandom data set, such as a coherent text. In information theory, entropy is also called uncertainty or information, so a random data set is said to have more information than a nonrandom data set. [Blahut, 1991] In common speech, however, we would say that a random data set contains little or no information, whereas a coherent text contains substantial information. In this sense, the entropy is our lack of information, or uncertainty, about the data set or, in our case, about the arrangement of the coins. [Stenger, 2002; for a primer on information theory, see also Schneider, 2000]
To see why, consider the case where the coins are arranged by an intelligent agent; for example, suppose that the agent has chosen a configuration of 5 heads. Then, there is only one permutation:
H H H H H
Because 2^0 = 1, the entropy of the system is now 0. The information gained (the information of the new arrangement of the coins) is the original entropy, 5, minus the final entropy, 0, or 5 bits.
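The same bookkeeping in code, with the convention that information gained equals initial entropy minus final entropy (the helper name `entropy_bits` is mine, invented for illustration):

```python
from math import log2

def entropy_bits(num_states: int) -> float:
    # Entropy of a system whose states are all equally likely.
    return log2(num_states)

initial = entropy_bits(2 ** 5)   # 5 coins tossed at random: 32 states, 5 bits
final = entropy_bits(1)          # agent fixes one arrangement: 1 state, 0 bits
print(initial - final)           # 5.0 bits of information gained
```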
In the terminology of communication theory, we can say that a receiver gains information as it receives a message. [Pierce, 1980] When the message is received, the entropy (in the absence of noise) becomes 0. The information gained by the receiver is again the original entropy minus the final entropy.
In general, then, a decrease of entropy may be considered an increase of information. [Stenger, 2002; Touloukian, 1956] The entropy of the 5 coins arranged randomly is 5; arranged in a specified way, it is 0. The information has increased by 5 bits. As I have noted, this definition of information jibes with our intuitive understanding that information increases as randomness decreases or order increases. In the remainder of this paper, I will use “entropy” when I mean entropy and “information” when I mean decrease of entropy. Information used in this way means “nonrandom information” and is related to Dembski’s specified information in the way I discussed above.
Dembski correctly notes that you do not need a communication channel to talk of information. In precisely the sense that Dembski means it, the system of coins loses entropy and therefore gains information when the intelligent agent arranges the coins. That is, a nonrandom configuration displays less entropy and therefore more information than a random configuration. There is nothing magic about all heads, and we could have specified any other permutation with the same result.
Similarly, the genome contains information in nonrandom sequences of the four bases that code genetic information. It also contains a lot of junk DNA (at least as far as anyone is able to deduce). [Miller, 1994] If we write down the entire genome, we at first think it has a very high entropy (many bases with many more possible combinations). But once we find out which bases compose genes, we realize that those bases are arranged nonrandomly and that their entropy is 0 (or at least very much less than the entropy of an equivalent, random set of bases). That is, the genes contain information because their entropy is less than that of a random sequence of bases of the same length.
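The genome version of the coin argument can be put in numbers. Everything below is a toy illustration (the 12-base length and the 16 workable variants are my assumptions, not biological data); the point is only that specifying which sequences count as a gene slashes the entropy of the stretch:

```python
from math import log2

L = 12                         # hypothetical stretch of DNA, 12 bases long
random_entropy = L * log2(4)   # 4 bases per site -> 2 bits per base
exact_gene = log2(1)           # exactly one specified sequence -> 0 bits
fuzzy_gene = log2(16)          # suppose 16 workable variants (my assumption)

print(random_entropy)          # 24.0 bits
print(exact_gene, fuzzy_gene)  # 0.0 and 4.0 -- far below 24
```

Even the fuzzily specified gene carries 24 − 4 = 20 bits of information relative to a random sequence of the same length.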
Natural selection. Suppose now that we have a very large number, or ensemble, of coin-tossing machines. These machines toss their coins at irregular intervals. The base of each machine is made of knotty pine, and knots in the pine sometimes leak sap and create a sticky surface. As a result, the coins sometimes stick to the surface and are not tossed when the machine is activated.
For unknown reasons, machines that have a larger number of, say, heads have a lower probability of malfunctioning. Perhaps the reverse side of the coins is light-sensitive, corrodes, and damages the working of the machine. For whatever reason, heads confers an advantage on the machines.
As time progresses, many of the machines malfunction. But sometimes a coin sticks to the knotty pine heads up. A machine with just a few heads permanently showing is fitter than those with a few tails permanently showing or those with randomly changing permutations (because those last show tails half the time, on average). Given enough machines and enough time (and enough knots!), at least some of the machines will necessarily end up with five heads showing. These are the fittest and will survive the longest.
You do not need reproduction for natural selection. Nevertheless, it must be obvious by now that the coins represent the genome. If the machines were capable of reproducing, then machines with more heads would pass their “headedness” to their descendants, and those descendants would outcompete machines that displayed “tailedness.”
Thus do we see a combination of regularity (the coin-tossing machines) and chance (the sticky knots) increasing the information in a genome.
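The whole story can be run as a simulation. All the numbers below (population size, sticking probability, failure rate per tail) are invented for illustration; the point is only that retossing plus sticky knots plus differential survival, with no agent arranging any coin, drives the surviving population toward all heads:

```python
# Sticky-coin selection sketch: each machine has 5 coins; a free coin is
# retossed each round and occasionally sticks in its current position;
# every tail showing adds to the chance the machine fails. Survivors
# accumulate stuck heads -- entropy falls, information rises.
import random

random.seed(1)
N_COINS, N_MACHINES, ROUNDS = 5, 1000, 2000
STICK_P, FAIL_P_PER_TAIL = 0.01, 0.002     # hypothetical rates

machines = [{"coins": [random.choice("HT") for _ in range(N_COINS)],
             "stuck": [False] * N_COINS} for _ in range(N_MACHINES)]

for _ in range(ROUNDS):
    survivors = []
    for m in machines:
        for i in range(N_COINS):
            if not m["stuck"][i]:
                m["coins"][i] = random.choice("HT")   # retoss free coins
                if random.random() < STICK_P:
                    m["stuck"][i] = True              # sap from a knot
        tails = m["coins"].count("T")
        if random.random() > FAIL_P_PER_TAIL * tails: # tails shorten life
            survivors.append(m)
    machines = survivors

heads = sum(m["coins"].count("H") for m in machines)
total = sum(len(m["coins"]) for m in machines)
print(len(machines), heads / total)   # survivors show mostly heads
```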
Explanatory filter. Dembski’s explanatory filter is a flow chart that is designed to distinguish between chance and design. The coin-tossing machines would escape Dembski’s explanatory filter and suggest design where none exists, because the filter makes a false dichotomy between chance and design. Natural selection by descent with modification is neither chance nor design but a combination of chance and law. Many self-organizing systems would also pass through Dembski’s filter and “prove” design where none exists. Indeed, the intelligent-design advocates give short shrift to self-organization, an area where they are most vulnerable.
The 747 argument. We have seen that Dembski’s “law” of conservation of information is false. Nonrandom information can be generated by natural causes. But Dembski goes further and claims that complex, nonrandom information could not appear naturally in a finite time, even if chance had produced apparently nonrandom information that was not complex. What about that?
You will hear the argument that there is a very small chance of building a fully assembled Boeing 747 by tossing the parts into the air. Similarly, the argument goes, there is a very small chance of building a complex organism (or, equivalently, a genome) by chance. The argument is false for at least two reasons.
First, airplanes and mousetraps are assembled from blueprints. The arrangement of the parts is not a matter of chance. The locations of many of the parts are highly correlated, in the sense that subsystems such as motors are assembled separately from the airplane and incorporated into the airplane as complete units. All mousetraps and airplanes of a given generation are nominally identical. When changes are made, they are apt to be finite and intentional. This is one reason, by the way, that Michael Behe’s mousetrap [Behe, 1996] as an analogy for an irreducibly complex organism is a false analogy. [Young, 2001b]
Birds and mice, by contrast, are assembled from recipes, not blueprints. The recipes are passed down with modification and sometimes with error. All birds and mice of a given generation are different. When changes are made, they are apt to be infinitesimal and accidental.
When Dembski appeals to specified complexity, he appears to be presenting the 747 argument in a different guise. He presents a back-of-the-envelope calculation to “prove” that there has not been enough time for a complex genome to have developed. The calculation implicitly assumes that each change in the genome takes place independently of others and that no two changes can happen simultaneously.
Creationists used to argue that there was not enough time for an eye to develop. A computer simulation by Dan-E. Nilsson and Susanne Pelger gave the lie to that claim: Nilsson and Pelger estimated conservatively that 500,000 years was enough time. I say conservatively because they assumed that changes happened in series, whereas in reality they would almost certainly have happened in parallel, and that would have decreased the time required. Similarly with Dembski’s probability argument: undoubtedly many changes of the genotype occurred in parallel, not in series as the naive probability argument assumes.
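A back-of-the-envelope comparison shows how much the serial assumption inflates the time. The numbers here are mine, purely hypothetical: suppose each of M beneficial changes takes, on average, T generations to appear. In series the expected wait is M × T; in parallel it is set by the slowest of M independent exponential clocks, roughly T × ln(M):

```python
from math import log

M, T = 1000, 500        # hypothetical: 1000 changes, 500 generations each
serial = M * T          # changes forced to happen one after another
parallel = T * log(M)   # expected max of M exponential waits ~ T * ln(M)

print(serial)           # 500000 generations
print(round(parallel))  # 3454 generations -- over a hundredfold faster
```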
Additionally, many possible genomes might have been successful; minor modifications in a given gene can still yield a workable gene. The odds of success are greatly increased when success is fuzzily defined. The airplane parts could as well have assembled themselves into a DC-10 and no one would have been the wiser.
In assuming that the genome is too complex to have developed in a mere billion years, Dembski in essence propagates the 747 argument. Organisms did not start out with a long, random genome and then by pure chance rearrange the bases until, presto, Adam appeared among the apes. To the contrary, they arguably started with a tiny genome. How that first genome appeared is another matter; I think here we are arguing about natural selection by descent with modification, not about the origin of life. No less quantitatively than Dembski, we may argue that the genome gradually expanded by well known mechanisms, such as accidental duplications of genes and incorporation of genomes from other organisms, until it was not only specified, but also complex.
Reversing entropy. The definition of entropy in information theory is precisely the same as that in thermodynamics, apart from a multiplicative constant. Thus, Dembski’s claim that you cannot increase information is equivalent to the claim that you cannot reverse thermodynamic entropy. That claim, which has long been exploited by creationists, is not correct. The correct statement is that you cannot reverse entropy in a closed or isolated system. A living creature (or a coin-tossing machine) is not a closed system. A living creature thrives by reversing entropy and can do so in part because it receives energy from outside itself. It increases the entropy of the universe as a whole as it excretes its wastes. Dembski’s information-theoretical argument amounts to just another creationist ploy to yoke science in support of a religious preconception.
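The “multiplicative constant” is Boltzmann’s constant: one bit of information corresponds to a thermodynamic entropy of k_B ln 2, so the five-coin example translates directly into joules per kelvin:

```python
from math import log

k_B = 1.380649e-23     # Boltzmann's constant, J/K (exact SI value)
bits = 5               # the five-coin system above
S_thermo = bits * k_B * log(2)
print(S_thermo)        # about 4.8e-23 J/K -- same entropy, other units
```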
Acknowledgement: Many thanks to Victor Stenger and Chris Debrunner for illuminating the concept of entropy, and to Mark Perakh and Brendan McKay for their helpful comments. As far as I know, Victor Stenger was the first to recognize that Dembski’s “law” of conservation of information is false.
Copyright (c) 2001 by Matt Young. All rights reserved.
Michael J. Behe, 1996, Darwin’s Black Box, Touchstone, New York.
Michael J. Behe, William A. Dembski, and Stephen C. Meyer, 2000, Science and Evidence for Design in the Universe, Ignatius, San Francisco.
Richard E. Blahut, 1991, “Information Theory,” Encyclopedia of Physical Science and Technology, Academic, New York.
William A. Dembski, 1999, Intelligent Design: The Bridge between Science and Theology, Intervarsity, Downers Grove, Illinois.
Brandon Fitelson, Christopher Stephens, and Elliott Sober, 1999, “How Not to Detect Design,” Philosophy of Science, vol. 66, pp. 472-488.
Gert Korthof, undated, “On the Origin of Complex Information by Means of Intelligent Design: A Review of William Dembski’s Intelligent Design,” http://home.wxs.nl/~gkorthof/kortho44.htm, accessed November 2001.
Kenneth Miller, 1994, “Life’s Grand Design,” Technology Review, vol. 97, no. 2, pp. 24-32, February-March.
Dan-E. Nilsson and Susanne Pelger, 1994, “A Pessimistic Estimate of the Time Required for an Eye to Evolve,” Proceedings of the Royal Society of London B, vol. 256, pp. 53-58.
Mark Perakh, undated, “A Consistent Inconsistency,” http://members.cox.net/perakm/dembski.htm, accessed November 2001.
John R. Pierce, 1980, An Introduction to Information Theory, 2nd ed., Dover, New York.
Thomas D. Schneider, 2000, “Information Theory Primer with an Appendix on Logarithms,” version 2.48, http://www.lecb.ncifcrf.gov/~toms/paper/primer, accessed November 2001.
Victor J. Stenger, 2002, Has Science Found God? The Latest Results in the Search for Purpose in the Universe, Prometheus Books, Amherst, NY, to be published.
Y. S. Touloukian, 1956, The Concept of Entropy in Communication, Living Organisms, and Thermodynamics, Research Bulletin 130, Purdue University, Engineering Experiment Station, Lafayette, Indiana.
Matt Young, 2001a, No Sense of Obligation: Science and Religion in an Impersonal Universe, 1stBooks Library, Bloomington, Indiana; http://www.1stBooks.com/bookview/5559.
Matt Young, 2001b, “Intelligent Design is Neither,” paper presented at the conference Science and Religion: Are They Compatible? Atlanta, Georgia, November 9-11, 2001, http://www.mines.edu/~mmyoung/DesnConf.pdf.
Matt Young is an Adjunct Professor of Physics at the Colorado School of Mines in Golden, Colorado, and a retired physicist of the US National Institute of Standards and Technology.