Explaining Specified Complexity


In his recent book The Fifth Miracle, Paul Davies suggests that any laws capable of explaining the origin of life must be radically different from the scientific laws known to date. The problem with currently known scientific laws, like the laws of chemistry and physics, is that, as he sees it, they are not up to explaining the key feature of life that needs to be explained: specified complexity. Life is both complex and specified. The basic intuition here is straightforward. A single letter of the alphabet is specified without being complex (i.e., it conforms to an independently given pattern but is simple). A long sequence of random letters is complex without being specified (i.e., it requires a complicated instruction set to characterize but conforms to no independently given pattern). A Shakespearean sonnet is both complex and specified.

Now, as Davies rightly notes, contingency can explain complexity but not specification. For instance, the exact time sequence of radioactive emissions from a chunk of uranium will be contingent, complex, but not specified. On the other hand, as Davies also rightly notes, laws can explain specification but not complexity. For instance, the formation of a salt crystal follows well-defined laws, produces an independently known repetitive pattern, and is therefore specified; but that pattern will also be simple, not complex. The problem is to explain something like the genetic code, which is both complex and specified. As Davies puts it: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity” (p. 112).

How does the scientific community explain specified complexity? Usually via an evolutionary algorithm. By an evolutionary algorithm I mean any algorithm that generates contingency via some chance process and then sifts the so-generated contingency via some law-like process. The Darwinian mutation-selection mechanism, neural nets, and genetic algorithms all fall within this broad definition of evolutionary algorithms. Now the problem with invoking evolutionary algorithms to explain specified complexity at the origin of life is the absence of any identifiable evolutionary algorithm that might account for it. Once life has started and self-replication has begun, the Darwinian mechanism is usually invoked to explain the specified complexity of living things.

But what is the relevant evolutionary algorithm that drives chemical evolution? No convincing answer has been given to date. To be sure, one can hope that an evolutionary algorithm that generates specified complexity at the origin of life exists and remains to be discovered. Manfred Eigen, for instance, writes, “Our task is to find an algorithm, a natural law that leads to the origin of information,” where by “information” I understand him to mean specified complexity. But if some evolutionary algorithm could be found to account for the origin of life, it would not be a radically new law in Davies’s sense. Rather, it would be a special case of a known process.

I submit that the problem of explaining specified complexity is even worse than Davies makes out in The Fifth Miracle. Not only have we yet to explain specified complexity at the origin of life, but evolutionary algorithms fail to explain it in the subsequent history of life as well. Given the growing popularity of evolutionary algorithms, such a claim may seem ill-conceived. But consider a well-known example by Richard Dawkins (The Blind Watchmaker, pp. 47-48) in which he purports to show how a cumulative selection process acting on chance can generate specified complexity. He starts with the following target sequence, a putative instance of specified complexity:

METHINKS•IT•IS•LIKE•A•WEASEL

(he considers only capital Roman letters and spaces, here represented by bullets; thus there are 27 possibilities at each location in the symbol string).

If we tried to attain this target sequence by pure chance (for example, by randomly shaking out Scrabble pieces), the probability of getting it on the first try would be around 10^-40, and correspondingly it would take on average about 10^40 tries to stand a better than even chance of getting it. Thus, if we depended on pure chance to attain this target sequence, we would in all likelihood be unsuccessful. As a problem for pure chance, attaining Dawkins’s target sequence is an exercise in generating specified complexity, and it becomes clear that pure chance simply is not up to the task.
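For concreteness, the arithmetic behind these figures can be checked in a few lines of Python (a minimal sketch; the figures above are rounded):

    import math

    # 27 possible characters (26 capital letters plus a space) at each of
    # 28 positions gives 27**28 equally likely sequences.
    sequences = 27 ** 28
    p = 1 / sequences

    print(f"{sequences:.2e}")        # about 1.2 x 10^40 sequences
    print(f"{p:.2e}")                # about 8.4 x 10^-41 chance per try
    print(f"{math.log(2) / p:.2e}")  # tries needed for a better than even
                                     # chance: about 8.3 x 10^39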

But consider next Dawkins’s reframing of the problem. In place of pure chance, he considers the following evolutionary algorithm: (i) Start out with a randomly selected sequence of 28 capital Roman letters and spaces, e.g.,

WDL•MNLT•DTJBKWIRZREZLMQCO•P

(note that Dawkins’s target sequence, METHINKS•IT•IS•LIKE•A•WEASEL, comprises exactly 28 letters and spaces); (ii) randomly alter all the letters and spaces in this initial randomly generated sequence; (iii) whenever an alteration happens to match a corresponding letter in the target sequence, leave it in place and randomly alter only those remaining letters that still differ from the target sequence.
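For readers who want to see the procedure in action, here is a minimal Python sketch of steps (i) through (iii) as just described. (Dawkins’s own program works somewhat differently, breeding mutant copies of the sequence and keeping the closest match each generation, so the number of rounds in any particular run will vary.)

    import random
    import string

    ALPHABET = string.ascii_uppercase + " "   # 27 possibilities per position
    TARGET = "METHINKS IT IS LIKE A WEASEL"   # 28 letters and spaces

    def weasel(target=TARGET, seed=None):
        rng = random.Random(seed)
        # (i) start with a randomly selected sequence of the same length
        current = [rng.choice(ALPHABET) for _ in target]
        rounds = 0
        while "".join(current) != target:
            rounds += 1
            # (ii)-(iii) re-randomize only positions that do not yet match;
            # positions that already match the target are left in place
            for i, letter in enumerate(target):
                if current[i] != letter:
                    current[i] = rng.choice(ALPHABET)
        return rounds

    print("converged after", weasel(seed=1), "rounds")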

In very short order this algorithm converges to Dawkins’s target sequence. In The Blind Watchmaker, Dawkins (p. 48) provides the following computer simulation of this algorithm:

        (1)     WDL•MNLT•DTJBKWIRZREZLMQCO•P

        (2)     WDLTMNLT•DTJBSWIRZREZLMQCO•P …

        (10)    MDLDMNLS•ITJISWHRZREZ•MECS•P …

        (20)    MELDINLS•IT•ISWPRKE•Z•WECSEL …

        (30)    METHINGS•IT•ISWLIKE•B•WECSEL …

        (40)    METHINKS•IT•IS•LIKE•I•WEASEL …

        (43)    METHINKS•IT•IS•LIKE•A•WEASEL

Thus, Dawkins’s simulation converges on the target sequence in 43 steps. In place of 10^40 tries on average for pure chance to generate the target sequence, it now takes on average only 40 tries to generate it via an evolutionary algorithm.

Although Dawkins uses this example to illustrate the power of evolutionary algorithms, the example in fact illustrates the inability of evolutionary algorithms to generate specified complexity. We can see this by posing the following question: Given Dawkins’s evolutionary algorithm, what besides the target sequence can this algorithm attain? Think of it this way. Dawkins’s evolutionary algorithm is chugging along; what are the possible terminal points of this algorithm? Clearly, the algorithm is always going to converge on the target sequence (with probability 1, for that matter). An evolutionary algorithm acts as a probability amplifier. Whereas it would take pure chance on average 10^40 tries to attain Dawkins’s target sequence, his evolutionary algorithm gets it on average in the logarithm (base 10) of that number of tries, that is, in only about 40 tries (and with virtual certainty in a few hundred tries).

But a probability amplifier is also a complexity attenuator. For something to be complex, there must be many live possibilities that could take its place. Increasingly numerous live possibilities correspond to increasing improbability of any one of these possibilities. To illustrate the connection between complexity and probability, consider a combination lock. The more possible combinations of the lock, the more complex the mechanism and correspondingly the more improbable that the mechanism can be opened by chance. Complexity and probability therefore vary inversely: the greater the complexity, the smaller the probability.

It follows that Dawkins’s evolutionary algorithm, by vastly increasing the probability of getting the target sequence, vastly decreases the complexity inherent in that sequence. As the sole possibility that Dawkins’s evolutionary algorithm can attain, the target sequence in fact has minimal complexity (i.e., the probability is 1 and the complexity, as measured by the usual information measure, is 0). In general, then, evolutionary algorithms generate not true complexity but only the appearance of complexity. And since they cannot generate complexity, they cannot generate specified complexity either.
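To make the point concrete, the usual information measure can be computed directly as the negative base-2 logarithm of probability. The rough Python illustration below applies it to a hypothetical four-dial combination lock, to the target sequence reached by pure chance, and to the target sequence as the guaranteed outcome of the evolutionary algorithm:

    import math

    def info_bits(p):
        # the usual information measure: -log2 of the probability
        return -math.log2(p)

    print(info_bits(1 / 10 ** 4))   # hypothetical 4-dial lock:  ~13.3 bits
    print(info_bits(1 / 27 ** 28))  # target by pure chance:     ~133.1 bits
    print(info_bits(1.0))           # guaranteed outcome:         0.0 bits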

This conclusion may seem counterintuitive, especially given all the marvelous properties that evolutionary algorithms do possess. But the conclusion holds. What’s more, it is consistent with the “no free lunch” (NFL) theorems of David Wolpert and William Macready, which place significant restrictions on the range of problems genetic algorithms can solve.

The claim that evolutionary algorithms can only generate the appearance of specified complexity is reminiscent of a claim by Richard Dawkins. On the opening page of his The Blind Watchmaker he states, “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Just as the Darwinian mechanism does not generate actual design but only its appearance, so too the Darwinian mechanism does not generate actual specified complexity but only its appearance.

But this raises the obvious question of whether there might not be a fundamental connection between intelligence or design on the one hand and specified complexity on the other. In fact there is. There’s only one known source for producing actual specified complexity, and that’s intelligence. In every case where we know the causal history responsible for an instance of specified complexity, an intelligent agent was involved. Most human artifacts, from Shakespearean sonnets to Dürer woodcuts to Cray supercomputers, are specified and complex. For a signal from outer space to convince astronomers that extraterrestrial life is real, it too will have to be complex and specified, thus indicating that the extraterrestrial is not only alive but also intelligent (hence the search for extraterrestrial intelligence, or SETI).

Thus, to claim that laws, even radically new ones, can produce specified complexity is in my view to commit a category mistake. It is to attribute to laws something they are intrinsically incapable of delivering; indeed, all our evidence points to intelligence as the sole source of specified complexity. Even so, in arguing that evolutionary algorithms cannot generate specified complexity and in noting that specified complexity is reliably correlated with intelligence, I have not refuted Darwinism or denied the capacity of evolutionary algorithms to solve interesting problems. In the case of Darwinism, what I have established is that the Darwinian mechanism cannot generate actual specified complexity. What I have not established is that living things exhibit actual specified complexity. That is a separate question.

Does Davies’s original problem of finding radically new laws to generate specified complexity thus turn into the slightly modified problem of finding radically new laws that generate apparent, but not actual, specified complexity in nature? If so, then the scientific community faces a logically prior question, namely, whether nature exhibits actual specified complexity. Only after we have confirmed that nature does not exhibit actual specified complexity will it be safe to dispense with design and focus all our attention on natural laws and how they might explain the appearance of specified complexity in nature.

Does nature exhibit actual specified complexity? This is the million-dollar question. Michael Behe’s notion of irreducible complexity is purported to be a case of actual specified complexity, one exhibited in real biochemical systems (cf. his book Darwin’s Black Box). If such systems are, as Behe claims, highly improbable and thus genuinely complex with respect to the Darwinian mechanism of mutation and natural selection, and if they are specified in virtue of their highly specific function (Behe looks to such systems as the bacterial flagellum), then a door is reopened for design in science that has been closed for well over a century. Does nature exhibit actual specified complexity? The jury is still out.

William A. Dembski