America’s Obsession with Design: A Response to Wolfhart Pannenberg – Part 1

In 1994, I participated in a C. S. Lewis summer conference at Queens’ College, Cambridge. The event was sponsored by the C. S. Lewis Foundation (Redlands, California) and hosted by John Polkinghorne. A portion of the conference was devoted to design, and the proponents of design who spoke at the conference included Phillip Johnson, Stephen Meyer, and Walter Bradley. After Walter Bradley’s presentation, Arthur Peacocke got up and dismissively remarked that “design is an American thing.” His point presumably was that European intellectuals had long since made their peace with Darwin and evolution and had moved on to better things.

It was therefore with a sense of déjà vu that I followed Wolfhart Pannenberg’s spring 2001 tour of American colleges and universities. In that tour, he focused on the relation between science and religion and in particular on the role of contingency in the evolution of the biophysical universe. Thomas Oord, in the Templeton Foundation’s Research News [May 2001, p. 34], reports that Pannenberg began the conference “Evolution: Scientific and Theological Views,” held in St. Paul, Minnesota, with the remark: “Concerning design, I wonder again and again why the dispute in this country over the doctrine of evolution is so obsessive.” Pannenberg was especially puzzled why this obsession with design should take root in America: “If you think of design as a plan to bring about something, then one should be aware, especially in this country, that this involves predeterminism. Because Americans are usually so apprehensive concerning the danger of determinism, I wonder why ‘design’ is so much in the focus of the discussion about the theory of evolution.”

I want, in this essay, to try to answer Pannenberg’s puzzlement (though answering Peacocke’s dismissiveness is a lost cause). The short answer to Pannenberg’s puzzlement is this: Science itself will not allow the design problem to go away. It is important to understand that design in its present resurgence is not a project in natural theology (see my last article for Metaviews: “Is Intelligent Design a Form of Natural Theology?”). In particular, design is not wedded to a 17th or 18th century conception of God as a rational object, single subject, or first cause. All that design in its present American incarnation is wedded to is that telic processes operate in nature and that these processes are empirically detectable and not reducible to blind, unbroken natural laws (what Jacques Monod referred to as chance and necessity). The ultimate source behind such processes is not at issue.

On the same page of Research News where Pannenberg puzzles over America’s obsession with design, Pannenberg comments on the work and person of John Polkinghorne. The problem with Polkinghorne’s contribution to the science-religion dialogue for Pannenberg is that Polkinghorne “has no appropriate philosophical education.” Since I don’t know what Pannenberg’s mathematical education is, I’m in no position to say whether the problem with Pannenberg’s puzzlement over America’s obsession with design is that he “has no appropriate mathematical education.” Nonetheless, to see why design remains a live issue and why the design problem will not go away for science, it is important to understand something about the mathematical underpinnings of design, and specifically how these underpinnings relate to Darwinian evolution.

But before venturing on to these mathematical underpinnings, I need to clear up a confusion about design. Pannenberg evinces this confusion when he remarks that design entails a plan, which in turn entails determinism. This is the clock-work view of design, in which a designer forms a rigid plan that admits no variation and in which the constructed object conforms exactly to that rigid plan. Such designs are common with human artifacts, but they hardly exhaust the notion of design. Design as understood by American design theorists is conceived in much broader teleological terms. Michael Polanyi remarked on this broader conception of design as follows:

“It is true that the teleology rejected in our day is understood as an overriding cosmic purpose necessitating all the structures and occurrences in the universe in order to accomplish itself. This form of teleology is indeed a form of determinism — perhaps even a tighter form of determinism than is provided for by a materialistic, mechanistic atomism. However, since at least the time of Charles Sanders Peirce and William James a looser view of teleology has been offered to us — one that would make it possible for us to suppose that some sort of intelligible directional tendencies may be operative in the world without our having to suppose that they determine all things. Actually it is possible that even Plato did not suppose that his “Good” forced itself upon all things. As Whitehead has pointed out, Plato tells us that the Demiurge, looking toward the Good, “persuades” an essentially free matter to structure itself, to some extent, in imitation of the Forms. Plato appeared to Whitehead to have modeled the cosmos on a struggle to achieve the Good in the somewhat recalcitrant media of space and time and matter, a struggle well known to all souls with purposes and ends and aims. Whether or not it is true that Plato did this, certainly Whitehead modeled his own cosmos very much this way.”[1]

America’s obsession with design is not identical with America’s obsession with creationism. Creationism cannot be separated from religious commitments. Design can be considered on its own terms and strictly as a form of scientific investigation. What’s more, design is not properly regarded as anti-evolutionist. It stands against a certain conception of evolution, to wit, one in which teleology is removed from having any scientific significance. Pannenberg and I are at one that teleology is working itself out historically in the world (and since we are both Christians, we see that teleology as deriving ultimately from the Christian Trinitarian God). The source of Pannenberg’s puzzlement over America’s obsession with design is not that Americans regard teleology as real but that they regard it as having scientific content. Here is where design becomes interesting: Teleology plus scientific content equals design.

Of course, to say that teleology plus scientific content equals design is to raise all sorts of flags, and specifically the worry that design as it is now being developed is an unthinking throwback to the (deterministic) design of the old-time natural theologians. But this is not the case. Design as my colleagues and I are developing it can accommodate the rich contingency and freedom of the natural world and still give scientific content to teleology. To show this, however, will require a mathematical excursion into evolutionary algorithms and, in particular, into the No Free Lunch theorems that were proven five years ago.

The No Free Lunch theorems are not deep mathematical results that require brilliant intuitive leaps or fundamentally new concepts. Even a cursory examination of their proofs reveals that the No Free Lunch theorems are essentially book-keeping results. They keep track of how well evolutionary algorithms do at optimizing fitness functions over a phase space. The fundamental claim of these theorems is that averaged across fitness functions, evolutionary algorithms cannot outperform blind search.
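The book-keeping character of these results can be seen in miniature. The sketch below is my own toy construction, not a proof of the theorems themselves: it exhaustively averages a greedy hill climber and a blind fixed-order search over all 65,536 Boolean fitness functions on a space of 4-bit “genomes,” giving each algorithm a budget of five fitness evaluations. The totals come out exactly equal.

```python
from itertools import product

N = 4                          # bits per "genome"
SPACE = list(range(2 ** N))    # all 16 candidate genomes
M = 5                          # fitness evaluations allowed per run

def neighbors(x):
    """Hamming-distance-1 neighbors of x."""
    return [x ^ (1 << i) for i in range(N)]

def blind_search(f):
    """Probe M points in a fixed arbitrary order; return best fitness seen."""
    return max(f(x) for x in SPACE[:M])

def hill_climb(f):
    """Greedy, non-revisiting climber; spends exactly M evaluations."""
    seen = {0: f(0)}
    current = 0
    while len(seen) < M:
        fresh = [x for x in neighbors(current) if x not in seen]
        nxt = fresh[0] if fresh else min(x for x in SPACE if x not in seen)
        seen[nxt] = f(nxt)
        if seen[nxt] > seen[current]:
            current = nxt          # move uphill when the new point is better
    return max(seen.values())

# Average both algorithms over ALL 2^16 Boolean fitness functions.
blind_total = climb_total = 0
for bits in product([0, 1], repeat=len(SPACE)):
    f = lambda x: bits[x]
    blind_total += blind_search(f)
    climb_total += hill_climb(f)

print(blind_total, climb_total)    # identical totals: no free lunch
```

Neither algorithm is privileged once performance is averaged across all fitness functions; the hill climber only wins on those particular functions whose structure happens to match its search strategy.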

The No Free Lunch theorems underscore the fundamental limits of the Darwinian mechanism. Up until their proof, it was thought that because the Darwinian mechanism could account for all of biological complexity, evolutionary algorithms (i.e., the mathematical underpinnings of that mechanism) must be universal problem solvers. The No Free Lunch theorems show that evolutionary algorithms, apart from careful fine-tuning by a programmer, are anything but universal problem solvers. Consequently, these theorems undercut the power of the Darwinian mechanism to account for all of biological complexity. Granted, the No Free Lunch theorems are book-keeping results. But book-keeping can be very useful. It keeps us honest about the promissory notes our various enterprises — science being one of them — can and cannot make good. In the case of Darwinism we are no longer entitled to think that the Darwinian mechanism can offer biological complexity as a free lunch.

It is a very human impulse to look for magical solutions to circumvent mathematical impossibilities. The theory of accounting tells us that Ponzi schemes cannot work. The theory of probability tells us that games of chance whose expected gain favors not us but the casino can only lead to our loss in the long run. Nonetheless, Ponzi schemes and casino gambling continue to be big business. Likewise, in biology, even though computational theory is clear that evolutionary algorithms cannot generate complex specified information, by suitably shuffling information around one often gets the impression that evolutionary algorithms can in fact generate complex specified information and that complex specified information is a free lunch after all. Invariably what’s involved here is a shell game in which the shells are adroitly moved so that one loses track of just which shell contains the elusive pea. The pea here is complex specified information. The task of the book-keeper is to follow the information trail so that it is properly accounted for and not magically smuggled in. Complex specified information is what gives teleology scientific teeth and turns teleology into design.

As an example of smuggling in complex specified information that is purported to be generated for free, consider the work of Thomas Schneider. Schneider heads a laboratory of experimental and computational biology at the National Cancer Institute. He is well-versed in Shannon’s theory of information, regularly applies it in his research, and devotes considerable space to it on his website.[2] In the summer of 2000, he published an article in Nucleic Acids Research titled “Evolution of Biological Information.”[3] In that paper, he identified a computational phase space consisting of all sequences 256 letters in length constructed from a four-letter alphabet [cf. the four nucleotide bases]. The phase space therefore consisted of 4^256 sequences, or approximately 10^154 sequences. Starting with an evolutionary algorithm acting on a randomly chosen sequence from the phase space, Schneider then purported to generate an information-rich sequence corresponding to a finely tuned genetic control system in which one part of the genome codes for proteins that precisely bind to another part of the genome. To model genetic control, Schneider divided his 256-letter computational genomes essentially in half, treating the first half as what he called a “weight matrix” and the second half as binding sites. The optimization task of his evolutionary algorithm was to get the weight matrix to match up suitably with the binding sites. Here the weight matrix corresponded to translation and protein folding of natural biological systems, and the binding sites corresponded to locations on DNA where these proteins would then bind.
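As a quick check on the scale of Schneider’s search space, using only the figures given in his paper:

```python
import math

ALPHABET_SIZE = 4      # four-letter alphabet (cf. the four nucleotide bases)
GENOME_LENGTH = 256    # letters per computational genome

# The phase space holds 4^256 sequences -- roughly 10^154.
space_size = ALPHABET_SIZE ** GENOME_LENGTH
order_of_magnitude = math.floor(math.log10(space_size))
print(order_of_magnitude)          # → 154

# Each genome is split essentially in half: weight matrix vs. binding sites.
weight_matrix_length = GENOME_LENGTH // 2
binding_sites_length = GENOME_LENGTH - weight_matrix_length
```

A space of 10^154 sequences is far beyond exhaustive search, which is why everything hinges on how the evolutionary algorithm is steered through it.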

The details here are not that important. What is important is the discrepancy between what Schneider thinks his computer simulation establishes and what it in fact establishes. Schneider thinks that he has generated complex specified information for free, or as he puts it, “from scratch.” Early in his article he writes, “The necessary information should be able to evolve from scratch.”[4] Later in the article he claims to have established precisely that: “The program simulates the process of evolution of new binding sites from scratch.”[5] According to Schneider, the advantage of his simulation over other simulations that attempt to generate complex specified information (like Richard Dawkins’s biomorphs program and Thomas Ray’s Tierra environment) is that Schneider’s program “starts with a completely random genome, and no further intervention is required.”[6] Schneider gives his readers to believe that he has decisively confirmed the full sufficiency of the Darwinian mechanism to account for biological information. Accordingly, he claims his model “addresses the question of how life gains information, … [and] shows explicitly how this information gain comes about from mutation and selection, without any other external influence.”[7]

But has Schneider in fact successfully answered the charge that the Darwinian mechanism is inadequate to generate biological information and in particular complex specified information? In reading Schneider’s article, and more generally when confronting Darwinian scenarios that purport to generate complex specified information for free, I always go back to my days as a graduate student in mathematics teaching undergraduates trigonometry. When it came time to grade their tests, I always had to watch that they didn’t trick me by purporting to establish a trigonometric equality when in fact they didn’t have a clue why one trigonometric expression was equal to another. What students would do is write one expression at the top of the page, the other at the bottom of the page. Then they would manipulate the top expression, transforming it line by line down the middle of the page. Next they would manipulate the bottom expression, transforming it line by line up the middle of the page. In the middle of the page the transformed top and bottom expressions would happily meet, offering no clue how they were related. My challenge was to find where the unwarranted leap occurred (i.e., where the transformation from one expression to the other could no longer be justified).

I find myself in a similar position analyzing Schneider’s article and Darwinian scenarios like his. Schneider claims to have generated complex specified information for free. The No Free Lunch theorems, however, tell us this is not possible. Where, then, has he smuggled in complex specified information? The precise place where he smuggles it in is not hard to find if one knows what to look for. Here is the crucial paragraph in his article:

“The organisms [i.e., the computational sequences in phase space] are subjected to rounds of selection and mutation. First, the number of mistakes made by each organism in the population is determined. Then the half of the population making the least mistakes is allowed to replicate by having their genomes replace (‘kill’) the ones making more mistakes. (To preserve diversity, no replacement takes place if they are equal.) At every generation, each organism is subjected to one random point mutation in which the original base is obtained one-quarter of the time.”[8]

Within this crucial paragraph, the crucial sentence is: “The number of mistakes made by each organism in the population is determined.” Who or what determines the number of mistakes? Clearly, Schneider had to program any such determination of number of mistakes into his simulation. Moreover, the determination of number of mistakes is the key defining feature of his fitness function, for which optimal fitness corresponds to minimal number of mistakes.
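The round of selection and mutation just quoted can be sketched in a few lines of Python. The sketch below is a toy reconstruction, not Schneider’s Pascal program: in particular, the mistake count here is a hypothetical stand-in (mismatches against an arbitrary reference string), whereas in ev the mistakes come from scoring the binding sites against the genome’s own weight matrix. The stand-in makes the point at issue vivid: whatever determines the number of mistakes is the fitness function.

```python
import random

random.seed(0)
ALPHABET = "ACGT"
GENOME_LEN = 16        # toy size; Schneider's genomes had 256 letters
POP_SIZE = 8

# Hypothetical stand-in for ev's mistake count: mismatches against an
# arbitrary reference string.
REFERENCE = "".join(random.choice(ALPHABET) for _ in range(GENOME_LEN))

def mistakes(genome):
    """This function IS the fitness function; the target resides here."""
    return sum(a != b for a, b in zip(genome, REFERENCE))

def point_mutation(genome):
    """One random point mutation; the original base recurs 1/4 of the time."""
    i = random.randrange(GENOME_LEN)
    return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in range(GENOME_LEN))
              for _ in range(POP_SIZE)]
initial_best = min(mistakes(g) for g in population)

for _ in range(500):
    population.sort(key=mistakes)            # fewest mistakes first
    half = POP_SIZE // 2
    for i in range(half):
        # The better half replaces ('kills') the worse half -- unless the
        # mistake counts are equal: Schneider's diversity-preserving rule.
        if mistakes(population[i]) < mistakes(population[half + i]):
            population[half + i] = population[i]
    population = [point_mutation(g) for g in population]

final_best = min(mistakes(g) for g in population)
print(initial_best, final_best)    # selection drives the mistake count down
```

Swap in a different mistake function and the same loop converges toward a different target: the selection machinery is generic, while the information about what counts as success sits entirely inside `mistakes`.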

Readers of Richard Dawkins’s The Blind Watchmaker have seen all this before, to wit, in Dawkins’s “Methinks It Is Like a Weasel” simulation. To be sure, Schneider’s simulation is more subtle. But the parallels are unmistakable. Like Dawkins’s simulation, Schneider’s simulation starts with a randomly given “genome” and requires no further intervention. Unlike Dawkins’s simulation, Schneider’s does not identify an explicitly given target sequence. Even so, it identifies target sequences implicitly through the choice of fitness function. Moreover, by tying fitness to number of mistakes, Schneider guarantees that the gradients of his fitness function rise gradually and thus that his evolutionary algorithm converges in short order to an optimal computational sequence (optimality being defined in relation to his fitness function). Granted, once the algorithm starts running, there is no intervention on the part of the investigator. But Schneider did intervene crucially in structuring the fitness function, and that is where he smuggled in the complex specified information he claimed to obtain from scratch.
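For comparison, here is a minimal reconstruction of the Weasel scheme as it is usually described; the population size and mutation rate are illustrative assumptions, not Dawkins’s own parameters. The target phrase appears explicitly inside the fitness function, which is precisely how the simulation “knows” where to go.

```python
import random

random.seed(42)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
COPIES = 100           # offspring per generation (illustrative choice)
MUT_RATE = 0.05        # per-character mutation probability (illustrative)

def fitness(attempt):
    """Characters matching the target -- the target is built right in."""
    return sum(a == t for a, t in zip(attempt, TARGET))

def mutate(parent):
    return "".join(random.choice(CHARS) if random.random() < MUT_RATE else c
                   for c in parent)

current = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generations = 0
while current != TARGET:
    generations += 1
    # Keep the parent among the candidates so fitness never regresses.
    current = max([current] + [mutate(current) for _ in range(COPIES)],
                  key=fitness)

print(generations)   # a few hundred generations, vs. ~27**28 blind tries
```

Cumulative selection reaches the phrase quickly only because the fitness function rewards partial matches against an explicitly specified target; the information “generated” was sitting in that function from the start.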

Schneider’s choice of fitness function is the most obvious place where he smuggles in complex specified information. But there are others. In the Nucleic Acids Research article we’ve been discussing, he does not list the source code for the program underlying his simulation. For that code, he refers readers to the relevant web address. The source code is revealing and shows that Schneider had to do a lot of fine-tuning of his evolutionary algorithm to make his simulation come out right. For instance, in the crucial paragraph from his article that I quoted above, Schneider remarks parenthetically: “To preserve diversity [of organisms], no replacement takes place if [the number of mistakes is] equal.” Schneider’s Pascal source code reveals why: “SPECIAL RULE: if the bugs have the same number of mistakes, reproduction (by replacement) does not take place. This ensures that the quicksort algorithm does not affect who takes over the population. [1988 October 26] Without this, the population quickly is taken over and evolution is extremely slow!”[9] Schneider is here fine-tuning his evolutionary algorithm to obtain the results he wants. All such fine-tuning amounts to investigator interference smuggling in complex specified information.

Sources
 

[1] Michael Polanyi and Harry Prosch, Meaning (Chicago: University of Chicago Press, 1975), pp. 162-163.

[2] http://www.lecb.ncifcrf.gov/~toms.

[3] Thomas D. Schneider, “Evolution of Biological Information,” Nucleic Acids Research 28(14) (2000): 2794-2799.

[4] Ibid., p. 2794.

[5] Ibid., p. 2796.

[6] Ibid.

[7] Ibid., p. 2797.

[8] Ibid. p. 2795.

[9] http://www.lecb.ncifcrf.gov/~toms/delila/ev.html.