America’s Obsession with Design: A Response to Wolfhart Pannenberg – Part 2

In the case of computer simulations, following the information trail and finding the place where complex specified information was smuggled in is usually not difficult. I predict it will become more difficult in the future as this shell game becomes more sophisticated, involving more shells and quicker movements of the shells. But just as accounts where profits and losses cannot be squared with receivables contain an error in addition or subtraction somewhere, so simulations that claim to generate complex specified information from scratch contain an unwarranted insertion of pre-existing complex specified information. With simulations, all that’s needed is to follow the information trail and find the point of insertion. That may be complicated, but the entire trail is surveyable and will eventually yield to sustained analysis — it’s not as though we’re missing any crucial piece of the puzzle.

The same cannot be said for actual biological examples. Consider, for instance, a proposed counterexample to my claim that evolutionary algorithms cannot generate complex specified information. The challenge in this instance focuses on carefully controlled experiments with biopolymers. Here is the challenge as it’s been put to me in several unsolicited emails over the Internet:

“For selection to produce some innovation that is both complex and specific would demolish your hypothesis. In fact, selection can do just that. Consider in vitro RNA evolution [N.B.: the actual type of biopolymer used is unimportant; RNA is the fashion these days]. Using only a random pool of RNAs (none of them designed), we can select for RNAs that perform a certain highly specified function. They can be selected to bind to any molecule of choice with high specificity or to catalyze a highly specific reaction. This is molecular specified information, by anyone’s definition. We have thus empirically seen that highly specific information can be generated in a molecule without designing the molecule. Information theory just has to catch up with what we know from experiment. At the beginning of a SELEX experiment, for instance, you have a random pool of RNAs that can’t do much at all. At the end, you have a pool of RNAs that can perform a complex specified function, such as catalyze a specific reaction or bind a specific molecule. In other words, there is an increase in net complex specified information through the course of the experiment. The pool of molecules you get at the end of the experiment were never designed. To the contrary, the scientist has no clue as to the identity of their sequence or structure. An extensive effort usually follows a SELEX experiment to characterize the evolved RNA. The RNA must be sequenced, and in some cases it is crystallized and the structure is solved. Only then does the scientist know what was created, and how it performs its complex specific function.”[10]

In no way do SELEX, ribozyme engineering, or similar experimental techniques circumvent the No Free Lunch theorems. In SELEX experiments, large pools of randomized RNA molecules are formed by intelligent synthesis and not by chance — there’s no natural route to RNA. These molecules are then sifted chemically by various means for catalytic function. What’s more, the catalytic function is specified by the investigator. Those molecules showing some activity are isolated and become templates for the next round of selection. And so on, round after round. At every step in both SELEX and ribozyme (catalytic RNA) engineering experiments generally, the investigator is carefully arranging the outcome, even if he or she doesn’t know the specific sequence that will emerge. It is simply irrelevant that the investigator is ignorant of the identity and structure of the evolved ribozyme and must determine it after the experiment is over. The investigator first had to specify a precise catalytic function, next had to specify a fitness measure gauging degree of catalytic function for a given biopolymer, and finally had to run an experiment optimizing the fitness measure. Only then does the investigator obtain a biopolymer exhibiting the catalytic function of interest. In all such experiments the investigator is inserting complex specified information right and left, most notably in specifying the fitness measure that gauges degree of catalytic function. Once it’s clear what to look for, following the information trail in such experiments is straightforward.[11]
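To see concretely where the information enters, consider a minimal sketch in Python of the selection loop that such experiments share. Everything in it is illustrative rather than drawn from any actual SELEX protocol: the `activity` function is a toy stand-in for the investigator-specified fitness measure (in the laboratory this role is played by a physical sieve such as column binding or a catalysis assay), and the motif `GGAUCC` is a hypothetical target chosen purely for the example.

```python
import random

BASES = "ACGU"
TARGET = "GGAUCC"  # hypothetical binding motif, standing in for the investigator's chosen function

def activity(seq):
    """Investigator-supplied fitness measure. Here a toy score counting
    occurrences of the target motif; in a real SELEX experiment this role
    is played by a physical sieve (column binding, catalysis assay)."""
    return seq.count(TARGET)

def mutate(seq, rate=0.01):
    """Copy a sequence with random point mutations (error-prone amplification)."""
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

def selex_round(pool, keep=0.1, copies=10):
    """One round: select the most active sequences, then amplify with noise."""
    pool.sort(key=activity, reverse=True)
    survivors = pool[: max(1, int(len(pool) * keep))]
    return [mutate(s) for s in survivors for _ in range(copies)]

# Start from a synthesized random pool; the synthesis is itself an intelligent input.
pool = ["".join(random.choice(BASES) for _ in range(40)) for _ in range(1000)]
for r in range(10):
    pool = selex_round(pool)
    print(f"round {r + 1}: best activity = {activity(max(pool, key=activity))}")
```

Note that both places where specified information enters, the synthesized starting pool and the `activity` function that sifts it, are supplied by the experimenter before the first round of selection ever runs.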

I want now to step back and consider why researchers who employ evolutionary algorithms might be led to think that these algorithms generate complex specified information as a free lunch. The mathematics is against complex specified information arising de novo from any non-telic process. What’s more, counterexamples that purport to show how complex specified information can arise as a free lunch are readily refuted once one follows the information trail and, as it were, audits the books. Even so, there is something oddly compelling and almost magical about the way evolutionary algorithms find solutions to problems where the solutions are not like anything we have imagined.[12] A particularly striking example is the “crooked wire genetic antennas” of Edward Altshuler and Derek Linden.[13] The problem these researchers solved with evolutionary (or genetic) algorithms was to find an antenna that radiates equally well in all directions over a hemisphere situated above a ground plane of infinite extent. Contrary to expectations, no wire with a neat symmetric geometric shape solves this problem. Instead, the best solutions to this problem look like funky zigzagging tangles.[14] What’s more, evolutionary algorithms find their way through all the various zigzagging tangles — most of which don’t work — to one that actually does. This is remarkable. Even so, the fitness function that prescribes optimal antenna performance is well-defined and readily supplies the complex specified information that an optimal crooked wire genetic antenna seems to acquire for free.
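A minimal genetic-algorithm sketch, again in Python, illustrates the point. It is emphatically not Altshuler and Linden’s electromagnetic model: the `fitness` function below is a crude stand-in that rewards wires whose segments point in all directions about equally, and all parameters (six directions, twelve segments, population sizes) are arbitrary assumptions for illustration.

```python
import random

DIRS = 6        # directions a wire segment may take (an arbitrary choice)
SEGMENTS = 12   # segments per candidate wire (also arbitrary)

def fitness(wire):
    """Toy stand-in for antenna performance (NOT an electromagnetic model):
    rewards wires whose segments use all six directions about equally,
    a crude analogue of 'radiates equally well in all directions'."""
    mean = SEGMENTS / DIRS
    return -sum((wire.count(d) - mean) ** 2 for d in range(DIRS))

def crossover(a, b):
    """Single-point crossover of two parent wires."""
    cut = random.randrange(1, SEGMENTS)
    return a[:cut] + b[cut:]

def mutate(wire, rate=0.05):
    """Randomly redirect a few segments."""
    return [random.randrange(DIRS) if random.random() < rate else d for d in wire]

pop = [[random.randrange(DIRS) for _ in range(SEGMENTS)] for _ in range(200)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]   # keep the best; breed the rest from them
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(180)]
best = max(pop, key=fitness)
print("best wire:", best, "fitness:", fitness(best))
```

However funky the winning tangle looks, the algorithm has done nothing but climb the gradient that `fitness` defines; the specification of what counts as a good antenna was written into the program before the first generation was bred.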

Perhaps more striking and certainly better known is the evolutionary checker playing program of Kumar Chellapilla and David Fogel. As James Glanz reported in the New York Times, “Knowing only the rules of checkers and a few basics, and otherwise starting from scratch, the program must teach itself how to play a good game without help from the outside world — including from the programmers.”[15] The program employs evolutionary algorithms, neural networks, and the techniques of artificial intelligence. In the initial work of Chellapilla and Fogel in 1999, their program attained a level one or two notches below expert.[16] Since then, “with longer evolutionary trials and the inclusion of a preprocessing layer to let the neural network learn that the game is played on a two-dimensional board, rather than a one-dimensional 32-element vector,” the program has in fact attained the level of expert.[17] The program therefore plays checkers at a level far superior to most humans. What’s remarkable about this program is that it attained such a high level of play without having to be explicitly programmed with expert knowledge like the world champion chess program Deep Blue or the world champion checker program Chinook.[18]
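The preprocessing layer mentioned above can be pictured with a short sketch. The Python below embeds a flat 32-element checkers vector into an 8-by-8 grid and extracts overlapping 3-by-3 windows as network inputs; the window size, the dark-square convention, and the encoding are my assumptions for illustration, not Chellapilla and Fogel’s published scheme.

```python
def to_grid(board32):
    """Embed a 32-element board vector into an 8x8 grid (0.0 = unplayable).
    Assumes the playable dark squares are those with (row + col) odd."""
    grid = [[0.0] * 8 for _ in range(8)]
    i = 0
    for r in range(8):
        for c in range(8):
            if (r + c) % 2 == 1:
                grid[r][c] = board32[i]
                i += 1
    return grid

def subsquares(grid, k=3):
    """All overlapping k-by-k windows, flattened, as inputs to the net,
    so the first layer sees 2-D neighborhoods rather than a flat vector."""
    return [[grid[r + dr][c + dc] for dr in range(k) for dc in range(k)]
            for r in range(8 - k + 1) for c in range(8 - k + 1)]

board = [0.0] * 32  # empty board, purely for illustration
windows = subsquares(to_grid(board))
print(len(windows), "windows of", len(windows[0]), "inputs each")  # 36 windows of 9
```

The point of such a layer is simply to hand the network spatial regularities it would otherwise have to discover on its own, which is itself a modest input of problem-specific knowledge.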

But did the evolutionary checker program of Chellapilla and Fogel achieve its superior play without commensurate input from prior intelligence? If one looks at how Chellapilla and Fogel actually programmed their evolutionary algorithm, one finds that they instituted a rating system (like the one used to rank chess players) that continually tracked how well a given neural network (i.e., candidate solution) was doing.[19] In place of a fixed fitness function, Chellapilla and Fogel therefore defined what might be called a “floating fitness function,” or what Stuart Kauffman calls a coevolving fitness landscape. But the mathematics of evolutionary algorithms is the same whether the fitness functions are fixed or floating (see section 5.10 of my forthcoming No Free Lunch).
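A floating fitness of this kind is easy to sketch. The Python toy below, which is not Chellapilla and Fogel’s actual tournament code, rates a population with the standard Elo update as its members play one another; game outcomes here are simulated from hidden “strength” values, whereas in the real experiment they come from actual games of checkers between evolved neural networks.

```python
import random

def elo_update(ra, rb, score_a, k=32):
    """Standard Elo update: each player's rating floats as games are played."""
    expected_a = 1 / (1 + 10 ** ((rb - ra) / 400))
    return ra + k * (score_a - expected_a), rb + k * ((1 - score_a) - (1 - expected_a))

random.seed(0)
# Toy population: each "player" has a hidden strength (standing in for an
# evolved neural network's real skill) and a floating Elo rating.
players = [{"strength": random.gauss(0, 1), "rating": 1500.0} for _ in range(20)]

for _ in range(2000):
    a, b = random.sample(players, 2)
    p_a = 1 / (1 + 10 ** (b["strength"] - a["strength"]))  # toy win probability
    score = 1.0 if random.random() < p_a else 0.0
    a["rating"], b["rating"] = elo_update(a["rating"], b["rating"], score)

# Selection treats the floating ratings as fitness: keep only the top half.
players.sort(key=lambda p: p["rating"], reverse=True)
print("top ratings:", [round(p["rating"]) for p in players[:5]])
```

The ratings are not fixed in advance; they emerge from the population’s own games, which is what makes the fitness “float.” But the rule that converts game outcomes into rating adjustments, and the selection step that culls by rating, are fixed by the programmer, and that is where the mathematics of the No Free Lunch theorems gets its grip.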

The important thing to note about these ratings is that they are fine grained and specify very precisely how well a candidate solution is doing with respect to other possible solutions. It’s not as though there are only two or three discrete categories for ranking solutions. Instead, there is a whole series of numbers ranging from 0 to 2400 and above in which higher numbers correspond to superior skill and expert-level play corresponds to between 2000 and 2199 (master play is ranked 2200 and above). Consequently, finding an optimal solution here is like the old Easter egg hunt game in which one is told either “hotter” or “colder” depending on whether one is getting closer to or farther away from the hidden prize. There is an incredible amount of complex specified information packed into a fitness function (whether it’s fixed or floating) that for every pair of elements in a solution space can tell you which is superior. What’s more, any evolutionary algorithm capable of precisely implementing such a fitness function by preserving only the superior and weeding out all the inferior is making full use of that information (Chellapilla and Fogel’s algorithm did just that; note that natural selection in biology operates with nowhere near this precision). Again, there is no free lunch here — complex specified information has not been generated for free.
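A quick calculation shows how much information such a fine-grained comparator carries. In the sketch below (illustrative numbers only), a “hotter/colder” oracle that answers one comparison per query locates a hidden target among N possibilities in about log2 N queries, since each answer supplies at most one bit; a fitness function able to rank every pair in the space encodes, in effect, a full ordering worth on the order of log2(N!) bits.

```python
import math
import random

# "Hotter/colder" search: each comparison against the hidden target
# yields at most one bit, yet that suffices to home in quickly.
N = 2 ** 20                      # size of the solution space (illustrative)
target = random.randrange(N)

lo, hi, queries = 0, N - 1, 0
while lo < hi:
    mid = (lo + hi) // 2
    queries += 1                 # one "hotter or colder?" comparison
    if target > mid:
        lo = mid + 1
    else:
        hi = mid
print(f"found {target} in {queries} queries (log2 N = {math.log2(N):.0f})")

# A comparator that ranks ALL pairs fixes a total ordering of the space,
# roughly log2(N!) bits, vastly more than the log2(N) bits needed to
# single out any one solution.
print(f"log2(N!) is about {math.lgamma(N + 1) / math.log(2):.3e} bits")
```

That gap between log2 N and log2(N!) is one way of quantifying just how much the comparator, rather than the search procedure, is doing the work.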

In closing this essay I want to draw a pair of lessons, both of which I hope Wolfhart Pannenberg will find congenial: design and evolution each have something to learn from the other. The No Free Lunch theorems show that for evolutionary algorithms to output complex specified information they had first to receive a prior input of complex specified information. And since complex specified information is reliably linked to intelligence [cf. my The Design Inference], evolutionary algorithms, insofar as they output complex specified information, do so on account of a guiding intelligence. The lesson, then, for evolution is that any intelligence evolutionary processes display is never autonomous but always derived. On the other hand, evolutionary algorithms do produce remarkable solutions to problems — solutions that in many cases we would never have imagined on our own. Having been given some initial input of complex specified information, evolutionary algorithms as it were mine that complex specified information and extract every iota of value from it. The lesson, then, for design is that natural causes can synergize with intelligent causes to produce results far exceeding what intelligent causes left to their own devices might ever accomplish. Too often design is understood in a deterministic sense in which every aspect of a designed object has to be pre-ordained by a designing intelligence. Evolutionary algorithms underwrite a non-deterministic conception of design in which design and nature operate in tandem to produce results that neither could produce by itself (Christian incarnational theology resonates deeply with this point).

One final note is in order. Pannenberg is puzzled over America’s obsession with design. He begins what was perhaps the key talk of his recent American tour with the remark: “Concerning design, I wonder again and again why the dispute in this country over the doctrine of evolution is so obsessive.” Pannenberg had many interlocutors during his American tour. Yet no design theorist was invited to serve as an interlocutor for Pannenberg during that entire tour. I therefore have a puzzlement of my own: Why was that?

[10] Adapted from one of many emails like it that I have received. SELEX refers to “systematic evolution of ligands by exponential enrichment.” In 1990, the laboratories of J. W. Szostak (Boston), L. Gold (Boulder), and G. F. Joyce (La Jolla) independently developed this technique, which permits the simultaneous screening of more than 10^15 polynucleotides for different functionalities. See S. Klug and M. Famulok, “All You Wanted to Know about SELEX,” Molecular Biology Reports 20 (1994): 97-107. See also Gordon Mills and Dean Kenyon, “The RNA World: A Critique,” Origins & Design 17(1) (1996): 9-14.

[11] I’m indebted to Paul Nelson for helping me see how formal mathematical theory connects to current experimental work with biopolymers.

[12] For a survey of the diverse problems to which evolutionary algorithms have been applied and for many of which these algorithms have generated unexpected solutions, see Melanie Mitchell, An Introduction to Genetic Algorithms (Cambridge, Mass.: MIT Press, 1996), pp. 15-16.

[13] Edward E. Altshuler and Derek S. Linden, “Design of Wire Antennas Using Genetic Algorithms,” pp. 211-248 in Electromagnetic Optimization by Genetic Algorithms, eds. Y. Rahmat-Samii and E. Michielssen (New York: Wiley, 1999). I’m indebted to Karl Stephan for pointing me to this example. See Karl Stephan, “Evolutionary Computing and Intelligent Design,” Zygon (2001): in review.

[14] Altshuler and Linden, “Design of Wire Antennas Using Genetic Algorithms,” fig. 22.

[15] James Glanz, “It’s Only Checkers, but the Computer Taught Itself,” New York Times (25 July 2000), available on the web at http://www.transhumanismus.de/SciTech/0007/Checkers.htm.

[16] Kumar Chellapilla and David B. Fogel, “Co-Evolving Checkers Playing Programs using Only Win, Lose, or Draw,” SPIE’s AeroSense’99: Applications and Science of Computational Intelligence II (Orlando, Fl.: 5-9 April 1999). SPIE is the International Society for Optical Engineering.

[17] Personal communication from David B. Fogel, 27 February 2001.

[18] Deep Blue’s defeat of Garry Kasparov in 1997 is widely known. For an account of Chinook, see J. Schaeffer, R. Lake, P. Lu, and M. Bryant, “Chinook: The World Man-Machine Checkers Champion,” AI Magazine 17 (1996): 21-29.

[19] Kumar Chellapilla and David B. Fogel, “Evolving Neural Networks to Play Checkers without Relying on Expert Knowledge,” IEEE Transactions on Neural Networks 10(6) (1999): 1382-1392.