ID as a Theory of Technological Evolution


1. Nature and Art

In Book II of the Physics Aristotle remarks, “If the ship-building art were in the wood, it would produce the same results by nature.” Aristotle is here contrasting nature and art. Nature provides the raw materials (here wood); art provides the means for fashioning those materials (here into a ship). For Aristotle, art consists in the knowledge and skill to produce an object and presupposes the imposition of form on the object from outside. On the other hand, nature consists in capacities inherent in the physical world — capacities that produce objects, as it were, internally and without outside help. Thus in Book VII of the Metaphysics Aristotle writes, “Art is a principle of movement in something other than the thing moved; nature is a principle in the thing itself.” Consequently, Aristotle refers to art as completing “what nature cannot bring to a finish.” Thomas Aquinas took this idea and sacramentalized it into grace completing nature.

In Aristotle’s distinction between art and nature lies the central issue in the debate over biological evolution. The central issue is not the interpretation of Genesis, nor whether humans are descended from apes, nor whether all organisms trace their lineage to a last common ancestor. Indeed, where one comes down on these side issues is irrelevant to the central issue. The central issue is whether nature has sufficient resources in herself to generate all of biological diversity or whether in addition nature requires art to complete what nature alone cannot bring to a finish. The Greek word for art is techne, from which we get our word technology. The English word most commonly used to capture what Aristotle means by art derives not from the Greek but from the Latin. That word is, of course, design.

The central issue in the debate over biological evolution can therefore be put as follows: Is nature complete in the sense of possessing all the resources necessary to bring about the biological structures we see around us or does nature also require some contribution of design to bring about those structures? A typical reaction to this question is simply to observe that biological systems are natural objects and then to pose the following counter-question: What besides nature could conceivably have played an essential role in the formation of biological systems? Although there has been no dearth of answers to this counter-question (special creation, vitalism, and orthogenesis come to mind), the answers given to date no longer inspire confidence within much of the scientific community.

It is therefore important to understand that intelligent design (or ID as it is increasingly being abbreviated) is not yet another answer to this counter-question. To ask what besides nature could conceivably have played an essential role in the formation of biological systems is to ask for an entity with causal powers to produce objects that nature unassisted could not produce. The problem is that any such entities are not open to direct empirical investigation. Our knowledge of them can be at best indirect, dependent on phenomena mediated through nature. But a designing intelligence that mediates its action through nature has since the time of Darwin seemed largely dispensable — certainly from science and now increasingly from common life.

The strength of intelligent design as an intellectual project consists not in presupposing a prepackaged conception of a designer and then determining how the facts of science square with that conception. Rather, intelligent design’s strength consists in starting with nature, exploring nature’s limitations, and therewith determining where design fits in the scheme of nature. Aristotle claimed that the art of ship-building is not in the wood that constitutes the ship. Likewise intelligent design claims that the art of life-building is not in the physical stuff that constitutes life. But intelligent design does not stop there. Rather, the very methods that establish nature’s limitations also establish that design is operating in nature. Nor does intelligent design commit a god-of-the-gaps fallacy. Intelligent design locates discontinuities in the causal structure of nature that are inherently unbridgeable by natural causes. Such gaps are ontological rather than epistemic, and thus offer no promise of being removed by closer investigation of natural causes.

But why admit any gaps at all? Nature gives rise to human beings. Once human beings are on the scene, they act as designing intelligences to produce artifacts. But human beings are themselves natural. Art in Aristotle’s sense is therefore at most once removed from nature: Nature produces embodied rational agents like us, who in turn produce designed objects. To speak of nature herself being designed or to speak of natural objects (like biological systems) being designed seems therefore to commit a category mistake. To state the problem in the language of evolution: Nature in her evolution produces life, and some of those evolved forms of life produce designed objects. Yet to place design prior to the evolved forms that produce design is to misconceive design.

The problem with this objection is that it still fails to address nature’s limitations, especially with regard to the emergence of biological systems. Does nature in and of herself — unassisted and unsupplemented — have what it takes to produce the diversity of life? To be sure, one can simply suppose, as a metaphysical assumption, that nature can do all her own designing. Aristotle made this assumption, and so did the ancient Stoics. For Aristotle, final causes operated as a part of nature. Final causes expressed purposes inherent in nature and were therefore capable of effecting design (biological designs in particular). Thus in Book II of the Physics Aristotle writes of purpose being present in both art and nature. But endowing nature with purpose and therewith empowering nature to produce design is not an option for most contemporary scientists. As Jacques Monod put it, “The cornerstone of the scientific method is the postulate that nature is objective. In other words, the systematic denial that ‘true’ knowledge can be got at by interpreting phenomena in terms of final causes — that is to say, of ‘purpose’.”

Whence the removal of purpose and therewith design from nature? I lay the blame on the mechanical philosophy that was prevalent at the birth of modern science. Paradoxically, the very clockwork universe that early mechanical philosophers like Robert Boyle used to buttress design in nature was in the end probably more responsible than anything else for undermining design in nature. The mechanical philosophy viewed the world as an assemblage of material entities interacting by purely mechanical means. Boyle advocated the mechanical philosophy because he saw it as refuting the immanent teleology of Aristotle and the Stoics, for whom design arose as a natural outworking of natural forces. For Boyle this was idolatry, identifying the source of creation not with God but with nature.

The mechanical philosophy offered a world operating by mechanical principles and processes that could not be confused with God’s creative activity and yet allowed such a world to be structured in ways that clearly indicated the divine handiwork and therefore design. What’s more, the British natural theologians always retained miracles as a mode of divine interaction that could bypass mechanical processes. Over the subsequent centuries, however, what remained was the mechanical philosophy and what dropped out was the need to invoke miracles or God as designer. Henceforth, purely mechanical processes could themselves do all the design work for which Aristotle and the Stoics had required an immanent natural teleology and for which Boyle and the British natural theologians required God.

2. Testing Nature’s Limits

The mechanical philosophy is still with us, though in place of particles and force we now tend to think in terms of fields and energy. The mechanical philosophy has bequeathed to us a view of nature in which natural processes operate unsupplemented by any form of teleology, purpose, or design. Fortunately, this view of nature is testable. To see this, I will need to describe some of my own work on design detection (especially as laid out in my book The Design Inference). Yet instead of merely recapitulating that work, I will approach it through Murray Gell-Mann’s work on effective complexity and total information.

Since the early 1990s, Gell-Mann has been attempting to combine Shannon’s statistical theory of information with Kolmogorov’s algorithmic theory of information into a comprehensive theory of complexity and information for science. Gell-Mann starts with the observation that the complexity that interests us in practice is not pure randomness but the patterned regularities that remain once the effects of randomness have been factored out. He thus defines “effective complexity” as the complexity inherent in these patterned regularities, and “total information” as the effective complexity together with the complexity inherent in the effects of randomness that were factored out. He then characterizes effective complexity mathematically in terms of an algorithmic information measure of the extent to which patterned regularities can be compressed into a minimal representation (he calls such representations “schemata”), and the residual effects of randomness in terms of a Shannon information measure of the extent to which random deviations depart from the patterned regularities in question. Total information thus becomes the sum of an algorithmic information measure and a Shannon information measure.
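In rough symbols (the notation here is my own shorthand for the paragraph above, not Gell-Mann’s published formalism), the decomposition reads

$$ I_{\text{total}}(s) \;=\; \underbrace{K(\sigma)}_{\text{effective complexity}} \;+\; \underbrace{H(s \mid \sigma)}_{\text{random residue}}, $$

where $\sigma$ is the minimal schema compressing the patterned regularities in the data $s$, $K$ is an algorithmic (Kolmogorov-style) information measure of that schema, and $H$ is a Shannon information measure of the random deviations of $s$ from the schema.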

Gell-Mann’s theory of effective complexity attempts to account for how complex adaptive systems like us make sense out of a world that exhibits regularities as well as random deviations from those regularities. Though richly suggestive, applying Gell-Mann’s mathematical formalism in practice is largely intractable since it requires taking conceptual schemata of patterned regularities appropriate to some inquiry, mapping them onto a computational data structure, and then seeing how such data structures can be reduced in size while faithfully preserving the conceptual structures that map from conceptual to computational space. Thus far Gell-Mann’s theory has resisted detailed applications to real-world problems.

Why then do I consider it here? According to philosopher David Roche, design theorists like me are all mixed up about information theory and complexity. Thus Roche argues that the Darwinian mechanism is well able to account for biological complexity once we are clear about the type of complexity that is actually at issue in biology. The problem, according to Roche, is that design theorists are using the wrong notion of complexity. What is the right notion? Roche claims Gell-Mann’s concept of effective complexity is the right one for biology.

But there is a problem with Gell-Mann’s approach to complexity. While Gell-Mann’s approach is well-suited for describing how regularities of nature that are subjected to random perturbations match our conceptual schemata, it is not capable of handling contingencies in nature that are unaccountable by any regularities but that happen all the same to match our conceptual schemata. Such contingencies establish a design in nature that is not reducible to nature. What are these contingencies that are unaccountable by regularities but that nonetheless match our conceptual schemata?  The technical name for such contingencies is specified complexity.

Think of the signal that convinced the radio astronomers in the movie Contact that they had found an extraterrestrial intelligence. The signal was a long sequence of prime numbers. On account of its length the signal was complex and could not be assimilated to any natural regularity. And yet on account of its arithmetic properties it matched our conceptual schemata. The signal was thus both complex and specified. What’s more, the combination of complexity and specification convincingly pointed those astronomers to an extraterrestrial intelligence. Design theorists contend that specified complexity is a reliable indicator of design, is instantiated in certain (though by no means all) biological structures, and lies beyond the remit of nature to generate it.
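By way of a toy illustration (the code, names, and uniform-chance model below are my own simplifications, not the formal apparatus of The Design Inference), the two-part test can be sketched computationally: the specification is the independently given pattern of prime-number beats, and the complexity is the improbability that a random bitstream of the same length would match it.

```python
def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def prime_pattern(n_terms):
    """The independently given specification: the first n_terms primes,
    each encoded as a run of 1s ('beats') separated by a single 0."""
    primes, k = [], 2
    while len(primes) < n_terms:
        if is_prime(k):
            primes.append(k)
        k += 1
    return "0".join("1" * p for p in primes)

def specified_complexity_bits(signal):
    """If the signal matches a prime pattern, return -log2 of the
    probability that uniform random bits of the same length would hit
    that one target string (2**-len(signal)), i.e. len(signal) bits.
    Return 0 if the signal matches no prime pattern (unspecified)."""
    n = 1
    while True:
        pattern = prime_pattern(n)
        if pattern == signal:
            return float(len(signal))
        if len(pattern) >= len(signal):
            return 0.0
        n += 1

signal = prime_pattern(26)  # the first 26 primes, 2 through 101
print(specified_complexity_bits(signal), "bits")
```

On this crude model a signal enumerating the first 26 primes already registers over a thousand bits of specified complexity, which gives some sense of why the inference to intelligence struck those astronomers as compelling.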

If the previous remarks about complexity, specification, and information have seemed unduly elliptical, it is because this is a complicated subject and the details can quickly become overwhelming, especially in so short a talk as this. Nonetheless, I do want to give some sense of why specified complexity is the right instrument for identifying nature’s limitations. To say that specified complexity lies beyond the remit of nature to generate it is not to say that naturally occurring systems cannot exhibit specified complexity or that natural processes cannot serve as a conduit for specified complexity. Naturally occurring systems can exhibit specified complexity, and nature operating unassisted can take preexisting specified complexity and shuffle it around. But that is not the point. The point is whether nature can generate specified complexity in the sense of originating it when previously there was none. Take, for instance, an Albrecht Dürer woodcut. It arose by mechanically impressing an inked woodblock on paper. The Dürer woodcut exhibits specified complexity. But the mechanical application of ink to paper via a woodblock does not account for the specified complexity in the woodcut. The specified complexity in the woodcut must be referred back to the specified complexity in the woodblock, which in turn must be referred back to the designing activity of Dürer himself. Specified complexity’s causal chains end not with nature but with a designing intelligence.

To place the burden of design detection on specified complexity remains controversial. The philosophy of science community, wedded as it is to a Bayesian approach to probabilities, is still not convinced that my account of specified complexity is even coherent. The Darwinian community, convinced that the Darwinian mechanism can do all the design work in biology, regards specified complexity as an unexpected vindication of Darwinism. On the other hand, mathematicians and statisticians have tended to be more generous with my work on specified complexity, regarding it as an interesting contribution to the study of randomness. Perhaps the best reception of my work has come from engineers and the defense industry looking for ways to apply specified complexity to pattern matching. The final verdict is not in. Indeed, the discussion has barely begun. In my forthcoming book titled No Free Lunch I respond at length to my critics (including Wesley Elsberry). Since I will presumably have some time to respond to Wesley’s criticisms of my work following his talk, I’ll leave off further discussion of specified complexity’s merits.

3. Technological Evolution

I want next to focus on what insights into biological evolution a design perspective offers. Here we are at a conference on interpreting evolution. Suppose that specified complexity lies beyond the remit of natural causes to generate it, and that specified complexity is a reliable empirical marker of actual design, and that specified complexity is instantiated in actual biological systems (huge suppositions for many of you). How then should we interpret biological evolution?

Phillip Johnson has criticized Ohio State University zoologist Tim Berra for likening Darwinian evolution to the technological evolution of the Corvette automobile. Darwinian evolution is by definition undirected by any intelligence whereas Corvette evolution is directed by an intelligence. According to Johnson, there is a fundamental disanalogy between these two types of evolution, and to use one to justify the other is invalid. Johnson therefore refers to Berra’s conflation of Darwinian evolution and technological evolution as Berra’s Blunder. I prefer instead to refer to it as Berra’s Freudian Slip. Berra was quite right to compare biological evolution to technological evolution. Biological evolution is indeed a form of technological evolution. Berra’s mistake was in thinking that Darwinian evolution is a form of technological evolution. It is not.

Darwinian evolution is a trial-and-error method for gradually improving preexisting functions and for co-opting serendipitous functions. Within Darwinian evolution, natural selection supplies the trial and random variation the error. Although trial and error plays a role in technological evolution, trial and error is too myopic to serve as the powering force behind technological evolution. The watchmaker behind technological evolution needs to be far-seeing, not myopic and certainly not blind.
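To fix ideas, here is a minimal sketch of the trial-and-error loop just described, in the spirit of Richard Dawkins’s well-known “weasel” demonstration (the function names and fitness landscape are my own illustrative choices, not anyone’s published code): random variation supplies the error, and a purely local fitness comparison supplies the trial.

```python
import random

def evolve(fitness, genome, alphabet, steps=20_000):
    """Myopic trial-and-error search: random variation supplies the
    'error' (a single-character mutation); selection supplies the
    'trial' (keep the mutant only if fitness does not decrease)."""
    current = list(genome)
    for _ in range(steps):
        mutant = current[:]
        mutant[random.randrange(len(mutant))] = random.choice(alphabet)
        if fitness(mutant) >= fitness(current):  # purely local comparison
            current = mutant
    return "".join(current)

# Toy fitness landscape: each position is rewarded independently, so a
# myopic search climbs smoothly. Nothing here looks ahead more than one
# mutation at a time; that is the sense in which the mechanism is blind.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
fitness = lambda g: sum(a == b for a, b in zip(g, TARGET))

start = "".join(random.choice(ALPHABET) for _ in TARGET)
print(evolve(fitness, start, ALPHABET))
```

The toy succeeds only because every character is rewarded independently; the point of the paragraph above is that such a loop, testing one blind variation at a time, has no resources for improvements that require foresight across several coordinated changes.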

We now have extremely good information about the trends that technologies follow in their evolution. Once designed systems are in place, operational, and interacting (be they within an economy or ecosystem), technological evolution tends to follow certain patterns. These patterns of evolution have been extensively studied by Russian engineers and scientists, beginning notably with the work of Genrich Altshuller. As Semyon Savransky remarks, “Engineers in the former Soviet Union were responsible to spend eight hours [a day] at their work place but often had nothing to do (their regular salary did not depend on their effort, experience, or quantity and quality of work). Many of them … used this time to study patents.”

Altshuller, an engineer, studied more than 400,000 patents from across the world to uncover patterns in technological evolution. Another Russian engineer, I. V. Vikent’ev, studied all USSR patents (about a million at the time) looking for patterns in technological evolution. The systematic study of patents by Russian engineers and scientists created a new discipline, now known under the acronym TRIZ. TRIZ corresponds to a Russian phrase that in English means “Theory of Inventive Problem Solving.” Although Russian researchers have been actively investigating TRIZ for the last fifty years, it has only made its mark in the West in the last decade. TRIZ as a methodology for facilitating inventions and solving problems is increasingly being employed in industry. On the other hand, its applications to biology are only now becoming evident.

TRIZ is a vast topic, so in my few remaining minutes I will provide only the barest sketch of this methodology as it relates to biology. TRIZ is concerned with the improvement of existing designs and the emergence of novel designs. I’ll call the one intraspecific technological evolution, the other transpecific technological evolution. Although intraspecific technological evolution can proceed by trial and error (as in the Darwinian mechanism), the trial-and-error method is only suitable, as TRIZ expert Semyon Savransky observes, for “simple, well-defined, routine closed problems.” Problems are routine if all the critical steps leading to a solution are known. On the other hand, a problem is nonroutine if at least one critical step leading to a solution is unknown.

In response to environmental pressure (be it economic or ecological), intraspecific technological evolution is frequently called on to solve nonroutine problems. Environmental pressure pushes designed systems toward what TRIZ proponents call “ideality.” A system is said to approach ideality to the degree that it maximizes the system’s useful functions and minimizes its harmful functions. In the Marxist spirit in which TRIZ was invented, TRIZ seeks to overcome the contradictions that arise when improving one function of a system leads to deficits in another function of the system. TRIZ seeks to resolve these contradictions not so much by balancing advantages against disadvantages, as in constrained optimization, but by novel win-win solutions that maximize useful functions without (ideally) incurring harmful side-effects. The great obstacle in the way of ideality is psychological inertia, which artificially constricts a solution space rather than opening it to undreamt of possibilities. Psychological inertia thinks, as it were, inside a box. Ideality requires thinking outside the box.
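In the TRIZ literature this pressure is often summarized by an ideality ratio; the schematic form below is a common textbook rendering, not a verbatim formula from Savransky:

$$ \text{Ideality} \;=\; \frac{\sum \text{useful functions}}{\sum \text{harmful functions} \;+\; \sum \text{costs}} $$

A system approaches ideality as the numerator grows and the denominator shrinks; in the limit the denominator goes to zero, which is precisely the Zen-like picture in the list that follows.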

TRIZ characterizes ideality in the following Zen-like terms (I quote from Savransky):

* The ideal machine has no mass or volume but accomplishes the required work.

* The ideal method expends no energy or time but obtains the necessary effect in a self-regulating manner.

* The ideal process is actually only the process result without the process itself.

* The ideal substance is actually no substance (a vacuum), but whose function is performed.

* The ideal technique occupies no space, has no weight, requires no labor or maintenance, delivers benefit without harm, and “does it itself,” without any additional energy, mechanisms, cost, or raw materials.

This Zen-like dwindling of a system’s substantiality to nothing while its function progresses to perfection is to be sure an idealization that cannot be realized in any concrete physical system. Nonetheless, this idealization serves as a useful regulative principle for designed systems. Certainly, ideality’s best instantiation is found in biology (according to Genrich Altshuller, biology has given us the best of all patent libraries). Among human artifacts ideality’s best instantiation is perhaps found in computers. Whether Moore’s law will continue to obtain and push computers closer to ideality than biological systems (especially in regard to the human brain) is very much a matter of debate at this time.

According to TRIZ, intraspecific evolution gives way to transpecific evolution when a given technology has been pushed as close to ideality as possible and when new pressures from the environment require new technologies with new functions. When novel technological systems emerge, as far as possible they take advantage of and incorporate preexisting technologies. What’s more, novel systems tend to emerge suddenly. Once a novel system has emerged, the pressure is on to achieve ideality. A system that approximates ideality will persist for long stretches of time provided its environmental niche is undisturbed. Stasis is therefore part of TRIZ’s evolutionary scheme. But so is extinction: When environmental pressures become too great, antiquated systems either give way to novel systems or simply disappear without any system taking their place. Unlike emergence, which is sudden, extinction can be sudden or gradual (thus a new technology may gradually displace an old one or eliminate it all at once). Finally, good ideas get reused and reinvented. Technological evolution therefore includes convergent evolution. Moreover, it readily accommodates homologies (similar structures used for different purposes) as well as analogies (different structures used for similar purposes).

Sudden innovation, convergence to ideality, and extinction are all part of TRIZ’s evolutionary scheme. Now where have we seen that scheme before? The scheme is non-Darwinian. Nor can the Darwinian scheme be easily modified to accommodate it. For instance, Robert Wright’s addition of game theory to selection and variation is insufficient to account for technological innovation — at best game-theoretic constraints provide a necessary condition for technological innovation. TRIZ’s evolutionary scheme fits quite nicely with Eldredge and Gould’s model of punctuated equilibria. Leaving aside their model’s mechanism of evolutionary change and innovation, the patterns of evolution described by TRIZ and the Eldredge-Gould model are quite similar.

Perhaps the one discrepancy is that the Eldredge-Gould model does not make explicit the convergence to ideality. From the vantage of technological evolution, the speed of convergence to ideality reflects the perspicacity of the designing intelligence responsible for technological improvement. In the limiting case, therefore, a designing intelligence produces technological systems that are as close to ideality as possible from the start. Although suboptimality of design remains an issue in biological evolution, aspects of biological designs seem indeed to approach ideality. For instance, the miniaturization of molecular machines in the cell seems to approach the physico-chemical limits of matter.

In conclusion, Aristotle’s distinction between nature and art remains very much a live issue for the natural sciences. In particular, at the heart of the current debate over intelligent design is whether biological systems exhibit some feature that cannot be ascribed to nature as such but in addition requires art or design to complete what, as Aristotle put it, “nature cannot bring to a finish.” Moreover, if design theorists are correct in arguing that specified complexity lies beyond the remit of natural causes to generate it, that specified complexity is a reliable empirical marker of actual design, and that specified complexity is instantiated in actual biological systems; then the way is open for a massive reinterpretation of biological evolution. In that case, biological evolution becomes a form of technological evolution. What’s more, thanks to TRIZ, a ready-made theory of technological evolution is already in place to interpret biological evolution. Biology confirms the patterns of technological evolution outlined by TRIZ. Significantly, these patterns are non-Darwinian.

Reference Notes

The quotes from Aristotle are taken from Jonathan Barnes, ed., The Complete Works of Aristotle (Princeton: Princeton University Press, 1984). For Internet information on TRIZ, start with www.triz.org and www.triz-journal.com. The citations to Savransky and Altshuller are taken respectively from Semyon Savransky, Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving (Boca Raton, FL: CRC Press, 2000), and Genrich Altshuller, The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity (Worcester, MA: Technical Innovation Center, 1999).