# Another Way to Detect Design? Part 3

## Design by Elimination

The defects in Sober's likelihood approach are, in my view, so grave that it cannot provide an adequate account of how design hypotheses are inferred. [18] The question remains, however, whether specified complexity can instead provide an adequate account for how design hypotheses are inferred. The worry here centers on the move from specified complexity to design. Specified complexity is a statistical and complexity-theoretic notion. Design, as generally understood, is a causal notion. How do the two connect?

In *The Design Inference* and then more explicitly in *No Free Lunch*, I connect the two as follows. First, I note that intelligence has the causal power to generate specified complexity. About this there is no controversy. Human artifacts, be they Hubble space telescopes, Cray supercomputers, Dürer woodcuts, gothic cathedrals, or even such humble objects as paperclips exhibit specified complexity. Moreover, insofar as we infer to nonhuman intelligences, specified complexity is the key here too. This holds for animal intelligences as well as for extraterrestrial intelligences. Indeed, short of observing an extraterrestrial life form directly, any signals from outer space that we take to indicate extraterrestrial life will also indicate extraterrestrial “intelligent life”. It is no accident that the search for extraterrestrial intelligence, by looking to radio signals, cannot merely detect life as such but must detect intelligent life. Moreover, radio signals that would warrant a SETI researcher in inferring extraterrestrial intelligence invariably exhibit specified complexity.

To show that specified complexity is a reliable empirical marker of design, it is therefore enough to show that no natural cause has the causal power to generate specified complexity (natural causes being understood here as they are in the scientific community, namely, as undirected, blind, purposeless causes characterized in terms of the joint action of chance and necessity). Showing that natural causes cannot generate specified complexity seems like a tall order, but in fact it is not an intractable problem. Natural causes, because they operate through the joint action of chance and necessity, are modeled mathematically by nondeterministic functions known as stochastic processes. Just what these are in precise mathematical terms is not important here. The important thing is that functions map one set of items to another set of items and in doing so map a given item to one and only one other item. Thus for a natural cause to "generate" specified complexity would mean for a function to map some item to another item that exhibits specified complexity. But that means the complexity and specification in the item that got mapped onto get pushed back to the item from which it was mapped. In other words, natural causes just push the problem of accounting for specified complexity from the effect back to the cause, which now in turn needs to be explained. It is like explaining a pencil in terms of a pencil-making machine. Explaining the pencil-making machine is as difficult as explaining the pencil. In fact, the problem typically gets worse as one backtracks specified complexity.

Stephen Meyer makes this point beautifully for DNA. [19] Suppose some natural cause is able to account for the sequence specificity of DNA (i.e., the specified complexity in DNA). The four nucleotide bases are attached to a sugar-phosphate backbone and thus cannot influence each other via bonding affinities. In other words, there is complete freedom in the sequencing possibilities of the nucleotide bases. In fact, as Michael Polanyi observed in the 1960s, this must be the case if DNA is going to be optimally useful as an information-bearing molecule. [20] Indeed, any limitation on the sequencing possibilities of the nucleotide bases would hamper its information-carrying capacity. But that means that any natural cause that brings about the specified complexity in DNA must admit at least as much freedom as is in the DNA sequencing possibilities (if not, DNA sequencing possibilities would be constrained by physico-chemical laws, which we know they are not). Consequently, any specified complexity in DNA tracks back via natural causes to specified complexity in the antecedent circumstances responsible for the sequencing of DNA. To claim that natural causes have "generated" specified complexity is therefore totally misleading -- natural causes have merely shuffled around preexisting specified complexity.

I develop this argument in detail in chapter 3 of *No Free Lunch*. In the next lecture I shall consider the type of natural cause most widely regarded as capable of generating specified complexity, namely, the Darwinian mechanism. In that lecture I shall show that the Darwinian mechanism of random variation and natural selection is in principle incapable of generating specified complexity. In my final lecture, "The Chance of the Gaps," I close off one last loophole to the possibility of naturalistically generating specified complexity. A common move these days in cosmology and metaphysics (the two are becoming increasingly hard to separate) is to inflate one's ontology, augmenting the amount of time and stuff available in the physical universe and thereby rendering chance plausible when otherwise it would seem completely implausible. In my final lecture I show why inflating one's ontology does not get around the problem of specified complexity.

For the remainder of this paper I therefore want to focus on logical and foundational concerns connected with specified complexity. This is where Sober focuses his criticism. According to Sober, the chief problem with specified complexity is that it detects design purely by elimination, telling us nothing positive about how an intelligent designer might have produced an object we observe. Take, for instance, a biological system, one that exhibits specified complexity, but for which we have no clue how an intelligent designer might have produced it. To employ specified complexity as a marker of design here seems to tell us nothing except that the object is designed. Indeed, when we examine the logic of detecting design via specified complexity, at first blush it looks purely eliminative. The "complexity" in "specified complexity" is a measure of improbability. Now probabilities are always assigned in relation to chance hypotheses. Thus, to establish specified complexity requires defeating a set of chance hypotheses. Specified complexity therefore seems at best to tell us what is not the case, not what is the case.

In response to this criticism, note first that even though specified complexity is established via an eliminative argument, it is not fair to say that it is established via a purely eliminative argument. If the argument were purely eliminative, one might be justified in saying that the move from specified complexity to a designing intelligence is an argument from ignorance (not X therefore Y). But unlike Fisher's approach to hypothesis testing, in which individual chance hypotheses get eliminated without reference to the entire set of relevant chance hypotheses that might explain a phenomenon, specified complexity presupposes that the entire set of relevant chance hypotheses has first been identified.[21] This takes considerable background knowledge. What's more, it takes considerable background knowledge to come up with the right pattern (i.e., specification) for eliminating all those chance hypotheses and thus for inferring design.

Design inferences that infer design by identifying specified complexity are therefore not purely eliminative. They do not merely exclude, but they exclude from an exhaustive set in which design is all that remains once the inference has done its work (which is not to say that the set is logically exhaustive; rather, it is exhaustive with respect to the inquiry in question -- that is all we can ever do in science). Design inferences, by identifying specified complexity, exclude everything that might in turn exclude design. The claim that design inferences are purely eliminative is therefore false, and the claim that they provide no (positive) causal story is true but hardly relevant -- causal stories must always be assessed on a case-by-case basis independently of general statistical considerations.

I want next to take up a narrowly logical objection. Sober and colleagues argue that specified complexity is unable to handle conjunctive, disjunctive, and mixed explananda. [22] Let us deal with these in order. Conjunctions are supposed to present a problem for specified complexity because a conjunction can exhibit specified complexity even though none of its conjuncts do individually. Thus, if specified complexity is taken as an indicator of design, this means that even though the conjunction gets attributed to design, each of the conjuncts gets attributed to chance. Although this may seem counterintuitive, it is not clear why it should be regarded as a problem. Consider a Scrabble board with Scrabble pieces. Chance can explain the occurrence of any individual letter at any individual location on the board. Nevertheless, meaningful conjunctions of those letters arranged sequentially on the board are not attributable to chance. It is important to understand that chance is always a provisional designation that can be overturned once closer examination reveals specified complexity. Thus attributing chance to the isolated positioning of a single Scrabble piece does not contradict attributing design to the joint positioning of multiple Scrabble pieces into a meaningful arrangement.
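The Scrabble point can be made quantitative under a toy model of my own devising (uniform, independent letter draws -- a simplification, not the author's calculation): any single letter is easily within the reach of chance, while a specific long sequence of letters is astronomically improbable.

```python
# Toy model: each board position receives one of 26 letters, drawn uniformly
# and independently. This is an illustrative simplification only.
N_LETTERS = 26

def prob_single_letter() -> float:
    """Chance of any particular letter landing at one given position."""
    return 1 / N_LETTERS

def prob_sequence(word: str) -> float:
    """Chance that one particular sequence appears at given positions."""
    return (1 / N_LETTERS) ** len(word)

p_one = prob_single_letter()                # about 0.038 -- chance suffices
p_seq = prob_sequence("METHINKSITISAWEASEL")  # astronomically small
```

The conjunction (the whole sequence) is thus vastly more improbable than any conjunct (a single letter), which is why chance remains a reasonable explanation of each letter in isolation but not of the meaningful arrangement.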

Disjunctions are a bit trickier. Disjunctions are supposed to pose a problem in the case where some of the disjuncts exhibit specified complexity but the disjunction itself is no longer complex and therefore no longer exhibits specified complexity. Thus we would have a case where a disjunct signifies design, but the disjunction does not. Where might this cause trouble? Certainly there is no problem in the case where one of the disjuncts is highly probable. Consider the disjunction: either the arrow lands in the target or outside it. If the target is sufficiently small, the arrow landing in the target would constitute a case of specified complexity. But the disjunction itself is a tautology, and the event associated with it can readily be attributed to chance.

How else might specified complexity run into trouble with disjunctions? Another possibility is that all the disjuncts are improbable. For instance, consider a lottery in which there is a one-to-one correspondence between players and winning possibilities. Suppose further that each player predicts he or she will win the lottery. Now form the disjunction of all these predictions. This disjunction is a tautology, logically equivalent to the claim that some one of the players will win the lottery (which is guaranteed since players are in one-to-one correspondence with winning possibilities). Clearly, as a tautology, this disjunction does not exhibit specified complexity and therefore does not signify design. But what about the crucial disjunct in this disjunction, namely, the prediction by the winning lottery player? As it turns out, this disjunct can never exhibit specified complexity either. This is because the disjuncts count as probabilistic resources, which I define as the number of opportunities for an event to occur and be specified (more on this in my final lecture). With disjunctions, this number is the same as the number of lottery players and ensures that the prediction by the winning lottery player never attains the degree of complexity/improbability needed to exhibit specified complexity. A lottery with N players has at least N probabilistic resources, and once these are factored in, the correct prediction by the winning lottery player is no longer improbable. In general, once all the relevant probabilistic resources connected with a disjunction are factored in, apparent difficulties associated with attributing a disjunct to design and the disjunction to chance disappear.
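The lottery arithmetic can be sketched as follows. This is my own minimal rendering of the factoring-in of probabilistic resources, not a formula taken from the author's text: the individual prediction has probability 1/N, but multiplying by the N opportunities for such a prediction to occur and be specified removes the improbability.

```python
# Toy lottery: N players, each predicting a distinct winning possibility,
# with predictions in one-to-one correspondence with those possibilities.

def prob_named_player_wins(n_players: int) -> float:
    """Chance that one particular player's prediction is correct."""
    return 1 / n_players

def prob_some_player_wins(n_players: int) -> float:
    """The disjunction of all predictions is a tautology: some player wins."""
    return 1.0

def resource_adjusted_prob(n_players: int) -> float:
    """Improbability of the winning prediction after factoring in the
    N probabilistic resources (opportunities for the event to occur
    and be specified), capped at 1."""
    return min(1.0, n_players * prob_named_player_wins(n_players))
```

For any N, `resource_adjusted_prob(N)` is 1: once the N opportunities are counted, the winning player's correct prediction is no longer improbable, so it never attains the complexity needed for specified complexity.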

Finally, the case of mixed explananda is easily dispatched. Suppose we are given a conjunction of two conjuncts in which one exhibits specified complexity and the other does not. In that case one will be attributed to design and the other to chance. And what about the conjunction? The conjunction will be at least as improbable/complex as the first conjunct (the one that exhibits specified complexity). What's more, the pattern qua specification that delimits the first conjunct will necessarily delimit the conjunction as well (conjunctions always restrict the space of possibilities more than their conjuncts). Consequently, the conjunction will itself exhibit specified complexity and be attributed to design. Note that this is completely unobjectionable. Specified complexity, in signaling design, merely says that an intelligent agent was involved. It does not require that intelligent agency account for every aspect of a thing in question.
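The probabilistic claim in the mixed case -- that the conjunction is at least as improbable as its specified-complexity conjunct -- can be checked with toy numbers (the probabilities below are my own stand-ins, chosen only for illustration):

```python
# Toy illustration: a conjunction of independent conjuncts is at least as
# improbable as its most improbable conjunct.

def conjunction_prob(p_a: float, p_b: float) -> float:
    """Probability of A-and-B, assuming the conjuncts are independent."""
    return p_a * p_b

p_specified = 1e-150  # conjunct exhibiting specified complexity
p_chance = 0.5        # conjunct readily attributed to chance
p_both = conjunction_prob(p_specified, p_chance)
# p_both <= p_specified: the conjunction inherits the conjunct's improbability.
```

Since multiplying by a probability can only shrink (or preserve) the value, the conjunction is never less improbable than the conjunct that already exhibited specified complexity.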

In closing I want to take the charge that specified complexity is not a reliable instrument for detecting design and turn it back on critics who think that likelihoods provide a better way of inferring design. I showed earlier that the likelihood approach presupposes some account of specification in how it individuates the events to which it applies. More is true: The likelihood approach can infer design only by presupposing specified complexity.

To see this, take an event that is the product of design but for which we have not yet seen the relevant pattern that makes its design evident to us (take a Search for Extraterrestrial Intelligence example in which a long sequence of prime numbers, say, reaches us from outer space, but suppose we have not yet seen that it is a sequence of prime numbers). Without that pattern we will not be able to distinguish between the probability that this event takes the form it does given that it is the result of chance, and the probability that it takes the form it does given that it is the result of design. Consequently, we will not be able to infer design for this event. Only once we see the pattern will we, on a likelihood analysis, be able to see that the latter probability is greater than the former. But what are the right sorts of patterns that allow us to see that? Not all patterns signal design. What's more, the pattern needs to delimit an event of sufficient improbability (i.e., complexity) for otherwise the event can readily be referred to chance. We are back, then, to needing some account of complexity and specification. Thus a likelihood analysis that pits competing design and chance hypotheses against each other must itself presuppose the legitimacy of specified complexity as a reliable indicator of intelligence.

Nor is the likelihood approach salvageable. Lydia and Timothy McGrew, philosophers at Western Michigan University, think that likelihoods are ideally suited for detecting design in the natural sciences but that my Fisherian approach to specified complexity breaks down. Taking issue with both Sober and me, they argue that the presence of irreducible complexity in biological systems constitutes a state of affairs upon which the design hypothesis confers greater probability than the Darwinian hypothesis. [23] Irreducible complexity is biochemist Michael Behe's notion. According to Behe, a system is irreducibly complex if it is "composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning." [24] The McGrews are looking for some property of biological systems upon which the design hypothesis confers greater probability than its naturalistic competitors. This sounds reasonable until one considers such properties more carefully. For the McGrews specified complexity is disallowed because it is a statistical property that depends on Fisher's approach to hypothesis testing, and they regard this approach as not rationally justified (which in *The Design Inference* I argue it is once one introduces the notion of a probabilistic resource). What they apparently fail to realize, however, is that any property of biological systems upon which a design hypothesis confers greater probability than a naturalistic competitor must itself presuppose specified complexity.

Ultimately what enables irreducible complexity to signal design is that it is a special case of specified complexity. Behe admits as much in his public lectures whenever he points to my work in *The Design Inference* as providing the theoretical underpinnings for his own work on irreducible complexity. The connection between irreducible complexity and specified complexity is easily seen. The irreducibly complex systems Behe considers require numerous components specifically adapted to each other and each necessary for function. On any formal complexity-theoretic analysis, they are complex. Moreover, in virtue of their function, these systems embody independently given patterns that can be identified without recourse to actual living systems. Hence these systems are also specified. Irreducible complexity is thus a special case of specified complexity.

But the problem goes even deeper. Name any property of biological systems that favors a design hypothesis over its naturalistic competitors, and you will find that what makes this property a reliable indicator of design is that it is a special case of specified complexity -- if not, such systems could readily be referred to chance. William Paley's adaptation of means to ends,[25] Harold Morowitz's minimal complexity,[26] Marcel Schützenberger's functional complexity,[27] and Michael Behe's irreducible complexity all, insofar as they reliably signal design, have specified complexity at their base. Thus, even if a likelihood analysis could coherently assign probabilities conditional upon a design hypothesis (a claim I disputed earlier), the success of such an analysis in detecting design would depend on a deeper probabilistic analysis that finds specified complexity at its base. Consequently, if there is a way to detect design, specified complexity is it.

Let me conclude with a reality check. Often when likelihood theorists try to justify their methods, they reluctantly concede that Fisherian methods dominate the scientific world. For instance, Howson and Urbach, in their Bayesian account of scientific reasoning, acknowledge the underwhelming popularity of Bayesian methods among working scientists.[28] Likewise, Richard Royall, who is the statistical authority most frequently cited by Sober, writes: "Statistical hypothesis tests, as they are most commonly used in analyzing and reporting the results of scientific studies, do not proceed ... with a choice between two [or more] specified hypotheses being made ... [but follow] a more common procedure...."[29] Royall then outlines that common procedure as specifying a chance hypothesis, using a test-statistic to identify a rejection region, checking whether the probability of that rejection region under the chance hypothesis falls below a given significance level, determining whether a sample falls within that rejection region, and if so rejecting the chance hypothesis.[30] In other words, the sciences look to Ronald Fisher and not to Thomas Bayes for their statistical methodology. The smart money is therefore on specified complexity -- and not a likelihood analysis -- as the key to detecting design and turning intelligent design into a full-fledged scientific research program.
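The common procedure Royall outlines -- specify a chance hypothesis, use a test statistic to identify a rejection region, check that the region's probability under the chance hypothesis falls below a significance level, and reject if the sample lands inside it -- can be sketched as follows. The fair-coin chance hypothesis and the numbers are my own illustrative choices, not Royall's.

```python
from math import comb

def binom_pmf(k: int, n: int, p: float = 0.5) -> float:
    """P(X = k) for X ~ Binomial(n, p): the chance hypothesis."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def upper_tail_prob(k: int, n: int) -> float:
    """P(X >= k) under the fair-coin chance hypothesis."""
    return sum(binom_pmf(i, n) for i in range(k, n + 1))

def fisher_test(observed_heads: int, n_flips: int, alpha: float = 0.01) -> bool:
    """Reject the chance hypothesis if the observed head count falls in a
    rejection region (here, the upper tail) whose probability under the
    chance hypothesis is below the significance level alpha."""
    # Smallest cutoff whose upper tail has probability below alpha.
    cutoff = next(k for k in range(n_flips + 1)
                  if upper_tail_prob(k, n_flips) < alpha)
    return observed_heads >= cutoff
```

For example, 55 heads in 100 flips stays outside the rejection region at alpha = 0.01, while 80 heads falls well inside it and triggers rejection of the chance hypothesis.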

Notes

18. Fitelson et al. ("How Not to Detect Design," 475) write, "We do not claim that likelihood is the whole story [in evaluating Chance and Design], but surely it is relevant." In fact, a likelihood analysis is all they offer. What's more, such an analysis comes into play only after all the interesting statistical work has already been done.

19. Stephen C. Meyer, "DNA by Design: An Inference to the Best Explanation for the Origin of Biological Information," *Rhetoric & Public Affairs* 1(4) (1998): 519-556.

20. Michael Polanyi, "Life Transcending Physics and Chemistry," *Chemical and Engineering News* (21 August 1967): 54-66; Michael Polanyi, "Life's Irreducible Structure," *Science* 160 (1968): 1308-1312.

21. Fitelson et al. ("How Not to Detect Design," 479) regard this as an impossible task: "We doubt that there is any general inferential procedure that can do what Dembski thinks the [criterion of specified complexity] accomplishes." They regard it as "enormously ambitious" to sweep the field clear of chance in order to infer design. Nonetheless, we do this all the time. This is not to say that we eliminate every logically possible chance hypothesis. Rather, we eliminate the ones relevant to a given inquiry. The chance hypotheses relevant to a combination lock, for instance, do not include a chance hypothesis that concentrates all the probability on the actual combination. Now it can happen that we may not know enough to determine all the relevant chance hypotheses. Alternatively, we might think we know the relevant chance hypotheses, but later discover that we missed a crucial one. In the one case a design inference could not even get going; in the other, it would be mistaken. But these are the risks of empirical inquiry, which of its nature is fallible. Worse by far is to impose as an a priori requirement that all gaps in our knowledge must ultimately be filled by non-intelligent causes.

22. Ibid., 486.

23. Lydia McGrew, "Likely Machines: A Response to Elliott Sober's 'Testability'," typescript, presented at the conference *Design and Its Critics* (Mequon, Wisconsin: Concordia University, 22-24 June 2000).

24. Michael Behe, *Darwin's Black Box* (New York: Free Press, 1996), 39.

25. See the watchmaker argument in William Paley, *Natural Theology: Or Evidences of the Existence and Attributes of the Deity Collected from the Appearances of Nature* (1802; reprinted Boston: Gould and Lincoln, 1852), ch. 1.

26. Harold J. Morowitz, *Beginnings of Cellular Life: Metabolism Recapitulates Biogenesis* (New Haven, Conn.: Yale University Press, 1992), 59-68.

27. Interview with Marcel Schützenberger, "The Miracles of Darwinism," *Origins and Design* 17(2) (1996): 11.

28. Colin Howson and Peter Urbach, *Scientific Reasoning: The Bayesian Approach*, 2nd ed. (La Salle, Ill.: Open Court, 1993), 192.

29. Richard Royall, *Statistical Evidence: A Likelihood Paradigm* (London: Chapman & Hall, 1997), 61-62.

30. Ibid., 62.
