Another Way to Detect Design? Part 2


Let me paint the picture more starkly. Consider an elementary event E. Suppose initially we see no pattern that gives us reason to expect an intelligent agent produced it. But then, rummaging through our background knowledge, we suddenly see a pattern that signifies design in E. Under a likelihood analysis, the probability of E given the design hypothesis suddenly jumps way up. That, however, isn’t enough to allow us to infer design. As is usual in a likelihood or Bayesian scheme, we need to compare a probability conditional on design to one conditional on chance. But for which event do we compute these probabilities? As it turns out, not for the elementary outcome E, but for the composite event E* consisting of all elementary outcomes that exhibit the pattern signifying design. Indeed, it does no good to argue that E is the result of design on the basis of some pattern unless the entire collection of elementary outcomes that exhibit that pattern is itself improbable on the chance hypothesis. The likelihood theorist therefore needs to compare the probability of E* conditional on the design hypothesis with the probability of E* conditional on the chance hypothesis. The bottom line is this: the likelihood approach offers no account of how it arrives at the (composite) events upon which it performs its analyses. The selection of those events is highly intentional and, in the case of Bayesian design inferences, needs to presuppose an account of specification.
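To make the difference between the elementary event E and the composite event E* concrete, here is a minimal sketch (my own illustration, not part of the original argument), assuming a toy chance hypothesis of 100 fair coin flips and taking “at least 95 heads” purely as a stand-in for a design-signifying pattern:

```python
from math import comb

# Hypothetical chance hypothesis: 100 independent flips of a fair coin.
n, p = 100, 0.5

# Elementary outcome E: one particular sequence of heads and tails.
p_elementary = p ** n  # every specific sequence is equally (and vastly) improbable

# Suppose the pattern we notice is "at least 95 heads" (a stand-in for a
# design-signifying pattern). The composite event E* collects ALL elementary
# outcomes exhibiting that pattern.
p_composite = sum(comb(n, k) for k in range(95, n + 1)) * p ** n

print(f"P(E  | chance) = {p_elementary:.3e}")   # ~7.9e-31
print(f"P(E* | chance) = {p_composite:.3e}")    # ~6.3e-23
# The comparison relevant to a design inference involves E*, not E: the pattern
# carries weight only if E* itself is improbable under the chance hypothesis.
```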

So simple and straightforward do Sober and his colleagues regard their likelihood analysis that they mistakenly conclude: “Caputo was brought up on charges and the judges found against him.”[8] Caputo was brought up on charges of fraud, but in fact the New Jersey Supreme Court justices did not find against him.[9] The probabilistic analysis that Sober and fellow likelihood theorists find so convincing is viewed with skepticism by the legal system, and for good reason. Within a likelihood approach, the probabilities conferred by design hypotheses are notoriously imprecise and readily lend themselves to miscarriages of justice.[10]

I want therefore next to examine the very idea of hypotheses conferring probability, an idea that is mathematically straightforward but that becomes problematic once the likelihood approach gets applied in practice. According to the likelihood approach, chance hypotheses confer probability on states of affairs, and hypotheses that confer maximum probability are preferred over others. But what exactly are these hypotheses that confer probability? In practice, the likelihood approach is too cavalier about the hypotheses it permits. Urn models as hypotheses are all well and good because they induce well-defined probability distributions. Models for the formation of functional protein assemblages might also induce well-defined probability distributions, though determining the probabilities here will be considerably more difficult (I develop some techniques for estimating these probabilities in my forthcoming book No Free Lunch). But what about hypotheses like “Natural selection and random mutation together are the principal driving force behind biological evolution” or “God designed living organisms”? Within the likelihood approach, any claim can be turned into a chance hypothesis on the basis of which likelihood theorists then assign probabilities. Claims like these, however, do not induce well-defined probability distributions. And since most claims are like this (i.e., they fail to induce well-defined probability distributions), likelihood analyses regularly become exercises in rank subjectivism.

Consider, for instance, the following analysis taken from Sober’s text Philosophy of Biology.[11] Sober considers the following state of affairs: E — “Living things are intricate and well-suited to the task of surviving and reproducing.” He then considers three hypotheses to explain this state of affairs: H1 — “Living things are the product of intelligent design”; H2 — “Living things are the product of random physical processes”; H3 — “Living things are the product of random variation and natural selection.” As Sober explains, prior to Darwin only H1 and H2 were live options, and E was more probable given H1 than given H2. Prior to Darwin, therefore, the design hypothesis was better supported than the chance hypothesis. But with Darwin’s theory of random variation and natural selection, the playing field was expanded and E became more probable given H3 than given either H1 or H2.

Now my point is not to dispute whether in Darwin’s day H3 was a better explanation of E than either H1 or H2. My point, rather, is that Sober’s appeal to probability theory to make H1, H2, and H3 each confer a probability on E is misleading, lending an air of mathematical rigor to what really is just Sober’s own subjective assessment of how plausible these hypotheses seem to him. Nowhere in this example do we find precise numbers attached to Sober’s likelihoods. The most we see are inequalities of the form P(E|H1) >> P(E|H2), signifying that the probability of E given H1 is much greater than the probability of E given H2 (for the record, the “much greater” symbol “>>” has no precise mathematical meaning). But what more does such an analysis do than simply assert that with respect to the intricacy and adaptedness of organisms, intelligent design is a much more convincing explanation to Sober than the hypothesis of pure chance? And since Sober presumably regards P(E|H3) >> P(E|H1), the Darwinian explanation is for him an even better explanation of the intricacy and adaptedness of organisms than intelligent design. The chance hypotheses on which Sober pins his account of scientific rationality and testability are not required to issue in well-defined probability distributions. Sober’s probabilities are therefore probabilities in name only.

But there are more problems with the likelihood approach. Even when probabilities are well-defined, the likelihood approach can still lead to wholly unacceptable conclusions. Consider, for instance, the following experimental setup. There are two urns, one with five white balls and five black balls, the other with seven white balls and three black balls. One of these urns will be sampled with replacement a thousand times, but we do not know which. The chance hypothesis characterizing the first urn is that white balls should on average occur the same number of times as black balls, and the chance hypothesis characterizing the second urn is that white balls should on average outnumber black balls by a ratio of seven to three. Suppose now we are told that one of the urns was sampled and that all the balls ended up being white. The probability of this event by sampling from the first urn is roughly 1 in 10^300 whereas the probability of this event by sampling from the second urn is roughly 1 in 10^155.
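For readers who want to check the arithmetic, a short sketch along the following lines (my own, using logarithms to avoid numerical underflow; the figures are approximate) reproduces the two urn probabilities:

```python
from math import log10

draws = 1000  # sampling with replacement

# Urn 1: five white, five black   -> P(white per draw) = 0.5
# Urn 2: seven white, three black -> P(white per draw) = 0.7
log_p_urn1 = draws * log10(0.5)  # about -301, i.e. roughly 1 in 10^300
log_p_urn2 = draws * log10(0.7)  # about -155, i.e. roughly 1 in 10^155

print(f"log10 P(all white | urn 1) = {log_p_urn1:.1f}")
print(f"log10 P(all white | urn 2) = {log_p_urn2:.1f}")
print(f"likelihood ratio favoring urn 2 ~ 10^{log_p_urn2 - log_p_urn1:.0f}")  # ~10^146
```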

The second probability is therefore almost 150 orders of magnitude greater than the first. Thus on the likelihood approach, the hypothesis that the urn had seven white balls is vastly better confirmed than the hypothesis that it only had five. But getting all white balls from the urn with seven white balls is a specified event of small probability, and on Fisher’s approach to hypothesis testing should be eliminated as well (drawing with replacement from this urn 1000 times, we should expect on average around 300 black balls, and certainly not a complete absence of black balls). This comports with our best probabilistic intuitions: Given these two urns and a thousand white balls in a row, the only sensible conclusion is that _neither_ urn was randomly sampled, and any superiority of the “urn two” hypothesis over the “urn one” hypothesis is utterly insignificant. To be forced to choose between these two hypotheses is like being forced to choose between the moon being made entirely of cheese or the moon being made entirely of nylon. Any superiority of the one hypothesis over the other drowns in a sea of inconsequentiality.
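The Fisherian point can be made quantitative with a rough sketch like the following, which uses a simple normal approximation of my own choosing rather than anything in the original text:

```python
from math import sqrt

draws, p_black = 1000, 0.3  # urn 2: three black balls out of ten

expected_black = draws * p_black                  # 300 black balls expected
sd_black = sqrt(draws * p_black * (1 - p_black))  # about 14.5

# Observing zero black balls lies absurdly deep in the rejection region.
z = (0 - expected_black) / sd_black
print(f"expected black draws: {expected_black:.0f}")
print(f"standard deviation  : {sd_black:.1f}")
print(f"z-score of 0 black  : {z:.1f}")  # roughly -21 standard deviations
```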

The likelihood principle, being an inherently comparative instrument, has nothing to say about the absolute value of the probabilities (or probability densities) associated with states of affairs, but only about their relative magnitudes. Consequently, the vast improbability of either urn hypothesis in relation to the sample chosen (i.e., 1000 white balls) would on strict likelihood grounds be irrelevant to any doubts about either hypothesis. Nor would such vast improbabilities in themselves provide the Bayesian probabilist with a legitimate reason for reassigning prior probabilities to the two urn hypotheses considered here or assigning nonzero probability to some hitherto unrecognized hypothesis (for example, a hypothesis that makes white balls overwhelmingly more probable than black balls).
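A quick calculation, again only illustrative, shows how a strict likelihood or Bayesian comparison plays out here when the two urn hypotheses are given equal priors:

```python
from math import log10

draws = 1000
log_like = {"urn 1": draws * log10(0.5), "urn 2": draws * log10(0.7)}

# With equal priors, the posterior odds equal the likelihood ratio.
log_odds = log_like["urn 2"] - log_like["urn 1"]
print(f"posterior odds (urn 2 : urn 1) ~ 10^{log_odds:.0f}")  # ~10^146

# A strict likelihood or Bayesian comparison is thus all but certain that urn 2
# was sampled, even though the data are astronomically improbable under BOTH
# hypotheses; nothing in the comparison itself flags that fact or prompts
# consideration of a hypothesis not already on the table.
```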

The final problem with the likelihood approach that I want to consider is its treatment of design hypotheses as chance hypotheses. For Sober any hypothesis can be treated as a chance hypothesis in the sense that it confers probability on a state of affairs. As we have seen, there is a problem here because Sober’s probabilities typically float free of well-defined probability distributions and thus become irretrievably subjective. But even if we bracket this problem, there is a problem treating design hypotheses as chance hypotheses, using design hypotheses to confer probability (now conceived in a loose, subjective sense) on states of affairs. To be sure, designing agents can do things that follow well-defined probability distributions. For instance, even though I acted as a designing agent in writing this paper, the distribution of letter frequencies in it follows a well-defined probability distribution in which the relative frequency of the letter ‘e’ is approximately 13 percent, that of ‘t’ approximately 9 percent, etc. — this is the distribution of letters for English texts.[12] Such probability distributions ride, as it were, epiphenomenally on design hypotheses.[13] Thus in this instance, the design hypothesis identifying me as author of this paper confers a certain probability distribution on its letter frequencies. (But, note, if these letter frequencies were substantially different, a design hypothesis might well be required to account for the difference. In 1939, Ernest Vincent Wright published a novel of over 50,000 words titled Gadsby that contained no occurrence of the letter ‘e’. Clearly, the absence of the letter ‘e’ was designed.[14])
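As a small illustration (not from the original), letter frequencies of the sort just described could be tallied along these lines; the sample string here is simply Sober’s statement of E reused as text, so its percentages will of course differ from the long-run English values:

```python
from collections import Counter

def letter_frequencies(text: str) -> dict:
    """Relative frequency of each letter (a-z) in a text."""
    letters = [ch for ch in text.lower() if "a" <= ch <= "z"]
    total = len(letters)
    return {ch: count / total for ch, count in sorted(Counter(letters).items())}

sample = "Living things are intricate and well-suited to the task of surviving and reproducing."
freqs = letter_frequencies(sample)
print(f"e: {freqs.get('e', 0):.1%}   t: {freqs.get('t', 0):.1%}")
# For a sufficiently long English text, 'e' settles near 13% and 't' near 9%;
# short samples like this one will deviate from those long-run values.
```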

Sober, however, is much more interested in assessing probabilities that bear directly on a design hypothesis than in characterizing chance events that ride epiphenomenally on it. In the case of letter frequencies, the fact that letters in this paper appear with certain relative frequencies says less about the design hypothesis that I am its author than about the (impersonal) spelling rules of English. Thus with respect to intelligent design in biology, Sober wants to know what sorts of biological systems should be expected from an intelligent designer having certain characteristics, and not what sorts of random epiphenomena might be associated with such a designer. What’s more, Sober claims that if the design theorist cannot answer this question (i.e., cannot predict the sorts of biological systems that might be expected on a design hypothesis), then intelligent design is untestable and therefore unfruitful for science.

Yet to place this demand on design hypotheses is ill-conceived. We infer design regularly and reliably without knowing characteristics of the designer or being able to assess what the designer is likely to do. Sober himself admits as much in a footnote that deserves to be part of his main text: “To infer watchmaker from watch, you needn’t know exactly what the watchmaker had in mind; indeed, you don’t even have to know that the watch is a device for measuring time. Archaeologists sometimes unearth tools of unknown function, but still reasonably draw the inference that these things are, in fact, tools.”[15]

Sober is wedded to a Humean inductive tradition in which all our knowledge of the world is an extrapolation from past experience.[16] Thus for design to be explanatory, it must fit our preconceptions, and if it does not, it must lack epistemic support. For Sober, to predict what a designer would do requires first looking to past experience and determining what designers in the past have actually done. A little thought, however, should convince us that any such requirement fundamentally misconstrues design. Sober’s likelihood approach puts designers in the same boat as natural laws, locating their explanatory power in an extrapolation from past experience. To be sure, designers, like natural laws, can behave predictably (for instance, designers often institute policies that are dutifully obeyed). Yet unlike natural laws, which are universal and uniform, designers are also innovators. Innovation, the emergence of true novelty, eschews predictability. A likelihood analysis generates predictions about the future by conforming the present to the past and extrapolating from it. It therefore follows that design cannot be subsumed under a likelihood framework. Designers are inventors. We cannot predict what an inventor would do short of becoming that inventor.

But the problem goes deeper. Not only can Humean induction not tame the unpredictability inherent in design; it cannot account for how we recognize design in the first place. Sober, for instance, regards the design hypothesis for biology as fruitless and untestable because it fails to confer sufficient probability on biologically interesting propositions. But take a different example, say from archeology, in which a design hypothesis about certain aborigines confers a large probability on certain artifacts, say arrowheads. Such a design hypothesis would on Sober’s account be testable and thus acceptable to science. But what sort of archeological background knowledge had to go into that design hypothesis for Sober’s likelihood analysis to be successful? At the very least, we would have to have had past experience with arrowheads. But how did we recognize that the arrowheads in our past experience were designed? Did we see humans actually manufacture those arrowheads? If so, how did we recognize that these humans were acting deliberately as designing agents and not just randomly chipping away at chunks of rock (carpentry and sculpting entail design; but whittling and chipping, though performed by intelligent agents, do not)? As is evident from this line of reasoning, the induction needed to recognize design can never get started.[17] Our ability to recognize design must therefore arise independently of induction and therefore independently of Sober’s likelihood framework.

The direction of Sober’s logic is from design hypothesis to designed object, with the design hypothesis generating predictions or expectations about the designed object. Yet in practice we start with objects that initially we may not know to be designed. Then by identifying general features of those objects that reliably signal design, we infer to a designing intelligence responsible for those objects. Still further downstream in the logic is an investigation into the specific design characteristics of those objects (e.g., How was the object constructed? How could it have been constructed? What is its function? What effect have natural causes had on the original design? Is the original design recoverable? How much has the original design been perturbed? How much perturbation can the object allow and still remain functional?). But what are those general features of designed objects that set the design inference in motion and reliably signal design? The answer I am urging is specification and complexity.

Notes

[8]. Fitelson et al., “How Not to Detect Design,” 474.

[9]. According to the New York Times (23 July 1985, B1): “The court suggested — but did not order — changes in the way Mr. Caputo conducts the drawings to stem ‘further loss of public confidence in the integrity of the electoral process.’ … Justice Robert L. Clifford, while concurring with the 6-to-0 ruling, said the guidelines should have been ordered instead of suggested.” The court did not conclude that cheating was involved, but merely suggested safeguards so that future drawings would be truly random.

[10]. See Laurence Tribe’s analysis of the Dreyfus affair in “Trial by Mathematics: Precision and Ritual in the Legal Process,” Harvard Law Review 84 (1971): 1329-1393.

[11]. Sober, Philosophy of Biology, 33.

[12]. See Simon Singh, The Code Book: The Evolution of Secrecy from Mary Queen of Scots to Quantum Cryptography (New York: Doubleday, 1999), 19.

[13]. This epiphenomenal riding of chance on design is well-known. For instance, actuaries, marketing analysts, and criminologists all investigate probability distributions arising from the actions of intelligent agents (e.g., murder rates). I make the same point in The Design Inference (46-47). Fitelson et al.’s failure to recognize this point, however, is no criticism of my project: “Dembski treats the hypothesis of independent origination as a Chance hypothesis and the plagiarism hypothesis as an instance of Design. Yet, both describe the matching papers as issuing from intelligent agency, as Dembski points out (47). Dembski says that context influences how a hypothesis gets classified (46). How context induces the classification that Dembski suggests remains a mystery.” (“How Not to Detect Design,” 476) There is no mystery here. Context tells us when the activity of an intelligent agent has a well-defined probability distribution attached to it.

[14]. Ernest Vincent Wright, Gadsby (Los Angeles: Wetzel, 1939).

[15]. Sober, “Testability,” 73, n. 20.

[16]. Hume himself rejected induction as sufficient for knowledge and regarded past experience as the source of a non-reflective habituation of belief.

[17]. Thomas Reid argued as much over 200 years ago: “No man ever saw wisdom, and if he does not [infer wisdom] from the marks of it, he can form no conclusions respecting anything of his fellow creatures…. But says Hume, unless you know it by experience, you know nothing of it. If this is the case, I never could know it at all. Hence it appears that whoever maintains that there is no force in the [general rule that from marks of intelligence and wisdom in effects a wise and intelligent cause may be inferred], denies the existence of any intelligent being but himself.” See Thomas Reid, Lectures on Natural Theology, eds. E. Duncan and W. R. Eakin (1780; reprinted Washington, D.C.: University Press of America, 1981), 56.