A Misguided Critique of Irreducible Complexity

Yesterday, I began to respond to an article by John Danaher, a legal scholar and philosopher of ethics and religion at the University of Galway in Ireland. We saw that Danaher committed a genetic fallacy concerning the alleged religious motivations of ID proponents, erroneously claimed that we seek to hide our religious persuasions, and misrepresented Discovery Institute’s policy on education. Now, I shall address Danaher’s specific critiques of the argument from irreducible complexity.

Defining the Argument

Danaher writes,

Let’s start by clarifying the structure of the argument. The basic idea is that certain natural phenomena, specifically features of biological organisms, display a property that cannot be accounted for by mainstream evolutionary theory.


This is partially correct. However, the other part of the argument, which is not stated here, is that these same natural phenomena also bear hallmarks of conscious, rational, deliberative agency. In the case of irreducibly complex systems (the primary focus of Danaher’s article), the argument is that intelligent agents alone have the capability of visualizing complex endpoints and bringing together everything needed to realize that end goal. Thus, on the supposition of design, the existence of irreducibly complex systems (i.e., biological processes that depend for their function on multiple well-crafted components) is not especially improbable. On the hypothesis of naturalistic evolution, however, such systems are wildly improbable. They therefore tend to confirm (I would contend, overwhelmingly so) the design hypothesis over naturalistic evolution.

Danaher represents the design inference, based on irreducible complexity, as follows:

  1. X is an irreducibly complex system. (For illustrative purposes say “X = bacterial flagellum”) 
  2. If X is an irreducibly complex system, then X must have been brought about by intelligent design. 
  3. Therefore, X (the bacterial flagellum) must have been intelligently designed.

I generally do not find logical syllogisms to be the best way of expressing scientific arguments, which are probabilistic in nature. Rather than claiming that an irreducibly complex system “must have been brought about by intelligent design,” it is more precise to say that such a system is better explained by design than by unguided evolutionary processes. Is it impossible that such a system could arise by unguided evolution? No, but it is exceedingly improbable. Another reason I prefer not to cast arguments as syllogisms is that a piece of evidence can often raise the probability of a hypothesis without establishing it as true. Presenting each argument as a syllogism invites the unfortunate misimpression that one can evaluate each piece of evidence separately and, upon finding each individually non-decisive, move on to the next, rather than weighing the evidential force of all the evidence taken in aggregate.
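
The point about aggregation can be made precise with Bayes factors. The following is a minimal illustrative sketch, not anything from the article itself; the numbers are hypothetical and chosen only to show the arithmetic, and it assumes the pieces of evidence are independent given each hypothesis:

```python
# Illustrative sketch: individually modest Bayes factors combine multiplicatively
# (assuming the pieces of evidence are independent given each hypothesis).
# All numbers below are hypothetical, chosen only to display the arithmetic.

def posterior_odds(prior_odds, bayes_factors):
    """Multiply prior odds by each piece of evidence's likelihood ratio."""
    odds = prior_odds
    for bf in bayes_factors:
        odds *= bf
    return odds

prior = 1.0                       # even prior odds between the two hypotheses
factors = [2.0, 3.0, 1.5, 4.0]    # each factor alone is non-decisive
odds = posterior_odds(prior, factors)
prob = odds / (1 + odds)
print(odds, round(prob, 3))       # 36.0 0.973
```

Each factor on its own would move a rational observer only slightly, yet jointly they yield strong confirmation, which is why evaluating each piece of evidence in isolation and discarding it understates the cumulative case.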

Alleged Conceptual Problems

Danaher’s first critique of irreducible complexity is given as follows:

It’s easy enough to say that the basic function of the mousetrap is to trap and kill mice. After all, we know the purpose for which it was designed. We know why all the parts are arranged in the format that they are. When it comes to natural objects, it’s a very different thing. Every object, organism, or event in the physical world causes many effects. A mouth is a useful food-grinding device, but it is also a breeding ground for bacteria, a signaling tool (e.g., smiles and smirks), a pleasure organ, and more. To say that one of these effects constitutes its “basic function” is contentious.


It is certainly true that some biological systems have multiple functions. For example, the primary purpose of ATP synthase is to catalyze the formation of ATP using the energy generated by the flow of protons down their electrochemical gradient. However, there are also rare circumstances in which ATP synthase operates in reverse, acting as a proton pump (powered by ATP hydrolysis) rather than a synthase. The latter function, though, requires no less complexity. To take another example, the primary purpose of the bacterial flagellar motor is to propel cells through liquid. But in some species bacteria can use their flagella to form biofilms (communities of bacteria attached to surfaces), where the flagella contribute to the initial attachment. However, this is a secondary function. It does not detract from the irreducible complexity of the assembly system, the chemotaxis system, or the rotational mechanisms. There are also many irreducibly complex systems that have only one function — for example, the DNA replication machinery has a single purpose, namely to duplicate the genome in preparation for cell division.

A Serious Argument?

Danaher continues:

We cannot read the basic function of an alleged ICS off the book of nature. We need interpretive principles. One such principle would be to appeal to the intentions of an intelligent designer. But proponents of intelligent design don’t like to do this because they try to remain agnostic about who the designer is.


I find it difficult to view this as a serious argument. Is Danaher really suggesting that we cannot tell what the principal function of a flagellum, or a DNA replisome, is? To take an analogy, suppose that a thousand years ago explorers were to discover a modern vehicle, such as an automobile. Further suppose that, after much experimentation, they were able to determine how to operate the car. Despite never having encountered a modern vehicle before and having no idea who invented it, they would quickly discover what the machine was intended for — that is, moving at high speeds from one geographical region to another. It is unnecessary to interview the engineer in order to discover the designed purpose of a machine.

Danaher continues,

Furthermore, even if they admitted to being orthodox theists, there would be problems. The mind of God is mysterious thing. Many a theologian has squandered a career trying to guess at His intentions. Some say we should not even try: God has beyond-our-ken reasons for action.


Since Danaher wants to talk about thoughts in the mind of God, let’s go there. I myself am a skeptical theist — meaning that I believe we should be extremely cautious about intuiting what God would or would not do (in the same way that a novice chess player should be skeptical about his or her intuitions about what moves Magnus Carlsen might make in a tournament match). Given that God has exhaustive knowledge and is much wiser than we are, it would not at all be surprising if God has knowledge that we lack access to — knowledge that is relevant to one or more of his decisions. This has applicability to the problem of evil, since it is difficult for us to evaluate, from our limited vantage point, whether God might plausibly have morally sufficient justification for allowing natural and personal evil to exist in the world. This is not to say that the problem of evil has no evidential force against theism, but rather that we should be cautious about overstating what we can assert with confidence about what God would or would not do or allow to happen. Moreover, multiplying examples runs into a problem of diminishing returns. If God has a morally sufficient justification for permitting one instance of evil (no matter how unexpected), he may well have a similar justification for permitting similar instances of evil. Thus, one cannot simply add successive examples indefinitely and expect the argument against theism to continue to grow in strength. Instances of evil in the world are therefore not epistemically independent.
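
The diminishing-returns point can be illustrated numerically. The toy model below is my own hypothetical sketch (the function name, the `dependence` parameter, and all numbers are invented for illustration): if similar instances of evidence are correlated, each additional instance contributes less than the first, so the combined Bayes factor grows far more slowly than it would under full independence:

```python
import math

# Hypothetical toy model of "diminishing returns": if similar instances are
# correlated (a justification covering one likely covers the others), each
# additional instance carries a smaller marginal Bayes factor than the first.
def combined_factor(first_bf, n_instances, dependence=0.8):
    """Each later instance contributes only (1 - dependence) of the first
    instance's log-strength; dependence=0 recovers full independence."""
    log_bf = math.log(first_bf)
    total = log_bf + (n_instances - 1) * (1 - dependence) * log_bf
    return math.exp(total)

print(round(combined_factor(2.0, 1), 2))   # 2.0  (one instance)
print(round(combined_factor(2.0, 5), 2))   # 3.48 (five correlated instances)
print(round(2.0 ** 5, 2))                  # 32.0 (five independent instances)
```

Under this (purely illustrative) assumption of strong correlation, five instances carry nowhere near five times the first instance’s evidential weight, which is the sense in which successive examples cannot be added indefinitely.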

A Double-Edged Sword

A popular objection to skeptical theism is that it serves as a double-edged sword, since it implies that the God hypothesis has no, or at least very limited, predictive power (this seems to be what Danaher is getting at). If one cannot confidently say what God is likely to do, how can one mount an argument for theism? But one need not assert that God probably has a particular intention; one need only maintain that such an intention is not wildly implausible, whereas the evidence in question is absurdly improbable on the falsity of the hypothesis. So long as that likelihood ratio is top-heavy, the evidence confirms theism.
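
A brief numerical sketch may make the top-heavy-ratio point concrete. The probabilities below are hypothetical placeholders, not figures from the article: even if the likelihood of the evidence on the hypothesis is modest, the evidence still confirms the hypothesis provided its likelihood on the alternative is far smaller:

```python
# Hypothetical numbers: P(E|H) need not be high for E to confirm H;
# what matters is the ratio P(E|H) / P(E|~H).
p_e_given_h = 0.01       # modest: the intention is merely "not wildly implausible"
p_e_given_not_h = 1e-6   # the evidence is far less expected on the alternative

likelihood_ratio = p_e_given_h / p_e_given_not_h
print(likelihood_ratio)  # roughly 1e4: strongly top-heavy, so E confirms H
```

A ratio of this size multiplies the prior odds by about four orders of magnitude, which is why confirmation does not require claiming that the intention was probable in the first place.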

Danaher continues,

Another possibility is to try to naturalize the concept of a basic function. But this too poses a dilemma for the proponent of intelligent design. One popular way of naturalizing basic functions is to appeal to the theory of evolution by natural selection — i.e. to argue that the basic function of a system is the one that was favored by natural selection — but since the goal of intelligent design theorists is to undermine natural selection this solution is not available to them.


But, of course, natural selection is a real phenomenon, and it acts to preserve complex features that confer a fitness advantage, even if those systems were in fact designed. So one could define the basic function as that which confers a benefit on the organism and is thus conserved by natural selection. This critique therefore again betrays a misunderstanding of ID.

Evolutionary Co-optation

Danaher’s second criticism of the argument is that irreducibly complex systems can be explained by evolutionary co-optation. He asserts that proponents of ID “argue that natural selection — if they accept the idea at all — can only work in a gradual, step-wise fashion.” This again misrepresents ID proponents, since I know of no one in the ID community who rejects the reality of natural selection (one wonders whom he has in mind). The key point of contention is not whether natural selection occurs, but whether it is causally adequate to account for the complex designed features of living organisms. Danaher further explains,

The evolutionist’s response to this is pretty straightforward: you’re thinking about it in the wrong way. It may well be true that the bacterial flagellum is, currently, irreducibly complex, such that if you altered or removed one part it would no longer function as a rotary motor. But that doesn’t mean that the parts that currently make up the flagellum couldn’t have had other functions over the course of evolutionary history, or couldn’t have contributed to other systems that are not irreducibly complex over that period of time. The flagellum is the system that has emerged at the end of an evolutionary sequence, but evolution did not have that system in mind when it started out. Evolution isn’t an intelligently directed process. Anything that works (that contributes to survival or reproduction) gets preserved and multiplied, and the bits and pieces that work can get merged into other systems that work. So one particular protein may have contributed to a system that helped an organism at one point in time, but then get co-opted into another, newer, system at a later point in time.


This objection has been responded to ad nauseam already. In brief, pointing to homologues of flagellar proteins does not undermine the argument from irreducible complexity, since co-opting those proteins to produce a flagellar system depends upon multiple coincident changes in order for the new system to be realized. For example, flagellar-specific proteins would not confer a selective advantage until incorporated into the flagellar system. But the necessary proteins that serve roles in other systems will not become incorporated into the flagellar system before those flagellar-specific proteins arise. There is also the need for complementary protein-protein binding interfaces, as well as a choreographed assembly system to ensure that the proteins are assembled in the appropriate order (which depends upon the flagellar genes being organized into a transcriptional hierarchy along the bacterial chromosome). For a more detailed discussion, see my previous article here.

In the case of some irreducibly complex systems, such as bacterial cell division or the DNA replication machinery, the option of co-optation does not exist, since these systems are fundamental to self-replication, which is in turn a prerequisite for differential survival (i.e., natural selection). Such systems must therefore already be in place for evolution to operate at all.

Conclusion

To conclude, Danaher’s critiques of irreducible complexity are poorly informed and based on misunderstandings of intelligent design and what its key defenders argue. I trust that the remarks given above will serve to clarify our differences.

This article was originally published on March 7th, 2024, at Evolution News & Science Today.
