A Reductionism Based Challenge to Strong Emergence


“The main fallacy in this kind of thinking is that the reductionist hypothesis does not by any means imply a ‘constructionist’ one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe” (P. W. Anderson 1972)

“It is plain that there is no natural cause which could determine all the planets, both primary and secondary, to move the same way and in the same plane, without any considerable variation; this must have been the effect of counsel.” (Isaac Newton’s first letter to Bentley, Dec 10, 1692)1

 

I. Introduction

As the social ideal of science shifts from physics to biology, old issues are being revisited.  The concepts of mind and soul seem to be at odds with the ontological reductionism that forms the basis of physics.  Can biology incorporate concepts of mind and soul without introducing new substances or processes into the underlying physics that are incapable of experimental testing?  The vitalistic theories advanced in the 1800s and early 1900s were one such creation and raised many philosophical issues for the sciences of their day.  Perhaps emergence represents a method for allowing concepts like mind and soul to co-exist with current descriptions of the fundamental laws of physics without invoking a dualist and/or vitalistic philosophy.  It has been suggested that there exist “laws of nature” at the biological level that cannot be reduced to laws of physics.  This would imply that reductionism is unable to account for all of the “laws of nature.”

Several prominent physicists2 have proposed that there are currently known phenomena in physics that can also be shown to be emergent processes.  Some go further and advocate that science has reached about as far as reductionism can go, and that in order to advance, science needs to embrace a method that moves beyond pure reductionism.  Is this a reasonable way for physics to proceed?

Science is constantly trying to explain that which is not currently understood.  At any given time there are many ideas and phenomena whose explanation is unknown.  The history of science is filled with hypothesized substances that were later abandoned when there was no longer any need for their existence3.  Sometimes science can see the start of an explanation to guide the efforts at understanding.  At other times there is no such “glimpse” of a possible solution.  The history of science is also filled with people who have assumed that their inability to solve a problem was due to the problem being “unsolvable”.  Sometimes these issues were offered as “proofs” of the existence of God (the “God of the Gaps”).  No less a natural philosopher than Isaac Newton fell into this trap.  Newton claimed that the organization of the solar system could only be explained by a divinity.  He could not find “initial conditions” that would lead to our solar system.  But not being able to find the initial conditions does not prove that they do not exist.  Subsequent work on this problem eventually led to the idea of planetary disks that slowly combine to form planets.  Today these proto-planetary disks have been imaged by the Hubble Space Telescope.  To paraphrase Laplace, science no longer needs the direct intervention of God to explain the existence of our solar system.  It is risky to assume that something “cannot” be explained by science.  Is emergence such a risky assumption?

The first part of this paper examines the role of “bridge laws” in the inter-theoretic reduction between the “levels” of our physical description.  Reductionism has at its foundation the idea of “levels” of description.  Theories in science are developed to describe the behavior at one of these levels.  Inter-theoretic reduction attempts to deduce the higher-level theories from the theories and concepts of the lower-level ones.  However, a higher-level theory often (always?) contains concepts that have no counterpart in the lower-level description.  The role of “bridge laws” is to link these higher-level concepts to the lower-level ones.  From these considerations we list some (most?) of the ways in which the reductive program might fail to provide us with a philosophically consistent description of reality that agrees with the known empirical evidence.  We can identify some of these potential failure mechanisms with the concept of emergence.  We will find that emergence can only arise through some indeterminacy in the basic laws of physics.  It is this indeterminacy that allows the emergent property to “emerge” at a high level and that presents a challenge to reductionism.

Unfortunately, “emergence” does not have a universally accepted definition.  Adopting a meaning that is too weak fails to provide significant metaphysical or epistemological insights.  Adopting a meaning that is too strong may fail to include any actual examples that fit the definition.  Following Clayton (2004), we will define two major species of emergence (strong and weak) whose main difference is in the concept of “Downward Causation.”

The second part of the paper is intended to give an example of some of the issues raised in the first part.  We construct a “virtual world” of probabilistic coins (which can interact) and consider how emergence might enter this world.  First we show that simple interactions between two coins can create patterns that extend to macroscopic features of our “virtual world.”  Thus the existence of macroscopic patterns, by itself, is not sufficient to demonstrate emergent phenomena.  Next we present a situation where the probability of flipping a coin depends on a “macroscopic” parameter (the excess of heads) and drives the macroscopic system to a defined state.  Finally we present another situation that produces the same result and pattern but depends only on “nearest neighbor” coins.  If we start with the data, how do we determine which model represents “reality”?  The challenge is to show that emergence is required for the explanation of the physical phenomena.
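As a rough illustration of the three coin models just described, the following Python sketch simulates (a) independent fair coins, (b) coins whose flip probability depends on the macroscopic excess of heads, and (c) coins whose flip probability depends only on the two nearest neighbors.  The names, parameter values, and probability tables below are my illustrative assumptions, not the paper’s; they are tuned only so that models (b) and (c) settle near the same macroscopic state.

import random

N, STEPS, TARGET = 500, 50_000, 0.75   # illustrative values, not from the paper

def run(step):
    coins = [random.randint(0, 1) for _ in range(N)]   # 1 = heads, 0 = tails
    for _ in range(STEPS):
        step(coins)
    return sum(coins) / N                              # final fraction of heads

def step_independent(coins):
    # Model (a): fair, non-interacting coins; the heads fraction stays near 0.5.
    coins[random.randrange(N)] = random.randint(0, 1)

def step_macro(coins):
    # Model (b): the flip probability depends on a macroscopic parameter
    # (the current fraction of heads), a global feedback toward TARGET.
    frac = sum(coins) / N
    p_heads = frac + 0.5 * (TARGET - frac)
    coins[random.randrange(N)] = 1 if random.random() < p_heads else 0

def step_local(coins):
    # Model (c): the flip probability depends only on the two nearest
    # neighbors on a ring; (0.45, 0.5, 0.95) is chosen so the mean-field
    # fixed point of the heads fraction is also roughly TARGET.
    i = random.randrange(N)
    k = coins[(i - 1) % N] + coins[(i + 1) % N]        # neighbors showing heads
    coins[i] = 1 if random.random() < (0.45, 0.5, 0.95)[k] else 0

for step in (step_independent, step_macro, step_local):
    print(step.__name__, run(step))

Models (b) and (c) produce nearly the same macroscopic head fraction from very different mechanisms, which is exactly the identification problem raised above: the data alone do not tell you whether a “macroscopic” parameter is doing any causal work.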

At the end of the article a challenge is offered that emergence will probably have to meet if it is to find wide acceptance in the physics community.  But what of the physicists who are leading the emergentist cause?  We will close by examining one of Laughlin’s comments and offering our conclusions.

 

II. The Reductive Program and Emergence

Emergent properties imply the existence of levels of description of physical systems.  These levels certainly exist in our thinking about the world and in the equations that we use to describe these levels.  In practice, different sets of principles and equations are used to describe the behavior of atoms and of automobiles.  But are these simply levels in our understanding, or are they reflections of a deeper metaphysical difference?

Even if one could solve for the behavior of every molecule in a glass of water, what would you do with all that information?  Many of the characteristics we are interested in (like temperature) are not properties of individual molecules.  Temperature, for instance, is related to the average of the “random” kinetic energy of the individual molecules.4  This is typical of many of the macroscopic (upper level) characteristics in which we are interested.
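For a standard concrete example, the link for an ideal monatomic gas is the kinetic-theory relation

\[ \left\langle \tfrac{1}{2} m v^2 \right\rangle = \tfrac{3}{2} k_B T, \]

where the angle brackets denote an average over the random thermal motion of the molecules and $k_B$ is Boltzmann’s constant.  A single molecule has a kinetic energy, but only the average over many molecules defines a temperature.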

To describe the macroscopic world, physics utilizes an approximation in which the number of water molecules in the glass is very large.  The contribution from a single molecule to the macroscopic properties is then small, which allows the introduction of statistics into the description of the macroscopic material.  The formalism of “statistical mechanics” allows us to make this transition from the molecules to the macroscopic whole in a systematic manner5.  In this process no new “forces” or laws are introduced.  However, there are often assumptions made about the behavior of the microscopic participants.6  It is these assumptions that allow us to replace the dynamics of individual molecules with statistical statements.  Sometimes these assumptions are directly testable, and at other times only the results of these assumptions are testable.

There are systems with a large enough number of particles that solving for the behavior of each particle is at best difficult, but with a small enough number that the assumption of a “very large” number of particles is not valid.  This intermediate regime has been labeled the “mesoscale” by Laughlin (Laughlin et al 2000).  Some of the claimed emergent phenomena in physics come from this realm.  Does this represent a third “level” for reductionism, or is it simply a more complicated “intermediate” case?

Do these levels have metaphysical as opposed to epistemological significance?  I think it safe to say that the proponents of strong emergence require a metaphysical significance to these levels.  If so, where does one place the boundaries between the various levels?  If I collect 10 atoms, have I crossed a border to a new “emergent level”?  How about 11, 12, …, 100, 101, …, 1000, 1001, …?  Where do the borders lie?  Does one simply rely on the detection of an emergent property to tell you that you have crossed the border?  In this day and age, when we are gaining the ability to count atoms, this is no longer idle speculation.  Is there a system where the addition of 1 additional atom would introduce new emergent laws?  Call the number of atoms necessary for the new level N.  What prevents us from viewing this as an “N-particle interaction”?  We have examples of 2-, 3- and many-particle interactions already.  The formalism of statistical mechanics allows us to build a bottom-up reductive theory of physics for almost arbitrary N.

The stars in a galaxy all interact through their gravitational attraction.  Is the galaxy a fundamentally different level than that of the individual stars?  Are emergent levels only associated with Quantum Mechanics?  Assigning metaphysical significance to the levels raises a number of practical as well as philosophical issues.  Even if true, it is not clear that this would mean that a reductive approach to science would necessarily fail.

A. The Role of Bridge Laws

The reductive approach attempts to understand the macroscopic laws in terms of the microscopic behavior.  We need a method of linking the macroscopic ideas (volume, temperature, density, etc.) to the microscopic particulars.  We need to establish “bridge laws” between the levels of description.7

There have been numerous cases in the history of science where macroscopic phenomena were understood and used without an understanding of the “lower level” physics.  Only later were the two levels linked together.  One of the most studied areas is the concept of temperature in thermodynamics.  Thermometers date back at least to Galileo’s thermometer of 1593, with the first temperature scales dating to the early 1700s and a mercury thermometer in 1714.  Heat and temperature have been subjects for natural philosophers from perhaps Heraclitus onward.  Sometimes heat was tied to a substance (fire, phlogiston8 and then caloric9) and sometimes to motion (Descartes10, Bacon11, Hooke12 and many others).  It was not until the development of statistical mechanics that a method was found to analytically link the motion of the molecules in a gas to the temperature of the gas13.  This provided a link between the atomic theory of the time (a microscopic description) and temperature (a macroscopic property).

Temperature is a property we measure with a thermometer.  Statistical mechanics relates this measurement to the kinetic energy associated with the random motion of individual atoms14.  Statistical mechanics requires not only an understanding of the motion of the individual atoms, but also an assumption about the random nature of that motion (the ergodic hypothesis).  This hypothesis allows the replacement of a time average by an average over an “ensemble” (of possible arrangements of the atoms in the gas)15.  Showing that the ergodic hypothesis arises from the equations of motion would provide a direct link from the atomic description to the macroscopic idea of temperature16.  In fact there is a great deal of work that has since justified the ergodic hypothesis.  There are, however, situations where the ergodic hypothesis is not fully justified.  In these cases the concept of temperature also needs changing.  For instance, a plasma in a magnetic field can have a temperature “along the magnetic field” that differs from the temperature “perpendicular to the magnetic field.”  This is directly attributed to the difference in the behavior of the ionized particles along and perpendicular to the magnetic field.  The ergodic or non-ergodic nature of the microscopic particles has a direct consequence for the macroscopic temperature.
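Schematically, the replacement licensed by the ergodic hypothesis can be written

\[ \lim_{T \to \infty} \frac{1}{T} \int_0^T f\big(x(t)\big)\, dt \;=\; \int f(x)\, \rho(x)\, dx, \]

where $f$ is an observable, $x(t)$ is the microscopic trajectory, and $\rho$ is the equilibrium ensemble density.  The left side is the time average actually sampled by a measurement; the right side is the ensemble average that statistical mechanics computes.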

Is temperature an emergent property?  In this situation a higher-level property (temperature) is determined by lower-level properties (the motion of individual atoms).  I would argue that accepting temperature (and similar cases) as emergent properties results in a concept of emergence that loses any metaphysical significance.  To have a metaphysical rather than merely epistemological emergence, you need to show that the high-level concept CANNOT be derived from a lower-level one.  How does one do this?  If we cannot show this, how can we be sure we have uncovered a metaphysically emergent concept?

B. Reductionism as a Project

At the current time there is no complete, philosophically consistent, empirically correct, testable theory of physics.  Creating such a theory is the main goal of a “theory of everything” (TOE).  Reductionism is the dominant approach to creating such a theory.  The end result is not yet available for examination.  Therefore, for the purposes of this paper reductionism will be viewed as a program, rather than as an ontological statement.

Most attempts at a theory of everything assume that there is only one type of “substance” (a monist perspective) and avoid a dualist approach to describing nature.  The theory of everything contains a “lowest” or “primitive” level that cannot be further reduced.  The laws contained in the theory of everything would operate on the objects that exist at this lowest level.  Presumably these laws would describe how these most primitive objects interact.

For the theory of everything to provide both predictive and explanatory power, the interactions of “higher level” entities would need to be “explained” by (derived from) the lower-level laws together with (perhaps) the appropriate “initial conditions”, “boundary conditions”, etc.17  For the theory to truly be a “theory of everything”, this predictive and explanatory power would have to extend to all levels of organization without introducing any new “fundamental forces” or “substances” at higher levels.18  Anderson attempts to split the program into reductive and constructive pieces.  However, it is the constructive piece that provides the explanatory and predictive power of the program.

There are at least three ways in which this reductive program could be shown to be impossible to complete.

Failure 1: Inter-level incompatibility

The most spectacular failure of reductionism would occur if the observed behaviors at one level were shown to violate the laws at a lower level.  At the current time no such case has been established.  Even if an experiment did provide an example of inter-level incompatibility, it is likely that the response of the scientific community would be to change the lower-level theory to avoid the conflict while still maintaining agreement with existing empirical results19.  Therefore, to show that an inter-level incompatibility is inconsistent with a reductive theory of everything, you would have to show that the observations at the higher level are inconsistent with all possible theories of everything.20  This is a far more difficult job than just showing inconsistencies with the current understanding of the laws of physics.

 

Failure 2: Inter-level indeterminacy

Another way that reductionism could fail would be if the laws that govern the behavior at one level cannot be derived from lower-level considerations.  This is the “constructionist” failure described by Anderson.  There are several ways in which inter-level reduction can fail without the results at the “upper level” being incompatible with lower-level behaviors.  We will group all such occurrences under the heading of “indeterminacy”.  There are at least four such possibilities:

A) Inter-level non-uniqueness

B) Lowest-level non-random indeterminacy

C) Indeterminacy due to randomness

D) Gödel undecidability

 

Failure 2A: Inter-level non-uniqueness

Consider a situation where the laws at a lower level are deterministic and complete (like Newton’s equations).  Is it possible for the laws that govern behavior at a higher level to be indeterminate in spite of the fact that they were derived from lower-level laws that are deterministic?  The lower-level theories would give deterministic behavior, yet the higher-level system would have several possible behaviors.  When the experiment is performed only one behavior is observed, while the other possible behaviors are NOT observed.  In order for this to be a legitimate failure of reductionism, the lower level could not contain any method of selecting one possible behavior over another.

Figure 1

An example of such a situation is the bifurcation of a loaded thin beam.  You can do this experiment yourself: take a thin plastic ruler, place it vertically on the table, and push down on it (as shown in Figure 1).

If you apply just a little force, nothing will happen.  If you increase the force, eventually the ruler will buckle and bend in the middle.  It can bend in either direction.  Apply enough force and there are two possible equilibrium solutions.  Physics can predict at what force this bifurcation happens, but can it predict which of the two equilibrium solutions will be preferred?
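The critical force itself is given by Euler’s classical buckling formula for a slender column,

\[ F_{cr} = \frac{\pi^2 E I}{(K L)^2}, \]

where $E$ is the elastic modulus, $I$ the area moment of inertia of the cross-section, $L$ the length, and $K$ a constant fixed by the end conditions.  Note that the formula predicts when the ruler buckles but says nothing about which way.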

It might seem that this is an example of a failure of a deterministic theory.  However, the theory that describes this situation is a “static theory”.  It is a simplification that assumes that the ruler has no motion.  But the ruler has to move from one equilibrium solution to the other.  Thus, to determine which equilibrium position will be occupied by the ruler requires the full dynamical equations.  The result will be very sensitive to the position and motion of the ruler as the applied force becomes large enough to cause the bifurcation.  In practice we may not be able to predict the eventual state due to our ignorance of all the relevant information.21  This information is often not needed, since the bifurcation already represents a failure of the “beam” (a situation civil engineers want to avoid).

Non-linear dynamical equations can allow for multiple solutions (or no solutions).  When the fundamental physical laws become non-linear, this is a concern.  Thus this failure mechanism is ultimately tied either to nonlinearity in the fundamental laws, or to our inability to determine precisely the initial conditions and/or boundary conditions.
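A minimal illustration (the textbook pitchfork normal form, not a model of any specific beam) is

\[ \dot{x} = \mu x - x^3. \]

For $\mu < 0$ the only equilibrium is $x = 0$ (the straight ruler); for $\mu > 0$ there are three equilibria, $x = 0$ and $x = \pm\sqrt{\mu}$ (bent left or bent right), with the straight state now unstable.  The static theory delivers the set of equilibria; only the full dynamics, with its sensitivity to conditions near $x = 0$, picks one.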

Failure 2B: Lowest-level non-random indeterminacy

Consider a situation where the laws that govern the behavior at the lowest level do not fully describe the behavior of THAT level.  However, the theories describing the lower-level behavior are not random.  Such theories are by definition indeterminate.  In general this situation could allow for multiple possibilities at all higher levels.

If one took the point of view that the position and momentum22 of an electron were both metaphysically well-defined concepts, then the Quantum Mechanics of our electron becomes such a theory23.  Schrödinger’s equation fully determines the behavior of the wave function, but the wave function does not fully determine both the position and the momentum.  What, then, does determine the position and momentum of the electron?  In this interpretation the Heisenberg uncertainty relations are statements about the inadequacy of the theory.
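The relation in question bounds the product of the spreads in position and momentum,

\[ \Delta x\, \Delta p \;\geq\; \frac{\hbar}{2}. \]

On the standard reading this is a statement about what any quantum state can determine; on the interpretation sketched above it instead marks the point at which the theory stops describing properties the electron is assumed actually to possess.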

In a situation such as this, what determines the actual behavior of the lowest level?  It has been argued that perhaps the lower level is determined through “downward causation.”  In our example, when we make a measurement of the position of the electron, the upper-level description (the observer) determines the position of the electron.  We will address downward causation later.

Failure 2C: Indeterminacy due to randomness

Consider the situation where some part of a lower-level theory is fundamentally random in nature.  If this randomness is going to result in a failure of reductionism, it needs to cause indeterminacy in the upper-level behavior.  It has been suggested that such randomness allows for the emergence of new properties at higher levels.

Random processes can have well-defined statistics.  One interesting possibility is to examine the statistics of a random process at various levels.  A single coin can have a 50-50 chance of coming up heads or tails.  This does not imply that 1,000 coins all flipped simultaneously will produce exactly 500 heads and exactly 500 tails.  But statistics does require that if we perform this experiment repeatedly, and the coins do not interact, we obtain a distribution of results that obeys well-defined statistical laws.  By looking at the macroscopic behavior we could tell if the coins were sufficiently far away from being “true coins.”  Macroscopic statistics puts bounds on microscopic behavior.
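Those statistical laws are just the binomial distribution: for $n$ independent fair coins,

\[ P(k \text{ heads}) = \binom{n}{k} \left(\frac{1}{2}\right)^{n}, \]

with mean $n/2$ and standard deviation $\sqrt{n}/2$.  For $n = 1000$ the spread is $\sqrt{1000}/2 \approx 16$, so repeated trials should cluster within a few tens of heads of 500; persistent departures from this spread would show the coins are not “true coins.”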

What if we had coins whose probabilities changed depending on how many coins were flipped?  Such a change in probabilities does not necessarily represent a failure of reductionism, and therefore a change in statistics does not necessarily represent an emergent property.  We will see later that allowing the coins to “interact” can change the statistics of the collection of coins, even if the probability of an isolated coin remains 50-50.  How does strong emergence differ from individual coins interacting?  But I’m getting ahead of myself.

Failure 2D: Is Gödel undecidability a problem?

In 1931 Kurt Gödel showed that in any axiomatic theory24 of sufficient complexity there will be statements that are true but which cannot be proved true25.

What does this say about reductionism? 

It may turn out that there are true theorems within the framework of the lowest-level theories that cannot be proved.  This goes directly to the heart of Anderson’s comment that a reductive program is not the same as a constructive one.  This could be a problem for reductionism if it is demanded that the program derive (by formal proof) all of the higher-level ideas from the lower-level ones.  Such a theorem would probably show up as a conjecture that can be neither proved nor disproved.

Failure 3: Inconsistency

Consider the situation where no logically consistent set of laws exists at the lowest level.  Without a logically consistent set of laws, there is no edifice on which to build all of the macroscopic behavior.  This is, in fact, the current state of affairs, and it is the motivation for unifying the strong, electroweak and gravitational forces into a theory of everything.  The fact that such a theory doesn’t exist today doesn’t imply that it will never be found.

C. Defining Emergence

Emergence usually refers to property emergence.  What is emergent are the properties that describe the behavior of the system at these different levels, not a new substance.  For a property to be emergent we require that no “bridge law” exists that allows us to link the emergent property to lower-level properties.  But this requires that we show the “non-existence” of a bridge law.  Is it possible to show that an emergent property cannot be explained by a lower-level description?  In other words, is it possible to show that in a given case no bridge law is possible?  If the answer is no, then the possibility exists that someone may develop a bridge law in the future.  If we treat such a case as emergent, we risk creating an “emergence of the gaps”: lists of potentially emergent phenomena could become today’s equivalent of Paley’s list of proofs of the existence of God from natural phenomena.

There is a difference between property emergence and substance emergence.  “We should not assume that the entities postulated by physics complete the inventory of what exists.  Hence emergentists should be monists but not physicalists.”26

If a new “entity” appears at some higher level which:

  • is not an aggregate of lower-level entities,
  • does not appear in the lower-level physics, and
  • is not a property, law, organizing principle or relationship at the higher level,

have you not crossed the line into some sort of dualism?  If such an entity were measurable, then there would be very interesting questions about why it didn’t “show up” at the lower level.  If such a case were discovered, it would probably be a type 1 failure of reductionism.  If such an emergent property were not measurable, then it would not be subject to experimental test (and falsification).  Such an “entity” would lie outside the domain of science by just about all definitions of science.  By extension it would also lie outside the reductive program.  The philosophical attraction of emergence is exactly the ability to have some of the benefits of dualism without the liabilities.  Thus we will deal only with emergent properties (not substances).

Is temperature an emergent property (assuming the ergodic hypothesis is accepted)?  If we accept a yes answer to this question, we have simply substituted a new word into existing discussions in the philosophy of science.  If our definition of emergence does not include at least a type 2 failure of the reductive program, it really doesn’t “bring anything to the table.”  So we will take as a requirement for emergence a type 2 failure of the reductive program.  If the emergent property were predictable or reducible to lower-level descriptions, it would not constitute a type 2 failure.

We will take downward causation to be the distinguishing feature between weak and strong emergence.27

Downward Causation

Thinking somewhat simplistically, downward causation occurs when an emergent property at one level directly affects the behavior of the entities at lower levels.

Our sun travels around the Milky Way Galaxy due to the gravitational fields of all the other objects in the galaxy28.  We can view the net gravitational force in two different but equivalent ways.  In the first, the total gravitational force on our sun is equal to the vector sum of the gravitational forces of all the other objects in the galaxy, taken one at a time.  In other words, the gravitational force is a “two-body” interaction.  We could carefully rearrange the stars in the galaxy to produce an identical net force on our sun with a different physical arrangement of stars.  The other approach is to employ a “field concept” in which the sun responds to the net “gravitational field” of the galaxy.  In this formalism the gravitational field associated with the galaxy is created by the totality of its stars.  Each star then responds to the local gravitational field.  Let us think of the galaxy as one level and the stars as a lower level.  In this case the upper-level gravitational field affects the lower-level stars.  But the stars create the upper-level gravitational field.
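In symbols (a Newtonian sketch, with $M_\odot$ the sun’s mass and $m_i$, $\vec{r}_i$ the masses and positions of the other stars), the two views are

\[ \vec{F}_\odot = \sum_i \frac{G\, M_\odot\, m_i}{|\vec{r}_i - \vec{r}_\odot|^3}\, (\vec{r}_i - \vec{r}_\odot) = M_\odot\, \vec{g}(\vec{r}_\odot), \qquad \vec{g}(\vec{r}) = \sum_i \frac{G\, m_i}{|\vec{r}_i - \vec{r}|^3}\, (\vec{r}_i - \vec{r}). \]

The pairwise sum and the field description are term-by-term identical, which is why this form of downward causation is fully reducible.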

Let us call downward causation that is reducible to the lower-level physics reducible downward causation.  Is there a form of downward causation that is NOT reducible to lower-level physics?  We will call such a cause non-reducible downward causation.

How do you show that non-reducible downward causation is in fact non-reducible?

Weak Emergence 

We will consider a property to be “weakly emergent” if it involves a type 2 failure of reductionism and does not involve non-reducible downward causation.

Strong Emergence

We will consider a property to be “strongly emergent” if it involves a type 2 failure of reductionism and does involve non-reducible downward causation.

 

III. Application of Ideas

If the fundamental laws of physics were strictly deterministic then strong emergence would not be a possibility29.  Without an indetermi