Critical Comments on Reductionism in Physical Sciences



Introduction

The transfer of ideas, concepts and paradigms, and their application across different disciplines, has been a normal and usually fruitful process, especially during the gradual separation and individuation of philosophy, theology and, later, science. Basically, all these disciplines reflect heavily on human nature and depend on the same cognitive capacities.

However, in order to be successful this process requires attention to the proper meaning and interpretation of the transferred concepts and their verbal presentations. This has become increasingly important with the diversification of disciplines, which makes transdisciplinary dialogue more demanding. In fact, inadequate caution has in several instances led to grave misunderstandings and controversies; one of the prime examples was the careless transfer of concepts such as determinism, local causality, realism and reductionism from the triumphant classical physics of the 19th century to positivist philosophy.

Since then considerable progress has been made in the clarification and proper interpretation of the revolutionary developments in science, including quantum physics, relativity and their applications in cosmology and the structure of matter, as well as their relevance for, and relation to, philosophy and theology. One would expect, as observed by Polkinghorne /1/, that this experience in the physical sciences would help to resolve similar dilemmas in the life sciences, where they are even more pronounced when dealing, for example, with the onset of life and consciousness, at least where the concepts and models are "borrowed" from physics; but this does not always seem to be the case.

In this paper I therefore want to address one such important concept – reductionism – which is also organically connected with the ideas of complexity, emergence, hierarchies of levels of reality, etc. Reductionism in its different forms has usually been "taken for granted" on the basis of its application in the physical sciences, but we should be aware that it has recently been critically reassessed in the light of new developments in the physics of many-body systems /2,3,4/. Of course, one should carefully differentiate between different types of reduction and reductionism. In its "weakest" (or methodological) form it is a perfectly legitimate scientific method, but it becomes controversial in its "strong" versions: epistemological (or conceptual), with its constructivist claims, and ontological (or causal), which attributes reality only to the lowest level, with all others being "nothing but" its derivatives. It becomes particularly important to question the validity of "strong" reductionism because it is used to justify equally strong scientistic claims to ultimate and final truth /5/.

Recent criticism of "strong" reductionism is not based on some new philosophical approach, though it connects with earlier work, from Heisenberg to Nicolescu and others, but on hard evidence coming from research in condensed matter theory. (Similar considerations apply also to elementary particle physics, as the award of last year's Nobel Prizes attests!)

It is interesting to notice that the physics community – or at least those who care to think about these issues – is quite polarized here, and this polarization seems to be related to the field of research. (Or rather, to the "philosophy" dominant in that field – but that would already be a topic for a detailed sociological analysis.) On one side there is a group of distinguished elementary particle physicists and cosmologists (Weinberg /6/, Hawking /7/; an early proponent was Einstein /8/) who support the strong reductionist programme, which includes a belief (though recently somewhat shaken) in a unified "Final Theory" or "Theory of Everything" (TOE). To a large extent they share scientistic ideas as well as a conflict position towards religion. The other group includes several prominent condensed matter theorists (P.W. Anderson, Leggett /9/, Laughlin, Pines) who are very critical of the claims of epistemological and ontological reductionism, and especially of its constructivist claims. They base their criticism on their own research experience in studies of many-body systems and on the analysis of a number of situations where the introduction of a new concept was necessary for a successful explanation of the physical phenomena observed at a particular level of complexity. (It would be interesting to see whether this difference in attitudes, or different "philosophy", stems from the character of research and the different approaches to reality in these two fields of physics!)

Every interacting ("many-body") system can exhibit "complex" behaviour, even in classical physics. Until recently this was usually connected with non-linearity, i.e. non-linear interaction between its elements, leading to chaotic behaviour in both classical and quantum physics. However, we are now becoming aware that even the properties of "simpler" systems with a large number of elements cannot be understood by straightforward application of the laws governing their elements, without introducing new concepts and new assumptions about the behaviour at this "higher" level. (Of course, here one has to decide whether we can base our understanding of reality on solutions "in principle" only, or whether we also need solutions "in practice"!) New physical phenomena emerge in these systems, determined only by some "higher organizing principle" and defining what are sometimes called "quantum protectorates". Often this principle is related to some type of symmetry, conserved or broken, leading to the emergence of new properties which are independent of those at the lower level. This also supports the idea of a plurality of levels of reality, contrary to the reductionist attribution of reality only to the lowest level of complexity.
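
To make the idea of a symmetry-based "higher organizing principle" concrete, here is a minimal sketch in the standard Landau mean-field language; the order parameter m and the coefficients a, b are generic placeholders, not tied to any particular system discussed in this paper.

```latex
% Schematic Landau free energy for a scalar order parameter m:
\[
  F(m) \;=\; a\,(T - T_c)\,m^{2} \;+\; b\,m^{4}, \qquad a,\, b > 0 .
\]
% F(m) is symmetric under m -> -m at every temperature, yet below T_c the
% minima lie at m = \pm\sqrt{a\,(T_c - T)/(2b)} \neq 0.  The equilibrium state
% picks one sign: the symmetry of the law is broken by the state of the system
% as a whole -- exactly the kind of organizing principle that has no meaning
% for a single constituent.
```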

At this point we should stop and ponder the question of what we mean when we say that we have "solved the problem" and that we "understand the physics" underlying a certain phenomenon. In many-body physics this implies that we have succeeded in reducing the "N-body interacting system" to N one-body systems with (supposedly) known properties; in practice this means that we can describe its dominant (though not all!) behaviour in terms of some more elementary entities. This is indeed the basic idea and assumption of the reductionist programme. But this process turns out to be a nontrivial one; in fact, finding the way to replace the interacting entities (atoms, electrons, quarks, etc.) with new non-interacting or weakly interacting entities (quasi-particles, phonons, solitons, Cooper pairs, etc.) is the main object of modern many-body physics.

Several well-known examples from condensed matter physics provide convincing support for this analysis, from the Landau Fermi liquid to superconductivity, ferromagnetism and quantum Hall states /3,10/, and the argument can be extended to higher-energy phenomena, up to and including the structure of the universe, where qualitatively equivalent phenomena are observed. It can even be shown that emergent behaviour, protection and self-organization are not restricted to the quantum world, e.g. by considering examples of classical protectorates, such as phase transitions in classical systems.

This latter observation suggests that the critical analysis of strong reductionism and the evidence for emergent phenomena in the physical sciences could also be relevant to the study of biological systems. In any case, this recent criticism of strong reductionism and its general message are too important to be ignored in other fields of science, as well as in philosophy.

Impossibility of a purely reductionist programme

So-called strong reductionism provides the basis and justification for all scientistic programmes, including recent attacks on religion and all spirituality coming from some biologists /5/, so it deserves special attention.

I shall briefly mention several arguments showing that such a reductionist programme is impossible and, in fact, has never functioned in the way it is usually presented. But it is important to distinguish between different types of reductionism. We should also introduce the idea of a hierarchy of levels of complexity of systems. One possible ordering, but certainly not the only one, is related to their size and structure: quark, nucleus, atom, molecule, cell, organ, organism, ecosystem. This hierarchy is mirrored in the hierarchy of scientific disciplines dealing with these levels: physics, chemistry, biology, psychology, sociology, etc.

There are several classifications and definitions of reductionism; we shall adopt the following one. The "weakest" form is methodological or constituent reductionism, where one divides a complex system into smaller subsystems in order to study and better understand the properties of the system. This type of reductionism has been extremely successful in scientific research. However, it does not imply that the system is "nothing but" the collection of its constituent parts. In fact it is more correct to use the term "reduction" for this method, as well as for its results.

The second type is epistemological or conceptual reductionism, which goes one step further and claims that all properties of higher-level systems can be derived from the laws at the lower levels. This constructivist claim has recently been disputed, and it has been shown that in many cases new emergent phenomena appear at the higher levels, which cannot be derived simply from the underlying laws, without additional assumptions.

The third and strongest type is ontological or causal reductionism, which concerns the kinds of reality and of causality acting in the world. In a simplified way one could describe it as the claim that a higher-level entity is "nothing but" the collection of its parts, organized according to the same physical laws, and so on all the way down to the smallest constituents, say elementary particles. This implies that reality is attributed ("ontologically reduced") only to the lowest level. The second and third types together form so-called strong reductionism, and it should be noticed that they already carry a strong ideological load.

In its weakest form, reductionism, or simply reduction, has contributed significantly to science. We can quote the well-known physicist and Nobel prize winner P.W. Anderson in his seminal paper /2/: "/The/ workings of all the animate and inanimate matter of which we have any detailed knowledge are … controlled by the same set of fundamental laws /of physics/… We must all start with reductionism, which I fully accept."

Another Nobel laureate, Steven Weinberg, a strong reductionist, takes a more radical approach to reductionism: "All of nature is the way it is … because of simple universal laws, … Every field of science operates by formulating … generalizations that are sometimes dignified by being called principles or laws. But there are no principles of … chemistry that simply stand on their own, without needing to be explained reductively from the properties of electrons and atomic nuclei, and … there are no principles of psychology that … do not need to be understood through the study of the human brain, which in turn must ultimately be understood on the basis of physics and chemistry." /6/ (Of course, one could question whether Weinberg's "need to…" and "must be understood…" really imply that these principles also "could be understood…", but Weinberg's attitude is obvious from the context!) As we shall see, this differs significantly from the reductionism of Anderson. And Weinberg is not alone; another great scientist, Albert Einstein, is quoted /8/ as saying: "The supreme test of the physicist is to arrive at those universal laws of nature from which the cosmos can be built up by pure deduction."

Quantum phenomena

Our criticism of strong reductionism can start from the most sophisticated arguments, those arising from the quantum-mechanical character of particles – electrons, nucleons, etc. Their description in terms of delocalized wave functions leads to many counterintuitive results; for example, it implies that particles are "entangled". In principle, the whole universe should be described by a single wave function and form a single unified system, which could be taken as the ultimate expression of holism. In practice, of course, entanglement is restricted to a few particles under carefully designed conditions, because it is destroyed by decoherence at macroscopic dimensions, but it remains an important argument against our simplified local vision of reality.
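
As a textbook illustration of what "entangled" means here – the standard two-spin singlet state, not tied to any particular system discussed above – the joint state simply cannot be written as a product of one-particle states:

```latex
\[
  |\Psi\rangle \;=\; \tfrac{1}{\sqrt{2}}
  \bigl( |{\uparrow}\rangle_{A}\,|{\downarrow}\rangle_{B}
       - |{\downarrow}\rangle_{A}\,|{\uparrow}\rangle_{B} \bigr)
  \;\neq\; |\phi\rangle_{A} \otimes |\chi\rangle_{B}
  \quad \text{for any } |\phi\rangle_{A},\, |\chi\rangle_{B}.
\]
% Neither particle has a state of its own; only the pair as a whole does.
```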

This holistic aspect of quantum mechanics also leads to some fundamental problems in its interpretation, e.g. how we as observers fit into this scheme: embedded in the system, and thus spoiling its entanglement, or staying outside, coupled to it by some ("classical") measuring apparatus if we want to extract information about it /9,11/. Generally speaking, by putting emphasis on the process of measurement and the role of the observer, quantum mechanics has modified the materialistic (particle) concept of matter and shifted attention to the (observed) phenomena. As we shall argue later, in physics the observed phenomena play the primary role, and entities are just one of the ways to "explain" these phenomena, i.e. secondary. The particle-wave duality inherent in quantum mechanics provides probably the best and simplest example, but even the existence of the dichotomy between classical physics "explaining" the macroscopic phenomena and quantum physics "valid" in the microscopic world supports this approach.

Many-body aspects – nonseparability

The second aspect of our criticism is less sophisticated but more relevant in realistic situations, and it is valid even in classical physics. It concerns an N-body system, i.e. N particles (or some other entities or subsystems) mutually coupled by two-body forces or potentials. This problem cannot be solved exactly for N equal to three or greater. The formal explanation is that the Schroedinger equation for the N-particle wave function is in this case not separable, i.e. it cannot be reduced to N equations for one-particle (or subsystem) wave functions. (A similar argument holds for Newton's equations in classical physics.)
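
In its standard textbook form (written here only to fix notation; the pair potential V is left generic), the problem reads:

```latex
\[
  H \;=\; \sum_{i=1}^{N} \frac{\mathbf{p}_i^{\,2}}{2m_i}
        \;+\; \sum_{i<j} V(\mathbf{r}_i - \mathbf{r}_j),
  \qquad
  H\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N) \;=\; E\,\Psi(\mathbf{r}_1,\dots,\mathbf{r}_N).
\]
% Because the pair terms V(r_i - r_j) couple the coordinates of different
% particles, the eigenfunction Psi does not factorize into a product
% psi_1(r_1) ... psi_N(r_N): the equation is not separable.
```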

But the physical reason behind this mathematical problem is more important. It arises from the fact that the interacting N-particle system is simply a new and different entity from a collection of N subsystems. It can be similar, but it can also be fundamentally different, and the central problem of many-body physics is how to proceed from here, accepting the unpleasant truth that the exact solution is impossible. This invokes the need for approximations, models and similar conceptual steps, which will be shown to reflect emergent properties of the interacting system.

What is “the solution” of a physical problem?

Without entering into deep psychology and the analysis of cognitive processes, in the physical sciences, and especially in many-body physics, "solving the problem" is always defined as reducing and/or relating it to some already "solved" problems and/or subsystems. This is how the "reduction" method becomes indispensable! In our case this means explaining the phenomena observed in the interacting N-body system through the properties of subsystems whose behaviour has already been explained. (Technically, this is called "diagonalizing the Hamiltonian", i.e. writing the energy of the interacting system as a sum of energies of its isolated constituents.) But this process is by no means straightforward, and the entities ("particles") in the interacting N-body system do not have to be the same as in the subsystems. So the reduction to the lower-level system often (or always, depending on the definition!) involves a change or transformation of the entities. (This procedure is usually performed by means of so-called canonical transformations.) One could say that the entities are secondary, defined so that they reproduce the observed phenomena, which are therefore primary. We shall illustrate this procedure in several simple cases in condensed matter physics, where it is best observed and appreciated. The important conclusion is that the reduction is possible only with the help of new and sometimes radical transformations reflecting properties of the higher-level system, whereas the opposite direction – the constructive process, from lower to higher systems – is impossible.
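
Schematically, and only to fix the idea (the interaction V_q and the new operators α are generic placeholders, not a specific model), "diagonalizing the Hamiltonian" means finding a canonical transformation from the original particle operators c to new operators α in which the interacting problem takes a non-interacting form:

```latex
\[
  H \;=\; \sum_{k} \varepsilon_{k}\, c_{k}^{\dagger} c_{k}
      \;+\; \tfrac{1}{2} \sum_{k,k',q} V_{q}\,
            c_{k+q}^{\dagger}\, c_{k'-q}^{\dagger}\, c_{k'}\, c_{k}
  \;\;\longrightarrow\;\;
  \tilde{H} \;\approx\; E_{0} \;+\; \sum_{k} \tilde{E}_{k}\, \alpha_{k}^{\dagger} \alpha_{k} \;+\; \dots
\]
% The alpha operators create the new, weakly interacting entities
% (quasi-particles, phonons, Cooper pairs, ...); choosing them correctly is
% precisely the step that requires input about the higher-level system.
```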

Weakly coupled systems – Landau Fermi liquid theory

If the interaction between the particles in the system is not too strong, they may retain their "identity" – electrons will remain electrons, i.e. show the properties of electrons such as spin and charge – but some of their properties (densities, energies, etc.) are modified, so that they become "quasi-particles". An interesting example is the Landau Fermi liquid theory, which successfully describes the low-energy excitations of electrons (the "electron gas") interacting through strong Coulomb forces, e.g. in a typical metal, as a collection of weakly interacting "quasi-particles". The paradox is that, in spite of the strong Coulomb forces, these quasi-particles interact only weakly! This can be understood only by invoking an additional principle – the Pauli exclusion principle, which suppresses electronic transitions into occupied states. But this principle does not apply to an individual electron – it acts only in the electron gas as a whole, and can thus be regarded as its emergent property. The problem could never have been solved by studying electrons alone, without taking into account their behaviour as a whole, which confirms that the reduction has to be performed with the help of additional assumptions reflecting emergent properties.
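
The standard quantitative expression of this phase-space "protection" (a textbook Fermi-liquid result, quoted here only as an illustration) is that the quasi-particle scattering rate vanishes quadratically as the Fermi energy is approached:

```latex
\[
  \frac{1}{\tau(\varepsilon, T)} \;\propto\;
  (\varepsilon - \varepsilon_{F})^{2} + (\pi k_{B} T)^{2},
\]
% so quasi-particles near the Fermi surface live almost indefinitely despite
% the strong bare Coulomb interaction: the exclusion principle, acting on the
% electron gas as a whole, squeezes the phase space available for scattering.
```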

Collective phenomena

If we now consider the properties of the same "electron gas" in the opposite limit, i.e. its high-energy and long-wavelength excitations, we can successfully explain them by making a completely different assumption. Instead of individual electrons we have to introduce new entities: wavelike collective oscillations of the electron density, in which all particles of the system participate and thereby lose their individual identity. These excitations ("plasmons") and similar ones exist also in classical systems of charged and even neutral particles. The crucial step in solving this strongly interacting problem was the introduction of new entities (density fluctuations) in place of the old ones (particles), and this step again follows from the properties of the interacting system. We have in fact "reduced" the interacting system of particles (electrons, ions, atoms, etc.) to a collection of non-interacting (or weakly interacting) wavelike excitations, at the price of changing the constituents of the system.
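
The simplest illustration is the standard textbook plasma frequency of a uniform charged gas (with n the electron density, e and m the electron charge and mass), at which the whole electron density oscillates coherently:

```latex
\[
  \omega_{p} \;=\; \sqrt{\frac{n\, e^{2}}{\varepsilon_{0}\, m}},
\]
% a property of the gas as a whole: it depends on the density n and has no
% meaning for a single electron.
```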

One should also notice that the choice between these two limiting approximations – electrons behaving as quasi-particles or as collective oscillations – depends on the range of phenomena studied, which in this case is determined by the range of energies and wavelengths involved. In other words, one cannot rely on a single model to explain all the phenomena observable in a many-body system.

Numerical approach

For a smaller number of particles or constituents N one could try to solve the problem numerically, starting with the non-interacting system as the first approximation and improving it by adding corrections until a satisfactory solution is obtained. After all, for a quantum non-relativistic system of nuclei and electrons, such as an atom or a molecule, we know how to write the relevant Schroedinger equation and the inter-particle potentials, so it should be a routine procedure. We are here reminded of a similar claim made by Laplace in the framework of classical mechanics, following the formulation of Newton's deterministic equations of motion: from the knowledge of the initial (or boundary) conditions, he claimed, the entire past and future of the Universe could be deduced. In principle, of course!

However, the analysis by Laughlin and Pines /3/ shows that this programme soon becomes impossible for larger N, say for an atom with 10 electrons: the required amount of computer memory becomes so enormous that one is again forced to apply some appropriate approximation. For the details of their analysis I refer to their paper. It shows how fallacious Laplace's claim was, based on our "ability" to solve any problem – knowing the equations and boundary conditions – in principle! Not only is this shown to be infeasible, and therefore an empty or unscientific claim, but science has also shown that this process holds many unpredictable surprises.
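
The following is a minimal, purely illustrative estimate of this memory explosion, assuming a real-space grid of 10 points per spatial dimension and 16 bytes per complex amplitude; these numbers are my own assumptions for the sketch, not figures taken from Laughlin and Pines.

```python
# Back-of-the-envelope estimate of the memory needed to store an N-particle
# wave function psi(r_1, ..., r_N) on a real-space grid.  Illustrative only:
# the grid resolution and bytes per amplitude are assumptions, not values
# taken from reference /3/.

def wavefunction_memory_bytes(n_particles: int,
                              points_per_axis: int = 10,
                              bytes_per_amplitude: int = 16) -> int:
    """One complex amplitude per point of the 3N-dimensional configuration grid."""
    grid_points = points_per_axis ** (3 * n_particles)  # (points_per_axis^3)^N
    return grid_points * bytes_per_amplitude


if __name__ == "__main__":
    for n in (1, 2, 3, 5, 10):
        print(f"N = {n:2d}:  ~{wavefunction_memory_bytes(n):.1e} bytes")
    # N = 10 already requires ~1.6e+31 bytes (about 10^19 terabytes), which is
    # why a brute-force solution "in principle" is not a solution "in practice".
```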

Condensates and phase transitions, “quantum protectorates”

One intriguing discovery that waited more than half a century after its experimental observation for an explanation is the well-known phenomenon of superconductivity, i.e. the disappearance of electrical resistance at low temperatures, and it provides another argument for our thesis. In order to avoid the technicalities of this enormously rich field of condensed matter physics, I shall isolate two important aspects.

First, all attempts to understand superconductivity starting from the system of electrons and just "modifying" its properties by adding various interaction terms failed. The solution became possible (and obvious) only when it was realized that the interaction leads to the formation of correlated pairs of electrons ("Cooper pairs"), which now become the basic constituents of the superconducting system and whose common wave function can be considered as the order parameter of the system. Again, the reduction of the strongly interacting N-particle system to a collection of N/2 new non-interacting entities was made possible by changing their character, including their intrinsic symmetry: instead of electrons, which are fermions, we introduce pairs, which are boson-like and can therefore form a new phase, i.e. a superconducting condensate.
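
In the standard BCS notation (quoted here only as a schematic reminder, with u_k and v_k the usual variational amplitudes), the pair creation operator and the paired ground state read:

```latex
\[
  b_{\mathbf{k}}^{\dagger} \;=\; c_{\mathbf{k}\uparrow}^{\dagger}\, c_{-\mathbf{k}\downarrow}^{\dagger},
  \qquad
  |\mathrm{BCS}\rangle \;=\; \prod_{\mathbf{k}}
  \bigl( u_{\mathbf{k}} + v_{\mathbf{k}}\, b_{\mathbf{k}}^{\dagger} \bigr)\, |0\rangle ,
\]
% the new boson-like entities are the pairs created by b^dagger, and their
% macroscopically occupied common wave function plays the role of the order
% parameter of the condensate.
```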

Second, it was soon noticed that certain phenomena are not restricted to particular systems with specific constituents. For example, Landau Fermi liquid theory explains not only the behaviour of electrons in normal metals but also the properties of normal He-3, which indicates a certain universal character of these systems, irrespective of the microscopic details. Other examples of such groups of systems, each characterized by an organizing principle and called a "quantum protectorate", are superconductivity, superfluidity in Bose liquids and in atomic condensates, ferromagnetism, and so on. The elementary constituents or entities in these systems are their low-energy excitations, "exactly in the same sense that the electron in the vacuum in quantum electrodynamics is a particle: they carry momentum, energy, spin, charge, scatter off one another …" /3/. But "they do not exist outside the stable state of matter in which they live" /3/, i.e. the quantum protectorate.

Top-down and bottom-up asymmetry

All these case studies taken from condensed matter physics show that going from one level of complexity to a higher one is never possible by straightforward application of lower-level results; it always requires the application of some new principle or approximation characteristic of the higher level. On the other hand, solving or understanding the properties of this more complex level does mean reducing it to a collection of lower-level independent subsystems, but the appropriate form of these subsystems depends on the specific properties of the higher-level system that we want to study. From this one can see the close connection between the successful reduction process and the introduction of emergent properties of the higher-level systems in the form of additional principles, approximations, etc. One can therefore conclude that the opposite process – the constructivist approach – is impossible. Or, in the words of P.W. Anderson /2/, who strongly opposed the claim that "the ability to reduce everything to simple fundamental laws … implies the ability to start from those laws and reconstruct the universe": "At each level of complexity new properties appear… Psychology is not applied biology, nor is biology applied chemistry… /T/he whole becomes … very different from the sum of its parts." In other words, Anderson already in 1972 refuted the constructivist hypothesis, i.e. "the ability to start from those laws and reconstruct the universe", and emphasized what was soon to become an important issue, namely the emergence of new properties in systems with a higher level of complexity.

Theory of Everything: What is fundamental?

A logical consequence of the strong reductionist (i.e. constructivist) convictions is the belief in the feasibility of a Theory of Everything or Final Theory, which would also imply the End of Science /12/. It is indeed a very strange affair to observe first-rate scientists like Weinberg, Hawking, even Einstein, and many others propagating an idea which is in itself contrary to the very definition of science as a never-ending process /13,14/. Fortunately, the message of the work of the great logician and mathematician Gödel has recently spread in the physics community and led to a gradual disenchantment with this somewhat arrogant idea.

In their analysis Laughlin and Pines /3/ dismiss the "End of Science" thesis and claim instead that we have reached the "End of Reductionism": "…In most respects the reductionist ideal has reached its limits as a guiding principle. Rather than a Theory of Everything we appear to face a hierarchy of Theories of Things, each emerging from its parent and evolving into its children as the energy scale is lowered. The end of reductionism is, however, not the end of science, or even the end of theoretical physics." Instead, they announce "a transition from the science of the past, so intimately linked to reductionism, to the study of complex adaptive matter, …".

While these ideas sound radical, both to physical scientists and to others, it is certainly high time to reconsider the idea of the "fundamentality" of different levels, conventionally based on the size and weight of their constituent entities. We now see that these entities are variable, chosen according to their predictive power, and that it is not they that determine the properties of such systems; rather, other, more important principles define the specific "quantum protectorates". The same applies to the different fields of the physical sciences, which thereby become equally "fundamental".

Conclusion

By analysing several case studies in quantum physics, especially in condensed matter physics, we have shown that "strong" reductionism is impossible and cannot function even in principle when many-body interactions occur, as they do in the real world. At the same time we stress that reduction is a necessary procedure in our attempts to "understand" the real world. However, this reduction of a more complex system to "simpler" ones is impossible without the help of additional assumptions containing new, "emergent" properties. In other words, the dilemma "reductionism or emergence" can be resolved once we realize that what we need (and in fact what we are doing in the physical sciences) is "reduction with (or via) emergence". This additionally eliminates the possibility of a Theory of Everything or Final Theory, and, by stressing the priority of phenomena over (variable) entities, introduces "democracy" among the various fields of the physical sciences, which are shown to be equally "fundamental". This directly contradicts the claims of ontological or conceptual reductionism, which "ontologically reduces" all reality to the lowest level. In this way we arrive at an ontological pluralism, attributing reality to each level of complexity and therefore a certain "autonomy" to each area of the physical sciences.

 


References

1. J. Polkinghorne, Science and Theology, Fortress Press, 1998.

2. P.W. Anderson, "More is different", Science 177, 393-396 (1972).

3. R.B. Laughlin and David Pines, "The Theory of Everything", PNAS 97, 28-31 (2000).

4. R.B. Laughlin, A Different Universe, Basic Books, 2005.

5. Such statements can be found e.g. in P.W. Atkins' contribution to Nature's Imagination: The Frontiers of Scientific Vision, ed. J. Cornwell (Oxford UP, 1995), in R. Dawkins, The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design (Norton, New York, 1986), or more recently in R. Dawkins, The God Delusion (Boston: Houghton Mifflin, 2006).

6. Steven Weinberg, Dreams of a Final Theory (Random House, 1992); see also Steven Weinberg, "Reductionism Redux", The New York Review of Books, Oct. 5, 1995 (reprinted in S. Weinberg, Facing Up, Harvard UP, 2001).

7. Stephen Hawking, "Gödel and the end of physics" (talk at the Dirac Centennial Celebration, Cambridge, July 2002).

8. Quoted in David Gross, "Einstein and the search for unification", Current Science 89, 2035-2040 (2005).

9. A. Leggett, "Realism and the physical world", Rep. Prog. Phys. 71, 022001 (2008).

10. Piers Coleman, "Many-Body Physics: Unfinished Revolution", Ann. Henri Poincare 4, 1-22 (2003).

11. See e.g. N. Bohr, Atomic Theory and the Description of Nature (Cambridge UP, 1961), or W. Heisenberg, Physik und Philosophie (Verlag Ullstein, 1977). A standard text is M. Jammer, The Philosophy of Quantum Mechanics (Wiley, 1974).

12. John Horgan, The End of Science (New York, 1997). See also the review by Arthur M. Silverstein, ""The end is near": The phenomenon of the declaration of closure in a discipline", Hist. Sci. 32, 1-19 (1999).

13. The classic texts are Karl Popper, The Logic of Scientific Discovery (Hutchinson, 1972); Thomas S. Kuhn, The Structure of Scientific Revolutions (University of Chicago Press, 1962); Imre Lakatos and A. Musgrave, eds., Criticism and the Growth of Knowledge (Cambridge University Press, 1962); Peter Feyerabend, Against Method (Verso, London, 1975).

14. A very interesting approach based on the experience of a practising scientist is presented in John Ziman, An Introduction to Science Studies (Cambridge UP, 1974), and Real Science: What It Is, and What It Means (Cambridge UP, 2000).