John O. Campbell, an independent scholar from British Columbia, has published several works [1-4] describing his version of universal Darwinism. This framework proposes that Darwinian selection explains what exists not just biologically but in many other realms as well, from the quantum to the cultural to the cosmological. My interest in John’s framework developed after I began researching ‘cosmological natural selection with intelligence’ [5-7], and seeing how concepts like entropy, selection, and adaptation seem fundamental in both biology and physics. My research led me to the Evo Devo Universe research community, to which John also belongs, and I soon learned of his remarkable book Darwin Does Physics [2]. This book presents some of the most intellectually exhilarating ideas I’ve come across in years. I was grateful to have the opportunity to interview John for This View of Life, and to help communicate these ideas to a wider audience.

Michael Price: Thanks very much for speaking with us, John. Let’s start with a few words about universal Darwinism in general. Darwinism is normally used to explain what exists biologically, but universal Darwinism suggests that it may explain what exists in many non-biological domains as well. What are some of these other domains?

John Campbell: Thank you, Michael, for inviting me to this interview, and also for introducing me to This View of Life, whose byline is 'anything and everything from an evolutionary perspective'. This byline hits the nail on the head, and this perspective is becoming increasingly relevant as a growing number of researchers develop Darwinian/evolutionary theories across the entire scope of scientific subject matter.
This phenomenon might be termed universal Darwinism. Surprisingly well-developed Darwinian theories have been proposed to explain the creation and evolution of complexity not just in genetics and biology (including evolutionary psychology), but in cosmology [8], quantum physics [9], neuroscience [10], and practically every branch of the social sciences.

Of course, this ubiquity raises the question: why is the Darwinian process observed so widely in nature? As your question implies, this may have to do with the nature of existence itself. We might consider that existence tends to be rare, complex, and fragile due to the pervasive dissipative action of the second law of thermodynamics. The second law is one of the most fundamental laws of physics, and states that the total entropy – that is, disorder – of an isolated system can only increase over time. Darwinian processes may be viewed as nature’s method of countering this universal tendency towards disorder and non-existence.

MP: Pretty intriguing so far! But how can we achieve an integrated understanding of how Darwinian principles operate across such diverse domains? Are there key concepts that are fundamental to all Darwinian systems, no matter what domain they’re operating in?

JC: I believe the most important key concept is knowledge. The diverse fields of study mentioned above all involve knowledge repositories, such as genomes, quantum wave functions, mental models, and cultural models.

To understand what knowledge really means, we need to understand its inverse: ignorance. The concept of ignorance is central to information theory, developed originally by Claude Shannon [11]. He defined information in terms of the ‘surprise’ experienced by a model which has predicted some outcome, and then received evidence about the actual outcome that somewhat contradicts its prediction. In this sense, information is a measure of how ignorant the model was.
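Shannon’s ‘surprise’ has a simple mathematical form: the information carried by an outcome is −log₂ of the probability the model assigned to it. A minimal Python sketch (the probabilities here are invented purely for illustration):

```python
import math

def surprisal(p):
    """Shannon's 'surprise' (in bits) at an outcome the model
    assigned probability p: improbable outcomes are very surprising."""
    return -math.log2(p)

# A model that considered an outcome near-certain is barely surprised...
print(surprisal(0.99))   # ~0.014 bits
# ...while one that considered it near-impossible is very surprised.
print(surprisal(0.01))   # ~6.64 bits
```

An outcome the model rated a 50/50 toss yields exactly one bit of information, which is the sense in which information measures how ignorant the model was.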
This model, by the way, could be any system that attempts to predict any outcome in the world. For example, it could be a scientific model, which proposes a certain hypothesis; or the genome of a species, which attempts to predict the best ways to survive and reproduce; or a quantum wave function, which probabilistically predicts the future states of quantum systems.

Knowledge involves the reduction of ignorance. It results in the model making better predictions of actual outcomes and thus reducing its likelihood of being surprised in the future. The mathematics of this process is Bayesian inference, which describes the updating of models when they receive information – that is, surprising new evidence – so that going forward the model is optimized to make the most accurate predictions possible, ‘learning’ from the evidence it has encountered.

As can be seen in the quick sketch above, the systems enabling knowledge accumulation – which I refer to as ‘inferential systems’ – are not simple; they involve complex concepts including probabilistic models, information, and Bayesian inference. Over evolutionary time, inferential systems accumulate the knowledge required to achieve extended forms of existence. This mechanism of knowledge accumulation may be understood as forming the core of all Darwinian processes.

MP: It’s fascinating how knowledge-related concepts seem so central to all Darwinian processes. What’s even more fascinating is the fact that ignorance is actually equivalent, mathematically, to entropy! But we’ll get to that in a minute. First, let’s make sure everyone understands exactly why knowledge is so central to Darwinian evolution. The process of accumulating knowledge is really the same process as adapting to an environment, right?
The most familiar example would be biological adaptation: as a species’ genome adapts to some feature of its environment, it’s as if the genome is acquiring knowledge about the best strategies for survival and reproduction in that environment. That seems clear enough, but could you provide similar examples from Darwinian domains that are less familiar than biology?

JC: We tend to think of ‘evidence-based knowledge’ as a human construct, but in my view this is an anthropomorphic trap, because knowledge is in fact universal.

Cultural knowledge is, nonetheless, one of nature’s great achievements, and is the form of knowledge most familiar to us. We can interpret all cultural processes, for example agriculture, as the accumulation of evidence-based knowledge: those variations within crop species which conform to human preferences have continuously been selected. Many such cultural processes do not use evidence in an optimal manner and are best described mathematically as ‘approximate’ rather than ‘exact’ Bayesian processes. A more exact cultural practice is science: as the great E.T. Jaynes [12] wrote, Bayesian inference is the very logic of science, and science is an obvious evidence-based process of knowledge accumulation and learning.

A recent revolution of unification in neuroscience, little known outside the field itself, is called the Bayesian brain formulation [13]. The leader of this revolution, Karl Friston of University College London, has recently been rated the most influential neuroscientist of modern times. In his interpretation, the brain is essentially a Bayesian computer which selects among a variety of mental hypotheses on the basis of sensory evidence.
What we see visually, for example, is due to a process of inference: mental processes generate a number of plausible candidates, and sensory evidence is used to select the hypothesis which best fits the evidence.

As you mention, genomes may be interpreted as knowledge repositories which have been accumulated over evolutionary time through the process of natural selection. This too is evidence-based knowledge, with the evidence in the form of what can and cannot exist (that is, survive and reproduce).

Quantum physics has long been plagued with the anthropomorphic notion that human observation is involved with quantum phenomena. Recently a new understanding, developed by Wojciech Zurek (Los Alamos National Laboratory) and others [9], makes it clear that quantum systems react in the same way to other quantum systems in their environment, whether or not any human is observing. When one quantum system exchanges information with another, its knowledge repository, in the form of its wave function, is updated in a quantum jump and is able to better predict outcomes of future interactions. This may readily be interpreted as an accumulation of evidence-based knowledge.

Within cosmology, a central puzzle concerns the nature of what must be a knowledge repository of the most fundamental kind: one that encodes the laws of physics and the exact values of approximately 30 fundamental parameters. This knowledge repository is extremely fine-tuned to produce complexity; almost any minuscule random variation in any parameter would result in a universe without the complexity of even atoms. Lee Smolin (Perimeter Institute for Theoretical Physics) has proposed the theory of cosmological natural selection [8] to explain this puzzle. In this theory, a black hole in a parent universe generates a child universe, which inherits physical laws and parameters from the parent.
Over many generations, a typical universe evolves the complex features required to produce black holes – features such as atoms, complex chemistry, and stars. Once this level of complexity is achieved, other Darwinian subroutines may produce additional levels of complexity, such as biological and cultural complexity.

The knowledge repository involved with each level of existence describes an autopoietic (self-creating and maintaining) strategy for existence which evolves over time. Essentially, these strategies involve complex interactions designed to exploit loopholes in the second law of thermodynamics and achieve some form of existence.

MP: Excellent, thank you for all those great examples. It’s remarkable how the knowledge concept seems central to domains – from the quantum to the biocultural to the cosmological – which, superficially, seem so disparate. Now let’s return to that point about entropy I noted above, because this is another fundamental concept that integrates seemingly diverse domains. It turns out that in information theory, the mathematical formula Shannon created to define ignorance (the inverse of knowledge) is exactly the same as that used in thermodynamics to define entropy (disorder). I was blown away to learn this fact (from your great book, Darwin Does Physics), because it provides such an elegant link between physics, biology, and any other domain in which Darwinian processes may operate. My view of this link is that in any such domain, entropy can be thought of as equivalent to ignorance, and adaptation as equivalent to knowledge, and entropy/ignorance gets transformed into adaptation/knowledge via Darwinian selection, utilizing Bayesian inference. Is that an accurate description of your universal Darwinism framework?

JC: When Shannon [11] was working on his revolutionary theory of information in the late 1940s, he found that an odd mathematical expression held a central place in his developing theory.
He shared this with the great mathematician John von Neumann and asked what he should name it. Von Neumann jokingly suggested he name it ‘entropy’, as it had the same mathematical form as the familiar thermodynamic entropy.

As is so often the case in science, what seemed initially like an amusing coincidence actually pointed to a deep hidden connection. E.T. Jaynes was able to demonstrate [14] that both cases of entropy refer to ignorance within an inferential process. In the case of Shannon’s entropy, it is the ignorance of the model used by the receiver of information before the information is actually received. In the case of thermodynamic entropy, it is the ignorance of the scientific model used to describe a thermodynamic microstate – a model that, for example, specifies the exact location and momentum of every gas molecule in a volume. If the model knows any macrovariables, such as temperature or pressure, it has some (but not very much) statistical knowledge of the exact microstate. Entropy is the number of bits of information required to move the model from its current state of uncertainty or ignorance to a state of certainty, where it would exactly describe the complete microstate. It is a measure of a model’s current state of ignorance.

Mathematically, every probabilistic model has the property of entropy. Realistically, entropy tends to increase, as specified by the second law. We might understand this as a consequence of the world changing in a somewhat random manner: the ignorance of any model of the world will only increase if it does not track these changes with new evidence. Within universal Darwinism, we see nature’s counter to the second law – the constant search for evidence with which to maintain the accuracy of its models that specify strategies for existence. The only mathematically correct means of updating models with evidence is Bayesian inference.
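The link between Bayesian updating and falling ignorance can be sketched with a toy Python example (the hypothesis grid and the run of evidence are invented for illustration): a model starts maximally ignorant about a coin’s bias, and each Bayesian update on new evidence lowers the Shannon entropy of its beliefs.

```python
import math

def entropy(probs):
    # Shannon entropy: the model's ignorance, in bits
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bayes_update(posterior, hypotheses, heads):
    # Reweight each hypothesis by the likelihood of the new evidence,
    # then renormalize: this is Bayes' rule on a discrete grid
    likelihood = [h if heads else 1 - h for h in hypotheses]
    unnorm = [l * p for l, p in zip(likelihood, posterior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Five hypotheses about a coin's probability of landing heads
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
posterior = [0.2] * 5            # uniform prior: maximal ignorance
print(entropy(posterior))        # ~2.32 bits

for heads in [True] * 5:         # evidence: a run of five heads
    posterior = bayes_update(posterior, hypotheses, heads)
print(entropy(posterior))        # much lower: the model has learned
```

After the evidence arrives, the posterior concentrates on the heads-biased hypotheses and the model’s entropy (ignorance) drops – the same exchange of ignorance for knowledge the interview describes.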
This testing of models against evidence is achieved through the construction of adaptations coded by the internal model, and through evidence concerning the relative ability of those adaptations to contribute to existence. This might be analogous to a scientist constructing an experimental apparatus to test which of the many hypothetical states of a phenomenon are actually adapted to exist.

MP: Brilliant. Could you provide other illustrations, from outside biology, of how Darwinian selection acts as an anti-entropic process? Let’s take quantum Darwinism, for example, because of all universal Darwinism’s applications, it’s probably the most counterintuitive. In quantum Darwinism, how does selection act to reduce entropy and generate adaptation?

JC: As you noted, quantum Darwinism is very difficult to wrap one’s head around. Everyone knows that quantum physics is weird, and the core of this weirdness is that the quantum domain is filled with many varieties of unfamiliar states. The vast number of ‘superposed’ states of a quantum system never become part of what we perceive as normal, ‘classical’ reality. Why is that? Why does the vast majority of information describing quantum states never make it into our classical reality?

Wojciech Zurek [9] explains this puzzle with his theory of quantum Darwinism. Essentially, he demonstrates that very little quantum information or knowledge can survive the transfer to the environment; only classical information can survive the transfer. It is therefore selected, in a Darwinian sense, and this selected information composes classical reality. This is similar to the notion that only special states of genetic knowledge can survive in the environment, and that these genetic sequences are selected and come to form biological reality.

Now, finally, I can attempt an answer to your question! The quantum wave function is the knowledge repository of the quantum domain and, like all probabilistic models, it has the property of entropy.
The wave function encodes the physical attributes of the quantum system, much as genes encode phenotypes. The wave function contains exact knowledge of the physical states and therefore is minimally ignorant – that is, it has low entropy. Indeed, Zurek named one of his earliest methods for predicting which quantum states could exist in classical reality the ‘predictability sieve’: a mathematical method of identifying states with the lowest entropy. Thus, we can view the quantum states that are able to achieve existence in classical reality as low-entropy states, specifically adapted for that purpose.

MP: Wow, well, if Darwinian selection can be said to have ‘accomplishments’, then its most impressive accomplishment may be how it can seemingly transform the bizarre quantum world into what we perceive as good old classical reality. What’s most fascinating to me about this is that it implies that classical events can be thought of as quantum-level adaptations, and therefore that the fabric of reality itself is something like an incomprehensibly vast network of such adaptations. Do you think that’s a fair way to characterize the world that Zurek is proposing?

JC: Yes, that is a good characterization. In biology it is uncontroversial to view phenotypes as bundles of adaptations, and this notion generalizes well to universal Darwinism. The problem that nature is grappling with is existence, and in that struggle for existence those entities which do exist must be specifically adapted for that purpose. The adaptive history of any entity is recorded in its knowledge repository, and this historical knowledge is used to instantiate new iterations or generations of adaptive systems, each with small variations. Those variations that are sufficiently adapted to achieve existence serve to update the knowledge repository with the secrets of their success.
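The predictability sieve mentioned above can be caricatured in a few lines of Python. This is only a toy ranking, not Zurek’s actual quantum formalism, and the candidate states and their outcome probabilities are invented: the point is simply that ‘selection’ here means picking the state whose predictions carry the least entropy.

```python
import math

def entropy(probs):
    # Shannon entropy (bits) of a state's predictions about outcomes
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented candidate states, each a distribution over two measurement outcomes
candidates = {
    'superposed': [0.5, 0.5],    # maximally unpredictable
    'mixed':      [0.8, 0.2],
    'pointer':    [0.99, 0.01],  # nearly deterministic
}

# The 'sieve' selects the lowest-entropy, most predictable state
survivor = min(candidates, key=lambda name: entropy(candidates[name]))
print(survivor)   # pointer
```

The near-deterministic ‘pointer’ state survives the sieve, echoing the claim that the states achieving existence in classical reality are the low-entropy ones.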
Within universal Darwinism we may well consider the generalized phenotype to be an ‘adaptive system’ or ‘adaptive bundle’.

MP: John, thanks again for speaking with us. I’ve really appreciated having the opportunity to learn from your responses myself, and also to share your pioneering universal Darwinism framework with a wider audience.

Read an introductory chapter of Darwin Does Physics by John Campbell here.

References

  1. Campbell, J. O. (2011). Universal Darwinism: The Path of Knowledge. s.l.: CreateSpace.
  2. Campbell, J. O. (2015). Darwin Does Physics. s.l.: CreateSpace.
  3. Campbell, J. O. (2016). Universal Darwinism as a process of Bayesian inference. Frontiers in Systems Neuroscience, 10.
  4. Campbell, J. O. (2017). Einstein’s Enlightenment. s.l.: CreateSpace.
  5. Price, M. E. (2016). Could cosmological natural selection assign a function to life? This View of Life. Downloaded 9th Nov. 2017 from https://evolution-institute.org/article/could-cosmological-natural-selection-assign-a-function-to-life/
  6. Price, M. E. (2017). Entropy and selection: Life as an adaptation for universe replication. Complexity, vol. 2017, Article ID 4745379. doi:10.1155/2017/4745379
  7. Price, M. E. (Forthcoming). Cosmological natural selection and the function of life. In Evolution, Development and Complexity: Multiscale Models of Complex Adaptive Systems, edited by G. Georgiev, C. L. F. Martinez, M. E. Price, & J. Smart. Springer Publishing.
  8. Smolin, L. (1997). The Life of the Cosmos. Oxford University Press.
  9. Zurek, W. H. (2009). Quantum Darwinism. Nature Physics, 5(3), 181-188.
  10. Fernando, C., Szathmáry, E., & Husbands, P. (2012). Selectionist and evolutionary approaches to brain function: a critical appraisal. Frontiers in Computational Neuroscience, 6.
  11. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423.
  12. Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press.
  13. Friston, K., Kilner, J., & Harrison, L. (2006). A free energy principle for the brain. Journal of Physiology-Paris, 100(1), 70-87.
  14. Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review Series II, 106(4), 620–630.